CN113934519B - Application scheduling method and electronic equipment

Application scheduling method and electronic equipment

Info

Publication number
CN113934519B
CN113934519B
Authority
CN
China
Prior art keywords
service
camera
application
camera application
user
Prior art date
Legal status
Active
Application number
CN202110917469.1A
Other languages
Chinese (zh)
Other versions
CN113934519A (en)
Inventor
夏兵
张威
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202210898089.2A
Priority to CN202110917469.1A
Publication of CN113934519A
Application granted
Publication of CN113934519B


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 - Task transfer initiation or dispatching
    • G06F 9/4843 - Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 - Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G06F 9/44 - Arrangements for executing specific programs
    • G06F 9/445 - Program loading or initiating
    • G06F 9/44505 - Configuring for program initiating, e.g. using registry, configuration files
    • G06F 9/44521 - Dynamic linking or loading; Link editing at or after load time, e.g. Java class loading
    • G06F 9/448 - Execution paradigms, e.g. implementations of programming paradigms
    • G06F 9/4488 - Object-oriented
    • G06F 9/449 - Object-oriented method invocation or resolution


Abstract

The present application provides an application scheduling method and an electronic device. The method comprises the following steps: based on the pre-loaded service corresponding to a camera application, the electronic device may start the pre-loaded service of the camera application when the camera application is started, where the pre-loaded service of the camera application is determined based on the services invoked each time the user uses the camera application. The application thus provides a service preloading scheme that can determine the pre-loaded service corresponding to the camera application based on the user's usage habits for the camera application, so that when the electronic device starts the camera application, it can start the AI service in advance, before the user triggers the AI service. This reduces the cold start time of the service, shortens the time the user waits for the service to start, and improves the user experience.

Description

Application scheduling method and electronic equipment
Technical Field
The present application relates to the field of terminal devices, and in particular, to an application scheduling method and an electronic device.
Background
At present, after an electronic device starts an application program in response to a received user operation, the electronic device generally loads, in addition to the services that must be loaded, an optional service only upon receiving a further user operation specifying it. This results in a slow response for some services and affects the user experience.
Disclosure of Invention
In order to solve the above problem, the present application provides an application scheduling method and an electronic device. In the method, the electronic device can determine the pre-loaded service corresponding to an application based on the user's usage habits for the application, and automatically start the pre-loaded service after the application is started, thereby reducing the cold start time of the service, shortening the user's waiting time, and improving the user experience.
In a first aspect, the present application provides an electronic device. The electronic device includes: one or more processors, memory, and a fingerprint sensor; and one or more computer programs, wherein the one or more computer programs are stored on the memory, and when executed by the one or more processors, cause the electronic device to perform the steps of: in response to the received first user operation, starting a camera application; when the camera application is operated, responding to the received operation that a user clicks a first AI option of the camera application, and starting an artificial intelligence AI service; based on AI service, carrying out AI processing on a first image acquired by a camera; after the camera application is closed, responding to the received second user operation, and starting the camera application again; when the camera application is operated, responding to the received operation that the user clicks the first AI option, and starting an AI service; based on the AI service, carrying out AI processing on the second image acquired by the camera; determining that the pre-loaded service of the camera application comprises an AI service; after the camera application is closed again, the camera application is started in response to the received third user operation, and the preloading service of the camera application is started; and responding to the received operation of clicking the first AI option by the user, and carrying out AI processing on the third image acquired by the camera based on the started AI service. In this way, the electronic device can acquire habits of a user when using the camera application based on the service called when the camera application is run each time to determine the pre-loaded service corresponding to the camera application. Correspondingly, when the camera application is started, the electronic equipment can automatically load the pre-loading service corresponding to the camera application, so that the cold start time of the service is reduced, the waiting time of a user is shortened, and the use experience of the user is improved.
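By way of illustration only, the preload bookkeeping described in the first aspect could be sketched as follows. This is a minimal Java sketch under assumed names (PreloadManager, recordRun, preloadServicesFor, and the RUNS_TRACKED constant are hypothetical and not claim language): it records which services each run of an application invoked and treats a service as a pre-loaded service once it was invoked in every tracked run.

    import java.util.*;

    // Hypothetical sketch of the preload bookkeeping in the first aspect.
    // Class and method names are illustrative assumptions, not claim language.
    public class PreloadManager {
        // For each application, the services invoked during each of its recent runs.
        private final Map<String, Deque<Set<String>>> recentRuns = new HashMap<>();
        private static final int RUNS_TRACKED = 2; // e.g. the two runs described above

        // Called when the application is closed, with the services used in that run.
        public void recordRun(String app, Set<String> servicesUsed) {
            Deque<Set<String>> runs = recentRuns.computeIfAbsent(app, k -> new ArrayDeque<>());
            runs.addLast(new HashSet<>(servicesUsed));
            if (runs.size() > RUNS_TRACKED) {
                runs.removeFirst(); // only the most recent runs influence preloading
            }
        }

        // A service becomes a pre-loaded service once it was invoked in every tracked run.
        public Set<String> preloadServicesFor(String app) {
            Deque<Set<String>> runs = recentRuns.getOrDefault(app, new ArrayDeque<>());
            if (runs.size() < RUNS_TRACKED) return Collections.emptySet();
            Set<String> result = new HashSet<>(runs.peekFirst());
            for (Set<String> run : runs) result.retainAll(run);
            return result;
        }
    }

Under these assumptions, after the two runs in which the user clicked the first AI option, preloadServicesFor("camera") would return a set containing the AI service, so the third launch can start it before the user taps the option.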
According to a first aspect, the computer program, when executed by one or more processors, causes an electronic device to perform the steps of: and after the camera application is closed, sending first user habit information to a server, wherein the first user habit information is used for indicating that the AI service is called when the electronic equipment runs the camera application. In this way, the server side can count the services called by the user when the user uses the camera application based on the acquired user habit information sent by the electronic device, so as to acquire the use habits when the user uses the camera application, that is, determine the pre-loaded service corresponding to the camera application.
According to a first aspect, or any implementation of the first aspect above, the computer program, when executed by the one or more processors, causes the electronic device to perform the steps of: after the camera application is closed again, sending second user habit information to the server, wherein the second user habit information is used for indicating that the AI service is called when the electronic device runs the camera application again. In this way, the server side can count the services called by the user when the user uses the camera application based on the acquired user habit information sent by the electronic device, so as to acquire the user's usage habits when using the camera application, that is, determine the pre-loaded service corresponding to the camera application.
According to a first aspect, or any implementation of the first aspect above, the computer program, when executed by the one or more processors, causes the electronic device to perform the steps of: receiving first indication information sent by the server, wherein the first indication information is used for indicating that the pre-loaded service of the camera application comprises the AI service; and in response to the received first indication information, determining that the pre-loaded service of the camera application comprises the AI service. In this way, after determining the pre-loaded service corresponding to the camera application, the server may instruct the electronic device to automatically load the pre-loaded service when the camera application is started.
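As a hedged sketch of the exchange implied by the habit reports and the indication information (the patent does not prescribe a message format; the names UserHabitInfo and IndicationInfo below are assumptions for illustration), the two directions of the exchange could be modeled as simple data carriers:

    import java.util.List;
    import java.util.Set;

    // Illustrative data carriers for the device/server exchange described above;
    // all names here are assumptions, not claim language. Requires Java 16+ records.
    public class HabitMessages {

        // Reported by the device after an application run (e.g. after the camera
        // application is closed): which services that run invoked.
        public record UserHabitInfo(String deviceId, String application,
                                    List<String> invokedServices) {}

        // Sent by the server once it has analyzed the reported habits: the services
        // the device should treat as pre-loaded services for the application.
        public record IndicationInfo(String application, Set<String> preloadServices) {}
    }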
According to a first aspect, or any implementation of the first aspect above, the computer program, when executed by the one or more processors, causes the electronic device to perform the steps of: after the camera application is closed, responding to the received fourth user operation, starting the camera application, and starting the AI service; when the camera application is running, responding to the received operation that the user clicks a filter option of the camera application, and starting a filter service; rendering a fourth image acquired by the camera based on the filter service; after the camera application is closed, in response to the received fifth user operation, starting the camera application, and starting the AI service; when the camera application is running, responding to the received operation that the user clicks the filter option, and starting the filter service; rendering a fifth image acquired by the camera based on the filter service; determining that the pre-loaded service of the camera application includes the filter service and does not include the AI service; after the camera application is closed again, starting the camera application in response to the received sixth user operation, and starting the pre-loaded service of the camera application; and rendering a sixth image acquired by the camera based on the started filter service in response to the received operation of the user clicking the filter option. In this way, the electronic device can periodically update the pre-loaded service corresponding to the camera application. That is, the pre-loaded service corresponding to the camera application is periodically updated according to the usage habits of the user, and the electronic device can start the corresponding pre-loaded service when the camera application is started, based on the updated pre-loaded service corresponding to the camera application.
According to a first aspect, or any implementation of the first aspect above, the computer program, when executed by the one or more processors, causes the electronic device to perform the steps of: and after the electronic equipment renders a fourth image acquired by the camera based on the filter service, sending third user habit information to the server, wherein the third user habit information is used for indicating that the electronic equipment calls the filter service when running the camera application. In this way, the server can periodically count and update the habits of the user, so that when the electronic equipment uses different applications, the pre-loaded service can meet the requirements of the user in different periods.
According to a first aspect, or any implementation of the first aspect above, the computer program, when executed by the one or more processors, causes the electronic device to perform the steps of: and after the electronic equipment renders a fifth image acquired by the camera based on the filter service, sending fourth user habit information to the server, wherein the fourth user habit information is used for indicating that the electronic equipment calls the filter service when running the camera application. In this way, the server can periodically count and update the habits of the user, so that when the electronic equipment uses different applications, the pre-loaded service can meet the requirements of the user in different periods.
According to a first aspect, or any implementation of the first aspect above, the computer program, when executed by the one or more processors, causes the electronic device to perform the steps of: receiving second indication information sent by the server, wherein the second indication information is used for indicating that the pre-loaded service of the camera application comprises the filter service; and in response to the received second indication information, determining that the pre-loaded service of the camera application includes the filter service and does not include the AI service. In this way, the electronic device can update the saved pre-loaded service corresponding to the camera application according to the indication of the server.
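For completeness, a small hypothetical sketch of how the device might apply such indication information (the table and method names are assumptions): the stored preload set for the application is replaced rather than merged, so an indication naming only the filter service also removes the AI service.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Set;

    // Hypothetical update step: on receiving indication information, the device
    // overwrites its saved preload set for the application.
    public class PreloadTable {
        private final Map<String, Set<String>> table = new HashMap<>();

        public void applyIndication(String application, Set<String> preloadServices) {
            table.put(application, Set.copyOf(preloadServices)); // replace, not merge
        }

        public Set<String> preloadServicesFor(String application) {
            return table.getOrDefault(application, Set.of());
        }
    }

Under these assumptions, after the first indication preloadServicesFor("camera") contains the AI service; after the second indication it contains the filter service and no longer contains the AI service.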
According to a first aspect, or any implementation of the first aspect above, the computer program, when executed by the one or more processors, causes the electronic device to perform the steps of: after AI processing is performed on the first image acquired by the camera based on the AI service, closing the camera application in response to the received seventh user operation, and closing the AI service; responding to the received eighth user operation, and starting a gallery application; when the gallery application is running, responding to the received operation that the user clicks a second AI option of the gallery application, and starting the AI service; based on the AI service, performing AI processing on the images in the gallery; after AI processing is performed on the second image acquired by the camera based on the AI service, closing the camera application in response to the received ninth user operation, and closing the AI service; responding to the received tenth user operation, and starting the gallery application; when the gallery application is running, responding to the received operation that the user clicks the second AI option, and starting the AI service; based on the AI service, performing AI processing on the images in the gallery; determining that the keep-alive services of the camera application include the AI service; and after the third image acquired by the camera is subjected to AI processing based on the started AI service, responding to the received eleventh user operation, and closing the camera application, wherein the AI service remains in a started state. The application further provides a service keep-alive scheme, in which the electronic device or the server can acquire the association between the services used by different applications based on the user's usage habits, so as to determine the keep-alive service corresponding to the camera application. In this way, the electronic device can still retain part of the services of the camera application, such as the AI service, after the camera application is closed.
According to a first aspect, or any implementation of the first aspect above, the computer program, when executed by the one or more processors, causes the electronic device to perform the steps of: responding to the received twelfth user operation, and starting the gallery application; and responding to the received operation that the user clicks the second AI option, and performing AI processing on a fifth image in the gallery based on the started AI service. Therefore, with the service keep-alive scheme provided by the application, the AI service that was not closed when the camera application exited can be called directly after the gallery application starts, which effectively shortens the cold-start duration of the service and reduces the overhead of repeatedly starting and stopping the service.
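A rough illustration of the keep-alive decision follows, again under assumed names (KeepAliveManager and its methods are hypothetical): if a service that was closed together with one application is repeatedly restarted soon afterwards by another application, it is marked as a keep-alive service and left running the next time the first application exits.

    import java.util.*;

    // Hypothetical sketch of the keep-alive bookkeeping described above.
    public class KeepAliveManager {
        // service -> times it was reopened by another app shortly after being closed
        private final Map<String, Integer> reopenCounts = new HashMap<>();
        private final Map<String, Set<String>> keepAlive = new HashMap<>(); // app -> services
        private static final int REOPEN_THRESHOLD = 2; // e.g. the two camera/gallery rounds

        // Called when another application starts a service just closed with `app`.
        public void onServiceReopened(String app, String service) {
            int n = reopenCounts.merge(service, 1, Integer::sum);
            if (n >= REOPEN_THRESHOLD) {
                keepAlive.computeIfAbsent(app, k -> new HashSet<>()).add(service);
            }
        }

        // Called when `app` is closed: returns the services to leave running.
        public Set<String> servicesToKeepAlive(String app) {
            return keepAlive.getOrDefault(app, Collections.emptySet());
        }
    }

After the two camera-then-gallery rounds described above, servicesToKeepAlive("camera") would contain the AI service, so closing the camera application leaves the AI service in the started state for the gallery application to reuse.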
In a second aspect, the present application provides a method for scheduling an application. The method comprises the following steps: in response to the received first user operation, starting a camera application; when the camera application is operated, responding to the received operation that a user clicks a first AI option of the camera application, and starting an artificial intelligence AI service; based on AI service, carrying out AI processing on a first image acquired by a camera; after the camera application is closed, responding to the received second user operation, and starting the camera application again; when the camera application is operated, responding to the received operation that the user clicks the first AI option, and starting an AI service; based on AI service, AI processing is carried out on a second image acquired by the camera; determining that the pre-loaded service of the camera application comprises an AI service; after the camera application is closed again, the camera application is started in response to the received third user operation, and the preloading service of the camera application is started; and responding to the received operation of clicking the first AI option by the user, and carrying out AI processing on the third image acquired by the camera based on the started AI service.
According to a second aspect, after the camera application is closed, the method further comprises: and sending first user habit information to a server, wherein the first user habit information is used for indicating that the AI service is called when the electronic equipment runs the camera application.
According to a second aspect, or any implementation manner of the second aspect above, after the camera application is closed again, the method further includes: sending second user habit information to the server, wherein the second user habit information is used for indicating that the AI service is called when the electronic device runs the camera application again.
According to a second aspect, or any implementation form of the second aspect above, determining that the pre-load service of the camera application comprises an AI service comprises: receiving first indication information sent by a server, wherein the first indication information is used for indicating that the preloading service of the camera application comprises an AI service; in response to the received first indication information, it is determined that the preloaded service of the camera application includes an AI service.
According to a second aspect, or any implementation manner of the second aspect above, the method further includes: after the camera application is closed, responding to the received fourth user operation, starting the camera application, and starting the AI service; when the camera application is running, responding to the received operation that the user clicks a filter option of the camera application, and starting a filter service; rendering a fourth image acquired by the camera based on the filter service; after the camera application is closed, in response to the received fifth user operation, starting the camera application, and starting the AI service; when the camera application is running, responding to the received operation that the user clicks the filter option, and starting the filter service; rendering a fifth image acquired by the camera based on the filter service; determining that the pre-loaded service of the camera application includes the filter service and does not include the AI service; after the camera application is closed again, starting the camera application in response to the received sixth user operation, and starting the pre-loaded service of the camera application; and rendering a sixth image acquired by the camera based on the started filter service in response to the received operation of the user clicking the filter option.
According to a second aspect, or any implementation manner of the second aspect above, after the electronic device renders a fourth image acquired by the camera based on the filter service, the method further includes: and sending third user habit information to the server, wherein the third user habit information is used for indicating that the filter service is called when the electronic equipment operates the camera application.
According to a second aspect, or any implementation manner of the second aspect, after the electronic device renders a fifth image acquired by the camera based on a filter service, the method further includes: and sending fourth user habit information to the server, wherein the fourth user habit information is used for indicating that the filter service is called when the electronic equipment operates the camera application.
According to a second aspect, or any implementation of the second aspect above, determining that the pre-load service of the camera application includes a filter service and does not include an AI service, comprises: receiving second indication information sent by the server, wherein the second indication information is used for indicating that the preloading service of the camera application comprises a filter service; in response to the received second indication information, it is determined that the preload service of the camera application includes a filter service and does not include an AI service.
According to a second aspect, or any implementation manner of the second aspect, after performing AI processing on a first image acquired by a camera based on an AI service, the method further includes: in response to the received seventh user operation, closing the camera application, and closing the AI service; responding to the received eighth user operation, and starting a gallery application; when the gallery application is operated, responding to the received operation that the user clicks a second AI option of the gallery application, and starting an AI service; based on AI service, carrying out AI processing on the images in the gallery; after the AI processing is performed on the second image acquired by the camera based on the AI service, the method further includes: in response to the received ninth user operation, closing the camera application and closing the AI service; responding to the received tenth user operation, and starting a gallery application; when the gallery application is operated, responding to the received operation that the user clicks the second AI option, and starting an AI service; based on AI service, carrying out AI processing on the images in the gallery; determining that the keep-alive services of the camera application include AI services; after performing AI processing on the third image acquired by the camera based on the started AI service, the method further includes: in response to the received eleventh user operation, the camera application is closed, wherein the AI service is in a start state.
According to a second aspect, or any implementation manner of the second aspect above, the method further includes: responding to the received twelfth user operation, and starting a gallery application; and responding to the received operation that the user clicks the second AI option, and carrying out AI processing on a fifth image in the gallery based on the started AI service.
The second aspect and any implementation manner of the second aspect correspond, respectively, to the first aspect and any implementation manner of the first aspect. For the technical effects corresponding to the second aspect and any implementation manner thereof, reference may be made to the technical effects of the first aspect and its corresponding implementation manners; details are not repeated here.
In a third aspect, the present application provides a computer readable medium for storing a computer program comprising instructions for performing the method of the second aspect or any possible implementation of the second aspect.
In a fourth aspect, the present application provides a computer program comprising instructions for carrying out the method of the second aspect or any possible implementation of the second aspect.
In a fifth aspect, the present application provides a chip including a processing circuit and transceiver pins. The transceiver pins and the processing circuit communicate with each other through an internal connection path, and the processing circuit performs the method of the second aspect or any possible implementation of the second aspect, so as to control a receive pin to receive signals and a transmit pin to send signals.
Drawings
Fig. 1 is a schematic diagram of the hardware configuration of an exemplary electronic device;
Fig. 2 is a schematic diagram of the software structure of an exemplary electronic device;
Fig. 3 is an exemplary user interface diagram;
Fig. 4 is an exemplary user interface diagram;
Fig. 5 is an exemplary module interaction diagram;
Fig. 6 is an exemplary module interaction diagram;
Fig. 7 is an exemplary user interface diagram;
Fig. 8 is an exemplary module interaction diagram;
Figs. 9a to 9b are exemplary user interface diagrams;
Fig. 10 is an exemplary interaction diagram between a mobile phone and a cloud;
Fig. 11 is an exemplary user interface diagram;
Fig. 12 is an exemplary user interface diagram;
Fig. 13 is an exemplary user interface diagram;
Fig. 14 is an exemplary module interaction diagram;
Fig. 15 is an exemplary interaction diagram between a mobile phone and a cloud;
Figs. 16a to 16b are schematic diagrams of the user habit results analyzed by the cloud based on the received user habit information;
Fig. 17 is an exemplary interaction diagram between a mobile phone and a cloud;
Fig. 18 is an exemplary module interaction diagram;
Fig. 19 is an exemplary user interface diagram;
Fig. 20 is a schematic structural diagram of an exemplary apparatus.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone.
The terms "first" and "second," and the like, in the description and in the claims of the embodiments of the present application are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first target object and the second target object, etc. are specific sequences for distinguishing different target objects, rather than describing target objects.
In the embodiments of the present application, words such as "exemplary" or "for example" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the words "exemplary" or "for example" is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present application, the meaning of "a plurality" means two or more unless otherwise specified. For example, a plurality of processing units refers to two or more processing units; the plurality of systems refers to two or more systems.
Fig. 1 shows a schematic structural diagram of an electronic device 100. It should be understood that the electronic device 100 shown in fig. 1 is only one example of an electronic device, and that the electronic device 100 may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration of components. The various components shown in fig. 1 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The electronic device 100 may include: a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identity Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller may be, among other things, a neural center and a command center of the electronic device 100. The controller can generate an operation control signal according to the instruction operation code and the time sequence signal to finish the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K via an I2C interface, such that the processor 110 and the touch sensor 180K communicate via an I2C bus interface to implement the touch functionality of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may communicate audio signals to the wireless communication module 160 via the I2S interface, enabling answering of calls via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, audio module 170 and wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of electronic device 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transmit data between the electronic device 100 and a peripheral device. It can also be used to connect an earphone and play audio through the earphone. The interface may further be used to connect other electronic devices, such as AR devices.
It should be understood that the interface connection relationship between the modules illustrated in the embodiments of the present application is only an illustration, and does not limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may also be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor, which processes input information quickly by referring to a biological neural network structure, for example, by referring to a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 100 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications and data processing of the electronic device 100 by executing the instructions stored in the internal memory 121. For example, in the embodiment of the present application, by executing the instructions stored in the internal memory 121, the processor 110 may cause the electronic device 100 to preload part of an application program when the application is invoked, and to keep part of the application program alive after the application is closed.
Alternatively, the internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "horn", is used to convert the audio electrical signal into an acoustic signal. The electronic apparatus 100 can listen to music through the speaker 170A or listen to a handsfree call.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic apparatus 100 receives a call or voice information, it can receive voice by placing the receiver 170B close to the ear of the person.
The microphone 170C, also referred to as a "mic", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can input a voice signal into the microphone 170C by speaking with the mouth close to the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C to achieve a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, perform directional recording, and so on.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal and convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates made of electrically conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic device 100 detects the intensity of the touch operation through the pressure sensor 180A. The electronic device 100 may also calculate the touched position from the detection signal of the pressure sensor 180A. In some embodiments, touch operations that are applied to the same touch position but with different touch operation intensities may correspond to different operation instructions. For example: when a touch operation with an intensity smaller than a first pressure threshold acts on the short message application icon, an instruction for viewing the short message is executed; when a touch operation with an intensity greater than or equal to the first pressure threshold acts on the short message application icon, an instruction for creating a new short message is executed.
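The threshold dispatch in the example above can be made concrete with a brief hypothetical sketch (the threshold value, units, and names are assumptions, not taken from the embodiment):

    // Minimal sketch of pressure-threshold dispatch on the short message icon;
    // FIRST_PRESSURE_THRESHOLD and the instruction names are illustrative only.
    public class PressureDispatch {
        private static final float FIRST_PRESSURE_THRESHOLD = 0.5f; // assumed units

        public static String dispatch(float touchIntensity) {
            return touchIntensity < FIRST_PRESSURE_THRESHOLD
                    ? "VIEW_MESSAGE"   // lighter press: view the short message
                    : "NEW_MESSAGE";   // firmer press: create a new short message
        }
    }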
The gyroscope sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of the electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined through the gyroscope sensor 180B. The gyroscope sensor 180B may be used for image stabilization during shooting. For example, when the shutter is pressed, the gyroscope sensor 180B detects the shake angle of the electronic device 100, calculates the distance the lens module needs to compensate according to the shake angle, and lets the lens counteract the shake of the electronic device 100 through reverse movement, thereby achieving anti-shake. The gyroscope sensor 180B may also be used in navigation and somatosensory gaming scenarios.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude, aiding in positioning and navigation, from barometric pressure values measured by barometric pressure sensor 180C.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip holster using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip cover according to the magnetic sensor 180D. Features such as automatic unlocking upon flipping open can then be set according to the detected opening or closing state of the holster or the flip cover.
The acceleration sensor 180E may detect the magnitude of acceleration of the electronic device 100 in various directions (typically along three axes). The magnitude and direction of gravity can be detected when the electronic device 100 is stationary. The acceleration sensor 180E can also be used to recognize the posture of the electronic device, and is applied in landscape/portrait switching, pedometers, and other applications.
The distance sensor 180F is used to measure distance. The electronic device 100 may measure distance by infrared or laser. In some embodiments, such as in a shooting scenario, the electronic device 100 may use the distance sensor 180F to measure distance for fast focusing.
The proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 100 emits infrared light outward through the light emitting diode and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100; when insufficient reflected light is detected, the electronic device 100 may determine that there is no object nearby. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear for a call, so as to automatically turn off the screen to save power. The proximity light sensor 180G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
The ambient light sensor 180L is used to sense the ambient light level. Electronic device 100 may adaptively adjust the brightness of display screen 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touches.
The fingerprint sensor 180H is used to collect fingerprints. The electronic device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, application-lock access, fingerprint photographing, fingerprint call answering, and so on.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 implements a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, the electronic device 100 heats the battery 142 when the temperature is below another threshold, to avoid an abnormal shutdown of the electronic device 100 caused by low temperature. In other embodiments, when the temperature is lower than a further threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown due to low temperature.
The touch sensor 180K is also referred to as a "touch panel". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100, different from the position of the display screen 194.
The bone conduction sensor 180M can acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire the vibration signal of the vibrating bone mass of the human vocal part. The bone conduction sensor 180M may also contact the human pulse to receive a blood pressure pulsation signal. In some embodiments, the bone conduction sensor 180M may also be disposed in a headset, integrated into a bone conduction headset. The audio module 170 may parse out a voice signal based on the vibration signal of the vocal-part bone mass acquired by the bone conduction sensor 180M, so as to implement a voice function. The application processor can parse out heart rate information based on the blood pressure pulsation signal acquired by the bone conduction sensor 180M, so as to implement a heart rate detection function.
The keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues, as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also produce different vibration feedback effects for touch operations applied to different areas of the display screen 194. Different application scenarios (such as time reminding, receiving information, alarm clock, game, and the like) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. The SIM card can be brought into and out of contact with the electronic device 100 by being inserted into or pulled out of the SIM card interface 195. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a standard SIM card, and the like. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards, as well as with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 employs an eSIM, that is, an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present application takes an Android system with a hierarchical architecture as an example, and exemplarily illustrates a software structure of the electronic device 100.
Fig. 2 is a block diagram of a software structure of the electronic device 100 according to the embodiment of the present application.
The layered architecture of the electronic device 100 divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 2, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 2, the application framework layer may include a window manager, a view system, an AI (Artificial Intelligence) service, a code scanning service, a Media service (Media Server), an Audio service (Audio Server), a Camera service (Camera Server), and the like.
The window manager is used for managing window programs. The window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, and the like.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The AI service is used for AI identification. For example, the AI service may perform AI recognition on images captured by a camera to identify people in the images. The AI service may further perform AI recognition on the image captured by the camera to recognize an object, a scene, etc. in the image, and acquire image processing parameters corresponding to the object, the scene, etc. in the image to instruct the view system to process (e.g., render) the image based on the image processing parameters.
The code scanning service is used for identifying the graphic code in the image collected by the camera.
The media service is used for managing and processing audio data and image data, such as controlling the data flow of the audio data and image data and writing the audio stream and image stream into an MP4 file. It should be noted that, in the description of the embodiments of the present application, the audio data and the image data may also be referred to as an audio stream and an image stream, respectively, or as audio information and image information; the present application is not limited thereto.
The audio service is used to process the audio stream accordingly. The camera service is used for carrying out corresponding processing on the image stream.
The system library and runtime layer comprises a system library and an Android Runtime. The system library may include a plurality of functional modules, for example: a browser kernel, a 3D graphics library (e.g., OpenGL ES), a font library, and the like. The browser kernel is responsible for interpreting web page syntax (e.g., HTML, an application of the standard generalized markup language, and JavaScript) and rendering (displaying) web pages. The 3D graphics library is used for implementing three-dimensional graphics drawing, image rendering, composition, layer processing, and the like. The font library is used for implementing the input of different fonts. The Android runtime includes a core library and a virtual machine, and is responsible for scheduling and managing the Android system. The core library comprises two parts: one part is the functions that the java language needs to call, and the other part is the core library of Android. The application layer and the application framework layer run in the virtual machine. The virtual machine executes the java files of the application layer and the application framework layer as binary files. The virtual machine is used for performing functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
It is to be understood that the components contained in the system framework layer, the system library and the runtime layer shown in fig. 2 do not constitute a specific limitation of the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components.
The HAL layer is an interface layer between the operating system kernel and the hardware circuitry. The HAL layer includes, but is not limited to: an Audio hardware abstraction layer (Audio HAL) and a Camera hardware abstraction layer (Camera HAL). The audio hardware abstraction layer is used for processing the audio stream, for example, performing noise reduction and directional enhancement, and the camera hardware abstraction layer is used for processing the image stream.
The kernel layer is the layer between the hardware and the software layers described above. The kernel layer at least comprises a display driver, a camera driver, an audio driver, and a sensor driver. The hardware may include devices such as a camera, a display, a microphone, a processor, and a memory.
Fig. 3 is an exemplary user interface diagram. Referring to FIG. 3, display interface 301 illustratively includes one or more controls therein. Controls include, but are not limited to: network controls, power controls, application icon controls, and the like. Exemplary application icon controls include, but are not limited to: video application icon controls, weather application icon controls, settings application icon controls, camera application icon controls 302, and the like. In an embodiment of the application, the user may click on the camera application icon control 302.
Referring to fig. 4, the cell phone illustratively displays a camera preview interface 401 in response to receiving the user's click on the camera application icon control 302. Illustratively, the camera preview interface 401 includes, but is not limited to: a camera preview window 402 and one or more shooting options. The shooting options include, but are not limited to: aperture, night scene, portrait, photograph, video recording 403, professional, and the like. Illustratively, the camera preview window is used to display images captured by the camera. In the embodiment of the present application, the user selecting the video recording option 403 is taken as an example, that is, the camera application is in video recording mode.
Illustratively, the recording process of the electronic device may be divided into two parts. The first part is a creation process, in which the camera application optionally calls the media service and the media service calls at least one service or module to create corresponding instances; this may also be understood as a preparation process, as shown in fig. 5. The second part is a recording process, that is, the process in which each instance processes the acquired data (audio or images), as shown in fig. 6. The creation process mainly consists of each module creating a corresponding instance. The recording process is the processing of data (including audio streams and image streams) by each instance.
First part: the creation process
1. Referring to fig. 5, illustratively, the camera application starts and invokes the media service to cause the media service to create a corresponding instance. Specifically, as shown in fig. 3, after detecting that the user clicks the camera application icon control, the mobile phone starts the camera application. As shown in fig. 4, the cell phone displays a camera application preview interface 401.
Illustratively, after the camera application is started, a Media Recorder (Media recording) instance is created in the application framework layer through an interface with the application framework layer to start the recording process. The Media Recorder instance instructs the Media service to create the corresponding instance. It should be noted that "example" described in the embodiments of the present application may also be understood as program code or process code running in a process for performing corresponding processing on received data (e.g. an audio stream or an image stream). It should be noted that, in the description of the embodiments of the present application, a camera application is taken as an example for illustration, and in other embodiments, the application may also be other applications having a shooting function, for example, a camera function in a chat application, and the present application is not limited thereto.
Illustratively, the Media service creates instances corresponding to audio and images in response to an indication of a Media Recorder instance. Specifically, the media service creates a Stagefright Recorder (recording process) instance. Wherein the Stagefright Recorder instance is used to manage the initialization of audio and image data and the data flow.
The Stagefright Recorder instance creates a Camera Source instance, an Audio Record instance, a Video Encoder instance, an Audio Encoder instance, and an Mpeg4Writer instance. In the embodiment of the present application, only the creation of a file in the MP4 format is taken as an example; in other embodiments, other video formats may be generated and corresponding instances created.
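For illustration only, the following is a minimal sketch of the application-side call that sets this creation chain in motion, written against Android's public android.media.MediaRecorder API (together with the older android.hardware.Camera API). The camera handle and output path are illustrative assumptions; internally, prepare() is roughly where the framework asks the media service to build the instances described above.

    import android.hardware.Camera;
    import android.media.MediaRecorder;
    import java.io.IOException;

    public class RecordingStarter {
        // Minimal sketch: an application-layer MediaRecorder configured for
        // MP4 recording. prepare()/start() roughly correspond to the creation
        // and recording phases described in this embodiment.
        public MediaRecorder startRecording(Camera camera, String outputPath)
                throws IOException {
            camera.unlock(); // hand the camera over to the recorder
            MediaRecorder recorder = new MediaRecorder();
            recorder.setCamera(camera);
            recorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
            recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
            recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
            recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
            recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
            recorder.setOutputFile(outputPath);
            recorder.prepare(); // pipeline instances are set up here
            recorder.start();   // audio and image streams begin to flow
            return recorder;
        }
    }

The encoder and format choices here mirror the MP4 example above (an Mpeg4Writer receiving encoded image frames and audio frames); other output formats would lead to other writer instances being created.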
2. The media service instructs the camera service and the audio service to create corresponding instances.
Illustratively, the Camera Source instance instructs the Camera service to create the Camera instance, and the Audio Record instance instructs the Audio service to create the Record Thread instance. Accordingly, the Camera service creates a Camera instance and the audio service creates a Record Thread instance.
3. The camera service instructs the camera hardware abstraction layer to create a corresponding instance, and the audio service instructs the audio hardware abstraction layer to create a corresponding instance.
Illustratively, the Camera instance instructs the Camera hardware abstraction layer to create a Camera 3Device (Camera Device, where the number 3 represents the version number of the Camera service, updateable with the version) instance, and the Record Thread instance instructs the audio hardware abstraction layer to create an Input Stream instance.
4. The camera hardware abstraction layer calls a camera driver, and the audio hardware abstraction layer calls a microphone driver. Illustratively, the Camera 3Device instance triggers the Camera drive to start, and the Input Stream instance triggers the microphone drive to start.
5. The camera driver calls the camera to collect the image stream, and the microphone driver calls the microphone to collect the audio stream.
Second part: the recording process
1. Referring to fig. 6, the camera illustratively outputs a captured image stream to the camera driver and the microphone outputs a captured audio stream to the microphone driver.
2. The camera driver outputs the image stream and the corresponding system time to the camera hardware abstraction layer, and the microphone driver outputs the audio stream to the audio hardware abstraction layer. Illustratively, the Camera 3Device instance obtains the image stream input by the camera driver, and the Input Stream instance obtains the audio stream input by the microphone driver.
3. The camera hardware abstraction layer outputs the acquired image stream to the camera service, and the audio hardware abstraction layer outputs the acquired audio stream to the audio service.
Illustratively, the Camera instance obtains the image stream input by the Camera 3Device instance, and the Record Thread instance obtains the audio stream input by the Input Stream instance.
4. The camera service outputs each image in the image stream to the media service, and the audio service outputs each audio stream to the media service.
Illustratively, the Camera Source instance obtains the images of the image stream input by the Camera instance, and the Audio Record instance obtains the audio stream input by the Record Thread instance.
5. The media service generates an MP4 file based on the acquired plurality of images and the plurality of audio streams.
Illustratively, the Camera Source instance outputs the acquired plurality of images to the Video Encoder instance, and the Audio Record instance outputs the acquired plurality of audio streams to the Audio Encoder instance.
The Video Encoder instance encodes the plurality of images to generate corresponding image frames and outputs the plurality of image frames to the Mpeg4Writer instance. Likewise, the Audio Encoder instance encodes the plurality of audio streams to generate corresponding audio frames and outputs the plurality of audio frames to the Mpeg4Writer instance.
Illustratively, the Mpeg4Writer instance generates an MP4 file based on the acquired image frames and audio frames. The MP4 file includes image data (i.e., a plurality of image frames) and audio data (i.e., a plurality of audio frames). When the MP4 file is played on any platform or player, the player decodes the image frames and audio frames according to the MPEG-4 standard to obtain the original images corresponding to the image frames and the original audio corresponding to the audio frames, and then plays the decoded images and audio.
Illustratively, the media service outputs a plurality of images to the camera application. The camera application displays the images input by the media service in the camera preview window 402.
Fig. 7 is an exemplary user interface diagram. Referring to fig. 7 (1), illustratively, during the preview process, the user clicks on the AI option 404 to initiate the AI service. Referring to fig. 7 (2), the handset illustratively initiates the AI service in response to the received user operation. After the AI service is started, the mobile phone can perform AI identification on the image acquired by the camera and perform image processing on the image based on the AI identification result. For example, still referring to fig. 7 (2), after the AI service is started, the AI service identifies the image collected by the camera and recognizes that it includes a person image. The AI service may display an AI recognition result, such as a "portrait" option, in the camera preview window 402 for indicating that the current shooting scene is a portrait scene. The AI service may obtain pre-stored image processing parameters associated with the portrait scene and perform corresponding processing on the image based on the image processing parameters; for example, in this embodiment, after the AI service recognizes that the image includes a portrait, the AI service may perform background blurring on the background other than the portrait. Optionally, a cancel button may be included in the "portrait" option; if the user clicks the cancel button, the AI service cancels the current processing of the image, that is, cancels the background blurring, and restores the preview image to the original image.
FIG. 8 is an exemplary module interaction diagram. Referring to fig. 8, exemplary services or modules such as media service, camera service, and audio service are executed according to the flow described in fig. 6. In response to the received operation of the user clicking the AI option 404, the camera application sends indication information to the media service for indicating that the media service invokes the AI service. Illustratively, the media service invokes (which may also be referred to as loading) the AI service in response to an indication by the camera application.
After the AI service is started, the camera service may output the image stream to the AI service. The AI service may perform AI recognition on the image stream and, after AI processing, output the processed image to the media service. The media service outputs the image to the camera application. The camera application may display the AI-processed image in the camera preview window 402.
It should be noted that, as shown in fig. 7 and fig. 8, the AI service is loaded only after the camera application has started, and there is a response duration between the user's operation of clicking the AI option and the start of the AI service, which may be, for example, 1 s (second). That is, from the user's perspective, there is an interval of 1 s between clicking the AI option 404 and seeing the blurred background of the image in the camera preview window (the time taken by the AI service to process the image and by the interaction between the media service and the camera application is negligible).
The embodiment of the application provides a preloading scheme in which at least one application, or at least one service in an application, can be preloaded based on statistics of user habit information, thereby effectively accelerating the startup of the application or service and improving the user experience.
Fig. 9a is an exemplary user interface diagram. Referring to fig. 9a, for example, the display interface 301 includes one or more controls. Controls include, but are not limited to: network controls, power controls, application icon controls, and the like. Exemplary application icon controls include, but are not limited to: video application icon controls, weather application icon controls, the settings application icon control 302, and the like. In the embodiment of the present application, the user may click on the settings application icon control 302. Referring to fig. 9b, for example, the mobile phone displays the settings interface 303 in response to the received user click operation. One or more options are included in the settings interface 303. Optionally, an account option 304 is included in the settings interface 303. The account option 304 is used to indicate the account that the user is logged into. The user may view account information by clicking on the account option 304. For example, after the user logs in to the Honor account, the cloud can record the relevant information of the mobile phone through the Honor account. For example, in the embodiment of the application, after the mobile phone sends the user habit information to the cloud, the cloud may associate the received user habit information with the Honor account of the mobile phone.
The scenario in fig. 7 is still taken as an example below. Referring to fig. 10, for example, the mobile phone may send the user habit information to the cloud. Illustratively, the user habit information is used to describe the user's habits when using a certain application. For example, as described above, the user starts the AI service while using the camera application. In the user habit information that the mobile phone sends to the cloud to indicate the user's use of the camera application, the loaded services include, but are not limited to: the media service, the camera service, the AI service, the camera hardware abstraction layer, the audio hardware abstraction layer, the camera driver, the microphone driver, and the like.
Optionally, the mobile phone may send the user habit information to the cloud after the camera application has been used, that is, after the camera application is closed or the mobile phone has switched to another application.
Optionally, the mobile phone may also send the user habit information to the cloud during the use of the camera application. For example, after the AI service is started, the mobile phone may send the user habit information to the cloud. For another example, if the user starts the flash service, the mobile phone may send user habit information to the cloud, where the user habit information is used to indicate that, when the user uses the camera application, the loaded services include, but are not limited to: the media service, the camera service, the AI service, the flash service, the camera hardware abstraction layer, the audio hardware abstraction layer, the camera driver, the microphone driver, and the like.
Optionally, the mobile phone may send the user habit information to the cloud, and may also send the account information of the user to the cloud. And the cloud end receives the user habit information and the account information sent by the mobile phone. The cloud may determine a correspondence between the user habit information and the user account based on the account information. For example, in the embodiment of the application, the mobile phone may send the user account and the user habit information to the cloud after the user uses the camera application each time. In the embodiment of the present application, it is assumed that the AI service is started every time the user uses the camera, that is, in the user habit information sent by the mobile phone every time, the AI service is included in the service loaded in the camera application scenario.
The scenario shown in fig. 7 is a service call flow of a camera application scenario. The following describes a service invocation flow in a payment scenario, taking a scenario in which a camera is invoked in the payment scenario as an example. Fig. 11 to 13 are scene diagrams illustrating invoking a camera in an exemplary payment scene. Referring to fig. 11, a wallet application icon control 1102 is illustratively included in display 1101. The description of the other controls in the display interface 1101 can refer to the related description of fig. 3, and will not be repeated here.
Illustratively, the user clicks on the wallet application icon control 1102. Referring to fig. 12, the mobile phone starts a wallet application in response to a received user operation, and displays a wallet application interface 1201. Wallet application interface 1201 includes, but is not limited to: scan code option 1202, service option box 1203. The service options box 1203 includes, but is not limited to, one or more services. Services include, but are not limited to: payment services, ride services, key services, card package services, and the like.
The user may click the code scanning option 1202. Referring to fig. 13, the mobile phone displays a scanning interface 1301 in response to the received user operation. The graphic code acquired by the camera is displayed in the scanning interface 1301. The code scanning service can identify the graphic code collected by the camera.
In connection with fig. 13, fig. 14 is a schematic diagram illustrating module interaction. Referring to fig. 14, illustratively, the wallet application starts and invokes the media service in response to the user clicking on the scan option 1202. The media service calls the camera service, and the camera service calls the camera hardware abstraction layer. The camera hardware abstraction layer calls a camera driver, and the camera driver calls the camera. And, the wallet application invokes the code scanning service. The specific calling process can refer to the related description in fig. 5, and is not described herein again.
Similar to the description in fig. 7, after the user clicks on the code scan option, the wallet application invokes the code scan service in response to the received user action. Accordingly, there may be a certain response delay from the user clicking to the display of the scanning interface, which may be 1s, for example.
Referring to fig. 15, in an exemplary embodiment, the mobile phone sends the user habit information and the user account information to the cloud. The user habit information is used to indicate that, when the user uses the code scanning application, the loaded services include: the code scanning service, the media service, the camera hardware abstraction layer, the camera driver, and the like.
The cloud receives the user habit information and the user account information sent by the mobile phone, and associates the received user habit information with the user account information. That is, the cloud has now received two pieces of user habit information, both of which are associated with the same user account information.
It should be noted that fig. 10 and fig. 15 only schematically illustrate one interaction between the mobile phone and the cloud. In the embodiment of the application, after the user uses the camera application or the code scanning application, the mobile phone sends the services loaded in the camera application scenario or the code scanning application scenario to the cloud. For example, the user may use the camera application 10 times per day, and 8 of those 10 uses may trigger the AI function. Correspondingly, after the camera application is closed each time, the mobile phone sends user habit information to the cloud to indicate the related services started by the camera application (which can also be understood as the set of started services). That is, the mobile phone sends 10 pieces of user habit information corresponding to the camera application to the cloud within one day. Among those 10 pieces of user habit information, 8 indicate that the camera application called a plurality of services including the AI service during startup; it can also be understood that there is an intersection among the service sets called by the application as indicated in the user habit information, where the intersection is the plurality of services including the AI service.
For example, the cloud may periodically count the user habit information. As described above, the cloud associates the received plurality of user habit information with the user account. Then, the cloud may analyze the user habit information received in a cycle (for example, 3 days, which may be set according to actual needs, and is not limited in this application) to obtain usage habits of the user corresponding to the user account on different applications, so as to further obtain the service items that need to be preloaded in each application scenario.
For example, fig. 16a to 16b show the user habit results analyzed by the cloud based on the received user habit information. Referring to fig. 16a, as described above, the cloud optionally receives a plurality of pieces of user habit information about the user's use of the camera application in a period. The cloud can analyze the plurality of pieces of user habit information corresponding to the camera application in the period to obtain the usage probability of each service in the camera application scenario. As shown in fig. 16a, for this user, in the scenario where the user uses the camera application within the period, the probability of the media service being invoked is 98%, the probability of the camera service being invoked is 87%, the probability of the camera hardware abstraction layer being invoked is 87%, the probability of the camera driver being invoked is 87%, the probability of the audio service being invoked is 50%, the probability of the audio hardware abstraction layer being invoked is 59%, the probability of the audio driver being invoked is 50%, the probability of the AI service being invoked is 60%, the probability of service a being invoked is 20%, the probability of service c being invoked is 5%, the probability of service b being invoked is 10%, the probability of service d being invoked is 4%, and the probability of service e being invoked is 2%. It should be noted that, in the embodiment of the present application, the probability corresponding to a service (also referred to as the probability of the service being invoked, or the usage probability) is optionally the ratio of the number of times the service is invoked to the number of times the application is invoked. For example, if the camera application is invoked 100 times in a cycle (for example, three days), the media service is invoked 98 times, and the AI service is invoked 60 times, then the probability of the media service being invoked is 98% and the probability of the AI service being invoked is 60%. The above numerical values are merely illustrative examples, and the present application is not limited thereto.
Referring to fig. 16b, for example, the cloud analyzes a plurality of pieces of user habit information corresponding to the wallet application in a period to obtain the usage probability of each service in the wallet application scenario. As shown in fig. 16b, for this user, in the scenario where the user uses the wallet application within the period, the probability of the media service being invoked is 80%, the probability of the camera hardware abstraction layer being invoked is 80%, the probability of the camera driver being invoked is 80%, the probability of the code scanning service being invoked is 80%, the probability of the ride service being invoked is 20%, the probability of the ride code service being invoked is 20%, the probability of the payment service being invoked is 90%, the probability of the graphic code service being invoked is 90%, and the probability of service m being invoked is 10%. It should be noted that the names, numbers, and corresponding probabilities of the services shown in the embodiments of the present application are merely illustrative examples, and the present application is not limited thereto.
After the cloud acquires the services to be loaded and the corresponding usage probability (also referred to as the probability of being invoked) in each application scenario of the user, the cloud may detect whether the usage probability of each service corresponding to each application is greater than or equal to a set threshold. For example, the set threshold may be 60%; this is merely an illustrative example, may be set according to actual requirements, and is not limited by the present application. Optionally, in this embodiment of the present application, a service whose usage probability reaches the set threshold may be referred to as a preload service. For example, referring to fig. 16a, the cloud performs statistics on the probability of each service in the camera application scenario, and determines that the media service, the camera service, the camera hardware abstraction layer, the camera driver, and the AI service are the preload services corresponding to the camera application. The cloud counts the probability of each service in the wallet application scenario, and determines that the media service, the camera hardware abstraction layer, the camera driver, the code scanning service, the payment service, and the graphic code service are the preload services corresponding to the wallet application.
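For illustration only, the following is a minimal sketch in Java of the cloud-side statistic described above, under stated assumptions: each habit record is the set of services loaded during one run of the application, a service's usage probability is the number of runs in which it was invoked divided by the total number of runs, and services whose probability reaches the threshold (e.g., 60%) form the preload set. The class and method names are hypothetical.

    import java.util.*;

    public class PreloadAnalyzer {
        // Sketch: derive the preload service set for one application scenario
        // from a period's worth of habit records. Each record is the set of
        // services loaded during one run of the application.
        public static Set<String> preloadSet(List<Set<String>> habitRecords,
                                             double threshold) {
            Map<String, Integer> counts = new HashMap<>();
            for (Set<String> loadedServices : habitRecords) {
                for (String service : loadedServices) {
                    counts.merge(service, 1, Integer::sum);
                }
            }
            Set<String> preload = new HashSet<>();
            int totalRuns = habitRecords.size();
            for (Map.Entry<String, Integer> entry : counts.entrySet()) {
                // probability = times the service was invoked / times the
                // application was invoked, as defined above
                double probability = (double) entry.getValue() / totalRuns;
                if (probability >= threshold) {
                    preload.add(entry.getKey());
                }
            }
            return preload;
        }
    }

With the figures above (the camera application invoked 100 times, the AI service invoked 60 times), this computes a probability of 60% for the AI service, so at a 60% threshold the AI service enters the preload set.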
Illustratively, the cloud sends the counted pre-loading service corresponding to each application to the mobile phone. For example, referring to fig. 17, the cloud sends first preloaded service information and second preloaded service information to the mobile phone, where the first preloaded service information is used to indicate a preloaded service (may also be referred to as a preloaded service set) corresponding to a camera application scenario, and the second preloaded service information is used to indicate a preloaded service corresponding to a wallet application scenario. It should be noted that, in the embodiment of the present application, only the camera application and the wallet application are taken as examples for description. In other embodiments, other application scenarios used by the mobile phone may also refer to the schemes in the above embodiments, and a description of the present application is not repeated.
For example, in the embodiment of the application, the mobile phone receives the preload services corresponding to the camera application scenario and the preload services corresponding to the wallet application sent by the cloud, and records them. Optionally, in this embodiment of the application, the identification information of the services corresponding to each application scenario, which the mobile phone sends to the cloud, may be, for example, the service names; this application is not limited in this respect. Correspondingly, the cloud optionally returns the identification information, such as the service names, of the preload services to the mobile phone. The mobile phone can record the identification information of the preload services corresponding to each application.
For example, the mobile phone may record that the preload services corresponding to the camera application scenario include, but are not limited to: the media service, the camera service, the camera hardware abstraction layer, the camera driver, and the AI service. The preload services corresponding to the wallet application scenario include, but are not limited to: the media service, the camera hardware abstraction layer, the camera driver, the code scanning service, the payment service, and the graphic code service.
For example, in the embodiment of the present application, after the mobile phone starts the corresponding application, the mobile phone may invoke the preload services in advance based on the recorded preload services corresponding to the application scenario, so as to improve the response speed of the application. By way of example, still referring to fig. 3, the user clicks on the camera application icon control 302. As shown in fig. 4, the mobile phone displays the camera preview interface 401 in response to the received user operation; for the specific description, refer to the related contents above, which are not repeated here.
Fig. 18 is a schematic diagram of exemplary module interaction, shown in conjunction with fig. 3 and fig. 4. Referring to fig. 18, for example, after the camera application is started, the camera application may look up the preload services corresponding to the camera application scenario stored on the mobile phone. For example, the preload services include, but are not limited to: the media service, the camera hardware abstraction layer, the camera driver, and the AI service. Accordingly, the camera application invokes the preload services. That is, the AI service has already finished loading before the user clicks the AI option 404. Referring to fig. 19 (1), illustratively, the user clicks on the AI option 404. Referring to fig. 19 (2), in response to the received user operation, the mobile phone performs AI recognition and AI processing on the image through the AI service. For the detailed description, refer to the above; it is not repeated here. It should be noted that, in the embodiment of the present application, since the mobile phone has preloaded the AI service, that is, the loading is completed before the user clicks the AI option, the AI service can directly perform AI recognition on the image when the user clicks it. That is, from the user's perspective, the interval (i.e., the response duration) from when the user clicks the AI option until the image is AI-processed may be only 200 ms. These numerical values are merely exemplary, and the present application is not limited thereto. A sketch of this handset-side preloading logic follows.
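For illustration only, the following is a minimal sketch of the handset-side bookkeeping described above, under stated assumptions: the mobile phone keeps a mapping from application name to the preload service set received from the cloud, updates it whenever new preload service information arrives, and loads each recorded service as soon as the application starts. All names are hypothetical, and loadService is a placeholder for starting the real service process or HAL instance.

    import java.util.*;

    public class PreloadManager {
        // Recorded preload service sets, keyed by application name.
        private final Map<String, List<String>> preloadByApp = new HashMap<>();

        // Called when the cloud sends (or periodically updates) the preload
        // services for an application scenario.
        public void onPreloadInfoReceived(String appName, List<String> services) {
            preloadByApp.put(appName, services);
        }

        // Called right after an application starts: load each recorded service
        // before the user asks for it, e.g. the AI service is already loaded
        // before the AI option is clicked.
        public void onAppStarted(String appName) {
            for (String service
                    : preloadByApp.getOrDefault(appName, Collections.emptyList())) {
                loadService(service);
            }
        }

        private void loadService(String service) {
            // Placeholder: in a real system this would start or bind the
            // corresponding service process or hardware abstraction instance.
            System.out.println("preloading " + service);
        }
    }

The same onPreloadInfoReceived path also covers the periodic updates discussed later: replacing the stored list for an application amounts to deleting the old preload services and storing the new ones.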
Similarly, for the wallet application, the mobile phone starts the wallet application in response to the received operation of the user clicking the wallet application icon. After the wallet application is started, a plurality of services including the code scanning service are preloaded based on the preload services corresponding to the wallet application scenario stored on the mobile phone. Correspondingly, after the user clicks the code scanning option, the code scanning service can immediately respond to the user operation and display the scanning interface, thereby improving the response speed of the application service.
The preloading schemes in the above embodiments are all illustrated by taking a single application as an example. That is, after an application is started, the application may preload at least one service corresponding to the application to improve its response speed. The embodiment of the application further provides a linkage preloading scheme to improve the response speed in application switching scenarios. For example, the user may tend to switch to a chat application after taking a picture using the camera application, and may share the photos taken by the camera application in the chat application. For example, in the process of sharing photos, the chat application optionally needs to call at least one service of the camera application (i.e., service set A). In this scenario, the mobile phone may send user habit information to the cloud, where the user habit information is used to indicate the user habit information corresponding to the camera application scenario (i.e., the service set called during the running of the camera application) and the user habit information corresponding to the chat application (i.e., the service set called during the running of the chat application). Optionally, the user habit information may also be used to indicate the switching relationship between the camera application and the chat application. That is, based on the user habit information, the cloud may determine that the user often switches to the chat application after using the camera application. In addition, it can further be determined, based on the user habit information, that after the mobile phone switches from the camera application to the chat application, the chat application calls a part of the services of the camera application (i.e., service set A).
For example, the cloud may receive a plurality of pieces of user habit information sent by the mobile phone in a period. For example, the cloud may determine the switching relationship between different applications based on the plurality of pieces of user habit information. For example, according to the application scenario statistics, the user switches to the chat application in 80% of the cases after using the camera application within three days. In addition, during the running of the chat application, some of the services it calls are services that are also called during the running of the camera application, and the invocation frequency of these services exceeds the set threshold. That is to say, in the process of determining the associated services that need to be preloaded between applications, the cloud in the embodiment of the present application may determine whether services are associated services that need to be preloaded based on the following conditions:
1) In the period, the frequency of switching from application A to application B is greater than a set threshold (which may be set according to actual requirements, and is not limited in this application).
2) The services called by application B partially overlap the services called by application A, and, for application B, the usage probabilities of the overlapping services are all greater than a set threshold (e.g., the above-mentioned threshold for preload services, such as 60%).
When a part of the services of application A (for example, service set A) satisfies the above conditions, that part of the services is also included in the preload service set corresponding to the application A scenario. Correspondingly, the preload services corresponding to the camera application scenario that the cloud sends to the mobile phone include service set A as well as other services, such as the AI service described above. A sketch of this linkage check follows.
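For illustration only, the following is a minimal sketch of the two conditions above, under stated assumptions: switchProbability is the fraction of runs of application A that were followed by a switch to application B within the period, and usageProbabilityInB gives each service's usage probability within application B. All names and parameters are hypothetical.

    import java.util.*;

    public class LinkagePreload {
        // Sketch: compute the services of application A that should be added
        // to A's preload set because application B, which the user often
        // switches to after A, also calls them frequently.
        public static Set<String> linkedPreloadSet(
                Set<String> servicesOfA, Set<String> servicesOfB,
                double switchProbability, double switchThreshold,
                Map<String, Double> usageProbabilityInB, double serviceThreshold) {
            Set<String> linked = new HashSet<>();
            // Condition 1): the A -> B switching frequency must exceed the
            // set threshold.
            if (switchProbability <= switchThreshold) {
                return linked;
            }
            // Condition 2): services common to A and B whose usage probability
            // in B reaches the preload threshold (e.g., 60%).
            for (String service : servicesOfB) {
                if (servicesOfA.contains(service)
                        && usageProbabilityInB.getOrDefault(service, 0.0)
                           >= serviceThreshold) {
                    linked.add(service);
                }
            }
            return linked;
        }
    }

In the camera/chat example, linkedPreloadSet would return service set A, which the cloud then merges into the preload services for the camera application scenario.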
For example, after the camera application is started, services such as the service set a and the AI service may be loaded in advance. Thus, when the user takes a picture using the camera application and wants to share through the chat application, the user opens the chat application and triggers a sharing function (e.g., clicks a sharing button). The chat application can respond to the received user operation, call the service set A, and execute the picture sharing process through each service in the service set A. Because the service set A is pre-loaded, the response time from the user triggering the sharing function of the chat application to the popping up of the sharing interface is shorter, the time for the user to wait for the service to be started can be effectively shortened, and the use experience of the user is improved.
In the embodiment of the present application, only one user is taken as an example for description. In other embodiments, the technical solutions in the embodiments of the present application may be applied to any user. For example, assuming the user is user A, the cloud may determine the preload services corresponding to each application scenario used by user A based on the user habit information sent by user A. User B optionally uses a camera application and a wallet application, and the mobile phone of user B can send the user habit information of user B, that is, the user habit information corresponding to the camera application scenario (i.e., the services loaded in the camera application scenario) and the user habit information corresponding to the wallet application scenario (i.e., the services loaded in the wallet application scenario), to the cloud. The cloud can determine the preload services corresponding to the camera application scenario and the preload services corresponding to the wallet application scenario based on the received user habit information of user B. Optionally, the preload services corresponding to the camera application scenario of user B may be the same as or different from those of user A, and the preload services corresponding to the wallet application scenario of user B may be the same as or different from those of user A. For example, when using the camera application, user B rarely uses the AI service and often uses the beauty service. Then, among the service probabilities corresponding to the camera application determined by the cloud, the usage probability of the AI service is smaller than the set threshold, and that of the beauty service is greater than the set threshold. Therefore, the preload services corresponding to the camera application of user B include, but are not limited to, the media service, the beauty service, and the like, and do not include the AI service. For example, the cloud sends the preload services corresponding to the camera application scenario of user B to the mobile phone of user B. When user B triggers the camera application, the camera application starts and loads the corresponding services, such as the media service, the camera service, and the beauty service, based on the preload services sent by the cloud. Correspondingly, when the user clicks the beauty option, because the beauty service has been preloaded, the time the user waits for the service to start can be effectively shortened, improving the user experience.
In a possible implementation manner, the cloud can also aggregate the habits of a plurality of users. For example, the cloud acquires the user habit information corresponding to the camera application sent by each of 100 users in a period (e.g., three days), say 10 pieces per user (that is, the cloud receives 1000 pieces of user habit information corresponding to the camera application in total). The cloud may analyze the user habits in the camera application scenario of each user; the analysis process is as described above and is not repeated here. For example, the cloud may further analyze the usage probability of each service of the camera application by combining the user habit information of the 100 users; for the specific analysis, refer to the above description. In this way, the cloud may obtain the usage probability of each service of the camera application across users, so as to determine a preload service set corresponding to the camera application (to distinguish it from the preload service set of a single user, the preload service set in this example may be referred to as a global preload service set). For example, the analysis result is still as shown in fig. 16a, that is, most of the 100 users call the media service, the camera hardware abstraction layer, the camera driver, the AI service, etc. when using the camera application. The cloud may record this analysis result (the global preload service set corresponding to the camera application). For example, when a new user registers with the cloud, the cloud has not yet obtained that user's habit information, so it may send the global preload service set to the new user. Then, after the mobile phone of the new user starts the camera application in response to the received user operation, the corresponding services may be loaded based on the received global preload service set. It should be noted that, because this set is not derived from the new user's own habits, the mobile phone of the new user can count the services called by the camera application during actual use and send the resulting user habit information to the cloud. The cloud can then obtain the preload service set in the camera application scenario corresponding to this user according to that user habit information, and send the updated preload service set to the mobile phone of the new user. Thereafter, when the mobile phone of the new user starts the camera application, the corresponding services may be loaded based on the newly acquired preload service set.
In another possible implementation manner, as described above, the cloud may record the correspondence between the user account and the preload services in each application scenario. Optionally, if the user logs in to the account using another mobile phone, that mobile phone may obtain, from the cloud, the preload services in each application scenario corresponding to the user account. Based on the received preload services for each application scenario, the mobile phone can invoke the preload services corresponding to an application after starting it in response to the received user operation. For details not described here, refer to the above. Optionally, the mobile phone may send a request message to the cloud to obtain the preload service information. Optionally, after the mobile phone logs in to the user account, the cloud may actively send the preload service information to the mobile phone; this is not limited in the present application.
In another possible implementation manner, the cloud may periodically update the recorded correspondence between each user account and the preload services of the different applications. For example, in a first period, the cloud determines, based on a plurality of pieces of user habit information sent by user A, that the preload services corresponding to the camera application include: the media service, the camera service, the camera hardware abstraction layer, the camera driver, the AI service, and the like. Correspondingly, the cloud can send indication information to the mobile phone of user A, indicating that the media service, the camera service, the camera hardware abstraction layer, the camera driver, and the AI service are to be loaded when the camera application is started. In response to the received indication information, the mobile phone can start the camera application after receiving the user's operation, and load the media service, the camera service, the camera hardware abstraction layer, the camera driver, and the AI service in the process of starting the camera application. For example, in a second period, the cloud determines, based on the plurality of pieces of user habit information sent by user A, that the preload services corresponding to the camera application include: the media service, the camera hardware abstraction layer, the camera driver, the flash service, the filter service, and the like. That is, in the second period, user A does not use the AI service every time (or in most cases) the camera application is used, but instead starts the flash service and the filter service by clicking the flash option and the filter option. Correspondingly, the cloud can send indication information to the mobile phone of user A, indicating that the media service, the camera service, the camera hardware abstraction layer, the camera driver, the flash service, and the filter service are to be loaded when the camera application is started. In response to the received indication information, the mobile phone can start the camera application after receiving the user's operation, and load the media service, the camera hardware abstraction layer, the camera driver, the flash service, the filter service, and the like in the process of starting the camera application. For example, when the user clicks the filter option, the started filter service may perform corresponding processing on the image, such as a rendering operation that adds a filter. In other embodiments, when the user starts the camera application, the camera application may load a plurality of preload services including the filter service and automatically add a filter to the image. Optionally, in this embodiment of the application, the mobile phone may store the preload services (including the keep-alive services in the following embodiments) corresponding to the different applications in a storage, for example, in a memory. For example, after receiving the preload services of the camera application sent by the cloud, the mobile phone may update the stored preload services corresponding to the camera application, for example, delete the stored preload services and store the new ones. Within a period, the mobile phone can load the corresponding services based on the updated preload services corresponding to the camera application.
It should be noted that, in the embodiment of the present application, although the camera application starts the AI service, the AI service is actually only preloaded and does not yet perform AI processing on the image. The loaded AI service performs processing on the image only after receiving the user's click on the AI option. In other embodiments, after the camera application determines that the preload services include the AI service, the image may also be processed by the AI service as soon as the AI service is loaded. That is to say, the AI service changes from a default-off state to a default-on state: after the user opens the camera application, the camera application can start the AI service and perform AI processing on the image through the AI service.
The embodiment of the application further provides a keep-alive scheme for application services, which can selectively retain some services of a closed application according to the user's usage habits. In the embodiment of the application, the cloud may determine the keep-alive services corresponding to each application scenario based on the user habit information sent by the mobile phone.
In one example, the cloud may treat a service whose usage probability is greater than a set threshold (e.g., 60%; this threshold may be the same as or different from the one set for preloading services, and this application is not limited in this respect) as a keep-alive service. For example, referring to fig. 16a and taking the camera application scenario as an example, the cloud may determine that the keep-alive services corresponding to the camera application scenario are the media service, the camera hardware abstraction layer, the camera driver, the AI service, and the like. The cloud sends the keep-alive services corresponding to the camera application scenario to the mobile phone. It should be noted that the cloud may send a single piece of indication information to the mobile phone that includes both the preloading services and the keep-alive services corresponding to the camera application scenario. Alternatively, the cloud may send first indication information and second indication information to the mobile phone, where the first indication information includes the preloading services corresponding to the camera application scenario, and the second indication information includes the keep-alive services corresponding to the camera application (one service or multiple services; this application is not limited in this respect). In response to receiving the keep-alive services corresponding to the camera application scenario, the mobile phone stores the correspondence between the camera application and the keep-alive services. If the user triggers the camera application, the mobile phone may load the corresponding services based on the preloading services corresponding to the camera application described above. After using the camera application, the user closes it. The camera application determines, in response to the received user operation, that it needs to be closed, and obtains the keep-alive services corresponding to the camera application stored by the mobile phone, including, for example, the media service, the camera hardware abstraction layer, the camera driver, and the AI service. The camera application then closes the processes corresponding to all services other than the keep-alive services. For example, referring to fig. 16a, the services currently started by the camera application include, but are not limited to: the media service, the camera hardware abstraction layer, the camera driver, the AI service, the audio hardware abstraction layer, the audio driver, and the like. Based on the obtained keep-alive services, the camera application closes the processes corresponding to the audio service, the audio hardware abstraction layer, the audio driver, and the like, and exits. Thus, when the user starts the camera application again, some of its services are still in the started state, which shortens the startup time of the camera application, achieves a soft-start effect, and effectively improves the user experience.
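The device-side exit step reduces to filtering the started services against the stored keep-alive set. A minimal sketch follows, under assumed names (KeepAliveCloser, stopProcess), not the patent's actual implementation:

    import java.util.List;
    import java.util.Set;

    public class KeepAliveCloser {
        // On application exit, stop every started service that is not in the
        // stored keep-alive set; keep-alive services stay running for the next start.
        public void closeApplication(List<String> startedServices, Set<String> keepAlive) {
            for (String service : startedServices) {
                if (!keepAlive.contains(service)) {
                    stopProcess(service); // e.g. the audio hardware abstraction layer and audio driver
                }
            }
        }

        private void stopProcess(String service) {
            System.out.println("stopping process for " + service);
        }
    }

With keepAlive = {media service, camera hardware abstraction layer, camera driver, AI service} as in fig. 16a, only the audio-side processes are stopped on exit.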
Optionally, the cloud may further count the usage frequency of each application. For example, if, within a period (e.g., three days), the usage frequency of the camera application is greater than a set threshold (e.g., 50%) while the usage frequency of a video application is less than the set threshold, the cloud may skip counting keep-alive services for the video application. That is to say, an application with a low usage frequency does not use the keep-alive mechanism in the embodiment of the present application, which reduces memory usage.
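Combining this frequency gate with the usage-probability threshold from the example above, the cloud-side selection might look like the following sketch. All names, and the exact definition of "usage frequency", are assumptions; the patent leaves them open:

    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    public class KeepAliveSelector {
        // appUsageFrequency: how frequently the application was used within the period;
        // sessionsUsingService: per service, the number of sessions that used it.
        public Set<String> select(double appUsageFrequency, int appSessions,
                                  Map<String, Integer> sessionsUsingService) {
            if (appUsageFrequency < 0.5) {   // frequency gate: rarely used apps get no keep-alive
                return Set.of();
            }
            Set<String> keepAlive = new HashSet<>();
            for (Map.Entry<String, Integer> e : sessionsUsingService.entrySet()) {
                if ((double) e.getValue() / appSessions > 0.6) { // usage-probability threshold
                    keepAlive.add(e.getKey());
                }
            }
            return keepAlive;
        }
    }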
In another example, as described above, there may be associated services between different applications; for example, application A may invoke some of the services within application B, which can also be understood as application A and application B using some of the same services. Unlike the scenario in the foregoing embodiment in which application A preloads the services associated with application B (e.g., the service set A described above), in this embodiment of the present application the cloud may determine the keep-alive services corresponding to an application based on the user habit information, that is, the services that are kept running after the application is closed. For example, while the user uses application A, the services called by application A form service set C. In response to a received user operation, the mobile phone starts application B. During each run of application B within the period, some of the services in service set C (e.g., service subset H) are invoked (this is merely an illustrative example, and this application does not limit the service and application usage probabilities). Illustratively, application B is started each time within 2 hours after application A was closed; for example, the user starts application B within 2 hours of application A closing, or while application A is still running. In response to a received user operation, the mobile phone also starts application C. During each run of application C within the period, some of the services in service set C (e.g., service subset I) are invoked. Illustratively, the user starts application C more than 2 hours after application A is closed each time (e.g., 5 hours apart, or one day apart). Service subset H and service subset I may be the same or different.
For example, the mobile phone sends the user habit information to the cloud, where the user habit information optionally indicates the services called in the application A scenario (i.e., service set C), the services called in the application B scenario, and the services called in the application C scenario. The user habit information also indicates the interval between closing application A and starting application B or application C. Based on the user habit information, the cloud can determine that the services called by application A and application B within the period overlap, and that the startup interval between application A and application B is less than or equal to a set time threshold (e.g., 3 hours). Accordingly, the cloud can determine that the services in service subset H are keep-alive services of application A, and sends keep-alive service information to the mobile phone indicating the keep-alive services corresponding to application A. Accordingly, when application A determines, in response to a received user operation, that it needs to be closed, application A may close the processes corresponding to all services other than those in service subset H, according to its keep-alive services (e.g., including service subset H). After the mobile phone starts application B in response to a received user operation, because service subset H is still alive, application B can call service subset H directly, thereby shortening the service startup response time.
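One possible shape for a piece of this user habit information is sketched below; the field names are illustrative, not taken from the patent. The called services and the start and close timestamps are exactly what the cloud needs to compute the overlaps and startup intervals described above.

    import java.time.Instant;
    import java.util.List;

    // One record per application session within the period.
    public record UserHabitInfo(
            String application,          // e.g. application A, B, or C
            List<String> calledServices, // e.g. service set C, or service subset H
            Instant startTime,
            Instant closeTime) {
    }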
For example, the cloud also determines, based on the user habit information, that the services called by application A and application C within the period overlap. However, the average startup interval between application C and application A (e.g., 5 hours) is greater than the set time threshold, so service subset I does not belong to the keep-alive services of application A. Otherwise, if service subset I were kept alive after application A is closed, those services would remain alive even though application C typically does not call them until about 5 hours later; keeping service subset I alive for that whole time would occupy memory and affect the performance of the mobile phone.
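The decisions in this pair of examples reduce to a set intersection plus an interval comparison. A sketch under assumed names (the 3-hour threshold is the example value given above):

    import java.time.Duration;
    import java.util.HashSet;
    import java.util.Set;

    public class CrossAppKeepAlive {
        private static final Duration MAX_GAP = Duration.ofHours(3); // the set time threshold

        // Shared services are kept alive only when the follower application is,
        // on average, started soon enough after the leader application closes.
        public Set<String> keepAliveFor(Set<String> leaderServices,
                                        Set<String> followerServices,
                                        Duration avgStartupGap) {
            if (avgStartupGap.compareTo(MAX_GAP) > 0) {
                return Set.of(); // e.g. application C at ~5 hours: subset I is not kept alive
            }
            Set<String> overlap = new HashSet<>(leaderServices);
            overlap.retainAll(followerServices); // e.g. service subset H for application B
            return overlap;
        }
    }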
The following is illustrated with a specific application. As mentioned above, the preloading services corresponding to the camera application include, but are not limited to: the media service, the camera hardware abstraction layer, the camera driver, the AI service, and the like; that is, before its preloading services are updated, the camera application starts these services each time it is started. For example, within a period (e.g., three days), the user starts the camera application 10 times, and the camera application starts multiple services including the AI service each time. After using the camera application, the user closes it, and the camera application closes all of its services, including the AI service, each time. Each time after the user has used the camera application, that is, within 2 hours after the camera application is closed, the mobile phone starts the gallery application (this could also be another application, such as a drawing application or a document-scanning application) in response to a received user operation; and, within this period, each time the mobile phone runs the gallery application it receives an operation of the user clicking an AI option in the gallery application. In response to the received user operation, the mobile phone starts the AI service and performs AI processing on the images in the gallery based on the AI service. For a detailed description, reference is made to the description above, which is not repeated here.
Illustratively, the mobile phone sends multiple pieces of user habit information to the cloud within the period, indicating that the AI service was called in each of the 10 uses of the gallery application, that the AI service is a preloading service of the camera application, and that the usage interval between the camera application and the gallery application is 2 hours. For example, the user habit information sent to the cloud may record the start and close times of the camera application and the start and close times of the gallery application. Correspondingly, based on the received user habit information, the cloud may count that the AI service is started by the camera application each time and called by the gallery application each time, and that the usage interval between the gallery application and the camera application (i.e., the difference between the time the camera application is closed and the time the gallery application is started) is smaller than a set threshold (e.g., 2 hours). Accordingly, the cloud may determine, based on the received pieces of user habit information, that the keep-alive services of the camera application in the period include, but are not limited to, the AI service, and may send this keep-alive service, namely the AI service, to the user's mobile phone. Illustratively, the mobile phone receives the keep-alive service of the camera application. In response to a received user operation, the mobile phone closes the camera application and, while doing so, closes all services started during the camera application's run except the keep-alive service (i.e., the AI service), such as the media service, the camera service, the camera hardware abstraction layer, and the camera driver. Thus, when the mobile phone starts the gallery application in response to a received user operation, the AI service is already running. Correspondingly, when the user clicks the AI option in the gallery application, the loaded AI service can directly perform AI processing on the images in the gallery, reducing the time spent waiting for the AI service to start and shortening the response time.
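The per-session interval check in this example can be sketched as follows (hypothetical names; the 2-hour value is the set threshold above). The cloud would require this check to hold for every, or most, of the 10 sessions in the period before designating the AI service as a keep-alive service:

    import java.time.Duration;
    import java.time.Instant;

    public class AiKeepAliveCheck {
        private static final Duration THRESHOLD = Duration.ofHours(2); // the set threshold

        // True when the camera session used the AI service, the following gallery
        // session used it too, and the gap between them stayed under the threshold.
        public boolean keepAiAlive(Instant cameraCloseTime, Instant galleryStartTime,
                                   boolean aiUsedInCamera, boolean aiUsedInGallery) {
            Duration gap = Duration.between(cameraCloseTime, galleryStartTime);
            return aiUsedInCamera && aiUsedInGallery && gap.compareTo(THRESHOLD) < 0;
        }
    }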
It should be noted that the embodiments of the present application take the cloud as an example of the execution body that analyzes the user habit information to determine the preloading services and keep-alive services corresponding to an application. In other embodiments, the mobile phone can also perform the steps performed by the cloud. For example, the mobile phone may periodically obtain the user habit information and analyze it to obtain the preloading services and keep-alive services corresponding to each application. The specific implementation details are the same as the related steps executed by the cloud and are not described here again.
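When the mobile phone takes over the analysis, the same cycle can run on a local periodic schedule. A minimal scheduling sketch is shown below; the class name, the executor choice, and the three-day period (matching the example period used above) are all assumptions, not the patent's implementation:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class LocalHabitAnalyzer {
        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();

        // Re-derive the preloading and keep-alive services once per period.
        public void start() {
            scheduler.scheduleAtFixedRate(this::analyzePeriod, 3, 3, TimeUnit.DAYS);
        }

        private void analyzePeriod() {
            // 1. collect the user habit information recorded in this period;
            // 2. compute the preloading services (usage-probability threshold);
            // 3. compute the keep-alive services (thresholds and interval checks above).
        }
    }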
It will be appreciated that, in order to implement the above functions, the electronic device includes corresponding hardware and/or software modules for performing each function. In conjunction with the exemplary algorithm steps described for the embodiments disclosed herein, the present application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In one example, fig. 20 shows a schematic block diagram of an apparatus 2000 of an embodiment of the present application, where the apparatus 2000 may include: a processor 2001 and transceiver/transceiver pins 2002, and optionally, memory 2003.
The various components of the apparatus 2000 are coupled together by a bus 2004, where the bus 2004 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are referred to as the bus 2004 in the figures.
Optionally, the memory 2003 may be used to store the instructions in the foregoing method embodiments. The processor 2001 may be used to execute the instructions in the memory 2003, to control the receive pin to receive signals, and to control the transmit pin to transmit signals.
The apparatus 2000 may be an electronic device or a chip of an electronic device in the above method embodiments.
For all relevant details of the steps in the above method embodiments, reference may be made to the functional descriptions of the corresponding functional modules, which are not described here again.
This embodiment also provides a computer storage medium in which computer instructions are stored; when the computer instructions run on an electronic device, the electronic device executes the above related method steps to implement the application scheduling method in the above embodiments.
This embodiment also provides a computer program product which, when run on a computer, causes the computer to execute the above related steps to implement the application scheduling method in the above embodiments.
In addition, an apparatus is provided, which may specifically be a chip, a component, or a module, and may include a processor and a memory connected to each other; the memory is used to store computer-executable instructions, and when the apparatus runs, the processor can execute the computer-executable instructions stored in the memory, so that the chip executes the application scheduling method in the above method embodiments.
The electronic device, computer storage medium, computer program product, and chip provided in this embodiment are all configured to execute the corresponding method provided above; for the beneficial effects they can achieve, reference may therefore be made to the beneficial effects of the corresponding method provided above, which are not described here again.
Through the description of the above embodiments, those skilled in the art will understand that, for convenience and brevity of description, only the division of the above functional modules is used as an example. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into modules or units is merely a division by logical function, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
Units described as separate parts may or may not be physically separate, and parts shown as units may be one physical unit or multiple physical units, which may be located in one place or distributed across multiple different places. Some or all of the units may be selected according to actual needs to achieve the purposes of the solutions of the embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
Any of the embodiments of the present application, and any features within the same embodiment, may be freely combined. Any such combination is within the scope of the present application.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions to enable a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The steps of a method or algorithm described in connection with the disclosure of the embodiments of the application may be implemented in hardware or in software instructions executed by a processor. The software instructions may consist of corresponding software modules, which may be stored in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, a hard disk, a removable disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in the embodiments of the present application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over a computer-readable medium as one or more instructions or code. Computer-readable media include both computer storage media and communication media, the latter including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (15)

1. An electronic device, comprising:
one or more processors, memory, and a fingerprint sensor;
and one or more computer programs, wherein the one or more computer programs are stored on the memory, and when executed by the one or more processors, cause the electronic device to perform the steps of:
in response to the received first user operation, starting a camera application;
when the camera application is operated, responding to the received operation that a user clicks a first AI option of the camera application, and starting an Artificial Intelligence (AI) service;
based on the AI service, carrying out AI processing on a first image acquired by a camera;
after the camera application is closed, first user habit information is sent to a server, and the first user habit information is used for indicating that the AI service is called when the electronic equipment runs the camera application;
in response to the received second user operation, starting the camera application again;
when the camera application is operated, responding to the received operation that the user clicks the first AI option, and starting the AI service;
based on the AI service, carrying out AI processing on a second image acquired by the camera;
after the camera application is closed again, second user habit information is sent to a server, and the second user habit information is used for indicating that the AI service is called when the electronic equipment runs the camera application again;
receiving first indication information sent by the server, wherein the first indication information is used for indicating that the preloading service of the camera application comprises the AI service;
determining that the preloading service of the camera application includes the AI service in response to the received first indication information;
in response to the received third user operation, starting the camera application, and starting a pre-loading service of the camera application;
responding to the received operation that the user clicks the first AI option, and carrying out AI processing on a third image acquired by the camera based on the started AI service.
2. The electronic device of claim 1, wherein the computer program, when executed by the one or more processors, causes the electronic device to perform the steps of:
after the camera application is closed, responding to the received fourth user operation, starting the camera application, and starting the AI service;
when the camera application is operated, responding to the received operation that a user clicks a filter option of the camera application, and starting a filter service;
rendering a fourth image acquired by the camera based on the filter service;
after the camera application is closed, in response to the received fifth user operation, starting the camera application and starting the AI service;
when the camera application runs, responding to the received operation that the user clicks the filter option, and starting the filter service;
rendering a fifth image acquired by the camera based on the filter service;
determining that the preloaded service of the camera application includes the filter service and does not include the AI service;
after the camera application is closed again, responding to the received sixth operation, starting the camera application, and starting the preloading service of the camera application;
and responding to the received operation of clicking the filter option by the user, and rendering a sixth image acquired by the camera based on the started filter service.
3. The electronic device of claim 2, wherein the computer program, when executed by the one or more processors, causes the electronic device to perform the steps of:
and after the electronic equipment renders a fourth image acquired by the camera based on the filter service, sending third user habit information to a server, wherein the third user habit information is used for indicating that the electronic equipment calls the filter service when the camera is operated.
4. The electronic device of claim 3, wherein the computer program, when executed by the one or more processors, causes the electronic device to perform the steps of:
and after the electronic equipment renders a fifth image acquired by the camera based on the filter service, sending fourth user habit information to the server, wherein the fourth user habit information is used for indicating that the electronic equipment calls the filter service when the camera application is operated.
5. The electronic device of claim 4, wherein the computer program, when executed by the one or more processors, causes the electronic device to perform the steps of:
receiving second indication information sent by the server, wherein the second indication information is used for indicating that the preloading service of the camera application comprises the filter service;
determining that the preloading service of the camera application includes the filter service and does not include the AI service in response to the received second indication information.
6. The electronic device of claim 1, wherein the computer program, when executed by the one or more processors, causes the electronic device to perform the steps of:
after performing AI processing on a first image acquired by a camera based on the AI service, responding to a received seventh user operation, closing the camera application, and closing the AI service;
responding to the received eighth user operation, and starting the gallery application;
when the gallery application is operated, responding to the received operation that the user clicks a second AI option of the gallery application, and starting the AI service;
performing AI processing on the images in the gallery based on the AI service;
after AI processing is carried out on a second image acquired by the camera based on the AI service, the camera application is closed in response to the received ninth user operation, and the AI service is closed;
responding to the received tenth user operation, and starting the gallery application;
when the gallery application is operated, responding to the received operation that the user clicks the second AI option, and starting the AI service;
performing AI processing on the images in the gallery based on the AI service;
determining that the keep-alive service of the camera application comprises the AI service;
and after AI processing is carried out on a third image acquired by the camera based on the started AI service, the camera application is closed in response to the received eleventh user operation, wherein the AI service is in a starting state.
7. The electronic device of claim 6, wherein the computer program, when executed by the one or more processors, causes the electronic device to perform the steps of:
responding to the received twelfth user operation, and starting the gallery application;
responding to the received operation that the user clicks the second AI option, and carrying out AI processing on a fifth image in the gallery based on the started AI service.
8. A method for scheduling an application, comprising:
in response to the received first user operation, starting a camera application;
when the camera application is operated, responding to the received operation that a user clicks a first AI option of the camera application, and starting an Artificial Intelligence (AI) service;
based on the AI service, carrying out AI processing on a first image acquired by a camera;
after the camera application is closed, first user habit information is sent to a server, and the first user habit information is used for indicating that the AI service is called when the electronic equipment runs the camera application;
in response to the received second user operation, restarting the camera application;
when the camera application is operated, responding to the received operation that the user clicks the first AI option, and starting the AI service;
based on the AI service, carrying out AI processing on a second image acquired by the camera;
after the camera application is closed again, second user habit information is sent to a server, and the second user habit information is used for indicating that the AI service is called when the electronic equipment runs the camera application again;
receiving first indication information sent by the server, wherein the first indication information is used for indicating that the preloading service of the camera application comprises the AI service;
determining that the preloading service of the camera application includes the AI service in response to the received first indication information;
in response to the received third user operation, starting the camera application, and starting a pre-loading service of the camera application;
responding to the received operation that the user clicks the first AI option, and carrying out AI processing on a third image acquired by the camera based on the started AI service.
9. The method of claim 8, further comprising:
after the camera application is closed, responding to the received fourth user operation, starting the camera application, and starting the AI service;
when the camera application is operated, responding to the received operation that a user clicks a filter option of the camera application, and starting a filter service;
rendering a fourth image acquired by the camera based on the filter service;
after the camera application is closed, in response to the received fifth user operation, starting the camera application and starting the AI service;
when the camera application runs, responding to the received operation that the user clicks the filter option, and starting the filter service;
rendering a fifth image acquired by the camera based on the filter service;
determining that the preloaded service of the camera application includes the filter service and does not include the AI service;
after the camera application is closed again, responding to the received sixth operation, starting the camera application, and starting the preloading service of the camera application;
and responding to the received operation of clicking the filter option by the user, and rendering a sixth image acquired by the camera based on the started filter service.
10. The method of claim 9, wherein after rendering the fourth image acquired by the camera based on the filter service, the method further comprises:
and sending third user habit information to a server, wherein the third user habit information is used for indicating that the filter service is called when the electronic equipment operates the camera application.
11. The method of claim 10, wherein after rendering the fifth image acquired by the camera based on the filter service, the method further comprises:
and sending fourth user habit information to the server, wherein the fourth user habit information is used for indicating that the filter service is called when the electronic equipment operates the camera application.
12. The method of claim 11, wherein the determining that the pre-load service for the camera application includes the filter service and not the AI service comprises:
receiving second indication information sent by the server, wherein the second indication information is used for indicating that the preloading service of the camera application comprises the filter service;
determining that the preloading service of the camera application includes the filter service and does not include the AI service in response to the received second indication information.
13. The method of claim 8, wherein after AI processing the first image captured by the camera based on the AI service, the method further comprises:
closing the camera application and closing the AI service in response to the received seventh user operation;
responding to the received eighth user operation, and starting a gallery application;
when the gallery application is operated, responding to the received operation that the user clicks a second AI option of the gallery application, and starting the AI service;
performing AI processing on the images in the gallery based on the AI service;
after performing AI processing on the second image acquired by the camera based on the AI service, the method further includes:
in response to the received ninth user operation, closing the camera application and closing the AI service;
responding to the received tenth user operation, and starting the gallery application;
when the gallery application is operated, responding to the received operation that the user clicks the second AI option, and starting the AI service;
performing AI processing on the images in the gallery based on the AI service;
determining that the keep-alive service of the camera application comprises the AI service;
after performing AI processing on the third image acquired by the camera based on the started AI service, the method further includes:
in response to the received eleventh user operation, closing the camera application, wherein the AI service is in an active state.
14. The method of claim 13, further comprising:
responding to the received twelfth user operation, and starting the gallery application;
responding to the received operation that the user clicks the second AI option, and carrying out AI processing on a fifth image in the gallery based on the started AI service.
15. A computer-readable storage medium comprising a computer program, which, when run on an electronic device, causes the electronic device to perform the method of any one of claims 8-14.
CN202110917469.1A 2021-08-10 2021-08-10 Application scheduling method and electronic equipment Active CN113934519B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210898089.2A CN115705241B (en) 2021-08-10 2021-08-10 Application scheduling method and electronic equipment
CN202110917469.1A CN113934519B (en) 2021-08-10 2021-08-10 Application scheduling method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110917469.1A CN113934519B (en) 2021-08-10 2021-08-10 Application scheduling method and electronic equipment

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202210898089.2A Division CN115705241B (en) 2021-08-10 2021-08-10 Application scheduling method and electronic equipment

Publications (2)

Publication Number Publication Date
CN113934519A CN113934519A (en) 2022-01-14
CN113934519B true CN113934519B (en) 2022-08-02

Family

ID=79274370

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202210898089.2A Active CN115705241B (en) 2021-08-10 2021-08-10 Application scheduling method and electronic equipment
CN202110917469.1A Active CN113934519B (en) 2021-08-10 2021-08-10 Application scheduling method and electronic equipment

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202210898089.2A Active CN115705241B (en) 2021-08-10 2021-08-10 Application scheduling method and electronic equipment

Country Status (1)

Country Link
CN (2) CN115705241B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116679900B (en) * 2022-12-23 2024-04-09 荣耀终端有限公司 Audio service processing method, firmware loading method and related devices
CN116244008B (en) * 2023-05-10 2023-09-15 荣耀终端有限公司 Application starting method, electronic device and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107748698A (en) * 2017-11-21 2018-03-02 广东欧珀移动通信有限公司 Start control method, device, storage medium and the terminal of application with broadcast mode
CN109144676A (en) * 2017-06-15 2019-01-04 阿里巴巴集团控股有限公司 A kind of self-starting detection method, device and the server of application program
CN111464690A (en) * 2020-02-27 2020-07-28 华为技术有限公司 Application preloading method and electronic equipment
CN112527403A (en) * 2019-09-19 2021-03-19 华为技术有限公司 Application starting method and electronic equipment
CN112527407A (en) * 2020-12-07 2021-03-19 深圳创维-Rgb电子有限公司 Application starting method, terminal and computer readable storage medium
CN112631679A (en) * 2020-12-28 2021-04-09 北京三快在线科技有限公司 Preloading method and device for micro-application
WO2021126427A1 (en) * 2019-12-19 2021-06-24 Microsoft Technology Licensing, Llc Management of indexed data to improve content retrieval processing

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101833465B (en) * 2010-04-23 2013-03-13 中国科学院声学研究所 Embedded system supporting dynamic loading operation of application programs
US20190095074A1 (en) * 2015-06-29 2019-03-28 Orange Method for controlling the execution of a program configurable into a disabled state and enabled state
CN105893129B (en) * 2016-03-30 2020-01-07 北京小米移动软件有限公司 Method and device for processing application program in terminal
CN106708617B (en) * 2016-12-23 2019-12-03 武汉斗鱼网络科技有限公司 A kind of application process keep-alive system and keepalive method based on Service
EP3682328B1 (en) * 2017-09-13 2024-05-15 Uber Technologies, Inc. Alternative service pathway for service application
CN108647055B (en) * 2018-05-10 2021-05-04 Oppo广东移动通信有限公司 Application program preloading method and device, storage medium and terminal
CN108681475B (en) * 2018-05-21 2021-11-26 Oppo广东移动通信有限公司 Application program preloading method and device, storage medium and mobile terminal
CN108762843B (en) * 2018-05-29 2020-05-05 Oppo广东移动通信有限公司 Application program preloading method and device, storage medium and intelligent terminal
CN109151216B (en) * 2018-10-30 2021-01-26 努比亚技术有限公司 Application starting method, mobile terminal, server and computer readable storage medium
CN112162796A (en) * 2020-10-10 2021-01-01 Oppo广东移动通信有限公司 Application starting method and device, terminal equipment and storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109144676A (en) * 2017-06-15 2019-01-04 阿里巴巴集团控股有限公司 A kind of self-starting detection method, device and the server of application program
CN107748698A (en) * 2017-11-21 2018-03-02 广东欧珀移动通信有限公司 Start control method, device, storage medium and the terminal of application with broadcast mode
CN112527403A (en) * 2019-09-19 2021-03-19 华为技术有限公司 Application starting method and electronic equipment
WO2021126427A1 (en) * 2019-12-19 2021-06-24 Microsoft Technology Licensing, Llc Management of indexed data to improve content retrieval processing
CN111464690A (en) * 2020-02-27 2020-07-28 华为技术有限公司 Application preloading method and electronic equipment
CN112527407A (en) * 2020-12-07 2021-03-19 深圳创维-Rgb电子有限公司 Application starting method, terminal and computer readable storage medium
CN112631679A (en) * 2020-12-28 2021-04-09 北京三快在线科技有限公司 Preloading method and device for micro-application

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Evolutionary approaches to signal decomposition in an application service management system";Tomasz D. Sikora;《Soft Computing》;20160831;第20卷(第8期);第3063-3084页 *
"基于用户行为和位置感知的边际服务加载优化研究";童智高;《中国优秀硕士学位论文全文数据库 信息科技辑》;20171115(第11期);第I136-410页 *

Also Published As

Publication number Publication date
CN115705241A (en) 2023-02-17
CN113934519A (en) 2022-01-14
CN115705241B (en) 2023-12-15

Similar Documents

Publication Publication Date Title
CN112130742B (en) Full screen display method and device of mobile terminal
WO2021017889A1 (en) Display method of video call appliced to electronic device and related apparatus
CN110114747B (en) Notification processing method and electronic equipment
CN109559270B (en) Image processing method and electronic equipment
CN113722058B (en) Resource calling method and electronic equipment
CN111913750B (en) Application program management method, device and equipment
CN112492193B (en) Method and equipment for processing callback stream
CN114650363A (en) Image display method and electronic equipment
CN113934519B (en) Application scheduling method and electronic equipment
CN113452945A (en) Method and device for sharing application interface, electronic equipment and readable storage medium
CN113641271A (en) Application window management method, terminal device and computer readable storage medium
CN113891009A (en) Exposure adjusting method and related equipment
CN115967851A (en) Quick photographing method, electronic device and computer readable storage medium
WO2022170856A1 (en) Method for establishing connection, and electronic device
CN112532508B (en) Video communication method and video communication device
CN113542574A (en) Shooting preview method under zooming, terminal, storage medium and electronic equipment
US20240114110A1 (en) Video call method and related device
CN113923372B (en) Exposure adjusting method and related equipment
CN114116073A (en) Electronic device, drive loading method thereof, and medium
CN114828098A (en) Data transmission method and electronic equipment
CN113672454B (en) Screen freezing monitoring method, electronic equipment and computer readable storage medium
CN116048831B (en) Target signal processing method and electronic equipment
CN113973152A (en) Unread message quick reply method and electronic equipment
CN114816028A (en) Screen refreshing method, electronic device and computer-readable storage medium
CN114490006A (en) Task determination method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant