CN116028005A - Audio session acquisition method, device, equipment and storage medium - Google Patents


Info

Publication number: CN116028005A
Application number: CN202210912804.3A
Authority: CN (China)
Prior art keywords: audio, session, application, default, notification
Legal status: Granted
Other languages: Chinese (zh)
Other versions: CN116028005B
Inventor: 姜传标
Current Assignee: Honor Device Co Ltd
Application filed by Honor Device Co Ltd

Abstract

The application discloses an audio session acquisition method, device, equipment, and storage medium, belonging to the field of computer technology. The method comprises the following steps: determining a default audio device, the default audio device being an audio device that allows audio input or audio output; registering an audio session creation notification with a session manager of the default audio device; and, if an audio session creation notification sent by the session manager is received, acquiring all audio sessions that are using the default audio device a preset duration after the notification is received. Because the audio sessions are enumerated only after a preset duration has elapsed since the audio session creation notification was received, the session being created is guaranteed enough time to finish being created, so all audio sessions can be enumerated accurately.

Description

Audio session acquisition method, device, equipment and storage medium
The present application claims priority to Chinese patent application No. 202210530867.2, entitled "Audio Event Processing Method", filed on the 16th of 2022, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for acquiring an audio session.
Background
As the performance of electronic devices improves, their power consumption grows ever higher, while battery capacity improves only slowly. As a result, battery life cannot meet users' needs, degrading the user experience. At present, underlying hardware resources can be precisely scheduled according to how applications are running on the electronic device, ensuring performance while delivering long battery life. The running state of an application can be determined from its audio state, and accurately acquiring an application's audio sessions is the key to determining that audio state.
Disclosure of Invention
The application provides an audio session acquisition method, device, equipment, and storage medium that can accurately acquire all audio sessions. The scheme is as follows:
In a first aspect, an audio session acquisition method is provided. In the method, a default audio device is determined, the default audio device being an audio device that allows audio input or audio output. An audio session creation notification is then registered with a session manager of the default audio device. If an audio session creation notification sent by the session manager is received, all audio sessions using the default audio device are acquired a preset duration after the notification is received.
In this method, because the audio sessions are enumerated only after a preset duration has elapsed since the audio session creation notification was received, the session being created has enough time to finish being created, so all audio sessions can be enumerated accurately.
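The delayed-enumeration step described above is platform-independent and can be sketched as follows. This is a minimal illustration in Python; `enumerate_sessions` is a hypothetical stand-in for the actual platform call that lists the sessions on the default audio device, and the 0.5 s default preset duration is an assumed value, not one specified by this application.

```python
import threading

# Minimal sketch: delay the enumeration so the newly created session has
# time to finish being created before it is listed. `enumerate_sessions`
# is a hypothetical stand-in for the platform session-enumeration call.
def enumerate_after_delay(enumerate_sessions, preset_duration_s=0.5):
    result = {}
    done = threading.Event()

    def worker():
        # Runs preset_duration_s after the creation notification arrived.
        result["sessions"] = enumerate_sessions()
        done.set()

    threading.Timer(preset_duration_s, worker).start()
    return done, result
```

The notification handler schedules the enumeration instead of enumerating immediately, which is why a session still being created is not missed.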
Optionally, a default-audio-device change notification is registered at startup. Thus, when the default audio device changes, a change notification is received, from which the new default audio device can be determined.
The operation of determining the default audio device may then be: if a default-audio-device change notification is received during operation, the changed default audio device is determined from that notification.
Optionally, the session manager of the default audio device may be activated before the audio session creation notification is registered with it.
The session manager of the default audio device manages the audio sessions that use the default audio device. Once the default audio device is determined, the corresponding session manager can be activated.
Optionally, after acquiring all audio sessions that are using the default audio device, an audio session state change notification may also be registered with the audio session controller. If an audio session state change notification sent by the audio session controller is received, the application running state is determined from the notification, the user scene is determined from the application running state, and resources are scheduled according to the user scene.
In this method, resources can be allocated reasonably for the current user scene: power consumption of the electronic device is reduced and battery life is improved while the impact on the user scene is minimized. In addition, allocating resources according to the current user scene ensures that the resource needs of the scene are met, so applications in the scene run smoothly and the user experience improves.
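The chain from application running state to user scene to resource scheduling can be sketched as follows. This is a simplified illustration: the scene names, EPP values, and PL1 watt figures are assumptions chosen for demonstration, not values taken from this application.

```python
# Simplified sketch: derive a user scene from the application running state,
# then pick a scheduling policy. The scene names, EPP values, and PL1 watt
# figures below are illustrative assumptions only.
SCENE_POLICY = {
    "playing_audio": {"epp": 64, "pl1_w": 28},   # lean toward performance
    "idle":          {"epp": 192, "pl1_w": 15},  # lean toward battery life
}

def scene_from_state(app_state):
    audio_states = ("outputting_audio", "inputting_audio")
    return "playing_audio" if app_state in audio_states else "idle"

def schedule_resources(app_state):
    # Resource scheduling reduces to looking up the policy for the scene.
    return SCENE_POLICY[scene_from_state(app_state)]
```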
Optionally, the audio session state change notification carries the process identifier of the process of the target audio session and the session state of the target audio session, where the target audio session is the audio session whose session state changed. The operation of determining the application running state from the notification may be: if the process identifier of the process of the focus window is the same as that of the process of the target audio session, the running state of the focus application to which the focus window belongs is determined from the session state of the target audio session; if the process identifiers differ, the application identifier corresponding to the process identifier of the focus window's process is looked up, and if it is the same as the application identifier of the application to which the target audio session belongs, the running state of the focus application is likewise determined from the session state of the target audio session.
For example, if the process identifier of the focus window's process is the same as that of the process of an audio session in the audio output state, it can be determined that the focus application to which the focus window belongs is outputting audio.
If the process identifiers differ, the application identifier corresponding to the process identifier of the focus window's process can be obtained. As one example, it can be queried whether the process of the target audio session is among the processes of the application identified by that application identifier; if so, the focus application is determined to be outputting audio, and otherwise the application identifier of the application to which the target audio session belongs indicates which application is outputting audio. As another example, it can be checked whether the application identifier corresponding to the focus window's process is the same as the application identifier of the application to which the target audio session belongs; if so, the focus application is outputting audio, and otherwise the target session's application identifier indicates which application is outputting audio.
Similarly, if the process identifier of the focus window's process is the same as that of the process of an audio session in the audio input state, it can be determined that the focus application to which the focus window belongs is inputting audio.
If the process identifier of the focus window's process differs from that of the process of the audio session in the audio input state, the application identifier corresponding to the focus window's process can be obtained. As one example, it can be queried whether the process of the target audio session is among the processes of the application so identified; if so, the focus application is determined to be inputting audio, and otherwise the application identifier of the application to which the target audio session belongs indicates which application is inputting audio. As another example, it can be checked whether the application identifier corresponding to the focus window's process is the same as that of the application to which the target audio session belongs; if so, the focus application is inputting audio, and otherwise the target session's application identifier indicates which application is inputting audio.
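The matching logic above — first compare process identifiers, then fall back to comparing application identifiers — can be sketched as follows. The function and parameter names, and the `focus_app_of_pid` lookup, are hypothetical stand-ins for the system queries the text describes.

```python
def focus_app_audio_state(focus_pid, focus_app_of_pid, session_pid,
                          session_app_id, session_state):
    """Determine the focus application's running state from a target audio
    session whose state changed. `focus_app_of_pid` is a hypothetical lookup
    mapping a process identifier to its application identifier."""
    if focus_pid == session_pid:
        # Same process: the session state applies to the focus application.
        return session_state
    if focus_app_of_pid(focus_pid) == session_app_id:
        # Different process, same application (e.g. a multi-process app).
        return session_state
    # The state change belongs to some other application.
    return None
```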
Optionally, the operation of acquiring, a preset duration after the audio session creation notification is received, all audio sessions using the default audio device may be: if an audio event is received, the audio event is added to a message queue of a first sub-thread, where audio events include default-audio-device change notifications, audio session creation notifications, and audio session state change notifications; the first sub-thread is executed, reading audio events from the message queue in order, and when the event read is an audio session creation notification, acquiring all audio sessions using the default audio device a preset duration after reading the notification.
In the present application, by adding a message queue, audio events can be processed on a fixed sub-thread, so that time-consuming processing logic does not block the system or other applications.
Optionally, the first sub-thread is configured to: when the audio event read from the message queue is an audio session creation notification, add the notification to a task queue of a second sub-thread. The second sub-thread is executed, reading audio session creation notifications from the task queue and acquiring all audio sessions using the default audio device a preset duration after reading a notification.
In the application, because the second sub-thread performs the delayed processing of audio session creation notifications, the first sub-thread's handling of other audio events (such as default-audio-device change notifications and audio session state change notifications) is not affected, that is, it is not blocked.
Optionally, the second sub-thread is configured to: when an audio session creation notification read from the task queue indicates that the created audio session belongs to a preset application, acquire all audio sessions using the default audio device a preset duration after reading the notification.
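The two sub-threads and their queues can be sketched as follows. This is a minimal Python illustration of the arrangement the text describes; the event shapes and names are assumptions.

```python
import queue
import threading
import time

# Sketch of the two-sub-thread arrangement: the first sub-thread drains the
# audio-event message queue; session-creation notifications are handed to a
# second sub-thread so the pre-enumeration delay does not block the handling
# of other audio events. Event shapes and names are assumptions.
def start_workers(enumerate_sessions, preset_duration_s, results):
    msg_q, task_q = queue.Queue(), queue.Queue()

    def first_sub_thread():
        while True:
            event = msg_q.get()
            if event is None:                 # shutdown sentinel
                task_q.put(None)
                break
            if event["type"] == "session_created":
                task_q.put(event)             # defer the delayed work
            else:
                results.append(("handled", event["type"]))

    def second_sub_thread():
        while True:
            event = task_q.get()
            if event is None:
                break
            time.sleep(preset_duration_s)     # let the session finish creating
            results.append(("enumerated", enumerate_sessions()))

    for fn in (first_sub_thread, second_sub_thread):
        threading.Thread(target=fn, daemon=True).start()
    return msg_q
```

Posting all events to `msg_q` keeps the handling on fixed sub-threads, so slow work never runs on the caller's thread.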
In a second aspect, an audio session acquisition device is provided, which has the function of implementing the behavior of the audio session acquisition method of the first aspect. The device comprises at least one module, and the at least one module is used to implement the audio session acquisition method provided in the first aspect.
In a third aspect, an audio session acquisition device is provided. The device comprises a processor and a memory, where the memory is configured to store a program that supports the device in performing the audio session acquisition method provided in the first aspect, as well as the data involved in implementing that method, and the processor is configured to execute the program stored in the memory. The device may further comprise a communication bus for establishing a connection between the processor and the memory.
In a fourth aspect, a computer readable storage medium is provided, in which instructions are stored which, when run on a computer, cause the computer to perform the audio session acquisition method according to the first aspect described above.
In a fifth aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the audio session acquisition method of the first aspect described above.
The technical effects obtained by the second, third, fourth and fifth aspects are similar to the technical effects obtained by the corresponding technical means in the first aspect, and are not described in detail herein.
Drawings
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a software module architecture according to an embodiment of the present application;
FIG. 3 is a schematic diagram of interactions between software modules according to an embodiment of the present application;
FIG. 4 is a schematic diagram of another software module architecture provided in an embodiment of the present application;
FIG. 5 is a schematic diagram of interactions between another software module provided by an embodiment of the present application;
fig. 6 is a flowchart of an audio session acquiring method provided in an embodiment of the present application;
FIG. 7 is a flowchart of another audio session acquisition method provided by an embodiment of the present application;
fig. 8 is a schematic structural diagram of an audio session acquiring device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
It should be understood that "a plurality" herein means two or more. In the description of this application, "/" means "or" unless otherwise indicated; for example, A/B may represent A or B. "And/or" merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, to describe the technical solutions of this application clearly, the words "first", "second", etc. are used to distinguish between identical or similar items with substantially the same function and effect. Those skilled in the art will appreciate that these words do not limit quantity or order of execution, and do not necessarily indicate a difference.
References in this application to "one embodiment", "some embodiments", and the like mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, the phrases "in one embodiment", "in some embodiments", "in other embodiments", and the like appearing in various places in this application do not necessarily all refer to the same embodiment, but mean "one or more, but not all, embodiments" unless expressly stated otherwise. Furthermore, the terms "comprising", "including", "having", and their variants mean "including but not limited to" unless otherwise specifically noted.
For clarity and conciseness of description of various embodiments, a brief introduction to related concepts or technologies is given below:
1. A focus window (focus window) is a window that has focus. The focus window is the only window that can receive keyboard input. How the focus window is determined depends on the system's focus mode (focus mode). The top-level window containing the focus window is called the active window (active window); only one window can be the active window at a time. The focus window is, with high probability, the window the user currently wants to use.
2. The focus application is the application to which the focus window belongs; it is the foreground application that can currently receive keyboard input, mouse operations, and the like.
3. Non-focus applications are applications that run in the foreground but cannot currently receive keyboard input, mouse operations, and the like; that is, foreground applications that the user is not currently operating.
4. Background applications refer to applications that have been minimized to run in the background.
5. The focus mode determines how the mouse brings a window into focus. In general, there are three focus modes:
(1) Click-to-focus (click-to-focus): the window the mouse clicks on gets focus. That is, when the mouse clicks anywhere on a window that can receive focus, that window is activated, brought to the front of all windows, and receives keyboard input. When the mouse clicks on another window, the first window loses focus.
(2) Focus-follows-mouse (focus-follows-mouse): the window under the mouse acquires focus. That is, when the mouse moves over a window that can receive focus, the window is activated and receives keyboard input without the user clicking anywhere on it, but it is not necessarily brought to the front of all windows. When the mouse moves out of the window, the window loses focus.
(3) Sloppy focus (sloppy focus): similar to focus-follows-mouse. When the mouse moves over a window that can receive focus, the window is activated and receives keyboard input without a click, but it is not necessarily brought to the front. Unlike focus-follows-mouse, focus does not change when the mouse leaves the window; it changes only when the mouse moves into another window that can receive focus.
6. A process comprises multiple threads, and a thread can create windows. The focus process is the process to which the thread that created the focus window belongs.
7. The long-duration power limit (PL1) refers to the CPU's power consumption under normal load and is equivalent to the thermal design power; the CPU's running power does not exceed PL1 most of the time.
8. The short-duration turbo power limit (PL2) refers to the highest power consumption the CPU can reach in a short time, subject to a duration limit. Generally, PL2 is greater than PL1.
Notably, PL1 and PL2 are the names used on the Intel platform. On the Advanced Micro Devices (AMD) platform, PL1 is called SPL (sustained power limit), the first level of PL2 is called FPPT (fast PPT limit), and the second level of PL2 is called SPPT (slow PPT limit).
9. The CPU energy efficiency ratio (energy performance preference, EPP) reflects the CPU's scheduling tendency, with a value range of 0 to 255. The smaller the EPP value, the more the CPU tends toward performance; the larger the value, the more it tends toward energy saving.
10. The energy-performance optimization gear (energy performance optimize gear, EPO Gear) represents the strength with which EPP is adjusted, with a value range of 1 to 5. The larger the value, the more the adjustment of EPP favors energy saving; the smaller the value, the more it favors performance.
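One possible reading of the EPP/EPO Gear relationship can be sketched as follows. The value ranges (0-255 for EPP, 1-5 for the gear) come from the text above; the linear adjustment rule and the step size are pure assumptions for illustration.

```python
# Illustrative sketch: EPP ranges over 0-255 (smaller = more performance);
# EPO Gear ranges over 1-5 (larger = more energy saving when adjusting EPP).
# The linear rule and step size are assumptions for illustration only.
def adjust_epp(current_epp, epo_gear, step=16):
    assert 0 <= current_epp <= 255 and 1 <= epo_gear <= 5
    delta = (epo_gear - 3) * step   # gears above 3 raise EPP (save energy)
    return max(0, min(255, current_epp + delta))
```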
The following describes an electronic device according to an embodiment of the present application.
The electronic device may be a tablet computer, a notebook computer, an ultra-mobile personal computer (UMPC), a desktop computer, a personal digital assistant (PDA), or the like.
Fig. 1 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present application. As shown in fig. 1, the electronic device 100 may include: processor 110, external memory interface 120, internal memory 121, universal serial bus (universal serial bus, USB) interface 130, charge management module 140, power management module 141, battery 142, wireless communication module 150, display screen 160, etc.
It is to be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and command center of the electronic device 100. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
In some embodiments, a memory may also be provided in the processor 110 for storing instructions and data. Illustratively, the memory in the processor 110 is a cache. The memory may hold instructions or data that the processor 110 has just used or uses cyclically; if the processor 110 needs them again, it can call them directly from this memory. This avoids repeated accesses, reduces the waiting time of the processor 110, and thus improves system efficiency.
In some embodiments, the processor 110 may include one or more interfaces, such as an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a USB interface, among others.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative, and does not limit the structure of the electronic device 100. In other embodiments, the electronic device 100 may also employ different interfaces in the above embodiments, or a combination of interfaces.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. The charging management module 140 may also supply power to the electronic device 100 through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display screen 160, the wireless communication module 150, and the like. In some embodiments, the power management module 141 and the charge management module 140 may also be provided in the same device.
The wireless communication module 150 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied to the electronic device 100. For example, in the embodiment of the present application, the electronic device 100 may establish a bluetooth connection with a device such as a wireless headset through the wireless communication module 150. The wireless communication module 150 may be one or more devices that integrate at least one communication processing module. The wireless communication module 150 receives electromagnetic waves via an antenna, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 150 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via an antenna.
The electronic device 100 implements display functions through a GPU, a display screen 160, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 160 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 160 is used to display images, videos, and the like. The display screen 160 includes a display panel.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 via the external memory interface 120 to implement data storage functions, such as storing files of music, video, etc. in the external memory card.
The internal memory 121 may be used to store computer executable program code that includes instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
One possible software system of the electronic device 100 is described next.
The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In this embodiment, a Windows system with a layered architecture is taken as an example to describe the software system of the electronic device 100.
Fig. 2 is a block diagram of a software system of the electronic device 100 according to an embodiment of the present application. Referring to fig. 2, the layered architecture divides the software into several layers, each with a clear role and division of labor; the layers communicate with each other through software interfaces. In some embodiments, the Windows system is divided into user mode and kernel mode. User mode comprises the application layer and the subsystem dynamic link libraries. Kernel mode comprises, from bottom to top, the firmware layer, the hardware abstraction layer (HAL), the kernel and driver layer, and the executive.
As shown in FIG. 2, the application layer includes applications for music, video, games, office, social networking, etc. The application layer also includes an environment subsystem, a system probe module, a first scene recognition engine, a first scheduling engine, and the like. In this embodiment, only some of the application programs are shown in the figure; the application layer may further include other application programs, such as a shopping application, a browser, and the like.
The environment subsystem may expose certain subsets of the basic executive services to applications in a particular manner, providing an execution environment for the applications.
And the system probe module is used for reporting the state to the first scene recognition engine. The first scene recognition engine is used for completing recognition of the user scene according to the state reported by the system probe module and determining a scheduling strategy according to the recognized user scene. The first scheduling engine is used for scheduling the firmware layer according to a scheduling policy.
In some embodiments, the first scenario recognition engine may recognize a user scenario in which the electronic device 100 is located and determine a base scheduling policy that matches the user scenario. The first scheduling engine may obtain the load situation of the electronic device 100, and determine an actual scheduling policy according to the actual operation situation of the electronic device 100 by combining the load situation of the electronic device 100 and the basic scheduling policy. The specific content of the first scene recognition engine and the first scheduling engine is described below, and is not described herein.
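The two-stage decision described above — a base scheduling policy matched to the user scene, then adjusted by the actual load — can be sketched as follows. This is an illustrative sketch only, not the patented implementation; the scene names, policy fields, and thresholds are all assumptions.

```python
# Hypothetical base policies per user scene (field names and values assumed).
BASE_POLICIES = {
    "video":  {"pl1_w": 15, "pl2_w": 25},   # long-term / short-term power limits
    "game":   {"pl1_w": 28, "pl2_w": 45},
    "office": {"pl1_w": 12, "pl2_w": 20},
}

def actual_policy(scene: str, load: float) -> dict:
    """Combine the base policy for a scene with the measured system load."""
    policy = dict(BASE_POLICIES[scene])     # copy so the base policy is untouched
    if load > 0.8:                          # heavy load: allow more sustained power
        policy["pl1_w"] += 5
    elif load < 0.2:                        # light load: bias toward energy efficiency
        policy["pl1_w"] -= 3
    return policy
```

In this sketch the scene recognition engine's output corresponds to `scene`, and the scheduling engine's load probe supplies `load`.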
The subsystem dynamic link library includes an application programming interface (API) module, which includes Windows APIs, Windows native APIs, etc. Windows APIs can provide system call entries and internal function support for applications; the difference is that Windows native APIs are the native APIs of the Windows system. For example, Windows APIs may include user.dll and kernel.dll, and Windows native APIs may include ntdll.dll. user.dll is the Windows user interface and can be used to perform operations such as creating windows and sending messages. kernel.dll is used to provide an interface for applications to access the kernel. ntdll.dll is an important Windows NT kernel-level file that describes the interface of the Windows native NT API. When Windows starts, ntdll.dll resides in a particular write-protected region of memory, which prevents other programs from occupying that memory region.
The executive includes a process manager, a virtual memory manager, a security reference monitor, an input/output (I/O) manager, a Windows management instrumentation (WMI) plug-in, a power manager, a system event driver (operating system event driver, OsEventDriver) node (also referred to as an event driver node), a system-and-chip (operating system to system on chip, OS2SOC) node, and the like.
The process manager is used to create and suspend processes and threads. The virtual memory manager implements "virtual memory". The virtual memory manager also provides basic support for the cache manager. The security reference monitor may enforce security policies on the local computer, protecting operating system resources and performing protection and monitoring of runtime objects. The I/O manager performs device-independent input/output and calls the appropriate device driver for further processing. The power manager may manage power state changes for all devices that support power state changes. The OsEventDriver node can interact with the kernel and driver layer, for example with the graphics card driver: after determining that a GPU video decoding event exists, the OsEventDriver node reports the GPU video decoding event to the system probe module. The OS2SOC node may be used by the first scheduling engine to send adjustment information to hardware devices, such as information for adjusting PL1 and PL2 to the CPU.
The kernel and driver layer includes the kernel and device drivers. The kernel is an abstraction of the processor architecture: it isolates the differences between the executive and the processor architecture and ensures the portability of the system. The kernel can perform thread scheduling and dispatching, trap handling and exception dispatching, interrupt handling and dispatching, etc. Device drivers run in kernel mode as an interface between the I/O system and the associated hardware. Device drivers may include a graphics driver, an Intel® dynamic tuning technology (DTT) driver, a mouse driver, an audio/video driver, a camera driver, a keyboard driver, etc. For example, the graphics driver may drive the GPU to run, and the Intel DTT driver may drive the CPU to run.
The HAL is a kernel-mode module that hides various hardware-related details, such as I/O interfaces, interrupt controllers, and multiprocessor communication mechanisms, provides uniform service interfaces for different hardware platforms running Windows, and enables portability across hardware platforms. It should be noted that, to maintain the portability of Windows, neither the internal Windows components nor user-written device drivers access hardware directly; instead, they call routines in the HAL.
The firmware layer may include a basic input output system (BIOS), which is a set of programs burned into a read-only memory (ROM) chip on the computer's motherboard. It holds the computer's most important basic input/output programs, the power-on self-test program, and the system self-start program, and can read and write specific system-setting information from the complementary metal oxide semiconductor (CMOS). Its main function is to provide the computer with the lowest-level, most direct hardware setup and control. The Intel DTT driver may send instructions to the CPU via the BIOS.
It should be noted that the embodiments of the present application are described using only a Windows system as an example. In other operating systems (for example, an Android system, an iOS system, etc.), the scheme of the present application can also be implemented, as long as the functions implemented by the respective functional modules are similar to those in the embodiments of the present application.
The workflow of the software and hardware for scheduling resources by the electronic device 100 described in the embodiment of fig. 2 above is described next.
Fig. 3 is a schematic workflow diagram of software and hardware for scheduling resources by the electronic device 100 according to an embodiment of the present application.
As shown in fig. 3, the application layer includes a system probe module and a first scene recognition engine, which includes a scene recognition module and a base policy matching manager. The scene recognition module can interact with the system probe module and the basic policy matching manager respectively. The scene recognition module may send a request to the system probe module to obtain the probe status. The system probe module may acquire the operating state of the electronic device 100. For example, the system probe modules may include a power state probe, a peripheral state probe, a process load probe, an audio video state probe, a system load probe, a system event probe, and the like.
The power state probe may subscribe to power state events in kernel mode and determine the power state according to the callback function fed back from kernel mode. The power state includes the remaining battery level, the power mode, etc., and the power mode may include an alternating current (AC) state and a direct current (DC) state. For example, the power state probe may send a request to the OsEventDriver node of the executive to subscribe to power state events, and the OsEventDriver node forwards the request to the power manager of the executive. The power manager may feed back a callback function to the power state probe through the OsEventDriver node.
The peripheral state probe can subscribe a peripheral event to the kernel state, and the peripheral event is determined according to a callback function fed back by the kernel state. Peripheral events include mouse wheel slide events, mouse click events, keyboard input events, microphone input events, camera input events, and the like.
The process load probe may subscribe to the process load from kernel states and determine the load of the process (e.g., the focal process) from the callback function fed back from kernel states.
The system load probe can subscribe the system load to the kernel state, and the system load is determined according to a callback function fed back by the kernel state.
The audio/video state probe may subscribe to audio and video events in kernel mode, and determine the audio and video events currently present in the electronic device 100 according to the callback function fed back from kernel mode. The audio and video events may include GPU decoding events, and the like. For example, the audio/video state probe may send a request to the OsEventDriver node of the executive to subscribe to GPU decoding events, and the OsEventDriver node forwards the request to the graphics card driver of the kernel and driver layer. The graphics card driver can monitor the state of the GPU, and after detecting that the GPU performs a decoding operation, feeds back a callback function to the audio/video state probe through the OsEventDriver node.
The system event probe can subscribe to the kernel state for system events, and the system events are determined according to a callback function fed back by the kernel state. The system events may include window change events, process creation events, thread creation events, and the like. For example, the system event probe may send a request to the oseeventdriver node of the executive layer to subscribe to a process creation event, which is forwarded by the oseeventdriver node to the process manager. After the process manager creates the process, a callback function can be fed back to the system event probe through the OsEventDriver node. For another example, the system event probe may also send a request to the API module to subscribe to a focus window change event, and the API module may monitor whether the focus window of the electronic device 100 has changed, and when it monitors that the focus window has changed, feed back a callback function to the system event probe.
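The subscribe-then-callback pattern that each probe follows can be sketched in a few lines. This is a hypothetical illustration, not Windows kernel code; the class and event names stand in for the OsEventDriver node and the process-creation event described above.

```python
class EventDriverNode:
    """Stands in for the OsEventDriver node: accepts subscriptions from probes
    and delivers callbacks on behalf of kernel-mode components."""
    def __init__(self):
        self._subscribers = {}                      # event name -> callbacks

    def subscribe(self, event, callback):
        self._subscribers.setdefault(event, []).append(callback)

    def publish(self, event, payload):              # called by e.g. the process manager
        for cb in self._subscribers.get(event, []):
            cb(payload)

class SystemEventProbe:
    def __init__(self, node):
        self.events = []
        node.subscribe("process_created", self._callback)   # subscription request

    def _callback(self, payload):                   # "callback fed back by kernel mode"
        self.events.append(payload)

node = EventDriverNode()
probe = SystemEventProbe(node)
# Simulate the process manager reporting a newly created process:
node.publish("process_created", {"pid": 4242, "name": "player.exe"})
```

After `publish` runs, the probe has recorded the event and can report it upward as part of the probe state.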
It can be seen that the system probe module subscribes to various events of the electronic device 100 from the kernel mode, and then determines the running state of the electronic device 100 according to the callback function fed back from the kernel mode, so as to obtain the probe state. After the system probe module obtains the probe state, the probe state can be fed back to the scene recognition module. After the scene recognition module receives the probe state, the scene recognition module can determine the user scene where the electronic device 100 is located according to the probe state. The user scene may include a video scene, a game scene, an office scene, a social scene, and the like. The user scenario may reflect the current use needs of the user. For example, when the scene recognition module recognizes that the focus window is a window of the video application, it determines that the electronic device 100 is in a video scene, which indicates that the user needs to watch and browse the video using the video application. For another example, the scene recognition module determines that the electronic device 100 is in a social scene when recognizing that the focus window is a chat window of an instant messaging application. The scene recognition module may also send the user scene to the base policy matching manager. The base policy matching manager may determine a base scheduling policy based on the user scenario. The base policy matching manager may feed back the base scheduling policy to the scene recognition module. The scene recognition module may send the base scheduling policy and the user scene to a first scheduling engine of an application layer.
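The focus-window-to-scene mapping performed by the scene recognition module can be sketched as a simple lookup. The application-type labels below are illustrative assumptions, not part of the patented method.

```python
def recognize_scene(probe_state: dict) -> str:
    """Map the reported probe state to a user scene.

    probe_state is assumed to carry the type of application owning the
    focus window, as fed back by the system probe module."""
    focus = probe_state.get("focus_window_app", "")
    if focus == "video_app":
        return "video"      # user is watching or browsing video
    if focus == "instant_messaging_app":
        return "social"     # focus window is a chat window
    if focus == "office_app":
        return "office"
    return "other"
```

The resulting scene string would then be handed to the base policy matching manager to select a base scheduling policy.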
As shown in fig. 3, the first scheduling engine includes a load manager, a chip policy aggregator, and a scheduling executor. The load manager may receive the basic scheduling policy and the user scene sent by the scene recognition module. The load manager may also acquire the system load from the system probe module, and adjust the basic scheduling policy according to the system load and the user scene to obtain the actual scheduling policy. The actual scheduling policy includes an operating system (OS) scheduling policy and a first CPU power consumption scheduling policy.
The load manager may send the OS scheduling policy to the scheduling executor, and the scheduling executor may schedule based on the OS scheduling policy. The OS scheduling policy is used to adjust the process priority and I/O priority of the focal process. For example, the schedule executor may send an instruction to the process manager to adjust the process priority of the focal process, in response to which the process manager adjusts the process priority of the focal process. For another example, the scheduling executor may send an instruction to the I/O manager to adjust the I/O priority of the focal process, in response to which the I/O manager adjusts the I/O priority of the focal process.
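The dispatch of the two OS-scheduling instructions — one to the process manager, one to the I/O manager — can be sketched as below. The managers here are stand-in objects; the field names in the policy dictionary are assumptions for illustration.

```python
class ProcessManager:
    """Stand-in for the executive's process manager."""
    def __init__(self):
        self.priorities = {}
    def set_process_priority(self, pid, prio):
        self.priorities[pid] = prio

class IoManager:
    """Stand-in for the executive's I/O manager."""
    def __init__(self):
        self.io_priorities = {}
    def set_io_priority(self, pid, prio):
        self.io_priorities[pid] = prio

def apply_os_policy(policy, process_mgr, io_mgr):
    """Send the process-priority and I/O-priority instructions for the focus process."""
    pid = policy["focus_pid"]
    process_mgr.set_process_priority(pid, policy["process_priority"])
    io_mgr.set_io_priority(pid, policy["io_priority"])
```

A scheduling executor would call `apply_os_policy` once per actual scheduling policy it receives.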
The load manager may also send the first CPU power consumption scheduling policy to the chip policy aggregator, and the chip policy aggregator may obtain a second CPU power consumption scheduling policy based on the chip platform type of the CPU and the first CPU power consumption scheduling policy. The chip platform types of the CPU mainly fall into two categories, namely AMD® CPUs and Intel® CPUs. These two types of CPUs differ in how CPU power consumption is adjusted, and therefore need to be distinguished.
If the chip platform type of the CPU is AMD®, the scheduling executor may send an instruction to the power manager to adjust the EPP of the CPU. In addition, the scheduling executor may also send an instruction to adjust PL1 and PL2 to the OS2SOC driver node, so as to adjust PL1 and PL2 of the CPU.
If the chip platform type of the CPU is Intel®, the scheduling executor may send the second CPU power consumption scheduling policy to the Intel DTT driver through the WMI plug-in. The second CPU power consumption scheduling policy may include the minimum value of PL1 (PL1_min), the maximum value of PL1 (PL1_max), PL2, the duration of PL2 (PL2_time), and EPO Gear, and the Intel DTT driver instructs the CPU to operate based on the second CPU power consumption scheduling policy.
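The fields of the second CPU power consumption scheduling policy can be modeled as a small record with a consistency check. The units and the ordering constraint below are assumptions for illustration; the text itself only names the fields.

```python
from dataclasses import dataclass

@dataclass
class SecondCpuPowerPolicy:
    pl1_min: int    # minimum value of PL1 (watts, assumed unit)
    pl1_max: int    # maximum value of PL1
    pl2: int        # short-term power limit PL2
    pl2_time: int   # duration PL2 may be sustained (seconds, assumed unit)
    epo_gear: int   # EPO Gear setting

    def is_consistent(self) -> bool:
        # Assumed sanity check: the PL1 range is ordered and PL2 is at least PL1_max.
        return self.pl1_min <= self.pl1_max <= self.pl2
```

Such a record is what the sketch imagines being serialized toward the DTT driver through the WMI plug-in.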
Another possible software system of the electronic device 100 is described next.
The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In this embodiment, the software system of the electronic device 100 is described by taking a Windows system with a layered architecture as an example.
Fig. 4 is a block diagram of a software system of the electronic device 100 according to an embodiment of the present application. Referring to fig. 4, the layered architecture divides the software into several layers, each with a clear role and division of work. The layers communicate with each other through a software interface. In some embodiments, the Windows system includes an application layer, a subsystem dynamic link library, a driver layer, and a firmware layer.
As shown in FIG. 4, the application layer includes applications for music, video, games, office, social networking, etc. The application layer also comprises a system probe module, a second scene recognition engine, a second scheduling engine, a policy configuration module, a manager interface module, and the like. In this embodiment, only some of the application programs are shown in the figure; the application layer may further include other application programs, such as a shopping application, a browser, and the like.
And the system probe module is used for reporting the state to the second scene recognition engine. The second scene recognition engine is used for completing recognition of the user scene according to the state reported by the system probe module and determining a scheduling strategy according to the recognized user scene. The second scheduling engine is used for scheduling the firmware layer according to the scheduling policy.
The policy configuration module is used to send a plurality of preconfigured scheduling policies to the second scene recognition engine; after recognizing the user scene, the second scene recognition engine searches these scheduling policies for one that matches the recognized user scene. The manager interface module is used to provide the current power mode to the second scene recognition engine, and the second scene recognition engine may select a scheduling policy that matches the current power mode and the current user scene.
The subsystem dynamic link library includes an API module, which includes Windows APIs, Windows native APIs, etc. Windows APIs can provide system call entries and internal function support for applications; the difference is that Windows native APIs are the native APIs of the Windows system. For example, Windows APIs may include user.dll and kernel.dll, and Windows native APIs may include ntdll.dll. user.dll is the Windows user interface and can be used to perform operations such as creating windows and sending messages. kernel.dll is used to provide an interface for applications to access the kernel. ntdll.dll is an important Windows NT kernel-level file that describes the interface of the Windows native NT API. When Windows starts, ntdll.dll resides in a particular write-protected region of memory, which prevents other programs from occupying that memory region.
The driver layer may include a process manager, a virtual memory manager, a secure reference monitor, an I/O manager, a power manager, a WMI plug-in, an Event driver node, an OS2SOC driver node.
The process manager is used to create and suspend processes and threads. The virtual memory manager implements "virtual memory". The virtual memory manager also provides basic support for the cache manager. The security reference monitor may enforce security policies on the local computer, protecting operating system resources and performing protection and monitoring of runtime objects. The I/O manager performs device-independent input/output and calls the appropriate device driver for further processing. The power manager may manage power state changes for all devices that support power state changes. The WMI plug-in can be used by the second scheduling engine to send scheduling policies to the firmware layer. The Event driver node may interact with the graphics card driver, the audio/video driver, the camera driver, the keyboard driver, etc., to enable the system probe module to detect various events (which may also be referred to as data or information); for example, it interacts with the graphics card driver so that the system probe module can monitor GPU video decoding events. The OS2SOC driver node may be used by the second scheduling engine to send scheduling policies to the firmware layer.
The firmware layer includes the various hardware configured for the electronic device 100 and the corresponding hardware drivers; for example, the firmware layer may include a CPU and a mouse, and may also include a mouse driver. The hardware of the electronic device 100 may be built on different hardware platforms; for example, the hardware platforms include Intel®, AMD®, and NVIDIA®, etc. The scheduling policies of the three hardware platforms may be different, so the second scheduling engine may distinguish the hardware platform type when determining the scheduling policy. In this case, the firmware layer may also include Intel DTT, the AMD power management framework (PMF), the NVIDIA database (DB), etc.
It should be noted that the embodiments of the present application are described using only a Windows system as an example. In other operating systems (for example, an Android system, an iOS system, etc.), the scheme of the present application can also be implemented, as long as the functions implemented by the respective functional modules are similar to those in the embodiments of the present application.
The workflow of the software and hardware for scheduling resources by the electronic device 100 described in the embodiment of fig. 4 above is described next.
Fig. 5 is a schematic workflow diagram of software and hardware for scheduling resources by the electronic device 100 according to an embodiment of the present application.
As shown in fig. 5, the operating system of the electronic device 100 includes a system probe module, a second scene recognition engine, a second scheduling engine, and a chip scheduling engine. The system probe module, the second scene recognition engine and the second scheduling engine are located at an application layer, the second scene recognition engine can be operated as a plug-in, and the second scheduling engine can be operated as a service. The chip scheduling engine is located at the driver layer and can operate as a service.
The second scene recognition engine may interact with the system probe module to recognize the user scene according to the operating state of the electronic device 100 fed back by the system probe module. The second scene recognition engine may also interact with the second scheduling engine: after recognizing the user scene, the second scene recognition engine determines the scheduling policy according to the user scene and issues the scheduling policy to the second scheduling engine. After receiving the scheduling policy, the second scheduling engine returns a receiving result to the second scene recognition engine, so as to inform the second scene recognition engine that the scheduling policy has been successfully received. The second scheduling engine then transmits the scheduling policy to the chip scheduling engine, and the chip scheduling engine executes the scheduling policy.
The second scene recognition engine comprises a scene recognition module, a scene library, a strategy scheduling module and a strategy library. The second scheduling engine comprises a scene interaction module, a scheduling policy fusion module and a scheduling executor. The chip scheduling engine comprises a WMI plug-in, an Event driving node and an OS2SOC driving node. The firmware layer includes Intel DTT, AMD PMF, NVIDIA DB, etc.
As shown in fig. 5, the second scenario recognition engine may interact with the system probe module, the policy configuration module, the housekeeping interface module, and the second scheduling engine, respectively.
The manager interface module may send a power mode currently used by the electronic device to the second scene recognition engine, where the power mode may assist the second scene recognition engine in determining the scheduling policy; the policy configuration module is used for sending a plurality of pre-configured scheduling policies to the second scene recognition engine.
The system probe module may acquire the operating state of the electronic device 100. For example, the system probe modules may include power state probes, peripheral state probes, audiovisual state probes, application switching probes, system load probes, system operational state probes, and the like.
The power state probe is used to detect the power state, including the remaining battery level, the power mode, the power plan, etc.; the power mode may include an AC state and a DC state, and the power plan may include an energy efficiency plan, a balanced plan, a performance plan, etc. The peripheral state probe is used to detect peripheral events, including mouse wheel sliding events, mouse click events, keyboard input events, microphone input events, camera input events, and the like. The audio/video state probe is used to detect the audio events and video events currently present in the electronic device 100. The application switching probe is used to detect the applications currently running in the electronic device 100, that is, to detect the focus application, non-focus applications, background applications, and the like, where the focus application is the application to which the focus window belongs, a non-focus application is an application whose window is open and not minimized but is not the focus window, and a background application is an application running in the background. The system load probe is used to detect the current load level of the system. The system working state probe is used to detect the current working state of the system, that is, to detect whether the system is in an idle state.
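The three-way classification performed by the application switching probe — focus, non-focus, and background applications — can be sketched as below. The window representation is a hypothetical one chosen for illustration.

```python
def classify_applications(open_windows, focus_window_id, background_apps):
    """Classify applications the way the application switching probe describes.

    open_windows: window id -> {"app": name, "minimized": bool} (assumed shape).
    background_apps: names of applications running with no open window."""
    focus_app = open_windows[focus_window_id]["app"]
    # Non-focus: owns an open, non-minimized window that is not the focus window.
    non_focus_apps = [
        w["app"] for wid, w in open_windows.items()
        if wid != focus_window_id and not w["minimized"]
    ]
    return {"focus": focus_app,
            "non_focus": non_focus_apps,
            "background": list(background_apps)}
```

Minimized windows are deliberately excluded from the non-focus list, matching the definition in the text.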
The system probe module detects the operation state of the electronic device 100 through various probes, and obtains the probe state. The second scenario recognition engine may subscribe to the system probe module for probe status. In this case, after the system probe module obtains the probe state, the probe state may be reported to the second scene recognition engine.
The scene library in the second scene recognition engine is used to store a plurality of user scenes, such as a plurality of main scenes including a social scene, an office scene, a browser scene, and the like; each main scene may be divided into a plurality of sub-scenes, for example, the browser scene includes a browser web-browsing scene, a browser audio playing scene, a browser video playing scene, and the like. The policy library in the second scene recognition engine is used to store the various scheduling policies sent by the policy configuration module. For example, the scheduling policies include big-little core scheduling, an office policy library in which the scheduling policies related to office applications are recorded, and the like. Big-little core scheduling is a capability provided by the architecture of the Intel® 12th-generation platform; it indicates whether the policy configuration prioritizes the use of big cores (favoring performance) or little cores (favoring energy efficiency).
The policy scheduling module may send a query subscription scenario request to the scenario recognition module, where the query subscription scenario request is used to trigger the scenario recognition module to perform scenario recognition, and the query subscription scenario request may be sent immediately after the electronic device 100 is powered on, or may be sent periodically, which is not limited in the embodiment of the present application.
After receiving the inquiry subscription scene request, the scene recognition module sends an inquiry subscription state request to the system probe module, wherein the inquiry subscription state request is used for indicating each probe in the system probe module to perform state detection/state determination and the like, and then the system probe module can report the state of the electronic device 100 to the scene recognition module. The scene recognition module determines the current user scene of the electronic device 100 from the scene library according to the state of the electronic device, and reports the user scene to the policy scheduling module. The policy scheduling module may determine a scheduling policy from a policy repository according to a user scenario in which the electronic device 100 is currently located.
In addition, the policy scheduling module may also receive the power mode issued by the manager interface module, and the power mode may be determined according to a user switch identifier issued by the manager interface module. The policy scheduling module may refer to the power mode when determining the scheduling policy, for example, determining a scheduling policy that matches the currently used power mode and the current user scene. Alternatively, the power mode may serve as a condition for the policy scheduling module to determine the scheduling policy: when the power mode is a preset mode, the policy scheduling module determines the scheduling policy according to the user scene.
In some embodiments, the policy scheduling module may also obtain the system load, for example, from the system load probe in the system probe module. The policy scheduling module may refer to the system load when determining the scheduling policy, for example, determining a scheduling policy that matches the current system load and the current user scene.
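A policy lookup keyed by scene and power mode, with a load-based adjustment, can be sketched as follows. The policy library contents, the EPP-style knob, and the thresholds are all illustrative assumptions.

```python
# Hypothetical policy library: (scene, power_mode) -> policy.
POLICY_LIBRARY = {
    ("office", "balanced"):    {"epp": 128},
    ("office", "power_saver"): {"epp": 192},
    ("video",  "balanced"):    {"epp": 96},
}

def pick_policy(scene: str, power_mode: str, load: float) -> dict:
    """Select the policy matching the scene and power mode, then adjust for load."""
    policy = dict(POLICY_LIBRARY[(scene, power_mode)])
    if load > 0.8:
        # Under heavy load, bias toward performance (lower EPP value, floor at 0).
        policy["epp"] = max(0, policy["epp"] - 32)
    return policy
```

This mirrors how the policy scheduling module is described as consulting both the manager interface module (power mode) and the system load probe.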
The policy scheduling module sends the scheduling policy to the scene interaction module; after receiving the scheduling policy, the scene interaction module returns a receiving result to the policy scheduling module, where the receiving result is used to inform the policy scheduling module that the scheduling policy has been successfully received. The scene interaction module sends the scheduling policy to the scheduling policy fusion module, and the scheduling policy fusion module parses and converts the scheduling policy, that is, converts the policy parameters in the scheduling policy into parameters recognized by the hardware platform. The scheduling policy fusion module then transmits the parsed scheduling policy to the scheduling executor, and the scheduling executor delivers the parsed and converted scheduling policy according to the hardware platform type.
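The "parse and convert" step can be sketched as a renaming of generic policy parameters into platform-recognized names. The mapping table below is entirely hypothetical; the platform-specific parameter names are assumptions chosen for illustration, not names taken from the patent.

```python
# Assumed mapping from generic parameter names to platform-recognized names.
PARAM_NAMES = {
    "intel": {"long_term_w": "PL1", "short_term_w": "PL2"},
    "amd":   {"long_term_w": "SPL", "short_term_w": "FPPT"},
}

def fuse_policy(policy: dict, platform: str) -> dict:
    """Convert generic policy parameters into the platform's own parameter names."""
    names = PARAM_NAMES[platform]
    return {names[key]: value for key, value in policy.items()}
```

The converted dictionary is what a scheduling executor would then deliver to the platform-specific driver node.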
For example, if the hardware platform type is AMD®, the scheduling executor may send the parsed and converted scheduling policy to the OS2SOC driver node; if the hardware platform type is Intel®, the scheduling executor may send the parsed and converted scheduling policy to the Intel DTT driver through the WMI plug-in. In this embodiment of the present application, the scheduling policy may be a chip scheduling policy that achieves an optimal balance of power consumption by adjusting the energy efficiency ratio of the chip. For example, the scheduling policy may be a power consumption scheduling policy of the CPU.
If the hardware platform type is AMD®, the scheduling executor may send the instructions in the scheduling policy for adjusting the EPP of the CPU to the power manager. In addition, the scheduling executor may also send the instructions in the scheduling policy for adjusting PL1 and PL2 of the CPU to the OS2SOC driver node. If the hardware platform type is Intel®, the scheduling executor may send the scheduling policy to the Intel DTT driver through the WMI plug-in; the scheduling policy may include the minimum value of PL1, the maximum value of PL1, PL2, the duration of PL2, and EPO Gear, and the Intel DTT driver instructs the CPU to operate based on the scheduling policy. After receiving the scheduling policy, the WMI plug-in, the Intel DTT driver, and the OS2SOC driver node may each return a receiving result, where the receiving result is used to indicate that the scheduling policy has been successfully received.
It should be noted that in the embodiments of fig. 2-5, the system probe module includes an audio/video status probe. Optionally, the audio/video status probe may comprise an audio probe and a video probe. The audio probe may detect audio events, and the video probe may detect video events.
The audio probe may detect various information related to the audio state. For example, a default audio device may be determined, an audio session may be determined, a session state of the audio session may be determined, and so forth. The implementation of the audio probe is described below.
Fig. 6 is a flowchart of an audio session acquisition method according to an embodiment of the present application. Referring to fig. 6, the method includes the steps of:
step 601: the observer registers a probe callback with the probe manager.
Optionally, an observer (Observer) may be a module that needs to perform a preset processing flow, including but not limited to determining a user scenario. By way of example, the observer may be the scene recognition module described in the embodiments of fig. 2-5 above. Of course, the observer may also be another module that needs to know the audio status, which is not limited in this embodiment of the present application.
The purpose of the observer registering a probe callback with the probe manager (ProbeManager) is so that the probe manager returns an audio event to the observer when it acquires one. The probe manager may be the audio/video status probe described in the embodiments of fig. 2-5 above. Of course, the probe manager may also be another module capable of probe management, which is not limited in this embodiment of the present application.
Step 602: the probe manager registers probe callbacks with the audio probes.
The purpose of the probe manager registering a probe callback with the audio probe (AudioProbe) is so that the audio probe returns an audio event to the probe manager when it acquires one.
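The two-level registration in steps 601 and 602 can be modeled as a simple callback chain. The classes below are toy stand-ins for the observer, probe manager, and audio probe; none of the names come from a real API.

```python
# Toy model (hypothetical classes) of the two-level callback registration:
# the observer registers with the probe manager, which registers with the
# audio probe; events then flow probe -> manager -> observer.
class AudioProbe:
    def __init__(self):
        self._callback = None

    def register(self, cb):
        self._callback = cb

    def emit(self, event):
        if self._callback:
            self._callback(event)

class ProbeManager:
    def __init__(self, probe):
        self._observer_cb = None
        probe.register(self._on_audio_event)   # step 602
    def register(self, cb):                    # step 601
        self._observer_cb = cb
    def _on_audio_event(self, event):
        # forward the audio event up to the observer
        if self._observer_cb:
            self._observer_cb(event)

received = []
probe = AudioProbe()
manager = ProbeManager(probe)
manager.register(received.append)              # observer's probe callback
probe.emit({"type": "audio_session_state_change"})
```

After `emit`, the event has propagated through the manager into the observer's `received` list.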
Step 603: the audio probe registers a default audio device change notification with the audio session controller.
The default audio device may be a default audio input device or a default audio output device. The default audio input device is the audio device currently allowed to input audio; for example, it may be a microphone, a headset, or the like. The default audio output device is the audio device currently allowed to output audio; for example, it may be a speaker, an earphone, an external speaker, or the like. Typically, only one default audio input device exists at a time, i.e., only one audio device is allowed to input audio at a time. Likewise, only one default audio output device exists at a time, i.e., only one audio device is allowed to output audio at a time.
An audio session controller (AudioSessionManager) may monitor the default audio device, i.e., detect whether the default audio device has changed. After the audio probe registers a default audio device change notification with the audio session controller, the audio session controller sends the default audio device change notification to the audio probe when it detects that the default audio device has changed.
Step 604: the audio session controller sends a default audio device change notification to the audio probe when a change in the default audio device is detected.
The default audio device change notification may carry the device identification of the changed default audio device (i.e., the most current default audio device). Optionally, the default audio device change notification may also carry the device identification of the default audio device (i.e., the historical default audio device) prior to the change.
For example, when detecting that the default audio device has changed from the external speaker to the earphone, the audio session controller may send a default audio device change notification to the audio probe, where the notification carries the device identifier of the changed default audio device (i.e., the earphone) and may further carry the device identifier of the default audio device before the change (i.e., the external speaker).
For another example, the audio session controller may send a default audio device change notification to the audio probe when detecting that the default audio device is changed from the earphone to the speaker, where the default audio device change notification carries the device identifier of the default audio device (i.e., the speaker) after the change, and may further carry the device identifier of the default audio device (i.e., the earphone) before the change.
Step 605: the audio probe registers an audio session state change notification with the audio session controller.
After receiving the default audio device change notification sent by the audio session controller, the audio probe knows that the default audio device has changed, and at this time, the audio probe can register the audio session state change notification with the audio session controller so as to obtain session states of all audio sessions using the latest default audio device.
Alternatively, the audio probe may enumerate all audio sessions using the default audio device and then register an audio session state change notification for those audio sessions with the audio session controller.
An application creates an audio session when it needs to input or output audio. The audio session is an intermediary between the application and the operating system, used to configure the application's audio behavior.
For example, a music application creates an audio session using the headphones when playing a song. In this case, when the song starts playing and audio begins to be output, the session state of the audio session changes from not outputting audio to outputting audio, and the audio session controller detects that the session state has changed. Then, when the song finishes playing and audio is no longer output, the session state changes from outputting audio to not outputting audio, and the audio session controller again detects that the session state has changed.
After the audio probe registers the audio session state change notification to the audio session controller, the audio session controller sends the audio session state change notification to the audio probe when detecting that the session state of a certain audio session is changed.
Step 606: and if the audio session controller detects that the session state of any audio session is changed, sending an audio session state change notification to the audio probe.
The audio session state change notification carries the process identifier of the process of the audio session whose session state changed (which may be referred to as the target audio session) and the session state of that audio session. Further, the notification may also carry the device state of the default audio device used by the target audio session. For example, when the default audio device is a speaker, the device state may include whether the speaker is muted, the current volume of the speaker, and the like. Further, the notification may also carry the application identifier of the application to which the target audio session belongs (i.e., the application that created the target audio session).
Step 607: the audio probe sends the audio session state change notification to the observer.
Based on the process identifier of the target audio session's process and the session state of the target audio session carried in the audio session state change notification, the observer can determine which application is using the default audio device and learn the current audio input state (i.e., whether audio is being input) or audio output state (i.e., whether audio is being output). This makes it convenient for the observer to determine the application running state, which may include, for example, voice chat, video conferencing, listening to music, watching video, and the like.
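The mapping from a state-change notification to "which application is in which audio state" can be sketched as a small lookup. The notification fields and the pid-to-application map below are illustrative assumptions, not the actual notification layout.

```python
# Sketch (hypothetical notification structure): derive which application is
# using the default audio device, and its audio state, from a notification
# carrying the session process id and session state.
def audio_state(notification: dict, pid_to_app: dict) -> tuple:
    """Map a state-change notification to (application, audio state)."""
    app = pid_to_app.get(notification["pid"], "unknown")
    return app, notification["session_state"]

note = {"pid": 4242, "session_state": "output"}
info = audio_state(note, {4242: "music_app"})   # → ("music_app", "output")
```

An observer could then combine this pair with an application whitelist to infer a running state such as "listening to music".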
For example, the observer may determine application running information of each application running according to the audio session state change notification, and determine a user scene in which the electronic device is located according to the application running information.
The application running information of an application may include its application name, application type, application running state, and the like.
In one possible manner, after receiving the focus window change event reported by the focus window probe, the observer can determine the current focus application, non-focus applications, background applications, and the like according to the process creation events, process exit events, window events, and the like detected by other probes in the system probe module, i.e., determine the application name and application type of each application. It can then determine the application running state of each application by combining the peripheral state, audio state (i.e., the audio session state change notification fed back by the audio probe), video state, process load, and the like detected by other probes in the system probe module, thereby obtaining the application running information of the current focus application, non-focus applications, background applications, and the like.
In the embodiment of the application, resource scheduling can also be performed according to the user scenario; specifically, the underlying hardware resources can be scheduled. That is, resources can be reasonably configured according to the current user scenario, so that they are allocated appropriately in that scenario as far as possible, reducing the power consumption of the electronic device, improving its battery life, and reducing the impact on the user scenario. In addition, allocating resources reasonably according to the current user scenario ensures that the resources meet the scenario's requirements, guaranteeing smooth operation of applications in that scenario and improving the user experience.
The audio session acquisition method described above is exemplarily described below with reference to the flow shown in fig. 7.
Fig. 7 is a flowchart of an audio session acquisition method according to an embodiment of the present application. Referring to fig. 7, the audio session acquisition method may be implemented by the following steps.
Step 701: the electronic device creates a device enumerator.
Illustratively, the device enumerator may be created by a CoCreateInstance function.
The device enumerator may enumerate all audio devices in the current electronic device.
Step 702: the electronic device determines a default audio device.
Among all the enumerated audio devices in the current electronic device, the audio device that is allowed to input audio or output audio may be determined as the default audio device.
For example, the default audio device may be determined by a GetDefaultAudioEndpoint function.
Alternatively, the electronic device may actively determine the default audio device at startup.
After determining the current default audio device, the electronic device may perform the following step 704 to activate the session manager of the default audio device.
Step 703: the electronic device registers a default audio device change notification.
The electronic device may register a default audio device change notification with the operating system so that when the default audio device changes, the default audio device change notification may be received, and the changed default audio device may be known according to the default audio device change notification.
For example, a default audio device change notification may be registered through the RegisterEndpointNotificationCallback function.
Alternatively, the electronic device may register a default audio device change notification at startup.
Optionally, if the electronic device receives a default audio device change notification during operation, the electronic device may determine a changed default audio device (i.e., the latest default audio device) according to the default audio device change notification, and then the electronic device may execute the following step 704 to activate a session manager of the default audio device.
Step 704: the electronic device activates a session manager of the default audio device.
The session manager of the default audio device is for managing audio sessions using the default audio device. Upon determining the default audio device, the electronic device may activate the corresponding session manager.
An application creates an audio session when it needs to input or output audio. The audio session is an intermediary between the application and the operating system, used to configure the application's audio behavior. For example, the browser does not create an audio session when it is first opened, but creates one when a page with music output is opened in the browser.
For example, the session manager of the default audio device may be activated through the Activate function.
Step 705: the electronic device registers an audio session creation notification.
The electronic device may register an audio session creation notification with the session manager of the default audio device. In this way, the session manager sends an audio session creation notification upon detecting that an application is creating an audio session using the default audio device. The audio session creation notification indicates that an application is creating an audio session using the default audio device. Optionally, the notification may carry the application identifier of the application that is creating the audio session, i.e., of the application to which the created audio session belongs.
For example, the audio session creation notification may be registered through the RegisterSessionNotification function.
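The registration-then-notification behavior of steps 704 and 705 can be modeled in miniature. The class below is a toy stand-in for the session manager, not the Windows COM interface; the notification payload is an illustrative assumption.

```python
# Minimal in-Python model (hypothetical class, not the Windows COM API) of
# registering an audio session creation notification with a session manager:
# registered callbacks fire when an application creates a session.
class SessionManager:
    def __init__(self):
        self._creation_cbs = []
        self.sessions = []

    def register_session_notification(self, cb):
        self._creation_cbs.append(cb)

    def create_session(self, app_id):
        session = {"app": app_id}
        self.sessions.append(session)
        for cb in self._creation_cbs:
            cb(session)   # the creation notification carries the app id

events = []
mgr = SessionManager()
mgr.register_session_notification(events.append)
mgr.create_session("recorder")
```

After `create_session`, the registered callback has received a notification identifying the creating application, mirroring the optional application identifier described above.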
The electronic device, upon receiving the audio session creation notification sent by the session manager of the default audio device, may perform the following step 706.
Step 706: the electronic device obtains an audio session enumerator.
The audio session enumerator may enumerate all audio sessions that are using the default audio device.
For example, the audio session enumerator may be obtained through the GetSessionEnumerator function.
Alternatively, the electronic device may directly obtain the audio session enumerator to enumerate all audio sessions that are using the default audio device upon receiving the audio session creation notification.
However, for some applications, such as the recorder application that ships with the Windows system, the audio session creation notification is triggered while the audio session is still being created; if the audio sessions are enumerated immediately, there is a small probability that the audio session created by the recorder application is not enumerated. For this case, after receiving the audio session creation notification, the embodiments of the present application may wait a preset duration (e.g., 1 second) before enumerating all audio sessions that are using the default audio device.
For example, the audio session creation notification may be processed after a 1-second delay in an independent event loop thread to enumerate all audio sessions that are using the default audio device. Specifically, in the embodiment of the present application, a separate sub-thread (i.e., an independent event loop thread, which may also be referred to as a second sub-thread) may be created to process the audio session creation notification asynchronously. That is, after an audio session creation notification is received and needs to be processed, it may be added to the task queue of the second sub-thread, and the second sub-thread is then executed. When executing, the second sub-thread reads the audio session creation notification from the task queue, delays for a preset duration, and then processes the notification. Processing the audio session creation notification means obtaining an audio session enumerator to enumerate all audio sessions that are using the default audio device. In this way, the delayed processing of the audio session creation notification is implemented in the second sub-thread without affecting the processing of other system events, i.e., without blocking them.
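The delayed processing in the dedicated event-loop thread can be sketched with a plain task queue and worker thread. The delay is shortened here for illustration (the text suggests about 1 second in practice), and the "enumeration" is represented by appending a marker rather than calling any real audio API.

```python
import queue
import threading
import time

# Sketch of the delayed enumeration described above: a dedicated event-loop
# thread reads creation notifications from a task queue, sleeps for a preset
# duration, then performs the enumeration.
PRESET_DELAY = 0.05   # illustrative; the text suggests ~1 second in practice

def make_worker(task_queue, enumerated):
    def worker():
        while True:
            notification = task_queue.get()
            if notification is None:       # sentinel to stop the loop
                break
            time.sleep(PRESET_DELAY)       # let the session finish creating
            enumerated.append(("all_sessions_after", notification))
    return worker

tasks = queue.Queue()
results = []
t = threading.Thread(target=make_worker(tasks, results))
t.start()
tasks.put("session_creation_notification")
tasks.put(None)
t.join()
```

Because the sleep happens inside this dedicated thread, the thread that received the notification returns immediately and other events are not blocked.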
In this way, since the audio sessions are enumerated only after a preset duration has elapsed since the audio session creation notification was received, the audio session being created has enough time to finish being created, so that all audio sessions can be accurately enumerated.
Step 707: the electronic device obtains the number of audio sessions.
After the electronic device obtains the number of audio sessions, all audio sessions using the default audio device can be enumerated by the audio session enumerator according to the number of audio sessions.
The number of audio sessions may be obtained by a GetCount function, for example.
Step 708: the electronic device obtains an audio session controller.
The audio session controller is configured to detect a session state of an audio session.
The audio session controller may be obtained by a GetSession function, for example.
Step 709: the electronic device registers an audio session state change notification.
After enumerating all audio sessions that are using the default audio device, the electronic device may register an audio session state change notification for those audio sessions with the audio session controller. In this way, the audio session controller sends an audio session state change notification when it detects that the session state of any one of the audio sessions has changed. The notification carries the process identifier of the process of the audio session whose session state changed (which may be referred to as the target audio session) and the session state of the target audio session. Further, the notification may also carry the device state of the default audio device used by the target audio session. For example, when the default audio device is a speaker, the device state may include whether the speaker is muted, the current volume of the speaker, and the like. Further, the notification may also carry the application identifier of the application to which the target audio session belongs (i.e., the application that created the target audio session).
Based on the process identifier of the target audio session's process and the session state of the target audio session carried in the audio session state change notification, the electronic device can determine which application is using the default audio device and learn the current audio input state (i.e., whether audio is being input) or audio output state (i.e., whether audio is being output). This makes it convenient to determine the application running state, which may include, for example, voice chat, video conferencing, listening to music, watching video, and the like.
For example, application running information of each running application may be determined according to the audio session state change notification, and a user scene in which the electronic device is located may be determined according to the application running information.
The application running information of an application may include its application name, application type, application running state, and the like.
In one possible manner, after receiving the focus window change event reported by the focus window probe, the current focus application, non-focus applications, background applications, and the like can be determined according to the process creation events, process exit events, window events, and the like detected by other probes in the system probe module, i.e., the application name and application type of each application are determined. The application running state of each application can then be determined by combining the peripheral state, audio state (i.e., the session state of the audio session), video state, process load, and the like detected by other probes in the system probe module, thereby obtaining the application running information of the current focus application, non-focus applications, background applications, and the like.
In the embodiment of the application, the electronic device may further perform resource scheduling according to the user scenario; specifically, the underlying hardware resources can be scheduled. That is, resources can be reasonably configured according to the current user scenario, so that they are allocated appropriately in that scenario as far as possible, reducing the power consumption of the electronic device, improving its battery life, and reducing the impact on the user scenario. In addition, allocating resources reasonably according to the current user scenario ensures that the resources meet the scenario's requirements, guaranteeing smooth operation of applications in that scenario and improving the user experience.
In this case, the operation of determining the application running state from the audio session state change notification may be as follows:
If the process identifier of the focus window's process is the same as the process identifier of the target audio session's process, the application running state of the focus application to which the focus window belongs is determined according to the session state of the target audio session. If the two process identifiers differ, the application identifier corresponding to the process identifier of the focus window's process is looked up; if that application identifier is the same as the application identifier of the application to which the target audio session belongs, the application running state of the focus application is determined according to the session state of the target audio session.
Alternatively, if the process identifier of the process of the focus window and the process identifier of the process of the audio session in the audio output state are the same, it may be determined that the focus application to which the focus window belongs is outputting audio.
If the process identifier of the process of the focus window is different from the process identifier of the process of the audio session in the audio output state, an application identifier corresponding to the process identifier of the process of the focus window can be obtained. As an example, it may be queried whether a process of the target audio session exists in the processes of the applications identified by the application identifier, if so, it is determined that the focus application to which the focus window belongs is outputting audio, otherwise, it may be determined, according to the application identifier corresponding to the process identifier of the process of the target audio session (i.e., the application identifier of the application to which the target audio session belongs), which application is outputting audio. As another example, it may be determined whether an application identifier corresponding to a process identifier of a process of the focus window is the same as an application identifier of an application to which the target audio session belongs, if so, it is determined that the focus application to which the focus window belongs is outputting audio, otherwise, it may be determined which application is outputting audio according to the application identifier of the application to which the target audio session belongs.
Similarly, if the process identifier of the process of the focus window and the process identifier of the process of the audio session in the audio input state are the same, it may be determined that the focus application to which the focus window belongs is inputting audio.
If the process identifier of the focus window's process differs from the process identifier of the process of the audio session in the audio input state, the application identifier corresponding to the process identifier of the focus window's process can be obtained. As an example, it may be queried whether the process of the target audio session exists among the processes of the application identified by that application identifier; if so, it is determined that the focus application to which the focus window belongs is inputting audio. Otherwise, which application is inputting audio may be determined according to the application identifier corresponding to the process identifier of the target audio session's process (i.e., the application identifier of the application to which the target audio session belongs). As another example, it may be determined whether the application identifier corresponding to the process identifier of the focus window's process is the same as the application identifier of the application to which the target audio session belongs; if so, it is determined that the focus application is inputting audio, otherwise which application is inputting audio may be determined according to the application identifier of the application to which the target audio session belongs.
When obtaining the application identifier corresponding to the process identifier of the focus window's process, it can be looked up in an application whitelist. The application whitelist is preset, for example by a technician according to usage requirements. It includes at least one process identifier for each of a plurality of application identifiers, where the process identifiers corresponding to an application identifier are those of the processes of the application it identifies. As an example, the application whitelist may include the process identifiers corresponding to the application identifier of every application in the electronic device. As another example, it may include only the process identifiers for a subset of the applications; in this case, each application in the whitelist is one that a technician needs to focus on according to usage requirements, for example an application frequently used by the user, or one that may greatly affect the energy consumption or battery life of the electronic device.
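The matching logic above can be condensed into one decision function. The data shapes (a whitelist mapping application ids to sets of process ids, plain integer pids) are illustrative assumptions.

```python
# Sketch of the matching logic described above (hypothetical data shapes):
# decide whether the focus application is the one producing audio, falling
# back to an application whitelist mapping app ids to their process ids.
def focus_app_is_audio_source(focus_pid, session_pid, session_app, whitelist):
    if focus_pid == session_pid:
        return True                     # same process: trivially the source
    # Look up which application owns the focus window's process,
    # then compare application identifiers.
    for app_id, pids in whitelist.items():
        if focus_pid in pids:
            return app_id == session_app
    return False

whitelist = {"video_app": {101, 102}, "chat_app": {201}}
```

For instance, a focus window in process 101 matches a session owned by process 102 because both processes belong to `video_app` in the whitelist.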
It should be noted that, for the audio events mentioned above (the default audio device change notification, audio session creation notification, audio session state change notification, and so on), in order to avoid blocking other system notifications, or blocking the system's notifications to other applications listening for the same event, while an audio event is being processed, the embodiment of the present application may process the audio events asynchronously. That is, a message queue may be added: when an audio event arrives, it is added to the message queue to wait, while other notifications are processed normally without waiting for the audio event to finish processing, so that blocking of the system and other applications is avoided.
Specifically, a separate sub-thread (which may be referred to as a first sub-thread) may be created to asynchronously handle audio events such as default audio device change notifications, audio session creation notifications, and audio session state change notifications. That is, upon receiving such audio events, they may be added to the message queue of the first sub-thread, which is then executed. When executing, the first sub-thread reads these audio events from the message queue in sequence for processing. In other words, by adding a message queue, audio events can be processed in a fixed sub-thread, eliminating the blocking that time-consuming processing logic would otherwise cause to the system and other applications.
In the embodiment of the application, processing the default audio device change notification means re-determining the default audio device. Processing the audio session creation notification means acquiring all audio sessions that are using the default audio device. Processing the audio session state change notification means sending the notification to the observer.
Specifically, for the audio session creation notification, a separate sub-thread (which may be referred to as a second sub-thread) may additionally be created to implement its delayed processing. That is, when the first sub-thread executes, if the audio event read from the message queue is an audio session creation notification, all audio sessions using the default audio device are acquired only after a preset duration has elapsed since the notification was read. Specifically, the first sub-thread processes the read audio session creation notification by adding it to the task queue of the second sub-thread and then executing the second sub-thread. When executing, the second sub-thread reads the audio session creation notification from the task queue, delays for the preset duration, and then processes it, i.e., acquires all audio sessions using the default audio device. In this way, the delayed processing of the audio session creation notification is implemented in the second sub-thread without affecting, i.e., without blocking, the first sub-thread's processing of other audio events (such as default audio device change notifications and audio session state change notifications).
Optionally, since most applications create audio sessions in a relatively short time, and only individual applications, such as the recorder application that ships with the Windows system, need a longer time, the second sub-thread may also determine, when reading an audio session creation notification from the task queue, the application to which the created audio session belongs as indicated by the notification. If the notification indicates that the created audio session belongs to a preset application (including but not limited to the recorder application), all audio sessions using the default audio device are acquired after a preset duration has elapsed since the notification was read. If the notification indicates that the created audio session does not belong to a preset application, all audio sessions using the default audio device may be acquired immediately when the notification is read.
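This optional optimization reduces to a single per-application decision: delay only for the known slow creators, otherwise enumerate at once. The application names and the delay constant below are illustrative.

```python
# Sketch of the optional optimization described above: delay enumeration
# only when the creating application is in a preset slow-creation set
# (e.g. the recorder application); otherwise enumerate immediately.
SLOW_CREATION_APPS = {"recorder"}
PRESET_DELAY_SECONDS = 1.0   # illustrative value; matches the ~1 s example

def enumeration_delay(app_id: str) -> float:
    """Return how long to wait before enumerating audio sessions."""
    return PRESET_DELAY_SECONDS if app_id in SLOW_CREATION_APPS else 0.0
```

A zero return value corresponds to enumerating the sessions as soon as the creation notification is read.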
It is noted that, in the embodiment of the present application, the default audio device and the default audio device change notification may be obtained through the Windows API; the audio sessions and their session states, including the audio input state and the audio output state, may be enumerated through the Windows API (for example, via Core Audio interfaces such as IMMDeviceEnumerator, IAudioSessionManager2, and IAudioSessionControl2); and the process identifier of the process of an audio session may likewise be obtained through the Windows API.
The embodiment of the present application can provide an accurate basis for computing a tuning strategy: by monitoring the creation, change, and expiration of audio sessions, it identifies whether an application is performing audio input or audio output. That is, the running state of an application is determined by listening for changes in the default audio device and for changes in the session state of the application's audio sessions.
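As a minimal sketch, the mapping from a session state to a coarse application running state might look as follows. The state names mirror the Windows AudioSessionState values (active, inactive, expired); the returned labels are illustrative assumptions.

```python
def app_running_state(session_state):
    """Map an audio session state to a coarse application running state."""
    if session_state == "active":
        return "using_audio"   # the app is inputting or outputting audio
    if session_state == "inactive":
        return "idle"          # session still open, but no streams running
    return "stopped"           # expired: the session has been released
```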
According to the embodiment of the present application, application scenes such as voice chat, video conferencing, music listening, and video watching can be identified by combining the basic running state of the audio session's process with an application white list. The user can also be reminded of improper settings. For example, when the user opens a social application, that is, when the current focus application is detected to be a social application, if the default audio device is found to be muted according to an audio session state change notification, a reminder message can be displayed to ask the user whether the default audio device needs to be enabled.
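The mute-reminder check can be sketched as follows; the white-list contents, the callback, and the reminder text are illustrative assumptions.

```python
# Hypothetical application white list of social applications.
SOCIAL_APPS = {"wechat.exe", "qq.exe"}

def maybe_remind(focus_app, device_muted, show_reminder):
    """If the focus application is a social app and the default audio
    device is muted, remind the user to enable the device."""
    if focus_app in SOCIAL_APPS and device_muted:
        show_reminder("The default audio device is muted. Enable it?")
        return True
    return False
```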
Fig. 8 is a schematic structural diagram of an audio session acquiring apparatus provided in an embodiment of the present application. The apparatus may be implemented by software, by hardware, or by a combination of the two, and may be part or all of a computer device, which may be the electronic device 100 in the embodiment of Fig. 1. Referring to Fig. 8, the apparatus includes: a determining module 801, a registering module 802, and an acquiring module 803.
The determining module 801 is configured to determine a default audio device, where the default audio device is an audio device that allows audio to be input or allows audio to be output.
The registering module 802 is configured to register an audio session creation notification with a session manager of the default audio device.
The acquiring module 803 is configured to, if the audio session creation notification sent by the session manager is received, acquire all audio sessions that are using the default audio device a preset duration after the audio session creation notification is received.
In the embodiment of the present application, since the audio sessions are enumerated only after a preset duration has elapsed since the audio session creation notification was received, the audio session being created is guaranteed enough time to finish being created, which in turn ensures that all audio sessions can be enumerated accurately.
It should be noted that the division into the above functional modules of the audio session acquiring apparatus provided in the foregoing embodiment is merely illustrative. In practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to perform all or part of the functions described above.
The functional units and modules in the above embodiments may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit, where the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are merely for ease of distinguishing them from each other and are not used to limit the protection scope of the embodiments of the present application.
The audio session acquiring apparatus provided in the foregoing embodiment and the audio session acquisition method embodiments belong to the same concept; for the specific working processes of the units and modules in the foregoing embodiment and the technical effects brought thereby, reference may be made to the method embodiments, and details are not described herein again.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a digital versatile disc (DVD)), or a semiconductor medium (e.g., a solid state disk (SSD)).
The above embodiments are not intended to limit the present application; any modifications, equivalent substitutions, improvements, and the like made within the technical scope of the present disclosure shall be included in the protection scope of the present application.

Claims (11)

1. An audio session acquisition method, the method comprising:
determining a default audio device, the default audio device being an audio device that allows audio to be input or allows audio to be output;
registering an audio session creation notification with a session manager of the default audio device;
and if the audio session creation notification sent by the session manager is received, acquiring all audio sessions that are using the default audio device a preset duration after the audio session creation notification is received.
2. The method of claim 1, wherein the method further comprises:
registering a default audio device change notification at start-up;
the determining a default audio device includes:
if a default audio device change notification is received during running, determining the changed default audio device according to the default audio device change notification;
before registering the audio session creation notification with the session manager of the default audio device, the method further comprises:
activating a session manager of the default audio device.
3. The method of claim 1 or 2, wherein after the acquiring all audio sessions that are using the default audio device a preset duration after receiving the audio session creation notification, the method further comprises:
registering an audio session state change notification with an audio session controller;
if the audio session state change notification sent by the audio session controller is received, determining an application running state according to the audio session state change notification;
determining a user scene according to the application running state;
and carrying out resource scheduling according to the user scene.
4. The method of claim 3, wherein the audio session state change notification carries a session state of a target audio session and a session identification of the target audio session, the target audio session being an audio session whose session state has changed, and the determining an application running state according to the audio session state change notification comprises:
if a process identification of a process of a focus window is the same as a process identification of a process of the target audio session, determining an application running state of a focus application to which the focus window belongs according to the session state of the target audio session;
and if the process identification of the process of the focus window is different from the process identification of the process of the target audio session, searching for an application identification corresponding to the process identification of the process of the focus window, and if the found application identification is identical to an application identification of an application to which the target audio session belongs, determining the application running state of the focus application to which the focus window belongs according to the session state of the target audio session.
5. The method of any one of claims 1-4, wherein the acquiring all audio sessions that are using the default audio device a preset duration after receiving the audio session creation notification comprises:
if an audio event is received, adding the audio event to a message queue of a first sub-thread, wherein the audio event comprises a default audio device change notification, an audio session creation notification, and an audio session state change notification;
and executing the first sub-thread, wherein the first sub-thread is configured to sequentially read audio events from the message queue, and, when an audio event read from the message queue is an audio session creation notification, acquire all audio sessions that are using the default audio device a preset duration after the audio session creation notification is read.
6. The method of claim 5, wherein the first sub-thread is configured to:
when an audio event read from the message queue is an audio session creation notification, add the read audio session creation notification to a task queue of a second sub-thread;
and execute the second sub-thread, wherein the second sub-thread is configured to read the audio session creation notification from the task queue and acquire all audio sessions that are using the default audio device a preset duration after the audio session creation notification is read.
7. The method of claim 6, wherein the second sub-thread is configured to:
when the audio session creation notification is read from the task queue, if the read audio session creation notification indicates that the created audio session belongs to a preset application, acquire all audio sessions that are using the default audio device a preset duration after the audio session creation notification is read.
8. An audio session acquisition device, the device comprising:
a determining module, configured to determine a default audio device, the default audio device being an audio device that allows audio to be input or allows audio to be output;
a registration module, configured to register an audio session creation notification with a session manager of the default audio device;
and an acquisition module, configured to, if the audio session creation notification sent by the session manager is received, acquire all audio sessions that are using the default audio device a preset duration after the audio session creation notification is received.
9. The apparatus of claim 8, wherein the acquisition module is configured to:
if an audio event is received, add the audio event to a message queue of a first sub-thread, wherein the audio event comprises a default audio device change notification, an audio session creation notification, and an audio session state change notification;
and execute the first sub-thread, wherein the first sub-thread is configured to sequentially read audio events from the message queue, and, when an audio event read from the message queue is an audio session creation notification, acquire all audio sessions that are using the default audio device a preset duration after the audio session creation notification is read.
10. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the method according to any one of claims 1-7.
11. A computer-readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the method according to any one of claims 1-7.
CN202210912804.3A 2022-05-16 2022-07-31 Audio session acquisition method, device, equipment and storage medium Active CN116028005B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2022105308672 2022-05-16
CN202210530867 2022-05-16

Publications (2)

Publication Number Publication Date
CN116028005A true CN116028005A (en) 2023-04-28
CN116028005B CN116028005B (en) 2023-10-20

Family

ID=86069520

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210912804.3A Active CN116028005B (en) 2022-05-16 2022-07-31 Audio session acquisition method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116028005B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111580776A (en) * 2020-04-28 2020-08-25 广州市百果园信息技术有限公司 Audio function recovery method, device, terminal and storage medium
WO2021021752A1 (en) * 2019-07-30 2021-02-04 Dolby Laboratories Licensing Corporation Coordination of audio devices
CN112527403A (en) * 2019-09-19 2021-03-19 华为技术有限公司 Application starting method and electronic equipment
CN113039774A (en) * 2018-11-21 2021-06-25 深圳市欢太科技有限公司 Method and device for processing application program and electronic equipment
CN114443256A (en) * 2022-04-07 2022-05-06 荣耀终端有限公司 Resource scheduling method and electronic equipment


Also Published As

Publication number Publication date
CN116028005B (en) 2023-10-20

Similar Documents

Publication Publication Date Title
CN115599513B (en) Resource scheduling method and electronic equipment
US10725972B2 (en) Continuous and concurrent device experience in a multi-device ecosystem
US20210389973A1 (en) Electronic device and method for operating the same
CN116028205B (en) Resource scheduling method and electronic equipment
CN116025580B (en) Method for adjusting rotation speed of fan and electronic equipment
CN116069209A (en) Focus window processing method, device, equipment and storage medium
CN116028005B (en) Audio session acquisition method, device, equipment and storage medium
CN116028210B (en) Resource scheduling method, electronic equipment and storage medium
CN116027879B (en) Method for determining parameters, electronic device and computer readable storage medium
CN116028211A (en) Display card scheduling method, electronic equipment and computer readable storage medium
EP4332756A1 (en) Application deployment method, distributed operation system, electronic device, and storage medium
CN116028208B (en) System load determining method, device, equipment and storage medium
CN116028207B (en) Scheduling policy determination method, device, equipment and storage medium
CN116055443B (en) Method for identifying social scene, electronic equipment and computer readable storage medium
US8209685B1 (en) Virtual machine device access
CN116028209B (en) Resource scheduling method, electronic equipment and storage medium
CN116027880B (en) Resource scheduling method and electronic equipment
CN116089055B (en) Resource scheduling method and device
CN116027878B (en) Power consumption adjustment method and electronic equipment
WO2023221752A1 (en) Information processing method and electronic device
CN117130454A (en) Power consumption adjustment method and electronic equipment
WO2023221720A1 (en) Resource scheduling method and apparatus
WO2020133455A1 (en) Application program management method, device, storage medium and electronic apparatus
CN116028206A (en) Resource scheduling method, electronic equipment and storage medium
CN117130772A (en) Resource scheduling method, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant