CN113132263A - Method and device for scheduling core processor and storage medium - Google Patents



Publication number
CN113132263A
Authority
CN
China
Prior art keywords
core processor
application
specified
scheduling
specified application
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010040360.XA
Other languages
Chinese (zh)
Other versions
CN113132263B (en)
Inventor
刘杨 (Liu Yang)
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202010040360.XA
Publication of CN113132263A
Application granted
Publication of CN113132263B
Legal status: Active

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/24Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/2475Traffic characterised by specific attributes, e.g. priority or QoS for supporting traffic characterised by the type of applications
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/131Protocols for games, networked simulations or virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer And Data Communications (AREA)

Abstract

The disclosure relates to a method, a device, and a storage medium for scheduling a core processor. The method is applied to a terminal that includes a multi-core processor and comprises the following steps: detecting an application running over a network connection; and, when a designated application is detected, migrating the soft interrupts of the designated application's network transmission packets to a designated core processor, where the designated core processor is capable of handling a soft-interrupt data throughput greater than a preset throughput threshold. With this method and device, the application runs smoothly and stuttering is avoided.

Description

Method and device for scheduling core processor and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for scheduling a core processor, and a storage medium.
Background
Currently, there is a class of applications that must run in a network environment and that place very high demands on the network when running on a terminal, a typical example being network games. When such an application runs on a terminal, if the processor does not meet the application's requirements for Wireless Fidelity (WiFi) data throughput and WiFi data processing speed, the application runs slowly over the network connection and the displayed pictures stutter, which degrades the user experience.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides a method for scheduling a core processor, a device for scheduling a core processor, and a storage medium.
According to a first aspect of the embodiments of the present disclosure, there is provided a method for scheduling a core processor, applied to a terminal that includes a multi-core processor and on which an application running over a network connection is installed. The method for scheduling a core processor includes: detecting an application running over the network connection; and, when a designated application is detected, migrating the soft interrupts of the designated application's network transmission packets to a designated core processor, where the designated core processor is capable of handling a soft-interrupt data throughput greater than a preset throughput threshold.
In one example, detecting an application running over a network connection includes: acquiring the top task process of the currently running task stack; and, when the task process at the top of the stack is a task process of the designated application, determining that the designated application is detected.
In one example, the top task process of the currently running task stack is acquired by periodic polling.
In one example, migrating the soft interrupts of the designated application's network transmission packets to a designated core processor when the designated application is detected comprises: when the designated application is detected, passing a message that the designated application is running to the bottom-layer driver; and having the bottom-layer driver migrate the soft interrupts of the designated application's network transmission packets to the designated core processor.
In one example, the designated application is a gaming application.
In an example, the multi-core processor includes a large-core processor and a small-core processor, and the designated core processor is a designated large-core processor.
According to a second aspect of the embodiments of the present disclosure, there is provided a core processor scheduling apparatus, applied to a terminal that includes a multi-core processor and on which an application running over a network connection is installed. The core processor scheduling apparatus includes: a detection unit configured to detect an application running over the network connection; and a processing unit configured to migrate, when the designated application is detected, the soft interrupts of the designated application's network transmission packets to a designated core processor, where the designated core processor is capable of handling a soft-interrupt data throughput greater than a preset throughput threshold.
In one example, the core processor scheduling apparatus further includes: an acquiring unit configured to acquire the top task process of the currently running task stack. The detection unit detects an application running over the network connection in the following way: when the task process at the top of the stack is a task process of the designated application, it is determined that the designated application is detected.
In one example, the top task process of the currently running task stack is acquired by periodic polling.
In one example, the core processor scheduling apparatus further includes: a delivery unit configured to deliver a message that the designated application is running.
The processing unit migrates the soft interrupts of the designated application's network transmission packets to the designated core processor in the following way: when the detection unit detects the designated application, the delivery unit delivers the message that the designated application is running to the bottom-layer driver; and the bottom-layer driver migrates the soft interrupts of the designated application's network transmission packets to the designated core processor.
In one example, the designated application is a gaming application.
In an example, the multi-core processor includes a large-core processor and a small-core processor, and the designated core processor is a designated large-core processor.
According to a third aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer-executable instructions that, when executed by a processor, perform the method of core processor scheduling in the aforementioned first aspect or any one of the examples of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a core processor scheduling apparatus including: a memory configured to store instructions; and a processor configured to invoke the instructions to execute the core processor scheduling method in the foregoing first aspect or any example of the first aspect.
The technical solution provided by the embodiments of the present disclosure can have the following beneficial effects: an application running over a network connection is detected, and when a designated application is detected, the soft interrupts of the designated application's network transmission packets are migrated to a designated core processor capable of handling a soft-interrupt data throughput greater than a preset throughput threshold. By migrating these soft interrupts to a core processor with strong processing capability, the application runs smoothly over the network connection and the user experience is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flowchart illustrating a method for scheduling a core processor in accordance with an example embodiment.
FIG. 2 is a flowchart illustrating a method for scheduling a core processor in accordance with an example embodiment.
FIG. 3 is a flowchart illustrating a method for scheduling a core processor in accordance with an example embodiment.
FIG. 4 is a schematic diagram illustrating a method of scheduling a core processor in accordance with an example embodiment.
Fig. 5 is a diagram illustrating an effect of processing delay of a specific application to which the scheduling method of the core processor according to the embodiment of the present disclosure is not applied.
Fig. 6 is a diagram illustrating an effect of processing delay of a specific application to which the method for scheduling a core processor according to the embodiment of the present disclosure is applied.
FIG. 7 is a block diagram illustrating a core processor scheduling apparatus in accordance with an example embodiment.
FIG. 8 is a block diagram illustrating an apparatus in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The technical solution of the exemplary embodiments of the present disclosure may be applied to a scenario in which a WiFi-based application runs on a multi-core terminal. In the exemplary embodiments described below, a terminal is sometimes also referred to as an intelligent terminal device. The terminal may be a mobile terminal, also referred to as User Equipment (UE) or a Mobile Station (MS); it is a device, or a chip disposed in a device, that provides voice and/or data connectivity to a user, such as a handheld device or vehicle-mounted device with a wireless connection function. Examples of terminals include: mobile phones, tablet computers, notebook computers, palmtop computers, Mobile Internet Devices (MID), wearable devices, Virtual Reality (VR) devices, Augmented Reality (AR) devices, wireless terminals in industrial control, wireless terminals in unmanned driving, wireless terminals in remote operation, wireless terminals in smart grids, wireless terminals in transportation safety, wireless terminals in smart cities, wireless terminals in smart homes, and the like.
At present, as the performance of terminal Central Processing Units (CPUs) improves and the number of core processors grows, terminal heat generation and power consumption also rise markedly. To meet the requirements of both high CPU performance and low power consumption, terminal CPU manufacturers have begun to design multi-core CPUs with both large-core and small-core processors, which divide the data processing work between them. A core processor with strong processing capability and high processing speed is called a large-core processor, and one with weaker processing capability and lower processing speed is called a small-core processor.
In the related art, because network transmission packets are all processed via soft interrupts, the soft interrupts for network transmission packets are run on a large-core CPU processor only when the traffic exceeds 100M. Some network-connected applications send and receive network transmission packets at low rates while running, yet have strict real-time requirements on the network environment. When the data of such applications is processed on a small-core processor, the applications run slowly and the displayed pictures stutter, which degrades the user experience.
For example, a network game application has very high requirements on the network environment, but its network transmission data throughput while running is far below the 100M threshold. The network game is therefore processed on a small-core processor, so it runs unsmoothly and the display stutters.
Therefore, how to increase the processing speed of applications that have high network-environment requirements but small network transmission data throughput, so that such network-connected applications run smoothly, is a problem that urgently needs to be solved.
Fig. 1 is a flowchart illustrating a method for scheduling a core processor according to an exemplary embodiment, where as shown in fig. 1, the method for scheduling a core processor is used in a terminal, and the terminal includes a multi-core processor, and the method for scheduling a core processor includes the following steps.
In step S11, an application running over the network connection is detected.
The network involved in the present disclosure may be a data network provided by a mobile operator or a WiFi network; the embodiments of the present disclosure are not limited in this respect. The application running over the network connection may be an application, such as a network game, that runs in the foreground of the terminal and relies on the network connection.
In step S12, when the designated application is detected, the soft interrupts of the designated application's network transmission packets are migrated to a designated core processor, where the designated core processor is capable of handling a soft-interrupt data throughput greater than the preset throughput threshold.
The designated application in the present disclosure may be a preset type of application, or a specific application within a given type.
In one embodiment, applications running over the network connection are detected, and when a detected application is the designated application, the soft interrupts of its network transmission packets are migrated to the designated core processor.
The designated core processor is one of the cores of the multi-core processor and is capable of handling a soft-interrupt data throughput greater than the preset throughput threshold.
In one embodiment, the designated core processor is a large-core processor in the multi-core processor.
For example, in an 8-core processor, four cores may be large-core processors and four may be small-core processors. The designated core processor is then a designated large-core processor among the eight.
In this exemplary embodiment, an application running over the network connection is detected, and when the designated application is detected, the soft interrupts of its network transmission packets are migrated to a designated core processor capable of handling a soft-interrupt data throughput greater than a preset throughput threshold. By migrating these soft interrupts to a core processor with strong processing capability, the application runs smoothly over the network connection and the user experience is improved.
FIG. 2 is a flowchart illustrating a method for scheduling a core processor in accordance with an example embodiment. As shown in fig. 2, step S11 shown in fig. 1 includes the following steps.
In step S111, the top task process of the currently running task stack is acquired.
A stack, as a data structure, is a special linear table on which insert and delete operations can be performed only at one end. It stores data on a last-in, first-out (LIFO) basis: the first data pushed is pressed to the bottom of the stack, the last data pushed sits at the top, and data is popped from the top when it needs to be read.
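As a hypothetical illustration (the original publication contains no code at this point), the LIFO behaviour described above can be sketched with a Python list used as a stack:

```python
# A Python list used as a stack: append() pushes onto the top,
# pop() removes from the top, and [-1] reads the top without removing it.
stack = []
stack.append("task_a")   # pushed first, ends up at the bottom
stack.append("task_b")   # pushed last, sits at the top of the stack
top = stack[-1]          # reading the top task without popping it
popped = stack.pop()     # popping yields the most recently pushed item
```

Here `top` and `popped` both refer to `"task_b"`, the last item pushed, matching the LIFO principle.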
Using these properties of the stack, the present disclosure can obtain the top task process of the currently running task stack (the RunningTask stack) and determine the application running in the foreground from that top task process.
For example, the standard RunningTask interface may be used to obtain the task process at the top of the RunningTask stack, and that top task process is then checked to determine whether it is a task process of the designated application.
For example, acquiring an application running in the foreground may be determined in the user space of the operating system by:
(The corresponding code listing appears only as an image in the original publication.)
In this method, the RunningTask stack is obtained; if it is not empty, the activity at the top of the stack, that is, the top task process, is obtained, its package name is read, and whether the running application is the designated application is determined from that package name.
By the method above, the application running in the foreground can be acquired, and on that basis it can be determined whether the foreground application is the designated application.
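A minimal sketch of this detection step, assuming a simulated running-task stack and a hypothetical package name (neither is from the patent; on a real Android terminal the stack would come from the RunningTask interface):

```python
import time

DESIGNATED_PACKAGES = {"com.example.netgame"}  # hypothetical designated app

def top_task_package(running_task_stack):
    """Return the package name of the top task process, or None if the stack is empty."""
    if not running_task_stack:
        return None
    return running_task_stack[-1]["package"]

def designated_app_detected(running_task_stack):
    """The top-of-stack task process identifies the foreground application."""
    return top_task_package(running_task_stack) in DESIGNATED_PACKAGES

def poll_for_designated_app(get_stack, interval_s=0.01, max_polls=3):
    """Periodic acquisition of the stack-top task process ('timing acquisition')."""
    for _ in range(max_polls):
        if designated_app_detected(get_stack()):
            return True
        time.sleep(interval_s)
    return False
```

Here `get_stack` is any callable returning the current stack, which keeps the sketch independent of the platform API.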
In step S112, when the top task process is a task process of the designated application, it is determined that the designated application is detected.
In the present disclosure, the top task process of the running-application stack is acquired, whether it is a task process of the designated application is judged, and if so, it is determined that the designated application is detected.
In this exemplary embodiment, the task process at the top of the RunningTask stack is acquired and checked against the designated application; when it is determined to be a task process of the designated application, the designated application is deemed detected. The running designated application can thus be identified in time, which in turn guarantees the processing speed of the designated application's subsequent data.
FIG. 3 is a flowchart illustrating a method for scheduling a core processor in accordance with an example embodiment. As shown in fig. 3, step S12 shown in fig. 1 includes the following steps.
In step S121, when the designated application is detected, a message that the designated application is running is passed to the bottom-layer driver.
In the present disclosure, passing this message to the underlying driver when the designated application is detected may be implemented as follows:
when an application running over the network connection is detected and it is determined to be the designated application, the application-layer event, that is, the event that the designated application is running, is passed to the bottom-layer driver.
After the bottom-layer driver receives the event passed down from the application layer, it changes its original driving strategy for the designated application according to that event and migrates the soft interrupts of the designated application's network transmission packets to the designated core processor, that is, to a large-core processor with strong processing capability.
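This application-layer-to-driver hand-off can be sketched with a stub; the event format and policy names below are illustrative assumptions, not the patent's actual driver interface:

```python
class UnderlyingDriverStub:
    """Stand-in for the bottom-layer driver: on receiving a 'designated app
    running' event it switches its soft-interrupt packet-steering policy."""

    def __init__(self):
        self.softirq_policy = "default"  # the preset driving strategy

    def on_app_event(self, event):
        if event.get("type") == "designated_app_running":
            # change the original driving strategy for the designated app
            self.softirq_policy = "big_core"

def notify_driver(driver, package_name):
    """Pass the application-layer event down to the driver."""
    driver.on_app_event({"type": "designated_app_running",
                         "package": package_name})
```

In a real kernel this hand-off would go through a driver ioctl or sysfs node rather than a Python call; the stub only shows the control flow.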
In step S122, the bottom-layer driver migrates the soft interrupts of the designated application's network transmission packets to the designated core processor.
In the present disclosure, after receiving the message that the designated application is running, the underlying driver may migrate the soft interrupts of the designated application's network transmission packets to the designated core processor using the affinity method.
Since affinity is a CPU scheduling attribute, processes can be "migrated" to one core processor or to a group of core processors by setting their affinity.
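On Linux, this kind of migration can be expressed with the process-affinity API; the sketch below uses Python's `os.sched_setaffinity` and assumes, as in fig. 4, that cores 0-3 are the large cores (actual core numbering varies by SoC):

```python
import os

BIG_CORES = {0, 1, 2, 3}  # assumed large-core IDs; hypothetical for this sketch

def migrate_to_big_cores(pid=0):
    """Pin a process (0 = the calling process) to the designated large cores,
    restricted to the CPUs actually available on this machine."""
    available = os.sched_getaffinity(0)
    target = (BIG_CORES & available) or available  # fall back if no overlap
    os.sched_setaffinity(pid, target)
    return os.sched_getaffinity(pid)
```

Note that the patent migrates soft interrupts rather than a whole process; on a real system the driver would steer softirq processing (for which this process-level call is only an analogy).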
In the present disclosure, an application running over the network connection is detected, and when the designated application is determined to be detected, the event that it is running is passed to the bottom-layer driver. The driver receives the event from the application layer, changes its original driving strategy for the designated application accordingly, and migrates the soft interrupts of the designated application's network transmission packets to the designated large-core processor via the affinity method.
For example, if the designated application is network game A, the underlying driver receives the event, passed by the application layer, that network game A is running, and migrates the soft interrupts of the game application's packets to the designated large-core processor through the affinity method.
Scheduling the soft interrupts of the designated application's packets to a designated large-core processor through the affinity method may be implemented, for example, as follows:
(The corresponding code listing appears only as an image in the original publication.)
In this method, the throughput of the currently received packets is scored; if the received throughput is low, processing follows the preset application driving strategy, and if it is high, the designated application is migrated to the large-core processor for processing once it is detected running in the foreground.
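One plausible reading of the scoring logic above, with the threshold units and core IDs as labeled assumptions (the patent specifies only "100M" and does not give core numbers):

```python
THROUGHPUT_THRESHOLD = 100 * 1024 * 1024  # the "100M" threshold, assumed bytes/s

def choose_core_set(rx_throughput, designated_app_in_foreground,
                    big_cores=frozenset({0, 1, 2, 3}),
                    default_cores=frozenset({4, 5, 6, 7})):
    """Score the current receive throughput and pick a core set: high-throughput
    traffic, or a designated app running in the foreground, goes to the large
    cores; everything else keeps the preset driving strategy."""
    if rx_throughput > THROUGHPUT_THRESHOLD or designated_app_in_foreground:
        return big_cores
    return default_cores
```

This captures the key point of the disclosure: a designated application is steered to the large cores even when its throughput is far below the threshold.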
Fig. 4 is a schematic diagram of the task allocation of each core after the soft interrupts of the designated application's packets are scheduled to the designated large-core processor by the affinity method. In fig. 4, after such scheduling, the Snapdragon tool is used to check the task allocation of each core, showing that the task load after scheduling falls mainly on the large-core processors CPU0-CPU3.
In this exemplary embodiment, an application running over the network connection is detected, and in response to detecting that it is the designated application, the scheduling attribute used by the affinity method makes it possible to migrate the soft interrupts of the designated application's network transmission packets to a designated large-core processor, improving the processing speed of the designated application; scheduling via the affinity method also improves the overall system performance of the terminal.
To further show that the processing capability for the designated application's data becomes stronger once the core processor scheduling method of the present disclosure is applied, actual test results are described below.
Fig. 5 shows the average delay of the designated application's data processing without the core processor scheduling method of the embodiment of the present disclosure, and fig. 6 shows the average delay with the method applied.
As can be seen from fig. 5 and fig. 6, after the core processor scheduling method of the present disclosure is applied, the maximum delay of the designated application's data processing drops considerably and the average delay is lower.
Based on the same inventive concept, the disclosure also provides a scheduling device of the core processor.
It is understood that, in order to implement the above functions, the core processor scheduling apparatus provided in the embodiments of the present disclosure includes a hardware structure and/or a software module corresponding to executing each function. The disclosed embodiments can be implemented in hardware or a combination of hardware and computer software, in combination with the exemplary elements and algorithm steps disclosed in the disclosed embodiments. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
FIG. 7 is a block diagram illustrating a core processor scheduler in accordance with an example embodiment. Referring to fig. 7, the core processor scheduling apparatus is applied to a terminal, the terminal includes a multi-core processor, and an application operating based on network connection is installed on the terminal, the core processor scheduling apparatus includes: a detection unit 701 and a processing unit 702.
Wherein, the detecting unit 701 is configured to detect an application running based on a network connection;
the processing unit 702 is configured to, when the specified application is detected, migrate the soft interrupt of the network transmission data packet of the specified application to a specified core processor, where the specified core processor has a capability of processing the soft interrupt data throughput that is greater than a preset throughput threshold.
In one example, the core processor scheduling apparatus further includes: an obtaining unit 703 configured to obtain the top task process of the currently running task stack. The detection unit 701 detects an application running over the network connection in the following manner: when the task process at the top of the stack is a task process of the designated application, it is determined that the designated application is detected.
In one example, the top task process of the currently running task stack is acquired by periodic polling.
In one example, the core processor scheduling apparatus further includes: a delivery unit 704 configured to deliver a message that the designated application is running. The processing unit 702 migrates the soft interrupts of the designated application's network transmission packets to the designated core processor in the following manner: when the detection unit 701 detects the designated application, the delivery unit 704 delivers the message that the designated application is running to the bottom-layer driver; and the bottom-layer driver migrates the soft interrupts of the designated application's network transmission packets to the designated core processor.
In one example, the designated application is a gaming application.
In an example, the multi-core processor includes a large-core processor and a small-core processor, and the designated core processor is a designated large-core processor.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
FIG. 8 is a block diagram illustrating an apparatus 800 for core processor scheduling in accordance with an example embodiment. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 8, the apparatus 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation at the device 800. Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power component 806 provides power to the various components of the device 800. The power component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touch, slide, and gesture actions on the touch panel. The touch sensors may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focus and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the device 800. For example, the sensor assembly 814 may detect the open/closed state of the device 800 and the relative positioning of components, such as the display and keypad of the apparatus 800. The sensor assembly 814 may also detect a change in position of the apparatus 800 or of a component of the apparatus 800, the presence or absence of user contact with the apparatus 800, the orientation or acceleration/deceleration of the apparatus 800, and a change in temperature of the apparatus 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communications between the apparatus 800 and other devices in a wired or wireless manner. The device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 820 of the device 800 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It is further understood that "a plurality" in this disclosure means two or more, and other quantifiers are analogous. "And/or" describes the association relationship of the associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. The singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It will be further understood that the terms "first," "second," and the like are used to describe various information and that such information should not be limited by these terms. These terms are only used to distinguish one type of information from another and do not denote a particular order or importance. Indeed, the terms "first," "second," and the like are fully interchangeable. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure.
It is further to be understood that although operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in serial order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (14)

1. A core processor scheduling method, applied to a terminal, wherein the terminal comprises a multi-core processor, the method comprising:
detecting an application running over a network connection;
when a specified application is detected, migrating the soft interrupt of the network transmission data packet of the specified application to a specified core processor, wherein the specified core processor has a capacity for processing soft interrupt data throughput greater than a preset throughput threshold.
2. The method according to claim 1, wherein the detecting an application running over a network connection comprises:
acquiring a top-of-stack task process of a currently running task thread stack;
and when the top-of-stack task process is the task process of the specified application, determining that the specified application is detected.
3. The core processor scheduling method according to claim 2, wherein the top-of-stack task process of the currently running task thread stack is acquired periodically at timed intervals.
4. The method for scheduling a core processor according to any one of claims 1 to 3, wherein the migrating the soft interrupt of the network transmission data packet of the specified application to the specified core processor when the specified application is detected comprises:
when the specified application is detected, delivering a message indicating that the specified application is running to an underlying driver;
and migrating, by the underlying driver, the soft interrupt of the network transmission data packet of the specified application to a specified core processor.
5. The method of claim 4, wherein the specified application is a gaming application.
6. The method according to claim 1, wherein the multi-core processor comprises a large-core processor and a small-core processor;
the specified core processor is a specified large core processor.
7. A core processor scheduling apparatus, applied to a terminal, wherein the terminal comprises a multi-core processor and an application running based on a network connection is installed on the terminal, the apparatus comprising:
a detection unit configured to detect an application that runs based on a network connection;
a processing unit configured to, when a specified application is detected, migrate the soft interrupt of the network transmission data packet of the specified application to a specified core processor, wherein the specified core processor has a capacity for processing soft interrupt data throughput greater than a preset throughput threshold.
8. The apparatus according to claim 7, further comprising:
an acquiring unit configured to acquire a top-of-stack task process of a currently running task thread stack;
wherein the detection unit detects the application running based on the network connection by:
determining that the specified application is detected when the top-of-stack task process is the task process of the specified application.
9. The core processor scheduling apparatus according to claim 8, wherein the top-of-stack task process of the currently running task thread stack is acquired periodically at timed intervals.
10. The core processor scheduling apparatus according to any one of claims 7 to 9, further comprising: a delivery unit configured to deliver a message indicating that the specified application is running;
the processing unit migrates the soft interrupt of the network transmission data packet of the specified application to a specified core processor in the following way:
when the detection unit detects the specified application, the delivery unit delivers a message indicating that the specified application is running to an underlying driver;
and the underlying driver migrates the soft interrupt of the network transmission data packet of the specified application to a specified core processor.
11. The core processor scheduling apparatus of claim 10, wherein the specified application is a gaming application.
12. The core processor scheduling apparatus of claim 7, wherein the multi-core processor comprises a large-core processor and a small-core processor;
the specified core processor is a specified large core processor.
13. A core processor scheduling apparatus, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute the core processor scheduling method of any one of claims 1 to 6.
14. A non-transitory computer-readable storage medium storing computer-executable instructions that, when executed by a processor, perform the core processor scheduling method of any one of claims 1 to 6.
CN202010040360.XA 2020-01-15 2020-01-15 Kernel processor scheduling method, kernel processor scheduling device and storage medium Active CN113132263B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010040360.XA CN113132263B (en) 2020-01-15 2020-01-15 Kernel processor scheduling method, kernel processor scheduling device and storage medium


Publications (2)

Publication Number Publication Date
CN113132263A true CN113132263A (en) 2021-07-16
CN113132263B CN113132263B (en) 2024-02-13

Family

ID=76771210

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010040360.XA Active CN113132263B (en) 2020-01-15 2020-01-15 Kernel processor scheduling method, kernel processor scheduling device and storage medium

Country Status (1)

Country Link
CN (1) CN113132263B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180232321A1 (en) * 2017-02-16 2018-08-16 Qualcomm Incorporated Optimizing network driver performance and power consumption in multi-core processor-based systems
CN109117291A (en) * 2018-08-27 2019-01-01 惠州Tcl移动通信有限公司 Data dispatch processing method, device and computer equipment based on multi-core processor
CN109726135A (en) * 2019-01-25 2019-05-07 杭州嘉楠耘智信息科技有限公司 Multi-core debugging method and device and computer readable storage medium
CN110347508A (en) * 2019-07-02 2019-10-18 Oppo广东移动通信有限公司 Thread distribution method, device, equipment and the readable storage medium storing program for executing of application program
CN110462590A (en) * 2017-03-31 2019-11-15 高通股份有限公司 For based on central processing unit power characteristic come the system and method for dispatcher software task


Cited By (2)

Publication number Priority date Publication date Assignee Title
CN114356445A (en) * 2021-12-28 2022-04-15 山东华芯半导体有限公司 Multi-core chip starting method based on large and small core architectures
CN114356445B (en) * 2021-12-28 2023-09-29 山东华芯半导体有限公司 Multi-core chip starting method based on large and small core architecture

Also Published As

Publication number Publication date
CN113132263B (en) 2024-02-13

Similar Documents

Publication Publication Date Title
WO2021032097A1 (en) Air gesture interaction method and electronic device
US11963142B2 (en) Slot format indication method, apparatus and system, and device and storage medium
EP3133874B1 (en) Method and apparatus for starting energy saving mode
RU2755921C1 (en) Method, apparatus, device and system for indicating time slot format and data media
EP4030817A1 (en) Data processing method and apparatus, and electronic device and computer readable storage medium
EP3276301A1 (en) Mobile terminal and method for calculating a bending angle
EP3015983B1 (en) Method and device for optimizing memory
EP3232325B1 (en) Method and device for starting application interface
CN112217990B (en) Task scheduling method, task scheduling device and storage medium
CN111610912A (en) Application display method, application display device and storage medium
US9678868B2 (en) Method and device for optimizing memory
CN109062625B (en) Application program loading method and device and readable storage medium
CN107371222B (en) Virtual card disabling method and device
EP3280217B1 (en) Method and device for establishing service connection
CN113132263B (en) Kernel processor scheduling method, kernel processor scheduling device and storage medium
US20170147134A1 (en) Method and apparatus for controlling touch-screen sensitivity
US11533728B2 (en) Data transmission method and apparatus on unlicensed frequency band
CN111225111A (en) Function control method, function control device, and storage medium
CN112423092A (en) Video recording method and video recording device
JP2021531519A (en) Touch signal processing methods, devices and media
CN112307229A (en) Data processing method and device, electronic equipment and computer readable storage medium
EP3961363A1 (en) Number input method, apparatus, and storage medium
EP3890406B1 (en) Communication data processing method and apparatus, electronic device and storage medium
CN116089025A (en) Processor frequency control method, device and storage medium
CN113254092A (en) Processing method, apparatus and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant