CN115599574B - Graphic processing system, electronic component, electronic device, and information processing method

Info

Publication number
CN115599574B
Authority
CN
China
Prior art keywords
state information
fifo
gpu core
information
head
Legal status
Active
Application number
CN202211587596.0A
Other languages
Chinese (zh)
Other versions
CN115599574A (en)
Inventor
范文会
杨金刚
刘虎
Current Assignee
Beijing Xiangdixian Computing Technology Co Ltd
Original Assignee
Beijing Xiangdixian Computing Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiangdixian Computing Technology Co Ltd
Priority to CN202211587596.0A
Publication of CN115599574A
Application granted
Publication of CN115599574B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/54 Interprogram communication
    • G06F9/546 Message passing systems or structures, e.g. queues

Abstract

The present disclosure provides a graphics processing system, an electronic component, an electronic device, and an information processing method. The system includes: a FIFO configured to buffer state information; a management module configured to buffer currently received first state information into the FIFO and, upon detecting that the first state information failed to be buffered because the FIFO overflowed, to clear all state information currently buffered in the FIFO and send a FIFO overflow instruction to the connected GPU core; and the GPU core configured to, after receiving the overflow instruction, obtain target state information that was not successfully sent to the GPU core by polling external modules, and clear the state identification bit on each external module that generated the target state information. This scheme avoids the loss of state information.

Description

Graphic processing system, electronic component, electronic device, and information processing method
Technical Field
The present disclosure relates to the field of GPU technologies, and in particular, to a graphics processing system, an electronic component, an electronic device, and an information processing method.
Background
Before and during the operation of a GPU (Graphics Processing Unit), the data the GPU requires must be prepared in a video memory module, such as GDDR (Graphics Double Data Rate DRAM), by configuring an external module, such as a DMA (Direct Memory Access) engine or another bus master. After receiving state information from an external module indicating that the data is ready, the GPU reads the data from the corresponding external module and processes it.
In the prior art, an external module typically signals readiness through an interrupt: the state information indicating that data is ready is buffered into a FIFO (a hardware queue) and sent to the GPU through that FIFO. Multiple external modules continuously buffer new state information into the FIFO, and when the GPU processes state information more slowly than new state information arrives, the FIFO may overflow and discard the newly received state information. This discarding is invisible to both the GPU and the external modules, so state information can be lost.
Disclosure of Invention
The present disclosure aims to provide a graphics processing system, an electronic component, an electronic device, and an information processing method that avoid the loss of state information caused by the GPU failing to process it in time.
According to one aspect of the present disclosure, there is provided a graphics processing system including: a FIFO configured to: buffer state information; a management module configured to: buffer currently received first state information into the FIFO, clear all state information currently buffered in the FIFO upon detecting that the first state information failed to be buffered because the FIFO overflowed, and send a FIFO overflow instruction to the GPU core having a connection relationship with the management module; and the GPU core configured to: after receiving the overflow instruction, obtain target state information that was not successfully sent to the GPU core by polling external modules, and clear the state identification bit on each external module that generated the target state information; the target state information includes the first state information and all the state information currently buffered in the FIFO.
Optionally, in the present disclosure, the GPU core is configured to: obtain the target state information that was not successfully sent to the GPU core by polling candidate external modules; the candidate external modules are preconfigured external modules capable of sending state information to the GPU core.
Optionally, in the present disclosure, firmware is configured in the GPU core, and the GPU core is configured to: poll the external modules through the firmware to obtain the target state information that was not successfully sent to the GPU core, and clear, through the firmware, the state identification bit on each external module that generated the target state information.
Optionally, in the present disclosure, the management module is configured to: send an information processing request to the GPU core when it detects that the FIFO has not overflowed and is not empty; and the GPU core is configured to: after receiving the information processing request, clear the state identification bit on the external module that generated the head-of-queue state information according to the actively or passively acquired head-of-queue state information, and send feedback information to the management module, where the state information at the head of the FIFO is the head-of-queue state information.
Optionally, in the present disclosure, when the head-of-queue state information is actively acquired by the GPU core, the GPU core is configured to: read and remove the head-of-queue state information from the FIFO after receiving the information processing request; when the head-of-queue state information is passively acquired by the GPU core, the information processing request carries the head-of-queue state information, and the management module is further configured to: remove the head-of-queue state information from the FIFO.
Optionally, in the present disclosure, firmware is configured in the GPU core, and the GPU core is configured to: generate an interrupt after receiving the information processing request; after detecting the interrupt, the firmware clears the state identification bit on the external module that generated the head-of-queue state information according to that state information.
Optionally, in the present disclosure, each piece of state information includes the identification of the external module that generated it, and the GPU core is configured to: clear the state identification bit on the external module corresponding to the identification included in the head-of-queue state information.
Optionally, on the basis of any of the foregoing embodiments, the management module communicates with the GPU core through a GPIO handshake protocol.
Optionally, in the present disclosure, the management module is configured to: trigger an instruction to send information to the GPU core by pulling up a request signal gpio_input_req, and pull down the request signal gpio_input_req when it detects that a feedback signal gpio_input_ack is high; and the GPU core is configured to: trigger sending of information to the management module by pulling up the feedback signal gpio_input_ack, and pull down the feedback signal gpio_input_ack when it detects that the request signal gpio_input_req is low.
According to another aspect of the present disclosure, there is also provided an electronic component including the graphics processing system described in any of the above embodiments. In some usage scenarios, the electronic component takes the product form of a graphics card; in other usage scenarios, it takes the form of a CPU board.
According to another aspect of the present disclosure, there is also provided an electronic device including the above electronic component. In some usage scenarios, the electronic device is a portable electronic device, such as a smartphone, a tablet, or a VR device; in other usage scenarios, it is a personal computer, a game console, or the like.
According to another aspect of the present disclosure, there is also provided an information processing method applied to a graphics processing system, where the graphics processing system includes a FIFO, a management module, and a GPU core. The method includes: the management module buffers the currently received first state information into the FIFO and determines whether the FIFO overflows; if the FIFO overflows, the management module clears all state information currently buffered in the FIFO and sends a FIFO overflow instruction to the GPU core; after receiving the overflow instruction, the GPU core obtains target state information that was not successfully sent to the GPU core by polling the external modules, and clears the state identification bit on each external module that generated the target state information; the target state information includes the first state information and all the state information currently buffered in the FIFO.
Optionally, in the present disclosure, obtaining the target state information that was not successfully sent to the GPU core by polling the external modules includes: polling candidate external modules to obtain the target state information that was not successfully sent to the GPU core; the candidate external modules are preconfigured external modules capable of sending state information to the GPU core.
Optionally, in the present disclosure, the method further includes: if the FIFO has not overflowed and is not empty, the management module sends an information processing request to the GPU core;
after receiving the information processing request, the GPU core clears the state identification bit on the external module that generated the head-of-queue state information according to the actively or passively acquired head-of-queue state information, and sends feedback information to the management module; the state information at the head of the FIFO is the head-of-queue state information.
Optionally, in the present disclosure, when the head-of-queue state information is actively acquired by the GPU core, the method further includes: after receiving the information processing request, the GPU core reads and removes the head-of-queue state information from the FIFO; when the head-of-queue state information is passively acquired by the GPU core, the information processing request carries the head-of-queue state information, and the method further includes: the management module removes the head-of-queue state information from the FIFO.
Drawings
FIG. 1 is a diagram illustrating a graphics processing system according to an embodiment of the present disclosure;
FIG. 2 is a flowchart of an information processing method according to an embodiment of the present disclosure.
Detailed Description
Before the embodiments of the present disclosure are described, it should be noted that:
some embodiments of the disclosure are described as a process flow, and although various operational steps of the flow may be referred to by sequential step numbers, the operational steps therein may be performed in parallel, concurrently, or simultaneously.
The terms "first", "second", etc. may be used in embodiments of the disclosure to describe various features, but these features should not be limited by these terms. These terms are only used to distinguish one feature from another.
The term "and/or," "and/or," may be used in embodiments of the present disclosure to include any and all combinations of one or more of the associated listed features.
It should be understood that when a connection or communication between two components is described, the connection or communication between the two components may be understood as either a direct connection or communication or an indirect connection or communication through intermediate components, unless a direct connection or direct communication between the two components is explicitly indicated.
To make the technical solutions and advantages of the embodiments of the present disclosure clearer, exemplary embodiments of the present disclosure are described below in further detail with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present disclosure, not an exhaustive list of all embodiments. It should be noted that, in the present disclosure, the embodiments and the features of the embodiments may be combined with each other without conflict.
It is an object of the present disclosure to provide a graphics processing system including a GPU core. The GPU core refers to a processor with graphics processing functions, and includes components such as computing units, caches, and a graphics rendering pipeline.
One embodiment of the present disclosure provides a graphics processing system, as shown in FIG. 1, including at least: a GPU core, a management module (GPU MISC), and a FIFO.
The management module is respectively connected with the GPU core and the FIFO.
Of course, in other embodiments, the graphics processing system may include multiple GPU cores; in such embodiments, the management module is connected to the master core among the multiple GPU cores.
The FIFO mentioned in the embodiments of the present disclosure is a hardware queue, which is mainly used for buffering status information generated by an external module.
Of course, the external modules may also be located within the graphics processing system.
In the embodiments of the present disclosure, an external module executes a corresponding data preparation task according to the GPU's configuration, for example moving data to the GDDR for the GPU to use; after completing the data preparation task, the external module may generate state information.
The external modules and the management module may be connected through an AXI_Bus. Based on this, after generating state information, each external module writes the state information into the management module through the AXI_Bus, so that the management module buffers the currently received state information (referred to as the first state information for ease of distinction) into the FIFO.
Optionally, after an external module generates state information, it may write the state information into its own register; in addition, it may fill the state information into a custom data structure, route the custom data structure to the management module through the network on chip, and write it into a register included in the management module. Of course, when the external module writes the next filled custom data structure into the management module's register and its own register, it overwrites the data currently stored there.
The data format of the custom data structure may occupy 9 bits: the highest bit indicates whether the state information is valid (for example, 1 means valid and any other value means invalid), and the lower 8 bits carry the specific content of the state information, which may include, for example, the interrupt type and the identification MST_ID of the external module that generated the interrupt.
After the custom data structure has been written into the management module's register, the management module, upon determining that the first state information it contains is valid, buffers the first state information represented by the lower 8 bits into the FIFO. Whether the first state information can be buffered successfully depends on whether the FIFO currently overflows.
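To make the layout concrete, below is a minimal C sketch of how such a 9-bit word could be packed and unpacked. It is an illustration under the assumptions stated above (valid flag in the highest bit, payload in the lower 8 bits); the macro and function names are hypothetical, not part of the disclosed hardware.

    #include <stdbool.h>
    #include <stdint.h>

    /* Assumed layout of the 9-bit custom data structure:
     * bit 8 (MSB) - valid flag (1 = state information is valid)
     * bits 7..0   - state information payload (e.g., interrupt type, MST_ID) */
    #define STATE_VALID_BIT  (1u << 8)
    #define STATE_INFO_MASK  0xFFu

    static inline uint16_t pack_state(bool valid, uint8_t info)
    {
        return (uint16_t)((valid ? STATE_VALID_BIT : 0u) | info);
    }

    static inline bool state_is_valid(uint16_t word)
    {
        return (word & STATE_VALID_BIT) != 0;
    }

    static inline uint8_t state_info(uint16_t word)
    {
        return (uint8_t)(word & STATE_INFO_MASK);
    }

With this layout, the management module's validity check and buffering step amount to: if state_is_valid(word), enqueue state_info(word) into the FIFO.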
Of course, after state information has been buffered into the FIFO, the management module fetches it from the FIFO and sends it to the GPU core for processing under certain conditions (for example, when the FIFO has not overflowed and is not empty). During this process, the external modules may continuously generate more state information; if the GPU core processes state information slowly, more state information is stored into the FIFO than is removed from it, and the FIFO overflows. In other words, an overflowing FIFO indicates a backlog in the state information waiting to be processed by the GPU core.
To prevent the state information in the FIFO from being lost and becoming unavailable to the GPU core after an overflow, in the embodiments of the present disclosure the management module also monitors the FIFO's overflow condition while buffering state information into it, and sends different instructions to the GPU core depending on that condition; the GPU core performs different operations according to the information it receives from the management module.
It is worth pointing out that the FIFO continuously maintains its current depth value M during operation. For example, each time the FIFO receives a data storage instruction (which is not equivalent to the FIFO actually storing the data), the current depth value becomes M+1; each time a piece of data is removed, it becomes M-1. When the current depth value M exceeds the total queue depth N of the FIFO, the FIFO determines that it has overflowed and conveys this to the management module, for example by actively sending overflow indication information, or by setting valid a preset flag bit used to indicate FIFO overflow so that the management module detects the overflow by reading that flag bit.
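As a rough software model of this bookkeeping (the structure and function names are hypothetical; the text only specifies the M/N comparison):

    #include <stdbool.h>
    #include <stdint.h>

    /* Model of the depth bookkeeping described above: the FIFO increments its
     * current depth value M on every data storage instruction (even one whose
     * data will be dropped) and decrements it on every removal; overflow is
     * flagged once M exceeds the total queue depth N. */
    typedef struct {
        uint32_t depth;     /* current depth value M */
        uint32_t capacity;  /* total queue depth N */
        bool     overflow;  /* overflow indication read by the management module */
    } fifo_monitor_t;

    static void fifo_on_store_instruction(fifo_monitor_t *f)
    {
        f->depth++;                 /* M + 1 on every data storage instruction */
        if (f->depth > f->capacity)
            f->overflow = true;     /* convey the overflow to the management module */
    }

    static void fifo_on_remove(fifo_monitor_t *f)
    {
        if (f->depth > 0)
            f->depth--;             /* M - 1 each time a piece of data is removed */
    }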
Further, when the management module detects that the first state information failed to be buffered because the FIFO overflowed, in order to prevent the state information in the FIFO from being lost and becoming unavailable to the GPU core, it actively clears all state information currently buffered in the FIFO and sends a FIFO overflow instruction to the connected GPU core. Of course, if the first state information that caused the current overflow has not been overwritten by new state information, the management module may also actively delete it.
In the embodiments of the present disclosure, after receiving the FIFO overflow instruction, the GPU core may generate an interrupt in response to it. The timing of the response is preconfigurable: optionally, the GPU core may respond to the FIFO overflow instruction immediately upon receiving it, or it may wait for the task currently being processed to finish before responding.
For example, in one embodiment, the graphics processing system includes multiple GPU cores, with a master core connected to the other slave cores and to the management module. Upon receiving a FIFO overflow instruction, the master core may generate an interrupt and select, from among the slave cores, at least one with the lowest load to respond to the FIFO overflow instruction immediately.
For the GPU core, responding to the FIFO overflow instruction means polling the external modules to obtain the target state information that was not successfully sent to the GPU core. The target state information includes the first state information and all the state information that was buffered in the FIFO when the management module emptied it.
After an external module generates state information, it may write a register inside itself to set valid the state identification bit indicating that it has generated state information. Of course, the state information itself is also written into the register.
Based on this, when the GPU core polls the external modules, if the status flag bit of some external module is valid, the GPU core knows that this module has generated state information and can read the state information the module currently records from the fixed bits of its register used to store state information.
In some embodiments, the GPU core may poll all external modules when polling the external modules to obtain target status information that was not successfully sent to the GPU core.
In other embodiments, the GPU core may poll only candidate external modules, that is, obtain the target state information that was not successfully sent to the GPU core from the candidate external modules.
The candidate external modules may be configured in advance, for example by software, with a special identifier added to each candidate external module to indicate that it is an external module that may send state information to the GPU core.
After acquiring the target state information, the GPU core clears the state identification bit on each external module that generated it.
For an external module, once the GPU core has acquired the state information it generated, the state identification bit on that module can be cleared, that is, set invalid.
Optionally, in the embodiments of the present disclosure, the GPU core may include a meta core, in which firmware configured by software is stored. The GPU core may poll the external modules via the firmware to obtain the target state information that was not successfully sent to the GPU core, and clear, via the firmware, the state identification bit on each external module that generated the target state information.
That is, in the embodiments of the present disclosure, both polling the external modules and clearing their status flags are performed by software in the GPU core.
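A simplified sketch of what such a firmware polling pass might look like, assuming each external module exposes a register with a one-bit state identification flag and a fixed field holding its state information (this register layout is an assumption for illustration):

    #include <stddef.h>
    #include <stdint.h>

    /* Assumed register view of one external module. */
    typedef struct {
        volatile uint32_t status;      /* bit 0: state identification bit */
        volatile uint32_t state_info;  /* fixed bits storing the state information */
    } ext_module_regs_t;

    #define STATUS_FLAG_VALID (1u << 0)

    /* Visit each (candidate) external module, collect state information from
     * modules whose flag is valid, then clear the flag so the module can
     * continue with other tasks. Returns the number of entries collected. */
    static size_t poll_external_modules(ext_module_regs_t *mods[], size_t n,
                                        uint32_t out[], size_t out_cap)
    {
        size_t found = 0;
        for (size_t i = 0; i < n && found < out_cap; i++) {
            if (mods[i]->status & STATUS_FLAG_VALID) {
                out[found++] = mods[i]->state_info;    /* read the target state information */
                mods[i]->status &= ~STATUS_FLAG_VALID; /* clear the state identification bit */
            }
        }
        return found;
    }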
Of course, after clearing the status flag bits on the external modules that generated the target state information, the GPU core may also send feedback information to the management module, so that the management module knows the GPU core has handled the FIFO overflow and the processing is complete.
As can be seen from the above, to avoid state information being lost due to FIFO overflow, in the embodiments of the present disclosure the management module monitors the FIFO's overflow status; when an overflow is detected, it clears the state information in the FIFO and sends a FIFO overflow instruction to the GPU core, so that the GPU core, once aware of the overflow, actively polls the external modules to recover both the state information dropped by the overflow and the state information cleared after the overflow.
The reason the state information currently in the FIFO is cleared on overflow is to avoid a large backlog of state information accumulating in the FIFO; a large backlog that cannot be processed in time easily causes system stalls. Clearing the accumulated state information promptly therefore helps ensure system stability.
In addition, in another embodiment of the present disclosure, if the management module detects that the FIFO has not overflowed and is not empty, it sends an information processing request to the connected GPU core.
After receiving the information processing request, the GPU core may, according to the actively or passively acquired head-of-queue state information, and specifically according to the identification MST_ID of the external module carried in it, clear the state identification bit on the external module that generated the head-of-queue state information, and send feedback information to the management module, so that the management module knows the head-of-queue state information has been processed and can perform subsequent operations.
Here, the state information at the head of the FIFO is the head-of-queue state information.
Optionally, when the head-of-queue state information is actively acquired by the GPU core, the GPU core may read and remove it from the FIFO after receiving the information processing request.
When the head-of-queue state information is passively acquired by the GPU core, it may be carried in the information processing request sent by the management module, and the management module is further configured to remove the head-of-queue state information from the FIFO after sending the request that carries it.
Optionally, the GPU core may generate an interrupt after receiving the information processing request; after detecting the interrupt, the firmware configured in the GPU core clears, according to the identification MST_ID of the external module carried in the head-of-queue state information, the state identification bit on the external module that generated it (i.e., the module corresponding to that MST_ID).
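A minimal sketch of this request handling, assuming the MST_ID occupies the low bits of the head-of-queue state information (the field width and names are illustrative):

    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        volatile uint32_t status;   /* bit 0: state identification bit */
    } ext_regs_t;

    #define STATUS_FLAG_VALID (1u << 0)
    #define MST_ID_MASK       0x0Fu /* assumed width of the MST_ID field */

    /* Extract the MST_ID carried by the head-of-queue state information and
     * clear the state identification bit on the corresponding external module;
     * sending feedback information to the management module would follow. */
    static void handle_info_processing_request(uint32_t head_state,
                                               ext_regs_t *mods[], size_t n)
    {
        uint32_t mst_id = head_state & MST_ID_MASK;
        if (mst_id < n)
            mods[mst_id]->status &= ~STATUS_FLAG_VALID;
    }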
It is worth pointing out that if the GPU core receives a FIFO overflow instruction while it is responding to an information processing request, it responds to the FIFO overflow instruction after the information processing request has been processed.
Optionally, in some embodiments, the management module communicates with the GPU core based on a General Purpose Input/Output (GPIO) handshake protocol; that is, the management module and the GPU core exchange interrupt information based on the GPIO handshake protocol.
Optionally, during operation of the GPIO handshake protocol, the main signals to be maintained are the request signal gpio_input_req and the feedback signal gpio_input_ack.
The management module triggers an instruction to send information to the GPU core by pulling up the request signal gpio_input_req, and pulls down gpio_input_req when it detects that the feedback signal gpio_input_ack is high;
and the GPU core triggers sending of information to the management module by pulling up the feedback signal gpio_input_ack, and pulls down gpio_input_ack when it detects that the request signal gpio_input_req is low.
Based on the above, before sending a FIFO overflow instruction or an information processing request to the GPU core, the management module pulls up the request signal gpio_input_req through the gpio_input_req port (a general-purpose input/output port), thereby triggering the FIFO overflow instruction or the information processing request.
It is worth noting that the FIFO overflow instruction and the information processing request carry different contents.
The content carried by the FIFO overflow instruction is a special character (e.g., 0xff) configured in advance by software and negotiated with the GPU core beforehand, so that after receiving a FIFO overflow instruction carrying this special character, the GPU core polls the external modules to acquire the target state information and clears the status flag bits of the external modules that generated it.
The content carried by the information processing request is the head-of-queue state information currently held by the FIFO, so that after receiving a request carrying head-of-queue state information, the GPU core clears the state identification bit of the external module that generated that state information.
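Putting the two message types together, the GPU core's dispatch on the received content might look like the following sketch (0xff is the example special character from the text; the function name is hypothetical):

    #include <stdint.h>

    #define FIFO_OVERFLOW_MAGIC 0xFFu  /* special character negotiated in advance */

    void gpu_dispatch(uint8_t content)
    {
        if (content == FIFO_OVERFLOW_MAGIC) {
            /* FIFO overflow instruction: poll the external modules, acquire the
             * target state information, and clear their state identification bits. */
        } else {
            /* Information processing request: treat the content as head-of-queue
             * state information and clear the flag on the module it names via MST_ID. */
        }
    }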
After executing the corresponding operation according to the instruction received, the GPU core pulls up the feedback signal gpio_input_ack to trigger sending the feedback information to the management module.
At this point, the management module detects the feedback information signaled by gpio_input_ack being high and pulls down the previously raised request signal gpio_input_req; after the GPU core detects that gpio_input_req is low, it pulls down the feedback signal gpio_input_ack, which marks the end of this information interaction.
Based on this, before initiating the next information interaction with the GPU core (whether sending a FIFO overflow instruction when the FIFO overflows, or an information processing request when the FIFO has not overflowed and is not empty), the management module must at least observe that gpio_input_ack has been pulled down.
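The four-phase handshake described above can be modeled roughly as follows, with the two signals represented as shared flags standing in for the actual GPIO port registers (a sketch, not driver code):

    #include <stdbool.h>

    static volatile bool gpio_input_req;  /* request signal, driven by the management module */
    static volatile bool gpio_input_ack;  /* feedback signal, driven by the GPU core */

    /* Management module side: pull up req to trigger an instruction or request,
     * wait for ack to go high, then pull req back down. */
    static void mgmt_send(void)
    {
        gpio_input_req = true;   /* pull up req: instruction or request pending */
        while (!gpio_input_ack)  /* wait for the GPU core's feedback */
            ;
        gpio_input_req = false;  /* pull down req */
    }

    /* GPU core side: after performing the requested operation, pull up ack,
     * wait for req to go low, then pull ack down, ending this interaction. */
    static void gpu_acknowledge(void)
    {
        gpio_input_ack = true;   /* pull up ack: feedback to the management module */
        while (gpio_input_req)   /* wait for req to be pulled down */
            ;
        gpio_input_ack = false;  /* pull down ack: interaction finished */
    }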
In addition, after the management module clears all the state information in the FIFO, it may continue to receive new first state information written by the external modules, and it will continue to buffer that first state information into the FIFO. During this process, the FIFO may overflow again before the previous overflow has been fully handled; that is, the FIFO may overflow consecutively.
On this premise, suppose the management module has sent a FIFO overflow instruction to the GPU core because of an overflow, and the GPU core, while responding to it, has not yet finished polling the external modules to acquire the target state information and clear their status flags, so the feedback signal gpio_input_ack has not yet been pulled up; if the FIFO then overflows again because new state information keeps being buffered, the management module may trigger only this single FIFO overflow instruction, and the GPU core acquires all the target state information emptied across the consecutive overflows.
Specifically, on the basis of triggering a single FIFO overflow instruction, the management module clears all state information currently buffered in the FIFO at the first overflow and, when it detects a consecutive overflow, again clears all state information buffered in the FIFO at the time of that subsequent overflow.
The GPU core polls the external modules after receiving the triggered FIFO overflow instruction. For the first state information that caused the subsequent FIFO overflow and for all the state information buffered in the FIFO during that overflow, the external modules that generated them still have their state flag bits set valid; therefore, when polling the external modules, the GPU core can acquire all the target state information emptied by the consecutive overflows. In this way, for consecutive FIFO overflows, the instruction interaction between the management module and the GPU core is reduced.
In addition, it is worth pointing out that in the prior art, after the FIFO overflows, it may notify the external modules to suspend sending state information; more and more external modules then become unable to execute other tasks because their status flag bits are never cleared, eventually stalling the system.
In the embodiments of the present disclosure, even if the FIFO overflows, the state information generated by the external modules can still be sent to the FIFO for buffering, and the status flags on the external modules can still be cleared through GPU core polling, which helps ensure system stability.
An embodiment of the present disclosure further provides an electronic component including the graphics processing system described in any of the above embodiments. In some usage scenarios, the electronic component takes the product form of a graphics card; in other usage scenarios, it takes the form of a CPU board.
An embodiment of the present disclosure further provides an electronic device including the above electronic component. In some usage scenarios, the electronic device is a portable electronic device, such as a smartphone, a tablet, or a VR device; in other usage scenarios, it is a personal computer, a game console, a workstation, a server, or the like.
An embodiment of the present disclosure further provides an information processing method applied to a graphics processing system, where the graphics processing system includes a FIFO, a management module, and a GPU core. As shown in FIG. 2, the method includes:
S110: The management module buffers the currently received first state information into the FIFO and determines whether the FIFO overflows; if the FIFO overflows, it clears all state information currently buffered in the FIFO and sends a FIFO overflow instruction to the GPU core.
S120: After receiving the overflow instruction, the GPU core obtains the target state information that was not successfully sent to the GPU core by polling the external modules, and clears the state identification bit on each external module that generated the target state information.
Optionally, in some embodiments, obtaining the target state information that was not successfully sent to the GPU core by polling the external modules includes: polling candidate external modules to obtain the target state information that was not successfully sent to the GPU core;
the candidate external modules are preconfigured external modules capable of sending state information to the GPU core.
Optionally, in some embodiments, the method further includes:
if the FIFO has not overflowed and is not empty, the management module sends an information processing request to the GPU core;
after receiving the information processing request, the GPU core clears the state identification bit on the external module that generated the head-of-queue state information according to the actively or passively acquired head-of-queue state information, and sends feedback information to the management module;
the state information at the head of the FIFO is the head-of-queue state information.
Optionally, in some embodiments, when the head-of-queue state information is actively acquired by the GPU core, the method further includes: after receiving the information processing request, the GPU core reads and removes the head-of-queue state information from the FIFO;
when the head-of-queue state information is passively acquired by the GPU core, the information processing request carries the head-of-queue state information, and the method further includes: the management module removes the head-of-queue state information from the FIFO.
Based on the above scheme, to prevent the GPU core from losing state information, in the scheme provided by the embodiments of the present disclosure the management module and the FIFO monitor the progress of the GPU's processing of state information; when the FIFO overflows, the management module clears the state information in the FIFO and sends a FIFO overflow instruction to the GPU core, and the GPU core actively polls the external modules to obtain the state information that may have been lost due to the overflow. In this way, the loss of state information is avoided and system stability is improved.
While preferred embodiments of the present disclosure have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the disclosure.
It will be apparent to those skilled in the art that various changes and modifications can be made in the present disclosure without departing from the spirit and scope of the disclosure. Thus, if such modifications and variations of the present disclosure fall within the scope of the claims of the present disclosure and their equivalents, the present disclosure is intended to include such modifications and variations as well.

Claims (15)

1. A graphics processing system, comprising:
a FIFO configured to: buffer state information;
a management module configured to: buffer currently received first state information into the FIFO, clear all state information currently buffered in the FIFO upon detecting that the first state information failed to be buffered because the FIFO overflowed, and send a FIFO overflow instruction to a GPU core having a connection relationship with the management module;
the GPU core configured to: after receiving the overflow instruction, obtain target state information that was not successfully sent to the GPU core by polling external modules, and clear the state identification bit on each external module that generated the target state information;
wherein the target state information includes the first state information and all the state information currently buffered in the FIFO.
2. The system of claim 1, wherein the GPU core is configured to: obtain the target state information that was not successfully sent to the GPU core by polling candidate external modules;
wherein the candidate external modules are preconfigured external modules capable of sending state information to the GPU core.
3. The system of claim 1, wherein firmware is configured in the GPU core, and the GPU core is configured to: poll the external modules through the firmware to obtain the target state information that was not successfully sent to the GPU core, and clear, through the firmware, the state identification bit on each external module that generated the target state information.
4. The system of claim 1, wherein the management module is configured to: send an information processing request to the GPU core upon detecting that the FIFO has not overflowed and is not empty;
the GPU core configured to: after receiving the information processing request, clear the state identification bit on the external module that generated the head-of-queue state information according to the actively or passively acquired head-of-queue state information, and send feedback information to the management module;
wherein the state information at the head of the FIFO is the head-of-queue state information.
5. The system of claim 4, wherein, when the head-of-queue state information is actively acquired by the GPU core, the GPU core is configured to: read and remove the head-of-queue state information from the FIFO after receiving the information processing request;
and when the head-of-queue state information is passively acquired by the GPU core, the information processing request carries the head-of-queue state information, and the management module is further configured to: remove the head-of-queue state information from the FIFO.
6. The system of claim 4, wherein firmware is configured in the GPU core, and the GPU core is configured to: generate an interrupt after receiving the information processing request; after the firmware detects the interrupt, it clears the state identification bit on the external module that generated the head-of-queue state information according to that state information.
7. The system of claim 6, wherein each piece of state information includes the identification of the external module that generated it, and the GPU core is configured to: clear the state identification bit on the external module corresponding to the identification included in the head-of-queue state information.
8. The system of any of claims 1-7, wherein the management module communicates with the GPU core via a GPIO handshake protocol.
9. The system of claim 8, wherein the management module is configured to: trigger an instruction to send information to the GPU core by pulling up a request signal gpio_input_req, and pull down the request signal gpio_input_req upon detecting that a feedback signal gpio_input_ack is high;
the GPU core configured to: trigger sending of information to the management module by pulling up the feedback signal gpio_input_ack, and pull down the feedback signal gpio_input_ack upon detecting that the request signal gpio_input_req is low.
10. An electronic component comprising the system of any one of claims 1-9.
11. An electronic device comprising the electronic component of claim 10.
12. An information processing method applied to a graphics processing system, wherein the graphics processing system includes a FIFO, a management module, and a GPU core; the method comprising:
the management module buffers the currently received first state information into the FIFO and determines whether the FIFO overflows; if the FIFO overflows, the management module clears all state information currently buffered in the FIFO and sends a FIFO overflow instruction to the GPU core;
after receiving the overflow instruction, the GPU core obtains target state information that was not successfully sent to the GPU core by polling external modules, and clears the state identification bit on each external module that generated the target state information;
wherein the target state information includes the first state information and all the state information currently buffered in the FIFO.
13. The method of claim 12, wherein obtaining the target state information that was not successfully sent to the GPU core by polling external modules comprises:
polling candidate external modules to obtain the target state information that was not successfully sent to the GPU core;
wherein the candidate external modules are preconfigured external modules capable of sending state information to the GPU core.
14. The method of claim 12, further comprising:
if the FIFO has not overflowed and is not empty, the management module sends an information processing request to the GPU core;
after receiving the information processing request, the GPU core clears the state identification bit on the external module that generated the head-of-queue state information according to the actively or passively acquired head-of-queue state information, and sends feedback information to the management module;
wherein the state information at the head of the FIFO is the head-of-queue state information.
15. The method of claim 14, wherein, when the head-of-queue state information is actively acquired by the GPU core, the method further comprises: after receiving the information processing request, the GPU core reads and removes the head-of-queue state information from the FIFO;
and when the head-of-queue state information is passively acquired by the GPU core, the information processing request carries the head-of-queue state information, and the method further comprises: the management module removes the head-of-queue state information from the FIFO.
CN202211587596.0A (priority date 2022-12-12, filing date 2022-12-12) Graphic processing system, electronic component, electronic device, and information processing method. Granted as CN115599574B. Active.

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202211587596.0A | 2022-12-12 | 2022-12-12 | Graphic processing system, electronic component, electronic device, and information processing method

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202211587596.0A | 2022-12-12 | 2022-12-12 | Graphic processing system, electronic component, electronic device, and information processing method

Publications (2)

Publication Number | Publication Date
CN115599574A | 2023-01-13
CN115599574B | 2023-03-24

Family

ID=84853335

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202211587596.0A (Active; granted as CN115599574B) | Graphic processing system, electronic component, electronic device, and information processing method | 2022-12-12 | 2022-12-12

Country Status (1)

Country | Publication
CN | CN115599574B

Citations (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN102572316A * | 2010-09-30 | 2012-07-11 | Apple Inc. | Overflow control techniques for image signal processing
CN104008524A * | 2013-02-26 | 2014-08-27 | Intel Corp. | Techniques for low energy computation in graphics processing
CN104081449A * | 2012-01-27 | 2014-10-01 | Qualcomm Inc. | Buffer management for graphics parallel processing unit

Also Published As

Publication number | Publication date
CN115599574A | 2023-01-13

Legal Events

PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant