WO2017184265A1 - Apparatus and method for performing trace/debug data compression - Google Patents


Publication number
WO2017184265A1
Authority
WO
WIPO (PCT)
Prior art keywords
compression
trace
data
debug
packet
Application number
PCT/US2017/021252
Other languages
French (fr)
Inventor
Menon M. SANKARAN
Rolf H. KUEHNIS
Babu TRP
Original Assignee
Intel Corporation
Application filed by Intel Corporation filed Critical Intel Corporation
Publication of WO2017184265A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; error correction; monitoring
    • G06F 11/30 Monitoring
    • G06F 11/34 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F 11/3466 Performance evaluation by tracing or monitoring
    • G06F 11/3476 Data logging

Definitions

  • This invention relates generally to the field of computer processors. More particularly, the invention relates to an apparatus and method for data compression such as the compression of debug/trace data implemented, for example, within a system-on-a-chip (SoC) or other type of system or processor.
  • System designs are moving from interfacing discrete components on a printed circuit board, or through use of other package configurations, to integrating multiple components onto a single integrated chip, which is commonly referred to as a System on a Chip (SoC) architecture.
  • SoCs offer a number of advantages, including denser packaging, higher speed communication between functional components, and lower temperature operation. SoC designs also provide standardization, scalability, modularization, and reusability.
  • SoC architectures present challenges with respect to verification of design and integration when compared with discrete components.
  • Historically, personal computers employed the ubiquitous "North" bridge and "South" bridge architecture, wherein a central processing unit was interfaced to a memory controller hub (MCH) chip via a first set of buses, and the memory controller hub, in turn, was interfaced to an Input/Output controller hub (ICH) chip via another set of buses.
  • Each of the MCH and ICH further provided interface to various system components and peripherals via further buses and interfaces.
  • Each of these buses and interfaces adheres to well-established standards, enabling system architectures to support modular designs.
  • Each of the individual components or groups of components could be tested using test interfaces which are accessible through the device pins.
  • SoCs typically include functional blocks or components that are commonly referred to in the industry as Intellectual Property ("IP") cores, IP blocks, simply IP, or any other component or block generally known as IP, as would be understood by those in the SoC development and manufacturing industries. These IP blocks generally serve one or more dedicated functions and often comprise existing circuit design blocks that are licensed from various vendors or developed in-house.
  • IP blocks include compression engines (e.g., logic and circuitry) used to perform compression when implementing the function for which the IP block was designed.
  • An IP block that processes image data, such as a graphics circuit, video processing circuit, and/or camera interface, may include a compression engine designed to compress that image data.
  • communication circuits will typically include logic and circuitry for performing various types of packet-based compression.
  • Audio circuits such as DSPs may also include forms of compression logic/circuitry.
  • these compression engines are left unused when performing testing operations such as debug traces on the SoC.
  • FIG. 1A is a block diagram illustrating both an exemplary in-order fetch, decode, retire pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention;
  • FIG. 1B is a block diagram illustrating both an exemplary embodiment of an in-order fetch, decode, retire core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention;
  • FIG. 2 is a block diagram of a single core processor and a multicore processor with integrated memory controller and graphics according to embodiments of the invention;
  • FIG. 3 illustrates a block diagram of a system in accordance with one embodiment of the present invention
  • FIG. 4 illustrates a block diagram of a second system in accordance with an embodiment of the present invention
  • FIG. 5 illustrates a block diagram of a third system in accordance with an embodiment of the present invention
  • FIG. 6 illustrates a block diagram of a system on a chip (SoC) in accordance with an embodiment of the present invention
  • FIG. 7 illustrates a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention
  • FIG. 8 illustrates a trace/debug architecture on which embodiments of the invention may be used
  • FIG. 9 illustrates an architecture in accordance with one embodiment of the invention.
  • FIG. 10 illustrates a method in accordance with one embodiment of the invention
  • FIG. 11 illustrates one embodiment of a packet-based compression circuit and/or logic
  • FIG. 12 illustrates one embodiment of a data compression circuit and/or logic
  • FIG. 13 illustrates encoding compression data stored within a packet header in one embodiment.
  • Figure 1A is a block diagram illustrating both an exemplary in-order fetch, decode, retire pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention.
  • Figure 1B is a block diagram illustrating both an exemplary embodiment of an in-order fetch, decode, retire core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention.
  • Figures 1A-B illustrate the in-order portions of the pipeline and core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core.
  • a processor pipeline 100 includes a fetch stage 102, a length decode stage 104, a decode stage 106, an allocation stage 108, a renaming stage 110, a scheduling (also known as a dispatch or issue) stage 112, a register read/memory read stage 114, an execute stage 116, a write back/memory write stage 118, an exception handling stage 122, and a commit stage 124.
  • Figure 1B shows processor core 190 including a front end unit 130 coupled to an execution engine unit 150, and both are coupled to a memory unit 170.
  • the core 190 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type.
  • the core 190 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.
  • the front end unit 130 includes a branch prediction unit 132 coupled to an instruction cache unit 134, which is coupled to an instruction translation lookaside buffer (TLB) 136, which is coupled to an instruction fetch unit 138, which is coupled to a decode unit 140.
  • the decode unit 140 (or decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions.
  • the decode unit 140 may be implemented using various different mechanisms.
  • the core 190 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 140 or otherwise within the front end unit 130).
  • the decode unit 140 is coupled to a rename/allocator unit 152 in the execution engine unit 150.
  • the execution engine unit 150 includes the rename/allocator unit 152 coupled to a retirement unit 154 and a set of one or more scheduler unit(s) 156.
  • the scheduler unit(s) 156 represents any number of different schedulers, including reservation stations, central instruction window, etc.
  • the scheduler unit(s) 156 is coupled to the physical register file(s) unit(s) 158.
  • Each of the physical register file(s) units 158 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc.
  • the physical register file(s) unit 158 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit.
  • register units may provide architectural vector registers, vector mask registers, and general purpose registers.
  • the physical register file(s) unit(s) 158 is overlapped by the retirement unit 154 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.).
  • the retirement unit 154 and the physical register file(s) unit(s) 158 are coupled to the execution cluster(s) 160.
  • the execution cluster(s) 160 includes a set of one or more execution units 162 and a set of one or more memory access units 164.
  • the execution units 162 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions.
  • the scheduler unit(s) 156, physical register file(s) unit(s) 158, and execution cluster(s) 160 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 164).
  • the set of memory access units 164 is coupled to the memory unit 170, which includes a data TLB unit 172 coupled to a data cache unit 174 coupled to a level 2 (L2) cache unit 176.
  • the memory access units 164 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 172 in the memory unit 170.
  • the instruction cache unit 134 is further coupled to a level 2 (L2) cache unit 176 in the memory unit 170.
  • the L2 cache unit 176 is coupled to one or more other levels of cache and eventually to a main memory.
  • the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 100 as follows: 1) the instruction fetch 138 performs the fetch and length decoding stages 102 and 104; 2) the decode unit 140 performs the decode stage 106; 3) the rename/allocator unit 152 performs the allocation stage 108 and renaming stage 110; 4) the scheduler unit(s) 156 performs the schedule stage 112; 5) the physical register file(s) unit(s) 158 and the memory unit 170 perform the register read/memory read stage 114, and the execution cluster 160 performs the execute stage 116; 6) the memory unit 170 and the physical register file(s) unit(s) 158 perform the write back/memory write stage 118; 7) various units may be involved in the exception handling stage 122; and 8) the retirement unit 154 and the physical register file(s) unit(s) 158 perform the commit stage 124.
  • the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter, such as in the Intel® Hyperthreading technology).
  • register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture.
  • While the illustrated embodiment of the processor includes separate instruction and data cache units 134/174 and a shared L2 cache unit 176, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache.
  • the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.
  • Figure 2 is a block diagram of a processor 200 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention.
  • the solid lined boxes in Figure 2 illustrate a processor 200 with a single core 202A, a system agent 210, a set of one or more bus controller units 216, while the optional addition of the dashed lined boxes illustrates an alternative processor 200 with multiple cores 202A-N, a set of one or more integrated memory controller unit(s) 214 in the system agent unit 210, and special purpose logic 208.
  • processor 200 may include: 1) a CPU with the special purpose logic 208 being integrated graphics and/or scientific throughput logic (which may include one or more cores), and the cores 202A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, a combination of the two); 2) a coprocessor with the cores 202A-N being a large number of special purpose cores intended primarily for graphics and/or scientific throughput; and 3) a coprocessor with the cores 202A-N being a large number of general purpose in-order cores.
  • the processor 200 may be a general-purpose processor, coprocessor or special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), embedded processor, or the like.
  • the processor may be implemented on one or more chips.
  • the processor 200 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.
  • the memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 206, and external memory (not shown) coupled to the set of integrated memory controller units 214.
  • the set of shared cache units 206 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof.
  • While a ring based interconnect unit 212 interconnects the integrated graphics logic 208, the set of shared cache units 206, and the system agent unit 210/integrated memory controller unit(s) 214, alternative embodiments may use any number of well-known techniques for interconnecting such units.
  • coherency is maintained between one or more cache units 206 and cores 202A-N.
  • one or more of the cores 202A-N are capable of multi-threading.
  • the system agent 210 includes those components coordinating and operating cores 202A-N.
  • the system agent unit 210 may include, for example, a power control unit (PCU) and a display unit.
  • the PCU may be or include logic and components needed for regulating the power state of the cores 202A-N and the integrated graphics logic 208.
  • the display unit is for driving one or more externally connected displays.
  • the cores 202A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 202A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.
  • the cores 202A-N are heterogeneous and include both the "small" cores and "big" cores described below.
  • Figures 3-6 are block diagrams of exemplary computer architectures.
  • Other system designs and configurations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices, are also suitable.
  • a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.
  • the system 300 may include one or more processors 310, 315, which are coupled to a controller hub 320.
  • the controller hub 320 includes a graphics memory controller hub (GMCH) 390 and an Input/Output Hub (IOH) 350 (which may be on separate chips);
  • the GMCH 390 includes memory and graphics controllers to which are coupled memory 340 and a coprocessor 345;
  • the IOH 350 couples input/output (I/O) devices 360 to the GMCH 390.
  • Alternatively, the memory and graphics controllers are integrated within the processor (as described herein), the memory 340 and the coprocessor 345 are coupled directly to the processor 310, and the controller hub 320 is in a single chip with the IOH 350.
  • The optional nature of additional processors 315 is denoted in Figure 3 with broken lines.
  • Each processor 310, 315 may include one or more of the processing cores described herein and may be some version of the processor 200.
  • the memory 340 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two.
  • the controller hub 320 communicates with the processor(s) 310, 315 via a multi-drop bus such as a frontside bus (FSB), or via a point-to-point interface such as QuickPath Interconnect (QPI) or a similar connection.
  • the coprocessor 345 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. In one embodiment, controller hub 320 may include an integrated graphics accelerator.
  • the processor 310 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 310 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 345.
  • the processor 310 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect, to coprocessor 345.
  • Coprocessor(s) 345 accept and execute the received coprocessor instructions.
  • multiprocessor system 400 is a point-to-point interconnect system, and includes a first processor 470 and a second processor 480 coupled via a point-to-point interconnect 450.
  • processors 470 and 480 may be some version of the processor 200.
  • processors 470 and 480 are respectively processors 310 and 315, while coprocessor 438 is coprocessor 345.
  • Alternatively, processors 470 and 480 are respectively processor 310 and coprocessor 345.
  • Processors 470 and 480 are shown including integrated memory controller (IMC) units 472 and 482, respectively.
  • Processor 470 also includes as part of its bus controller units point-to-point (P-P) interfaces 476 and 478; similarly, second processor 480 includes P-P interfaces 486 and 488.
  • Processors 470, 480 may exchange information via a point-to-point (P-P) interface 450 using P-P interface circuits 478, 488.
  • IMCs 472 and 482 couple the processors to respective memories, namely a memory 432 and a memory 434, which may be portions of main memory locally attached to the respective processors.
  • Processors 470, 480 may each exchange information with a chipset 490 via individual P-P interfaces 452, 454 using point to point interface circuits 476, 494, 486, 498.
  • Chipset 490 may optionally exchange information with the coprocessor 438 via a high-performance interface 439.
  • the coprocessor 438 is a special- purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like.
  • a shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.
  • Chipset 490 may be coupled to a first bus 416 via an interface 496.
  • first bus 416 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited.
  • various I/O devices 414 may be coupled to first bus 416, along with a bus bridge 418 which couples first bus 416 to a second bus 420.
  • one or more additional processor(s) 415, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics processing units (GPUs)), field programmable gate arrays, or any other processor, may be coupled to first bus 416.
  • second bus 420 may be a low pin count (LPC) bus.
  • Various devices may be coupled to a second bus 420 including, for example, a keyboard and/or mouse 422, communication devices 427 and a storage unit 428 such as a disk drive or other mass storage device which may include instructions/code and data 430, in one embodiment.
  • an audio I/O 424 may be coupled to the second bus 420. Note that other architectures are possible. For example, instead of the point-to-point architecture of Figure 4, a system may implement a multi-drop bus or other such architecture.
  • Shown in FIG. 5 is a block diagram of a second more specific exemplary system 500 in accordance with an embodiment of the present invention.
  • Like elements in Figures 4 and 5 bear like reference numerals, and certain aspects of Figure 4 have been omitted from Figure 5 in order to avoid obscuring other aspects of Figure 5.
  • Figure 5 illustrates that the processors 470, 480 may include integrated memory and I/O control logic ("CL") 472 and 482, respectively.
  • CL 472, 482 include integrated memory controller units and I/O control logic.
  • Figure 5 illustrates that not only are the memories 432, 434 coupled to the CL 472, 482, but also that I/O devices 514 are also coupled to the control logic 472, 482.
  • Legacy I/O devices 515 are coupled to the chipset 490.
  • an interconnect unit(s) 602 is coupled to: an application processor 610 which includes a set of one or more cores 202A-N and shared cache unit(s) 206; a system agent unit 210; a bus controller unit(s) 216; an integrated memory controller unit(s) 214; a set of one or more coprocessors 620 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 630; a direct memory access (DMA) unit 632; and a display unit 640 for coupling to one or more external displays.
  • the coprocessor(s) 620 include a special-purpose processor, such as, for example, a network or communication processor, compression engine, GPGPU, a high-throughput MIC processor, embedded processor, or the like.
  • Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches.
  • Embodiments of the invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
  • Program code such as code 430 illustrated in Figure 4, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion.
  • a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.
  • the program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system.
  • the program code may also be implemented in assembly or machine language, if desired.
  • the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.
  • One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein.
  • Such representations known as "IP cores" may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.
  • Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), phase change memory (PCM), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.
  • embodiments of the invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.
  • an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set.
  • the instruction converter may translate (e.g., using static binary translation, dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core.
  • the instruction converter may be implemented in software, hardware, firmware, or a combination thereof.
  • the instruction converter may be on processor, off processor, or part on and part off processor.
  • Figure 7 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention.
  • the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof.
  • Figure 7 shows that a program in a high level language 702 may be compiled using an x86 compiler 704 to generate x86 binary code 706 that may be natively executed by a processor with at least one x86 instruction set core 716.
  • the processor with at least one x86 instruction set core 716 represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1 ) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core.
  • the x86 compiler 704 represents a compiler that is operable to generate x86 binary code 706 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 716.
  • Figure 7 shows that the program in the high level language 702 may be compiled using an alternative instruction set compiler 708 to generate alternative instruction set binary code 710 that may be natively executed by a processor without at least one x86 instruction set core 714 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA).
  • the instruction converter 712 is used to convert the x86 binary code 706 into code that may be natively executed by the processor without an x86 instruction set core 714.
  • the instruction converter 712 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 706.
  • IP blocks include compression engines (e.g., logic and circuitry) used to perform compression when implementing the function for which the IP block was designed.
  • RLE is a form of lossless data compression in which runs of data (that is, sequences in which the same data value occurs in many consecutive data elements) are stored as a single data value and count.
  • some IP blocks such as imaging circuits, audio circuits and/or communication circuits may include logic and circuitry for performing packet-based compression.
  • these compression engines are left unused when performing testing operations such as debug traces on the SoC, processor, ASIC, or other type of chip.
  • One embodiment of the invention reduces the high bandwidth consumed by debug traces by using existing compression engines in a chip's IP blocks to compress the trace data.
  • the traces generated from any system debug trace operations may be efficiently encoded using existing compression engines in a lossless manner so that the underlying debug data can be extracted and used by external tools and software.
  • a variety of compression engines may be used in accordance with embodiments of the invention, including those utilized in current tracing tools for signal-level debugging and trace-level debugging, graphics and image compression engines, audio compression engines, and/or any other types of compression engines (e.g., embedded controllers, signal processors, co-processors, ASICs) that exist on a typical SoC.
  • FIG 8 shows an exemplary trace/debug architecture of an SoC.
  • a plurality of different traces are illustrated including a hardware trace 801, a processor trace 802, a JTAG (Joint Test Action Group) mechanism for configuration and low-bandwidth trace, an Intel Architecture (IA)/core trace 804, and one or more software traces 805.
  • the hardware trace 801 may include tracing operations performed with respect to any hardware circuit within the SoC.
  • the processor trace 802 may include tracing of instructions executed by a processor. For example, it may be desirable to track jump and/or branch instructions to determine how the instruction stream is progressing over time (which is a bandwidth-intensive type of trace).
  • the JTAG module 803 may implement one or more tracing operations in accordance with JTAG standards. As is known by those of skill in the art, JTAG implements standards for on-chip instrumentation in electronic design automation (EDA).
  • the IA/core trace module 804 may implement specific traces on an IA (Intel Architecture) core components. It should be noted, however, that the underlying principles of the invention are not limited to IA.
  • a variety of different application-specific software traces 805, 812 may be designed and implemented in accordance with the embodiments of the invention.
  • the particular example in Figure 8 illustrates south complex hardware traces 811 directed to components of the SoC responsible for I/O operations (e.g., USB, PCIe, serial ATA, and BIOS interactions). It should be noted, however, that the underlying principles of the invention are not limited to this specific implementation.
  • Some implementations may be used in a semiconductor device designed according to the Intel™ On-chip System Fabric (IOSF™) specification, which provides a standardized on-die interconnect protocol for attaching IP blocks.
  • This embodiment may include a primary fabric 813 and a secondary fabric 814 to support communication between various system components.
  • each fabric can be implemented as a bus, a hierarchical bus, a cascaded hub, or so forth.
  • the primary interface fabric 813 is used for all in-band communication between agents and memory, e.g., between a host processor such as a central processing unit (CPU) and an agent implemented on behalf of a system component (e.g., a particular IP block).
  • Primary interface fabric 813 may further enable communication of peer transactions between agents and supported fabrics. All transaction types including memory, input output (IO), configuration, and in-band messaging can be delivered via primary interface fabric 813. Thus the primary interface fabric 813 may act as a high performance interface for data transferred between peers and/or communications with upstream components.
  • primary interface fabric 813 may implement a split transaction protocol to achieve maximum concurrency. That is, this protocol provides for a request phase, a grant phase, and a command and data phase.
  • the primary interface fabric 813 supports the concept of distinct channels to provide a mechanism for independent data flows throughout the system.
  • the secondary interface fabric 814 (also sometimes referred to as a sideband fabric) may be a standard mechanism for communicating all out-of-band information.
  • special-purpose wires designed for a given implementation can be avoided, enhancing the ability to reuse IP blocks across a wide variety of chips.
  • a secondary interface fabric 814 standardizes all out-of-band communication (e.g., according to the IOSF specification), promoting modularity and reducing validation requirements for IP reuse across different designs.
  • secondary interface fabric 814 may be used to communicate non-performance-critical information, rather than for performance critical data transfers, which typically may be communicated via the primary interface fabric 813.
  • trace results may be collected and distributed across one or more buses 800, 810 of the SoC and provided to a trace aggregator 820 which may be implemented in hardware, software, or any combination thereof.
  • the aggregated results are then transmitted over one or more debug ports 830-831 which provide the results to an application (e.g., a debug/test application).
  • the ports 830-831 may be implemented using any type of data communication technology including wireless (e.g., 802.11 ports, Bluetooth ports, etc.) and wired (e.g., Ethernet, USB, etc.).
  • the traces collected from the various units may be sent to system memory 825 as well as debug ports 830-831 (e.g., via the primary fabric 813).
  • packet compression circuit/logic 910 performs packet-based compression on tracing data 905.
  • Compression resource selection circuitry/logic 913 selects one or more compression resources 911 existing within the SoC to perform the compression.
  • the existing compression resources 911 are not designed specifically for trace/debug compression. Rather, the compression resources 911 may normally be used to provide compression functions for applications other than compression of trace/debug packets.
  • the existing compression resources 911 may comprise circuitry/logic within a graphics circuit (for rendering graphics images), a video processing circuit (for encoding/decoding video data), an audio circuit (for encoding/decoding audio signals), an I/O circuit, or any other circuit within the SoC capable of performing compression operations.
  • the tracing data 905 is received in the form of debug/tracing packets containing data related to current debug/tracing operations.
  • the packets may comprise DW (double word), QW (quad word), or larger packets, and information related to the packet size may be encoded in the packet header.
  • the selected compression resources 911 identify repeated packets and substitute a single compressed packet in place of the repeated packets, thereby significantly reducing bandwidth.
  • the number of packets represented by the single compressed packet may be encoded within the header of the single compressed packet by the packet compression circuit/logic 910.
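A minimal sketch of this packet-based compression step follows; the dictionary-based "header" carrying the repeat count is a stand-in for the actual packet header format, which the patent does not fix:

```python
# Sketch of packet-based compression in the spirit of the packet
# compression circuit/logic 910: consecutive repeated trace packets are
# replaced by a single entry whose (hypothetical) header field carries
# the number of packets it represents.
def compress_packets(packets):
    out = []
    for pkt in packets:
        if out and out[-1]["payload"] == pkt:
            out[-1]["count"] += 1                 # fold the repeat into the count
        else:
            out.append({"payload": pkt, "count": 1})
    return out

def expand_packets(compressed):
    out = []
    for entry in compressed:
        out.extend([entry["payload"]] * entry["count"])
    return out

trace = [b"\x01\x02", b"\x01\x02", b"\x01\x02", b"\xff\x00"]
compressed = compress_packets(trace)
assert len(compressed) == 2 and compressed[0]["count"] == 3
assert expand_packets(compressed) == trace        # lossless round trip
```

Three identical packets collapse into one entry with a count of 3, which is the bandwidth reduction the embodiment targets.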
  • the single compressed packet may then be sent to a finer level of data compression performed by data compression circuit/logic 915.
  • the data compression circuit/logic 915 performs data compression on the compressed packets received from the packet compression circuit/logic 910.
  • compression resource selection logic/circuitry 914 identifies one or more existing compression resources 916 to be used to perform the data compression.
  • the existing compression resources 916 may include some or all of the same compression resources 911 used for packet-based compression or may utilize an entirely different set of compression resources (i.e., depending on the type of compression being implemented).
  • the existing compression resources 916 perform run length encoding (RLE) of the packets compressed by the packet compression circuit/logic 910. As such, in this embodiment, the existing compression resources 916 may be selected based on their capacity to efficiently perform run length encoding operations.
  • a compression encoding (CE) circuit/logic 920 encodes the compression parameters used by the packet compression 910 and/or data compression 915 within a packet header (i.e., so that the packets can be properly decoded). Note that the "packet" and "header" referred to here are different from the compressed packet/header generated by the packet compression circuit/logic 910.
  • the packet output by the CE 920 will be referred to herein as an "output packet.”
  • the CE circuit/logic 920 may encode various different compression parameters within the output packet header such as an indication as to whether each output packet has been compressed, whether packet and/or data compression has been used, the number of packets which have been compressed into a single packet by the packet compression 910, whether run length encoding has been used and/or the run length encoding parameters.
  • the parameters coded by the CE circuit/logic 920 are then used to decode the output packet which may be passed through the trace outputs 930 to an application running on a test/debug system.
  • the data compression circuit/logic 915 may compress data within each packet and the packet compression circuit/logic 910 may then perform packet compression on the packets containing the compressed data.
  • the encoding compression circuit/logic 920 may update the compression parameters for each packet directly following the packet compression by the packet compression circuit/logic 910 but before data compression is performed.
  • the packet compression 910 may use its own, dedicated compression resources while the data compression 915 uses the existing compression resources 916.
  • the data compression 915 may use its own, dedicated compression resources while the packet compression uses the existing compression resources 911.
  • Figure 10 illustrates a method in accordance with one embodiment of the invention. The method may be implemented within the context of the architectures described above, but is not limited to any particular architecture.
  • a trace generator generates data related to trace/debug operations performed on an SoC (or other type of data processor).
  • a determination is made as to whether compression resources are available to compress the trace/debug data. For example, as discussed in greater detail below, compression engine usage information may be collected to determine whether a particular compression resource is available (i.e., existing on the chip/SoC and not currently in use or overloaded). If not, then the trace/debug data is provided to the output control unit at 1007 in an uncompressed form.
  • the compression resources are selected. For example, certain compression resources may be capable of compressing the trace/debug data more efficiently or effectively than other compression resources. This decision may be based on variables such as the type of compression resources, the current state of the compression resources, and/or the current load on the compression resources.
  • packet-based compression is performed.
  • packet-based compression may include combining multiple repeated packets of the same type into a single packet.
  • the packet header of the single packet may be updated to indicate the number of repeated packets represented by the single packet.
  • data compression is performed on the compressed packets.
  • frame compression circuitry of an image processing component may be used to perform run length encoding of the compressed packets.
  • RLE performs lossless data compression in which runs of data are stored as a single data value and count.
  • various other forms of lossless data compression may be used, in addition to or instead of RLE, including Huffman coding, block-sorting compression, and Lempel-Ziv-Welch (LZW) coding.
  • a compression encoding (CE) operation is then performed.
  • the CE operation may encode an indication as to whether each packet has been compressed, whether packet and/or data compression has been used, the number of packets which have been compressed into a single packet, whether run length encoding has been used, and/or the run length encoding parameters.
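The method steps above can be sketched end to end; this is a hypothetical illustration of the flow (packet compression, then RLE-style data compression, then recording the compression parameters in place of the CE header), with all function and field names being assumptions rather than the patent's terminology:

```python
# Hypothetical end-to-end sketch of the Figure 10 flow: packet-based
# compression (operation 1004), then run-length encoding of the result,
# with the compression parameters recorded alongside the output as a
# stand-in for the CE-encoded header.
def dedup(packets):
    out = []
    for pkt in packets:
        if out and out[-1][0] == pkt:
            out[-1] = (pkt, out[-1][1] + 1)   # repeated packet: bump count
        else:
            out.append((pkt, 1))
    return out

def rle(values):
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1] = (v, runs[-1][1] + 1)
        else:
            runs.append((v, 1))
    return runs

def compress_trace(packets):
    deduped = dedup(packets)                  # packet-based compression
    encoded = rle([p for p, _ in deduped])    # data compression on the result
    params = {"cmp": True, "pc": True, "rle": True,
              "pc_counts": [c for _, c in deduped]}
    return encoded, params                    # params play the role of CE output

encoded, params = compress_trace(["A", "A", "B", "B", "B"])
assert params["pc_counts"] == [2, 3]
```

Because the recorded parameters fully describe what was done, a decoder can invert both stages, which is what allows the CE parameters to be "used to decode" the output packets.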
  • the parameters coded by the CE operation are then used to decode the output packet.
  • the underlying principles of the invention are not limited to any particular order in which the various forms of compression are performed. So, for example, in Figure 9, the data compression circuit/logic 915 may compress data within each packet and the packet compression circuit/logic 910 may then perform packet compression on the packets containing the compressed data. Similarly, the encoding compression (EC) operation may update the compression parameters for each packet directly following the packet compression operation 1004 but before the data compression operation is performed.
  • Figure 11 illustrates details of one embodiment which performs packet-based compression on trace/debug packets.
  • the packet-based compression 1110 may be performed on packets of various sizes including, but not limited to, DW (double word) and QW (quad word) packets.
  • hardware and/or software trace packets are collected from various software trace sources 1105 and hardware trace sources 1106 and then checked for repeated packets.
  • packet-based compression 1110 compresses the packets using existing SoC compression resources and sends out one packet in place of two or more repeated packets.
  • the packet-based compression 1110 may perform its compression in accordance with configuration and control 1120 information/signals which may be input by a user (e.g., via a user interface). For example, the user may specify the number of repeated packets which will trigger a compression operation.
  • the compressed packet header is updated to include an indication of the number of repeated packets represented by the compressed packet, which is then sent to the trace output 1130. While only packet-based compression is illustrated in Figure 11, the trace output 1130 may also be subjected to data compression, as discussed above.
  • Figure 12 illustrates additional details of one embodiment of the data compression circuit/logic including a trace streaming circuit 1215 that collects tracing data from a variety of sources including software tracing data 1205, real-time processor tracing data 1206 which may include both instruction and data tracing, and hardware tracing data 1207.
  • the trace streaming unit 1215 detects if there are compression resources available for compression and, if available, intelligently allocates compression jobs to the resources.
  • the compression resources include compression engines A-C 1230-1232.
  • the trace streaming unit 1215 receives compression engine data 1220 related to the current state of each of the compression engines 1230-1232 to determine which compression engines to select.
  • This data may include, for example, compression idle information (CII) identifying whether the compression engines 1230-1232 are in an idle or active state.
  • the compression engine data 1220 may be maintained by the trace streaming unit 1215 within a resource usage database 1216 which it may then query to render its selections. It may then use this data in combination with compression resource policies specified by a policy manager to intelligently allocate jobs to each of the compression engines 1230-1232. For example, a "low power" debug policy may instruct the trace streaming circuit 1215 to optimize power usage while enabling the compression engines. In contrast, a "high bandwidth" policy may instruct the trace streaming unit 1215 to select those compression engines which will complete the compression jobs the most efficiently.
  • a compression engine which is idle or OFF may be enabled in order to complete the job in the shortest amount of time.
  • Another policy may specify a power budget and the trace streaming unit 1215 may select the most efficient set of compression engines 1230-1232 capable of performing the compression within the power budget.
  • a policy may specify that specific types of jobs should be allocated to specific compression engines.
  • Yet another policy may perform a load balancing function based on the relative load on each of the compression engines 1230-1232 (e.g., attempting to distribute the load evenly across the compression engines).
  • Various additional or alternative policies may be specified in this manner while still complying with the underlying principles of the invention.
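The policy-driven selection described above can be sketched as follows; the engine attributes, policy names, and selection heuristics are assumptions for illustration, not the trace streaming unit 1215's actual algorithm:

```python
# Hypothetical sketch of policy-driven selection among compression
# engines, in the spirit of the trace streaming unit 1215 and its
# resource usage data. Engine fields and policy names are illustrative.
engines = [
    {"name": "A", "idle": True,  "power_cost": 3, "throughput": 10},
    {"name": "B", "idle": False, "power_cost": 1, "throughput": 4},
    {"name": "C", "idle": True,  "power_cost": 2, "throughput": 7},
]

def select_engine(engines, policy):
    # Prefer idle engines (from the compression idle information);
    # fall back to busy ones if nothing is idle.
    idle = [e for e in engines if e["idle"]]
    candidates = idle or engines
    if policy == "low_power":
        # optimize power usage while still enabling a compression engine
        return min(candidates, key=lambda e: e["power_cost"])
    if policy == "high_bandwidth":
        # pick the engine expected to finish the job fastest
        return max(candidates, key=lambda e: e["throughput"])
    raise ValueError("unknown policy")

assert select_engine(engines, "low_power")["name"] == "C"
assert select_engine(engines, "high_bandwidth")["name"] == "A"
```

A power-budget or load-balancing policy would simply substitute a different key function or filter over the same engine data.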
  • the policy manager 1217 may be configured to meet the specific requirements of a given implementation.
  • a trace collector 1240 receives compression information from the trace streaming circuit 1215 (e.g., indicating the portions of trace data compressed by different compression engines) and reconstructs the traces. In an embodiment in which data compression is performed before packet-based compression, the trace results collected by the trace collector may be passed through a packet-based compression circuit 910.
  • the trace collector 1240 performs reconstruction of trace packets based on an ID which it extracts from the packets (e.g., associating each packet with a particular trace based on the ID used).
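The ID-based reconstruction can be sketched briefly; the packet layout (a dict with `id` and `data` fields) is an assumption standing in for whatever encoding the trace collector 1240 actually parses:

```python
# Sketch of trace reconstruction by ID: the collector groups interleaved
# packets back into per-trace streams using an ID carried in each packet.
from collections import defaultdict

def reconstruct(packets):
    traces = defaultdict(list)
    for pkt in packets:
        traces[pkt["id"]].append(pkt["data"])   # associate packet with its trace
    return dict(traces)

interleaved = [
    {"id": "hw", "data": 0x10},
    {"id": "sw", "data": 0x20},
    {"id": "hw", "data": 0x11},
]
assert reconstruct(interleaved) == {"hw": [0x10, 0x11], "sw": [0x20]}
```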
  • Figure 13 illustrates an exemplary output packet comprising a header 1301 and data 1302 and shows the details of the compression parameters which may be encoded in the header 1301.
  • a compression (CMP) field 1311 provides an indication as to whether any compression has been applied to the packet. As mentioned, in some cases, compression may not be used (e.g., because resources are not available).
  • a packet compressed (PC) field 1312 indicates whether packet-based compression has been performed and a PC_CNT field 1313 indicates the number of packets which have been combined into a single compressed packet.
  • an RLE field 1314 indicates whether run length encoding has been applied (i.e., for data compression) and an RLE_CNT field 1315 indicates the run length encoding count. For example, the count value may indicate the number of consecutive data elements which have been converted to a single data value and count.
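A bit-level sketch of these header fields follows. The field widths chosen here (1-bit flags, 8-bit counts) and the bit positions are assumptions for illustration only; Figure 13 names the fields but the patent does not fix an exact layout:

```python
# Illustrative packing of the Figure 13 header fields (CMP 1311, PC 1312,
# PC_CNT 1313, RLE 1314, RLE_CNT 1315) into an integer. Widths/positions
# are assumed: 1-bit flags and 8-bit counts.
def pack_header(cmp, pc, pc_cnt, rle, rle_cnt):
    return ((cmp & 1)
            | ((pc & 1) << 1)
            | ((pc_cnt & 0xFF) << 2)
            | ((rle & 1) << 10)
            | ((rle_cnt & 0xFF) << 11))

def unpack_header(h):
    return {"CMP": h & 1,
            "PC": (h >> 1) & 1,
            "PC_CNT": (h >> 2) & 0xFF,
            "RLE": (h >> 10) & 1,
            "RLE_CNT": (h >> 11) & 0xFF}

h = pack_header(cmp=1, pc=1, pc_cnt=4, rle=1, rle_cnt=9)
assert unpack_header(h) == {"CMP": 1, "PC": 1, "PC_CNT": 4,
                            "RLE": 1, "RLE_CNT": 9}
```

The decoder side reads these fields first to decide which decompression steps (packet expansion, RLE expansion, or neither) to apply to the packet data 1302.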
  • Embodiments of the invention may include various steps, which have been described above.
  • the steps may be embodied in machine-executable instructions which may be used to cause a general-purpose or special-purpose processor to perform the steps.
  • these steps may be performed by specific hardware components that contain hardwired logic for performing the steps, or by any combination thereof.
  • instructions may refer to specific configurations of hardware such as application specific integrated circuits (ASICs) configured to perform certain operations or having a predetermined functionality or software instructions stored in memory embodied in a non-transitory computer readable medium.
  • the techniques shown in the Figures can be implemented using code and data stored and executed on one or more electronic devices (e.g., an end station, a network element, etc.).
  • Such electronic devices store and communicate (internally and/or with other electronic devices over a network) code and data using computer machine-readable media, such as non-transitory computer machine-readable storage media (e.g., magnetic disks; optical disks; random access memory; read only memory; flash memory devices; phase-change memory) and transitory computer machine-readable communication media (e.g., electrical, optical, acoustical or other forms of propagated signals).
  • Such electronic devices typically include a set of one or more processors coupled to one or more other components, such as one or more storage devices (non-transitory machine- readable storage media), user input/output devices (e.g., a keyboard, a touchscreen, and/or a display), and network connections.
  • the coupling of the set of processors and other components is typically through one or more busses and bridges (also termed as bus controllers).
  • the storage device of a given electronic device typically stores code and/or data for execution on the set of one or more processors of that electronic device.
  • one or more parts of an embodiment of the invention may be implemented using different combinations of software, firmware, and/or hardware.
  • numerous specific details were set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the invention may be practiced without some of these specific details. In certain instances, well known structures and functions were not described in elaborate detail in order to avoid obscuring the subject matter of the present invention. Accordingly, the scope and spirit of the invention should be judged in terms of the claims which follow.


Abstract

An apparatus and method are described for compressing trace/debug data using existing compression resources within a semiconductor chip or an SoC. For example, one embodiment of an apparatus comprises: trace/debug circuitry and/or logic to implement trace/debug operations on a semiconductor chip or an SoC to generate trace/debug data; and compression resource selection circuitry and/or logic to select one or more compression resources of the semiconductor chip to compress the trace/debug data to generate compressed trace/debug data.

Description

APPARATUS AND METHOD FOR PERFORMING
TRACE/DEBUG DATA COMPRESSION
BACKGROUND
Field of the Invention
[0001] This invention relates generally to the field of computer processors. More particularly, the invention relates to an apparatus and method for data compression such as the compression of debug/trace data implemented, for example, within a system-on-a-chip (SoC) or other type of system or processor.
Description of the Related Art
[0002] Computer architectures are moving from interfacing discrete components on a printed circuit board or through use of other package configurations, to integrating multiple components onto a single integrated chip, which is commonly referred to as a System on a Chip (SoC) architecture. SoCs offer a number of advantages, including denser packaging, higher speed communication between functional components, and lower temperature operation. SoC designs also provide standardization, scalability, modularization, and reusability.
[0003] SoC architectures present challenges with respect to verification of design and integration when compared with discrete components. For example, for many years personal computers employed the ubiquitous "North" bridge and "South" bridge architecture, wherein a central processing unit was interfaced to a memory controller hub (MCH) chip via a first set of buses, and the memory controller hub, in turn, was interfaced to an Input/Output controller hub (ICH) chip via another set of buses. Each of the MCH and ICH further provided interface to various system components and peripherals via further buses and interfaces. Each of these buses and interfaces adhere to well-established standards, enabling the system architectures to support modular designs. To ensure proper design, each of the individual or groups of components could be tested using test interfaces which are accessible through the device pins.
[0004] Modularity is also a key aspect of SoC architectures. Typically, the system designer will integrate various functional blocks, including functional blocks or components that are commonly referred to in the industry as Intellectual Property ("IP") cores, IP blocks, or simply IP. For the purposes herein, these functional blocks are referred to as IP blocks or simply "IP"; it will be understood that the terminology IP blocks or IP also covers IP cores and any other component or block generally known as
IP, as would be understood by those in the SoC development and manufacturing industries. These IP blocks generally serve one or more dedicated functions and often comprise existing circuit design blocks that are licensed from various vendors or developed in-house.
[0005] Because current day SoCs are designed to have multiple features and functionalities, there is a need to have multiple software and firmware engines working alongside the hardware IP blocks to accomplish the required functionalities. While debug infrastructures need to support such multiple software sources, the output pipe may have limited bandwidth. Current systems have limited I/Os that are used in FFDs (Form Factor Devices), such as Tablets, Phones, Phablets etc., and hence pose bandwidth limitations for SoCs. This necessitates a mechanism to compress the trace data so that the system bandwidth and resources are used effectively for debug.
[0006] Many IP blocks include compression engines (e.g., logic and circuitry) used to perform compression when implementing the function for which the IP block was designed. For example, a compression engine designed to compress image data such as a graphics circuit, video processing circuit, and/or camera interface, will typically include logic/circuitry for performing run-length encoding. In addition, communication circuits will typically include logic and circuitry for performing various types of packet- based compression. Audio circuits such as DSPs may also include forms of compression logic/circuitry. Currently, these compression engines are left unused when performing testing operations such as debug traces on the SoC.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] A better understanding of the present invention can be obtained from the following detailed description in conjunction with the following drawings, in which:
[0008] FIG. 1A is a block diagram illustrating both an exemplary in-order fetch, decode, retire pipeline and an exemplary register renaming, out-of-order
issue/execution pipeline according to embodiments of the invention;
[0009] FIG. 1B is a block diagram illustrating both an exemplary embodiment of an in-order fetch, decode, retire core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to
embodiments of the invention;
[0010] FIG. 2 is a block diagram of a single core processor and a multicore processor with integrated memory controller and graphics according to embodiments of the invention;
[0011] FIG. 3 illustrates a block diagram of a system in accordance with one embodiment of the present invention;
[0012] FIG. 4 illustrates a block diagram of a second system in accordance with an embodiment of the present invention;
[0013] FIG. 5 illustrates a block diagram of a third system in accordance with an embodiment of the present invention;
[0014] FIG. 6 illustrates a block diagram of a system on a chip (SoC) in accordance with an embodiment of the present invention;
[0015] FIG. 7 illustrates a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention;
[0016] FIG. 8 illustrates a trace/debug architecture on which embodiments of the invention may be used;
[0017] FIG. 9 illustrates an architecture in accordance with one embodiment of the invention;
[0018] FIG. 10 illustrates a method in accordance with one embodiment of the invention;
[0019] FIG. 11 illustrates one embodiment of a packet-based compression circuit and/or logic;
[0020] FIG. 12 illustrates one embodiment of a data compression circuit and/or logic; and
[0021] FIG. 13 illustrates encoding compression data stored within a packet header in one embodiment.
DETAILED DESCRIPTION
[0022] In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention described below. It will be apparent, however, to one skilled in the art that the embodiments of the invention may be practiced without some of these specific details. In other instances, well-known structures and devices are shown in block diagram form to avoid obscuring the underlying principles of the embodiments of the invention.
[0023] EXEMPLARY PROCESSOR ARCHITECTURES AND DATA TYPES
[0024] Figure 1A is a block diagram illustrating both an exemplary in-order fetch, decode, retire pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention. Figure 1B is a block diagram illustrating both an exemplary embodiment of an in-order fetch, decode, retire core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. The solid lined boxes in Figures 1A-B illustrate the in-order portions of the pipeline and core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core.
[0025] In Figure 1A, a processor pipeline 100 includes a fetch stage 102, a length decode stage 104, a decode stage 106, an allocation stage 108, a renaming stage 110, a scheduling (also known as a dispatch or issue) stage 112, a register read/memory read stage 114, an execute stage 116, a write back/memory write stage 118, an exception handling stage 122, and a commit stage 124.
[0026] Figure 1B shows processor core 190 including a front end unit 130 coupled to an execution engine unit 150, and both are coupled to a memory unit 170. The core 190 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 190 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.
[0027] The front end unit 130 includes a branch prediction unit 132 coupled to an instruction cache unit 134, which is coupled to an instruction translation lookaside buffer (TLB) 136, which is coupled to an instruction fetch unit 138, which is coupled to a decode unit 140. The decode unit 140 (or decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 140 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. In one embodiment, the core 190 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 140 or otherwise within the front end unit 130). The decode unit 140 is coupled to a rename/allocator unit 152 in the execution engine unit 150. [0028] The execution engine unit 150 includes the rename/allocator unit 152 coupled to a retirement unit 154 and a set of one or more scheduler unit(s) 156. The scheduler unit(s) 156 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler unit(s) 156 is coupled to the physical register file(s) unit(s) 158. Each of the physical register file(s) units 158 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc.
In one embodiment, the physical register file(s) unit 158 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. The physical register file(s) unit(s) 158 is overlapped by the retirement unit 154 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 154 and the physical register file(s) unit(s) 158 are coupled to the execution cluster(s) 160. The execution cluster(s) 160 includes a set of one or more execution units 162 and a set of one or more memory access units 164. The execution units 162 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions.
The scheduler unit(s) 156, physical register file(s) unit(s) 158, and execution cluster(s) 160 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 164). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order. [0029] The set of memory access units 164 is coupled to the memory unit 170, which includes a data TLB unit 172 coupled to a data cache unit 174 coupled to a level 2 (L2) cache unit 176. In one exemplary embodiment, the memory access units 164 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 172 in the memory unit 170. The instruction cache unit 134 is further coupled to a level 2 (L2) cache unit 176 in the memory unit 170. The L2 cache unit 176 is coupled to one or more other levels of cache and eventually to a main memory.
[0030] By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 100 as follows: 1) the instruction fetch 138 performs the fetch and length decoding stages 102 and 104; 2) the decode unit 140 performs the decode stage 106; 3) the rename/allocator unit 152 performs the allocation stage 108 and renaming stage 110; 4) the scheduler unit(s) 156 performs the schedule stage 112; 5) the physical register file(s) unit(s) 158 and the memory unit 170 perform the register read/memory read stage 114; the execution cluster 160 performs the execute stage 116; 6) the memory unit 170 and the physical register file(s) unit(s) 158 perform the write back/memory write stage 118; 7) various units may be involved in the exception handling stage 122; and 8) the retirement unit 154 and the physical register file(s) unit(s) 158 perform the commit stage 124.
[0031] The core 190 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein. In one embodiment, the core 190 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2, and/or some form of the generic vector friendly instruction format (U=0 and/or U=1), described below), thereby allowing the operations used by many multimedia applications to be performed using packed data.
[0032] It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel®
Hyperthreading technology). [0033] While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 134/174 and a shared L2 cache unit 176, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.
[0034] Figure 2 is a block diagram of a processor 200 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention. The solid lined boxes in Figure 2 illustrate a processor 200 with a single core 202A, a system agent 210, a set of one or more bus controller units 216, while the optional addition of the dashed lined boxes illustrates an alternative processor 200 with multiple cores 202A-N, a set of one or more integrated memory controller unit(s) 214 in the system agent unit 210, and special purpose logic 208.
[0035] Thus, different implementations of the processor 200 may include: 1) a CPU with the special purpose logic 208 being integrated graphics and/or scientific
(throughput) logic (which may include one or more cores), and the cores 202A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, a combination of the two); 2) a coprocessor with the cores 202A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput); and 3) a coprocessor with the cores 202A-N being a large number of general purpose in-order cores. Thus, the processor 200 may be a general-purpose processor, coprocessor or special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), embedded processor, or the like. The processor may be implemented on one or more chips. The processor 200 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.
[0036] The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 206, and external memory (not shown) coupled to the set of integrated memory controller units 214. The set of shared cache units 206 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit 212 interconnects the integrated graphics logic 208, the set of shared cache units 206, and the system agent unit 210/integrated memory controller unit(s) 214, alternative embodiments may use any number of well-known techniques for interconnecting such units. In one embodiment, coherency is maintained between one or more cache units 206 and cores 202A-N.
[0037] In some embodiments, one or more of the cores 202A-N are capable of multi-threading. The system agent 210 includes those components coordinating and operating cores 202A-N. The system agent unit 210 may include for example a power control unit (PCU) and a display unit. The PCU may be or include logic and
components needed for regulating the power state of the cores 202A-N and the integrated graphics logic 208. The display unit is for driving one or more externally connected displays.
[0038] The cores 202A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 202A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set. In one embodiment, the cores 202A-N are heterogeneous and include both the "small" cores and "big" cores described below.
[0039] Figures 3-6 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices, are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are generally suitable.
[0040] Referring now to Figure 3, shown is a block diagram of a system 300 in accordance with one embodiment of the present invention. The system 300 may include one or more processors 310, 315, which are coupled to a controller hub 320. In one embodiment the controller hub 320 includes a graphics memory controller hub (GMCH) 390 and an Input/Output Hub (IOH) 350 (which may be on separate chips); the GMCH 390 includes memory and graphics controllers to which are coupled memory 340 and a coprocessor 345; the IOH 350 couples input/output (I/O) devices 360 to the GMCH 390. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 340 and the coprocessor 345 are coupled directly to the processor 310, and the controller hub 320 is in a single chip with the IOH 350.
[0041] The optional nature of additional processors 315 is denoted in Figure 3 with broken lines. Each processor 310, 315 may include one or more of the processing cores described herein and may be some version of the processor 200.
[0042] The memory 340 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 320 communicates with the processor(s) 310, 315 via a multi-drop bus, such as a frontside bus (FSB), point-to-point interface such as
QuickPath Interconnect (QPI), or similar connection 395.
[0043] In one embodiment, the coprocessor 345 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like. In one embodiment, controller hub 320 may include an integrated graphics accelerator.
[0044] There can be a variety of differences between the physical resources 310, 315 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like.
[0045] In one embodiment, the processor 310 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 310 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 345.
Accordingly, the processor 310 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect, to coprocessor 345. Coprocessor(s) 345 accept and execute the received coprocessor instructions.
[0046] Referring now to Figure 4, shown is a block diagram of a first more specific exemplary system 400 in accordance with an embodiment of the present invention. As shown in Figure 4, multiprocessor system 400 is a point-to-point interconnect system, and includes a first processor 470 and a second processor 480 coupled via a point-to-point interconnect 450. Each of processors 470 and 480 may be some version of the processor 200. In one embodiment of the invention, processors 470 and 480 are respectively processors 310 and 315, while coprocessor 438 is coprocessor 345. In another embodiment, processors 470 and 480 are respectively processor 310 and coprocessor 345.
[0047] Processors 470 and 480 are shown including integrated memory controller (IMC) units 472 and 482, respectively. Processor 470 also includes as part of its bus controller units point-to-point (P-P) interfaces 476 and 478; similarly, second processor 480 includes P-P interfaces 486 and 488. Processors 470, 480 may exchange information via a point-to-point (P-P) interface 450 using P-P interface circuits 478, 488. As shown in Figure 4, IMCs 472 and 482 couple the processors to respective memories, namely a memory 432 and a memory 434, which may be portions of main memory locally attached to the respective processors.
[0048] Processors 470, 480 may each exchange information with a chipset 490 via individual P-P interfaces 452, 454 using point to point interface circuits 476, 494, 486, 498. Chipset 490 may optionally exchange information with the coprocessor 438 via a high-performance interface 439. In one embodiment, the coprocessor 438 is a special- purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like.
[0049] A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.
[0050] Chipset 490 may be coupled to a first bus 416 via an interface 496. In one embodiment, first bus 416 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited.
[0051] As shown in Figure 4, various I/O devices 414 may be coupled to first bus 416, along with a bus bridge 418 which couples first bus 416 to a second bus 420. In one embodiment, one or more additional processor(s) 415, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics
accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to first bus 416. In one embodiment, second bus 420 may be a low pin count (LPC) bus. Various devices may be coupled to a second bus 420 including, for example, a keyboard and/or mouse 422, communication devices 427 and a storage unit 428 such as a disk drive or other mass storage device which may include instructions/code and data 430, in one embodiment. Further, an audio I/O 424 may be coupled to the second bus 420. Note that other architectures are possible. For example, instead of the point-to-point architecture of Figure 4, a system may implement a multi-drop bus or other such architecture.
[0052] Referring now to Figure 5, shown is a block diagram of a second more specific exemplary system 500 in accordance with an embodiment of the present invention. Like elements in Figures 4 and 5 bear like reference numerals, and certain aspects of Figure 4 have been omitted from Figure 5 in order to avoid obscuring other aspects of Figure 5.
[0053] Figure 5 illustrates that the processors 470, 480 may include integrated memory and I/O control logic ("CL") 472 and 482, respectively. Thus, the CL 472, 482 include integrated memory controller units and include I/O control logic. Figure 5 illustrates that not only are the memories 432, 434 coupled to the CL 472, 482, but also that I/O devices 514 are also coupled to the control logic 472, 482. Legacy I/O devices 515 are coupled to the chipset 490.
[0054] Referring now to Figure 6, shown is a block diagram of a SoC 600 in accordance with an embodiment of the present invention. Similar elements in Figure 2 bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs. In Figure 6, an interconnect unit(s) 602 is coupled to: an application processor 610 which includes a set of one or more cores 202A-N and shared cache unit(s) 206; a system agent unit 210; a bus controller unit(s) 216; an integrated memory controller unit(s) 214; a set of one or more coprocessors 620 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 630; a direct memory access (DMA) unit 632; and a display unit 640 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 620 include a special-purpose processor, such as, for example, a network or communication processor, compression engine, GPGPU, a high-throughput MIC processor, embedded processor, or the like.
[0055] Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. [0056] Program code, such as code 430 illustrated in Figure 4, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.
[0057] The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.
[0058] One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores" may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.
[0059] Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), phase change memory (PCM), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.
[0060] Accordingly, embodiments of the invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products. [0061] In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.
[0062] Figure 7 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof. Figure 7 shows that a program in a high level language 702 may be compiled using an x86 compiler 704 to generate x86 binary code 706 that may be natively executed by a processor with at least one x86 instruction set core 716. The processor with at least one x86 instruction set core 716 represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. The x86 compiler 704 represents a compiler that is operable to generate x86 binary code 706 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 716.
Similarly, Figure 7 shows the program in the high level language 702 may be compiled using an alternative instruction set compiler 708 to generate alternative instruction set binary code 710 that may be natively executed by a processor without at least one x86 instruction set core 714 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). The instruction converter 712 is used to convert the x86 binary code 706 into code that may be natively executed by the processor without an x86 instruction set core 714. This converted code is not likely to be the same as the alternative instruction set binary code 710 because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 712 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 706.
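The static translation performed by an instruction converter such as converter 712 can be sketched at a toy level. The following illustrative Python sketch is not drawn from this disclosure and does not model any real instruction set; the mnemonics, the mapping table, and the trap fallback are all hypothetical, showing only the general idea of mapping source instructions to target instructions and routing unsupported opcodes to an emulation path:

```python
# Hypothetical one-to-one opcode mapping (not real x86 or ARM mnemonics).
_TRANSLATION = {"SRC_ADD": "TGT_ADD", "SRC_MOV": "TGT_MOVE"}

def convert(instructions):
    """Statically translate each source instruction to a target instruction;
    opcodes with no direct equivalent fall back to a generic emulation trap.
    This sketches static binary translation at the mnemonic level only."""
    return [_TRANSLATION.get(op, "TGT_TRAP") for op in instructions]
```

In a real converter, the fallback path would invoke an emulation routine rather than emit a placeholder opcode, and translation would operate on encoded binary instructions, not mnemonics.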
[0063] APPARATUS AND METHOD FOR TRACE/DEBUG DATA COMPRESSION
[0064] As mentioned, because current day SoCs are designed to have multiple features and functionalities, there is a need to have multiple software and firmware engines working alongside the hardware IP blocks to accomplish the required functionalities. While debug infrastructures need to support such multiple software sources, the output pipe may have limited bandwidth, necessitating a mechanism to compress the trace data so that the system bandwidth and resources are used effectively for debug.
[0065] Many IP blocks include compression engines (e.g., logic and circuitry) used to perform compression when implementing the function for which the IP block was designed. For example, a compression engine designed to compress image data such as a graphics circuit, video processing circuit, and/or camera interface, will typically include logic/circuitry for performing run-length encoding (RLE). As is known by those of skill in the art, RLE is a form of lossless data compression in which runs of data (that is, sequences in which the same data value occurs in many consecutive data elements) are stored as a single data value and count. In addition, some IP blocks such as imaging circuits, audio circuits and/or communication circuits may include logic and circuitry for performing packet-based compression. Currently, these compression engines are left unused when performing testing operations such as debug traces on the SoC, processor, ASIC, or other type of chip.
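The run-length encoding described above can be illustrated with a brief sketch. The following Python functions are hypothetical and not part of this disclosure (the one-byte count framing, in particular, is an invented convention); they store each run as a (count, value) pair and losslessly invert the encoding, which is the property that allows debug data to be fully recovered by external tools:

```python
def rle_encode(data: bytes) -> bytes:
    """Encode runs of repeated bytes as (count, value) pairs,
    with run lengths capped at 255 to fit one count byte."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out.append(run)       # count byte
        out.append(data[i])   # value byte
        i += run
    return bytes(out)

def rle_decode(encoded: bytes) -> bytes:
    """Invert rle_encode by expanding each (count, value) pair."""
    out = bytearray()
    for i in range(0, len(encoded), 2):
        out.extend(encoded[i + 1:i + 2] * encoded[i])
    return bytes(out)
```

For trace streams with long runs of identical values (e.g., idle markers), such an encoder collapses each run to two bytes; for data with no repetition it can expand the stream, which is why real engines typically gate compression on observed gain.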
[0066] One embodiment of the invention reduces the high bandwidth consumed by debug traces by using existing compression engines in a chip's IP blocks to compress the trace data. The traces generated from any system debug trace operations may be efficiently encoded using existing compression engines in a lossless manner so that the underlying debug data can be extracted and used by external tools and software.
[0067] There are various compression engines which may be used in accordance with embodiments of the invention including those utilized in current tracing tools used for signal-level debugging and trace-level debugging, graphics and image compression engines, audio compression engines, and/or any other types of compression engines (e.g. embedded controllers, signal processors, co-processors, ASICs) that exist on a typical SoC.
[0068] Figure 8 shows an exemplary trace/debug architecture of an SoC. A plurality of different traces are illustrated including a hardware trace 801, a processor trace 802, a JTAG (Joint Test Action Group) mechanism 803 for configuration and low-bandwidth trace, an Intel Architecture (IA)/core trace 804 and one or more software traces 805. In one embodiment, the hardware trace 801 may include tracing operations performed with respect to any hardware circuit within the SoC. The processor trace 802 may include tracing of instructions executed by a processor. For example, it may be desirable to track jump and/or branch instructions to determine how the instruction stream is progressing over time (which is a bandwidth-intensive type of trace). The JTAG module 803 may implement one or more tracing operations in accordance with JTAG standards. As is known by those of skill in the art, JTAG implements standards for on-chip instrumentation in electronic design automation (EDA). The IA/core trace module 804 may implement specific traces on IA (Intel Architecture) core components. It should be noted, however, that the underlying principles of the invention are not limited to IA. In addition, a variety of different application-specific software traces 805, 812 may be designed and implemented in accordance with the embodiments of the invention. The particular example in Figure 8 illustrates south complex hardware traces 811 directed to components of the SoC responsible for I/O operations (e.g., USB, PCIe, serial ATA, and BIOS interactions). It should be noted, however, that the underlying principles of the invention are not limited to this specific implementation.
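The bandwidth pressure of a branch/jump trace, and one way outcomes can be densely packed, can be sketched as follows. The framing below is purely illustrative and invented for this sketch; it is loosely analogous to, but not the same as, the taken/not-taken packet formats used by real processor trace hardware:

```python
def pack_branch_outcomes(outcomes):
    """Pack a list of taken (True) / not-taken (False) conditional branch
    outcomes into bytes, 8 outcomes per byte, MSB first.
    Returns (packed_bytes, outcome_count); the count is needed by the
    decoder because the final byte may be only partially filled."""
    packed = bytearray()
    for i, taken in enumerate(outcomes):
        if i % 8 == 0:
            packed.append(0)          # start a new outcome byte
        if taken:
            packed[-1] |= 0x80 >> (i % 8)
    return bytes(packed), len(outcomes)
```

Packing one bit per branch, rather than emitting a full target address per branch, is one reason hardware trace formats can follow an instruction stream without saturating the output pipe.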
[0069] Some implementations may be used in a semiconductor device designed according to the Intel™ on-chip system fabric (IOSF)™ specification, which provides a standardized on-die interconnect protocol for attaching IP blocks. This embodiment may include a primary fabric 813 and a secondary fabric 814 to support communication between various system components. For example, each fabric can be implemented as a bus, a hierarchical bus, a cascaded hub, or so forth. In one embodiment, the primary interface fabric 813 is used for all in-band communication between agents and memory, e.g., between a host processor such as a central processing unit (CPU) and an agent implemented on behalf of a system component (e.g., a particular IP block). Primary interface fabric 813 may further enable communication of peer transactions between agents and supported fabrics. All transaction types including memory, input output (IO), configuration, and in-band messaging can be delivered via primary interface fabric 813. Thus the primary interface fabric 813 may act as a high performance interface for data transferred between peers and/or communications with upstream components.
[0070] In various implementations, primary interface fabric 813 may implement a split transaction protocol to achieve maximum concurrency. That is, this protocol provides for a request phase, a grant phase, and a command and data phase. In addition, in one embodiment, the primary interface fabric 813 supports the concept of distinct channels to provide a mechanism for independent data flows throughout the system.
[0071] The secondary interface fabric 814 (also sometimes referred to as a "sideband" fabric) may be a standard mechanism for communicating all out-of-band information. In this way, special-purpose wires designed for a given implementation can be avoided, enhancing the ability to reuse IP blocks across a wide variety of chips. Thus, in contrast to an IP block that uses dedicated wires to handle out-of-band communications such as status, interrupt, power management, fuse distribution, configuration shadowing, test modes, and so forth, a secondary interface fabric 814 standardizes all out-of-band communication (e.g., according to the IOSF specification), promoting modularity and reducing validation requirements for IP reuse across different designs. In general, secondary interface fabric 814 may be used to communicate non-performance-critical information, rather than for performance-critical data transfers, which typically may be communicated via the primary interface fabric 813.
[0072] As illustrated, trace results may be collected and distributed across one or more buses 800, 810 of the SoC and provided to a trace aggregator 820 which may be implemented in hardware, software, or any combination thereof. The aggregated results are then transmitted over one or more debug ports 830-831 which provide the results to an application (e.g., a debug/test application). The ports 830-831 may be implemented using any type of data communication technology including wireless (e.g., 802.11 ports, Bluetooth ports, etc.) and wired (e.g., Ethernet, USB, etc.). As illustrated, the traces collected from the various units may be sent to system memory 825 as well as debug ports 830-831 (e.g., via the primary fabric 813).
[0073] One embodiment of an architecture for utilizing existing compression facilities within an SoC (or other type of data processor) is illustrated in Figure 9. In one embodiment, packet compression circuit/logic 910 performs packet-based compression on tracing data 905. Compression resource selection circuitry/logic 913 selects one or more compression resources 911 existing within the SoC to perform the compression. Significantly, in one embodiment, the existing compression resources 911 are not designed specifically for trace/debug compression. Rather, the compression resources 911 may normally be used to provide compression functions for applications other than compression of trace/debug packets. By way of example, the existing compression resources 911 may comprise circuitry/logic within a graphics circuit (for rendering graphics images), a video processing circuit (for encoding/decoding video data), an audio circuit (for encoding/decoding audio signals), an I/O circuit, or any other circuit within the SoC capable of performing compression operations. Thus, the techniques described herein result in an efficient utilization of existing SoC compression resources without requiring any additional circuitry/logic in the SoC.
[0074] In one embodiment, the tracing data 905 is received in the form of debug/tracing packets containing data related to current debug/tracing operations. The packets may comprise DW (double word), QW (quad word), or larger packets, and information related to the packet size may be encoded in the packet header. In one embodiment, to perform the compression, the selected compression resources 911 identify repeated packets and substitute a single compressed packet in place of the repeated packets, thereby significantly reducing bandwidth. The number of packets represented by the single compressed packet may be encoded within the header of the single compressed packet by the packet compression circuit/logic 910. The single compressed packet may then be sent to a finer level of data compression performed by data compression circuit/logic 915.
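The repeated-packet substitution described above can be sketched as a behavioral model. This is an illustration only, not the patented circuit; the packet representation (a dictionary with a hypothetical repeat_count field alongside the payload) is an assumption for the sketch.

```python
# Behavioral sketch of packet-based compression: runs of identical
# trace/debug packets are replaced by one packet whose (hypothetical)
# "repeat_count" header field records how many packets it stands for.

def compress_packets(packets):
    """Collapse consecutive repeated payloads into single counted packets."""
    compressed = []
    for payload in packets:
        if compressed and compressed[-1]["payload"] == payload:
            compressed[-1]["repeat_count"] += 1  # same packet repeated
        else:
            compressed.append({"payload": payload, "repeat_count": 1})
    return compressed

def decompress_packets(compressed):
    """Expand counted packets back into the original repeated stream."""
    out = []
    for pkt in compressed:
        out.extend([pkt["payload"]] * pkt["repeat_count"])
    return out
```

For a run of three identical trace packets followed by two distinct ones, only three packets are emitted, with the first carrying a count of three, illustrating the bandwidth reduction described above.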
[0075] In one embodiment, the data compression circuit/logic 915 performs data compression on the compressed packets received from the packet compression circuit/logic 910. In one embodiment, compression resource selection logic/circuitry 914 identifies one or more existing compression resources 916 to be used to perform the data compression. The existing compression resources 916 may include some or all of the same compression resources 911 used for packet-based compression or may utilize an entirely different set of compression resources (i.e., depending on the type of compression being implemented). In one embodiment, the existing compression resources 916 perform run length encoding (RLE) of the packets compressed by the packet compression circuit/logic 910. As such, in this embodiment, the existing compression resources 916 may be selected based on their capacity to efficiently perform run length encoding operations. As one example, many frame compression circuits within image/video processing units are equipped with circuitry for performing run length encoding. [0076] In one embodiment, a compression encoding (CE) circuit/logic 920 encodes the compression parameters used by the packet compression 910 and/or data compression 915 within a packet header (i.e., so that the packets can be properly decoded). Note that the "packet" and "header" referred to here are different from the compressed packet/header generated by the packet compression circuit/logic 910. Thus, to distinguish between the two packets, the packet output by the CE 920 will be referred to herein as an "output packet."
The CE circuit/logic 920 may encode various different compression parameters within the output packet header such as an indication as to whether each output packet has been compressed, whether packet and/or data compression has been used, the number of packets which have been compressed into a single packet by the packet compression 910, whether run length encoding has been used and/or the run length encoding parameters. The parameters coded by the CE circuit/logic 920 are then used to decode the output packet which may be passed through the trace outputs 930 to an application running on a test/debug system.
[0077] It should be noted that the underlying principles of the invention are not limited to any particular order in which the various forms of compression are performed. So, for example, in Figure 9, the data compression circuit/logic 915 may compress data within each packet and the packet compression circuit/logic 910 may then perform packet compression on the packets containing the compressed data. Similarly, the compression encoding circuit/logic 920 may update the compression parameters for each packet directly following the packet compression by the packet compression circuit/logic 910 but before data compression is performed.
[0078] Moreover, while both the packet compression 910 and data compression 915 are illustrated as relying on existing compression resources 911 and 916, respectively, in one embodiment, the packet compression 910 may use its own, dedicated compression resources while the data compression 915 uses the existing compression resources 916. Alternatively, the data compression 915 may use its own, dedicated compression resources while the packet compression uses the existing compression resources 911.
[0079] Figure 10 illustrates a method in accordance with one embodiment of the invention. The method may be implemented within the context of the architectures described above, but is not limited to any particular architecture.
[0080] At 1001 , a trace generator generates data related to trace/debug operations performed on an SoC (or other type of data processor). At 1002, a determination is made as to whether compression resources are available to compress the trace/debug data. For example, as discussed in greater detail below, compression engine usage information may be collected to determine whether a particular compression resource is available (i.e., existing on the chip/SoC and not currently in use or overloaded). If not, then the trace/debug data is provided to the output control unit at 1007 in an
uncompressed format. If so, then at 1003, the compression resources are selected. For example, certain compression resources may be capable of compressing the trace/debug data more efficiently or effectively than other compression resources. This decision may be based on variables such as the type of compression resources, the current state of the compression resources, and/or the current load on the compression resources.
[0081] At 1004, packet-based compression is performed. As mentioned, packet-based compression may include combining multiple repeated packets of the same type into a single packet. The packet header of the single packet may be updated to indicate the number of repeated packets represented by the single packet.
[0082] At 1005, data compression is performed on the compressed packets. For example, frame compression circuitry of an image processing component may be used to perform run length encoding of the compressed packets. As mentioned, RLE performs lossless data compression in which runs of data are stored as a single data value and count. Of course, various other forms of lossless data compression may be used in addition to, or instead of, RLE, including Huffman coding, block-sorting compression, and Lempel-Ziv-Welch (LZW) compression, to name a few.
[0083] At 1006, compression encoding (CE) is performed which encodes the compression parameters used by the packet compression operation 1004 and/or the data compression operation 1005 (i.e., so that the packets can be properly decoded) and generates an output packet with the compression parameters stored in the header. For example, the CE operation may encode an indication as to whether each packet has been compressed, whether packet and/or data compression has been used, the number of packets which have been compressed into a single packet, whether run length encoding has been used, and/or the run length encoding parameters. The parameters coded by the CE operation are then used to decode the
encoded/compressed data at an output control unit to which the encoded/compressed data is provided at 1007.
[0084] It should be noted that the underlying principles of the invention are not limited to any particular order in which the various forms of compression are performed. So, for example, in Figure 9, the data compression circuit/logic 915 may compress data within each packet and the packet compression circuit/logic 910 may then perform packet compression on the packets containing the compressed data. Similarly, the compression encoding (CE) operation may update the compression parameters for each packet directly following the packet compression operation 1004 but before the data compression operation is performed.
[0086] Figure 11 illustrates details of one embodiment which performs packet-based compression on trace/debug packets. As mentioned, the packet-based compression 1110 may be performed on packets of various sizes including, but not limited to, DW (Dword) and QW (quad word) packets. In one embodiment, hardware and/or software trace packets are collected from various software trace sources 1105 and hardware trace sources 1106 and then checked for repeated packets. In one embodiment, packet-based compression 1110 compresses the packets using existing SoC compression resources and sends out one packet in place of two or more repeated packets. In one embodiment, the packet-based compression 1110 may perform its compression in accordance with configuration and control 1120 information/signals which may be input by a user (e.g., via a user interface). For example, the user may specify the number of repeated packets which will trigger a compression operation. In one embodiment, the compressed packet header is updated to include an indication of the number of repeated packets represented by the compressed packet, which is then sent to the trace output 1130. While only packet-based compression is illustrated in Figure 11, the trace output 1130 may also be subjected to data compression, as discussed above.
[0087] Figure 12 illustrates additional details of one embodiment of the data compression circuit/logic including a trace streaming circuit 1215 that collects tracing data from a variety of sources including software tracing data 1205, real-time processor tracing data 1206, which may include both instruction and data tracing, and hardware tracing data 1207. In one embodiment, the trace streaming unit 1215 detects if there are compression resources available for compression and, if available, intelligently allocates compression jobs to the resources. In the particular example shown in Figure 12, the compression resources include compression engines A-C 1230-1232. In one embodiment, the trace streaming unit 1215 receives compression engine data 1220 related to the current state of each of the compression engines 1230-1232 to determine which compression engines to select. This data may include, for example, compression idle information (CII) identifying whether the compression engines 1230-1232 are in an idle or active state. The compression engine data 1220 may be maintained by the trace streaming unit 1215 within a resource usage database 1216 which it may then query to render its selections. It may then use this data in combination with compression resource policies specified by a policy manager 1217 to intelligently allocate jobs to each of the compression engines 1230-1232. For example, a "low power" debug policy may instruct the trace streaming circuit 1215 to optimize power usage while enabling the compression engines. In contrast, a "high bandwidth" policy may instruct the trace streaming unit 1215 to select those compression engines which will complete the compression jobs the most efficiently. In this embodiment, a compression engine which is idle or OFF may be enabled in order to complete the job in the shortest amount of time.
Another policy may specify a power budget and the trace streaming unit 1215 may select the most efficient set of compression engines 1230-1232 capable of performing the compression within the power budget. In addition, a policy may specify that specific types of jobs should be allocated to specific compression engines. Yet another policy may perform a load balancing function based on the relative load on each of the compression engines 1230-1232 (e.g., attempting to distribute the load evenly across the compression engines). Various additional or alternative policies may be specified in this manner while still complying with the underlying principles of the invention. The policy manager 1217 may be configured to meet the specific
requirements of the user and/or SoC on which it is implemented.
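The policy-driven engine selection described above can be modeled as a simple scoring function over the engines' reported state. The engine records and the power/throughput figures below are invented for illustration; an actual trace streaming unit would consult its resource usage database and policy manager rather than a hard-coded table.

```python
# Sketch of policy-based compression engine selection: given each
# engine's idle state, power cost, and throughput, pick an engine
# according to a named policy. All figures are hypothetical.

ENGINES = [
    {"name": "A", "idle": True,  "power_mw": 120, "mbps": 400},
    {"name": "B", "idle": False, "power_mw": 60,  "mbps": 150},
    {"name": "C", "idle": True,  "power_mw": 30,  "mbps": 80},
]

def select_engine(engines, policy):
    """Return the preferred available engine under the given policy."""
    available = [e for e in engines if e["idle"]]
    if not available:
        return None  # caller falls back to uncompressed trace output
    if policy == "low_power":
        return min(available, key=lambda e: e["power_mw"])
    if policy == "high_bandwidth":
        return max(available, key=lambda e: e["mbps"])
    raise ValueError(f"unknown policy: {policy}")
```

Under the hypothetical figures above, the "low power" policy selects engine C while the "high bandwidth" policy selects engine A, and an all-busy engine set yields the uncompressed fallback described at step 1007 of Figure 10.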
[0087] In one embodiment, a trace collector 1240 receives compression information from the trace streaming circuit 1215 (e.g., indicating the portions of trace data compressed by different compression engines) and reconstructs the traces. In an embodiment in which data compression is performed before packet-based
compression, the trace results collected by the trace collector may be passed through a packet-based compression circuit 910. In one embodiment, the trace collector 1240 performs reconstruction of trace packets based on an ID which it extracts from the packets (e.g., associating each packet with a particular trace based on the ID used).
[0088] Figure 13 illustrates an exemplary output packet comprising a header 1301 and data 1302 and shows the details of the compression parameters which may be encoded in the header 1301. In particular, a compression (CMP) field 1311 provides an indication as to whether any compression has been applied to the packet. As mentioned, in some cases, compression may not be used (e.g., because resources are not available). A packet compressed (PC) field 1312 indicates whether packet-based compression has been performed and a PC_CNT field 1313 indicates the number of packets which have been combined into a single compressed packet. An RLE field 1314 indicates whether run length encoding has been applied (i.e., for data compression) and an RLE_CNT field 1315 indicates the run length encoding count. For example, the count value may indicate the number of consecutive data elements which have been converted to a single data value and count.
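The header fields of Figure 13 could be packed into a compact bitfield along the following lines. The field widths chosen here (1 bit each for CMP, PC, and RLE; 8 bits each for PC_CNT and RLE_CNT) are assumptions made for the sketch; the description does not specify the field sizes.

```python
# Sketch of packing/unpacking the Figure 13 compression parameters.
# Assumed bit layout: [CMP:1][PC:1][PC_CNT:8][RLE:1][RLE_CNT:8]

def pack_header(cmp_on, pc, pc_cnt, rle, rle_cnt):
    """Pack the compression parameters into a single integer."""
    assert 0 <= pc_cnt < 256 and 0 <= rle_cnt < 256
    word = int(cmp_on)
    word = (word << 1) | int(pc)       # packet compression used?
    word = (word << 8) | pc_cnt        # packets folded into one
    word = (word << 1) | int(rle)      # run length encoding used?
    word = (word << 8) | rle_cnt       # RLE count value
    return word

def unpack_header(word):
    """Recover the compression parameters from the packed integer."""
    rle_cnt = word & 0xFF
    rle = bool((word >> 8) & 1)
    pc_cnt = (word >> 9) & 0xFF
    pc = bool((word >> 17) & 1)
    cmp_on = bool((word >> 18) & 1)
    return cmp_on, pc, pc_cnt, rle, rle_cnt
```

A decoder on the test/debug host would read these fields first to decide whether to expand repeated packets, undo the run length encoding, both, or neither.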
[0089] While some embodiments of the invention are described above in the context of an SoC architecture, the underlying principles of the invention are not limited to an SoC implementation and may be implemented in any architecture in which compression resources are utilized for compressing trace/debug data. By way of example, and not limitation, the underlying principles of the invention may be implemented on CPUs, ASICs, and FPGAs.
[0090] In the foregoing specification, the embodiments of the invention have been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
[0091] Embodiments of the invention may include various steps, which have been described above. The steps may be embodied in machine-executable instructions which may be used to cause a general-purpose or special-purpose processor to perform the steps. Alternatively, these steps may be performed by specific hardware components that contain hardwired logic for performing the steps, or by any
combination of programmed computer components and custom hardware components.
[0092] As described herein, instructions may refer to specific configurations of hardware such as application specific integrated circuits (ASICs) configured to perform certain operations or having a predetermined functionality or software instructions stored in memory embodied in a non-transitory computer readable medium. Thus, the techniques shown in the Figures can be implemented using code and data stored and executed on one or more electronic devices (e.g., an end station, a network element, etc.). Such electronic devices store and communicate (internally and/or with other electronic devices over a network) code and data using computer machine-readable media, such as non-transitory computer machine-readable storage media (e.g., magnetic disks; optical disks; random access memory; read only memory; flash memory devices; phase-change memory) and transitory computer machine-readable
communication media (e.g., electrical, optical, acoustical or other form of propagated signals - such as carrier waves, infrared signals, digital signals, etc.). In addition, such electronic devices typically include a set of one or more processors coupled to one or more other components, such as one or more storage devices (non-transitory machine-readable storage media), user input/output devices (e.g., a keyboard, a touchscreen, and/or a display), and network connections. The coupling of the set of processors and other components is typically through one or more busses and bridges (also termed as bus controllers). The storage device and signals carrying the network traffic respectively represent one or more machine-readable storage media and machine-readable communication media. Thus, the storage device of a given electronic device typically stores code and/or data for execution on the set of one or more processors of that electronic device. Of course, one or more parts of an embodiment of the invention may be implemented using different combinations of software, firmware, and/or hardware. Throughout this detailed description, for the purposes of explanation, numerous specific details were set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the invention may be practiced without some of these specific details. In certain instances, well known structures and functions were not described in elaborate detail in order to avoid obscuring the subject matter of the present invention. Accordingly, the scope and spirit of the invention should be judged in terms of the claims which follow.

Claims

What is claimed is:
1. An apparatus comprising:
trace/debug circuitry and/or logic to implement trace/debug operations on a semiconductor chip to generate trace/debug data; and
compression resource selection circuitry and/or logic to select one or more compression resources of the semiconductor chip to compress the trace/debug data to generate compressed trace/debug data.
2. The apparatus as in claim 1 wherein the trace/debug data comprises a plurality of trace/debug packets, the apparatus further comprising:
a packet compression circuit to perform packet compression operations on the trace/debug data using the compression resources.
3. The apparatus as in claim 2 wherein the packet compression operations comprise generating a single compressed packet in place of multiple repeated trace/debug packets.
4. The apparatus as in claim 3 wherein the packet compression operations comprise identifying the number of repeated trace/debug packets and storing the number in a header of the single compressed packet.
5. The apparatus as in claim 1 further comprising:
a data compression circuit to perform data compression on the trace/debug data using the compression resources.
6. The apparatus as in claim 5 wherein the compression resources are to perform run length encoding compression on the trace/debug data.
7. The apparatus as in claim 1 wherein the trace/debug data comprises a plurality of trace/debug packets, the apparatus further comprising:
a packet compression circuit to perform packet compression operations on the trace/debug data using the compression resources to generate compressed trace/debug packets; and a data compression circuit to perform data compression on the compressed trace/debug packets using the compression resources to generate output packets.
8. The apparatus as in claim 7 further comprising:
compression encoding circuitry and/or logic to generate a header for the output packets, the header including a first indication as to whether the output packet is compressed, a second indication as to whether packet compression has been used, a third indication of a packet count, a fourth indication as to whether a particular type of data compression has been used, and a fifth indication comprising a variable related to the data compression.
9. The apparatus as in claim 8 wherein the particular type of data compression comprises run length coding and wherein the variable comprises an RLE count value.
10. The apparatus as in claim 5 wherein the compression resource selection circuitry and/or logic is to select one or more of the compression resources based on a current state of each of the compression resources and a configurable policy.
11. The apparatus as in claim 10 wherein the configurable policy comprises a low power policy in which compression resources are selected based on power usage or a high performance policy in which compression resources are selected based on how efficiently the compression resources can perform compression.
12. A method comprising:
performing trace/debug operations on a semiconductor chip to generate trace/debug data;
determining whether compression resources of the semiconductor chip are available to compress the trace/debug data;
selecting one or more of the compression resources to compress the trace/debug data to generate compressed trace/debug data.
13. The method as in claim 12 further comprising:
performing packet compression operations on the trace/debug data using the compression resources.
14. The method as in claim 13 wherein the packet compression operations comprise generating a single compressed packet in place of multiple repeated trace/debug packets.
15. The method as in claim 14 wherein the packet compression operations comprise identifying the number of repeated trace/debug packets and storing the number in a header of the single compressed packet.
16. The method as in claim 12 further comprising:
performing data compression on the trace/debug data using the compression resources.
17. The method as in claim 16 wherein the compression resources are to perform run length encoding compression on the trace/debug data.
18. The method as in claim 12 further comprising:
performing packet compression operations on the trace/debug data using the compression resources to generate compressed trace/debug packets; and
further performing data compression on the compressed trace/debug packets using the compression resources to generate output packets.
19. The method as in claim 18 further comprising:
generating a header for the output packets, the header including a first indication as to whether the output packet is compressed, a second indication as to whether packet compression has been used, a third indication of a packet count, a fourth indication as to whether a particular type of data compression has been used, and a fifth indication comprising a variable related to the data compression.
20. The method as in claim 19 wherein the particular type of data compression comprises run length coding and wherein the variable comprises an RLE count value.
21. The method as in claim 16 further comprising: selecting one or more of the compression resources based on a current state of each of the compression resources and a configurable policy.
22. The method as in claim 21 wherein the configurable policy comprises a low power policy in which compression resources are selected based on power usage or a high performance policy in which compression resources are selected based on how efficiently the compression resources can perform compression.
23. A system comprising:
a memory to store instructions and data;
a processor to execute the instructions and process the data;
a graphics processor to perform graphics operations in response to graphics instructions;
a network interface to receive and transmit data over a network;
an interface for receiving user input from a mouse or cursor control device, the plurality of cores executing the instructions and processing the data responsive to the user input;
the processor comprising:
trace/debug circuitry and/or logic to implement trace/debug operations on a semiconductor chip to generate trace/debug data; and
compression resource selection circuitry and/or logic to select one or more compression resources of the semiconductor chip to compress the trace/debug data to generate compressed trace/debug data.
24. The system as in claim 23 wherein the trace/debug data comprises a plurality of trace/debug packets, the system further comprising:
a packet compression circuit to perform packet compression operations on the trace/debug data using the compression resources.
25. The system as in claim 24 wherein the packet compression operations comprise generating a single compressed packet in place of multiple repeated trace/debug packets.
PCT/US2017/021252 2016-04-21 2017-03-08 Apparatus and method for performing trace/debug data compression WO2017184265A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN201641013855 2016-04-21
IN201641013855 2016-04-21

Publications (1)

Publication Number Publication Date
WO2017184265A1 true WO2017184265A1 (en) 2017-10-26

Family

ID=60116275

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/021252 WO2017184265A1 (en) 2016-04-21 2017-03-08 Apparatus and method for performing trace/debug data compression

Country Status (1)

Country Link
WO (1) WO2017184265A1 (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060267818A1 (en) * 2005-05-16 2006-11-30 Manisha Agarwala Saving Resources by Deducing the Total Prediction Events
US20070294590A1 (en) * 2006-05-16 2007-12-20 Texas Instruments Incorporated Compression scheme to reduce the bandwidth requirements for continuous trace stream encoding of system performance
US20130107895A1 (en) * 2011-11-02 2013-05-02 Qualcomm Incorporated Systems and methods for compressing headers and payloads
WO2013066422A1 (en) * 2011-10-21 2013-05-10 Uniloc Luxembourg S.A. Traceback packet transport protocol
US20150112938A1 (en) * 2013-02-15 2015-04-23 Compellent Technologies Data replication with dynamic compression


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019195149A1 (en) * 2018-04-03 2019-10-10 Xilinx, Inc. Debug controller circuit
US10789153B2 (en) 2018-04-03 2020-09-29 Xilinx, Inc. Debug controller circuit
CN112292670A (en) * 2018-04-03 2021-01-29 赛灵思公司 Debug controller circuit
CN112292670B (en) * 2018-04-03 2024-03-08 赛灵思公司 Debug controller circuit
EP4141680A1 (en) * 2021-08-30 2023-03-01 INTEL Corporation Debug data communication system for multiple chips


Legal Events

NENP: Non-entry into the national phase; ref country code: DE
121: Ep: the epo has been informed by wipo that ep was designated in this application; ref document number: 17786292; country of ref document: EP; kind code of ref document: A1
122: Ep: pct application non-entry in european phase; ref document number: 17786292; country of ref document: EP; kind code of ref document: A1