EP3791197B1 - Bus synchronization system - Google Patents

Bus synchronization system

Info

Publication number
EP3791197B1
Authority
EP
European Patent Office
Prior art keywords
bus
sync
test
resources
information
Prior art date
Legal status
Active
Application number
EP19799685.3A
Other languages
German (de)
French (fr)
Other versions
EP3791197A1 (en)
EP3791197A4 (en)
Inventor
Michael C. Panis
Jeffrey S. Benagh
Richard Pye
Current Assignee
Teradyne Inc
Original Assignee
Teradyne Inc
Priority date
Filing date
Publication date
Application filed by Teradyne Inc filed Critical Teradyne Inc
Publication of EP3791197A1
Publication of EP3791197A4
Application granted
Publication of EP3791197B1
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/22 Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing
    • G06F11/2205 Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing using arrangements specific to the hardware being tested
    • G06F11/221 Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing using arrangements specific to the hardware being tested to test buses, lines or interfaces, e.g. stuck-at or open line faults
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01R MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R31/00 Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • G01R31/28 Testing of electronic circuits, e.g. by signal tracer
    • G01R31/317 Testing of digital circuits
    • G01R31/31725 Timing aspects, e.g. clock distribution, skew, propagation delay
    • G01R31/31726 Synchronization, e.g. of test, clock or strobe signals; Signals in different clock domains; Generation of Vernier signals; Comparison and adjustment of the signals
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01R MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R31/00 Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • G01R31/28 Testing of electronic circuits, e.g. by signal tracer
    • G01R31/317 Testing of digital circuits
    • G01R31/3181 Functional testing
    • G01R31/3183 Generation of test inputs, e.g. test vectors, patterns or sequences
    • G01R31/318307 Generation of test inputs, e.g. test vectors, patterns or sequences computer-aided, e.g. automatic test program generator [ATPG], program translations, test program debugging
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01R MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R31/00 Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • G01R31/28 Testing of electronic circuits, e.g. by signal tracer
    • G01R31/317 Testing of digital circuits
    • G01R31/3181 Functional testing
    • G01R31/3183 Generation of test inputs, e.g. test vectors, patterns or sequences
    • G01R31/318314 Tools, e.g. program interfaces, test suite, test bench, simulation hardware, test compiler, test program languages
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/22 Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing
    • G06F11/2205 Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing using arrangements specific to the hardware being tested
    • G06F11/2236 Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing using arrangements specific to the hardware being tested to test CPU or processors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/22 Detection or location of defective computer hardware by testing during standby operation or during idle time, e.g. start-up testing
    • G06F11/26 Functional testing
    • G06F11/273 Tester hardware, i.e. output processing circuits
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/38 Information transfer, e.g. on bus
    • G06F13/42 Bus transfer protocol, e.g. handshake; Synchronisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/38 Information transfer, e.g. on bus
    • G06F13/42 Bus transfer protocol, e.g. handshake; Synchronisation
    • G06F13/4204 Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus
    • G06F13/4221 Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus being an input/output bus, e.g. ISA bus, EISA bus, PCI bus, SCSI bus
    • G06F13/423 Bus transfer protocol, e.g. handshake; Synchronisation on a parallel bus being an input/output bus, e.g. ISA bus, EISA bus, PCI bus, SCSI bus with synchronous protocol

Definitions

  • This specification relates generally to a bus synchronization system.
  • ATE: automatic test equipment
  • DUT: device under test
  • testing a DUT involves multiple tests.
  • the ATE may include multiple instrument modules, each of which may contain one or more instrument module resources that may be configured to perform one or more of the tests.
  • each test may require different resources to be used to test the same DUT, and different tests may require different combinations of resources.
  • multiple DUTs may be tested at the same time.
  • multiple resources may be dedicated to testing a particular DUT.
  • multiple resources may be shared to perform testing on multiple DUTs.
  • US 2016/0227004 describes semiconductor devices that include a plurality of Serializer-Deserializer interfaces that provide a plurality of serial data paths between them.
  • the plurality of Serializer-Deserializer interfaces and the plurality of serial data interfaces may be clocked from a clock signal derived from the clock circuit.
  • the plurality of independently adjustable calibration circuits may be configured to compensate for timing differences across the plurality of serial data paths.
  • US 5,717,704 describes that a local trigger signal generator is to be provided for each of a plurality of test instruments in a test system.
  • US 2006/0085157 describes a test apparatus that has multiple instruments that are synchronized with respect to one another so that a trigger signal may be generated in response to events occurring at different instruments.
  • the events may correspond to events defined within a test program or events detected at a device under test.
  • a partial trigger signal is generated by each of the different instruments, and the partial trigger signals are used in generating the trigger signal.
  • Different offset delays are applied to the partial trigger signals so that the partial trigger signals generated by the different instruments are synchronized with respect to each other.
  • a system for controlling one or more test instruments to test one or more integrated circuits includes a master clock and a controller.
  • the test instruments are connected to form a communication ring.
  • the master clock is connected to each test instrument and provides a clock signal to the one or more test instruments.
  • the controller is connected to the communication ring and is configured to align counters of test instruments to derive a common clock time value from the clock signal.
  • the controller is further configured to generate and send data words that specify a test event to be performed, a common clock time value, and at least one of the test instruments.
  • An example of the claimed system is a bus synchronization system that comprises a computer bus, a host computer to execute test flows, and instrument modules.
  • An instrument module comprises resources and a processing device. Resources operated on by a test flow define a domain.
  • the host computer is configured to output commands including a sync command in the test flow to the instrument modules.
  • the sync command is for causing the instrument module to provide a status to the computer bus and to pause the processing device.
  • Statuses from the instrument modules in the domain are aggregated on the computer bus.
  • Information is distributed to the instrument modules based on the statuses aggregated.
  • the processing device is configured to resume executing commands based on the information.
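The claimed sequence (provide status and pause on a sync command, aggregate statuses on the bus, distribute information, resume) can be sketched as a minimal single-threaded simulation; all class and method names here are illustrative assumptions, not taken from the claims:

```python
class InstrumentModule:
    def __init__(self, name):
        self.name = name
        self.paused = False

    def on_sync_command(self, bus):
        # Pause the processing device, then provide a status to the computer bus.
        self.paused = True
        bus.post_status(self.name, "sync barrier reached")

    def on_info(self, info):
        # Resume executing commands based on the distributed information.
        if info == "sync barrier crossed":
            self.paused = False


class ComputerBus:
    def __init__(self, domain):
        self.domain = domain      # instrument modules whose resources form the domain
        self.statuses = {}

    def post_status(self, name, status):
        self.statuses[name] = status
        # Aggregate: distribute only once every module in the domain has reported.
        if len(self.statuses) == len(self.domain):
            for m in self.domain:
                m.on_info("sync barrier crossed")
            self.statuses.clear()


modules = [InstrumentModule(f"mod{i}") for i in range(3)]
bus = ComputerBus(modules)
for m in modules:
    m.on_sync_command(bus)
# After the last module reports, the aggregated result is distributed
# and every module resumes.
assert all(not m.paused for m in modules)
```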
  • the example system may include one or more of the following features, either alone or in combination.
  • the information may be distributed after all instrument modules in the domain have encountered a sync command.
  • the host computer may be programmed to send the commands to the instrument modules via a communication bus that is different from the computer bus. Aggregating the status and distributing the information may be performed independent of the host computer. At least some of the commands may instruct resources in the domain to perform operations.
  • the instrument module may comprise a first type of endpoint device to provide status to the computer bus.
  • the first type of endpoint device may comprise a contributing endpoint device.
  • the contributing endpoint device may be configured to receive the information from the computer bus and to generate a signal based on the information to trigger operation of one or more of the resources.
  • the instrument module may comprise a second type of endpoint device.
  • the second type of endpoint device may comprise a non-contributing endpoint device to receive the information from the computer bus and to generate a signal based on the information to trigger operation of one or more of the resources.
  • the host computer may be programmed to execute a test program that includes multiple, separate instruction flows.
  • the multiple, separate instruction flows may include the test flow.
  • An endpoint device may be configured to subscribe to one or more of the multiple, separate flows.
  • the endpoint device may be configured to generate a signal to provide to resources in the domain. The signal may be used to trigger the resource to perform an action for which the resource has been previously armed.
  • An offset may be added to the signal to control signal timing relative to receipt of the information.
  • the endpoint device may comprise a transmitter to implement output to the computer bus, and a receiver to implement receiving from the computer bus.
  • a status may comprise a pass or fail status of a test performed by the processor.
  • the status may comprise bits that are encoded in time-division-multiple-access fashion onto the computer bus using a periodic frame comprised of multiple bits.
  • the periodic frame may be characterized by one or more of headers, trailers, cyclic redundancy checks, or 8b/10b encoding.
  • At least some of the bits of the information may represent a system time alignment signal to set system clock counters on the instruments to at least one of a specified value or a specified time that is in a payload on the computer bus.
  • the computer bus may comprise at least one of a wired-OR bus; point-to-point connections and logic gates; non-contact, wireless or optical signaling media; or a combination of one or more of: a wired-OR bus; point-to-point connections and logic gates; non-contact, and wireless or optical signaling media.
  • the information may be received over the computer bus.
  • the information may be received over a communication bus that is different than the computer bus.
  • the sync command in the test flow may immediately precede, in the test flow, a command requiring action or measurement vis-à-vis a device under test by the test flow. At least part of the test flow may be controllable not to be synchronized.
  • Advantages of the example systems may include one or more of the following.
  • Providing one or more processors (for example, a single processor) for a small number of resources may allow that group of resources to operate independently and in parallel with all other resources.
  • providing one or more processors (for example, a single processor) for a small number of resources may also reduce communications latency between the processor(s) and resource(s).
  • the synchronization system addresses possible synchronization issues associated with providing one or more processors as described.
  • the synchronization system also allows finer granularity than one group of resources per processor to be synchronized to any other groups of resources in the system.
  • the synchronization system may also eliminate the need for a central controller to implement synchronization as described herein.
  • the systems and techniques and processes described herein, or portions thereof, can be implemented as/controlled by a computer program product that includes instructions that are stored on one or more non-transitory machine-readable storage media, and that are executable on one or more processing devices to control (e.g., coordinate) the operations described herein.
  • the systems and techniques and processes described herein, or portions thereof, can be implemented as an apparatus, method, or electronic system that can include one or more processing devices and memory to store executable instructions to implement various operations.
  • Described herein are example implementations of a bus synchronization system and components thereof.
  • the bus synchronization system is incorporated into a test system, such as ATE; however, the bus synchronization system is not limited to use with a test system or to testing in general.
  • An example test system includes multiple instrument modules (or simply, “instruments”) for performing testing on DUTs.
  • Each instrument module includes one or more resources, such as radio frequency (RF) signal generators, microwave signal generators, processors, power supplies, memory, and so forth.
  • an instrument module resource may be, or include, any appropriate type of electronic hardware device or software for receiving, transmitting, processing, storing, or otherwise acting upon digital data, analog signals, or both digital data and analog signals.
  • the resources are each controlled by one or more module embedded processors (MEP or "processing unit") on an instrument module.
  • each block represents a time slot in which execution occurs.
  • the commands shown in Fig. 1 could be executed by the resources in order 101 of Fig. 2 , which is not the intended order.
  • the example synchronization process described herein addresses this issue, and enables commands in the same test flow to be executed in an appropriate order (e.g., the commands of row 1 then row 2 then row 3 then row 4).
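The ordering guarantee described above can be illustrated with a small sketch using Python's threading.Barrier as a stand-in for the sync barrier; resource and row names are illustrative:

```python
import threading

# Three resources each execute one command per "row"; a barrier before each
# row keeps any resource from starting row N+1 until every resource has
# finished row N. Resource and row names are illustrative.
N_RESOURCES, N_ROWS = 3, 4
barrier = threading.Barrier(N_RESOURCES)
log, log_lock = [], threading.Lock()

def resource(name):
    for row in range(N_ROWS):
        barrier.wait()              # the "sync barrier" before each row
        with log_lock:
            log.append((row, name))

threads = [threading.Thread(target=resource, args=(f"R{i}",))
           for i in range(N_RESOURCES)]
for t in threads:
    t.start()
for t in threads:
    t.join()

rows = [row for row, _ in log]
assert rows == sorted(rows)         # rows never interleave out of order
```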
  • a host computer on the test system executes a test flow for a DUT.
  • the test flow may include, for example, commands or other instructions for implementing testing on all or part of the DUT.
  • an example DUT such as a semiconductor device, may include multiple components, such as digital components, analog components, wireless components, and so forth.
  • the test flow is to execute on the different components concurrently.
  • the different components may require different instrument module resources for testing, such as analog testing resources, digital testing resources, wireless testing resources, and so forth.
  • These resources required by the test flow constitute a test domain that may require synchronization.
  • synchronization may be implemented so that different tests performed on different components of the DUT occur contemporaneously or concurrently.
  • the instrument module resources that comprise a test domain may reside on the same instrument module, on different instrument modules, or on a combination of the same instrument module and different instrument modules.
  • the resources that comprise a test domain may be a subset of all instrument module resources in the test system.
  • Each of these multiple test domains may include different combination(s) of instrument module resources.
  • Different instrument module resources, corresponding to different test domains, may be synchronized independently using the process described herein.
  • a test domain having instrument module resources that are synchronized may map to, and may be referred to as, a "sync" (synchronized) domain.
  • each instrument module includes at least one MEP.
  • the MEP may be configured to, e.g., programmed to, execute commands to implement testing.
  • Each instrument module may include one endpoint that constitutes an interface, for example, a hardware interface, to a synchronization (sync) bus, and a sync resource driver (SRD) that constitutes an interface, for example, a software interface, to the sync bus.
  • the sync bus may be implemented using one or more time-division multiplexed (TDM) computer buses or other appropriate transmission media.
  • the sync bus is separate from, and independent of, communications media used to transmit commands for the test flows.
  • the sync bus may include logic gates built into its distribution hardware, which may have a tree topology, with a sync bus master at a root of the tree. As described herein, the sync bus master is configured to synchronize instrument module resources in the same sync domain.
  • the sync bus is primarily responsible for synchronization. The host computer is not directly involved in synchronization.
  • the host computer's roles include determining a series of commands that will be executed by instrument modules in a test domain, and placing appropriate synchronization commands - called sync barriers - in the proper positions within each series of commands for a sync domain.
  • the position of the synchronization command can be determined by a user of the test system.
  • the host computer also inserts commands that tell the endpoints to subscribe instrument modules to and from sync domains.
  • all instruments are configured to subscribe, automatically, to all or some sync domains and, therefore, there is no need for the host computer to insert commands that tell the endpoints to subscribe instrument modules to and from sync domains.
  • the host computer communicates a series of commands for a test flow to each instrument module having resources required by the test flow over communication media other than the sync bus.
  • for each of the instrument modules, its MEP stores the series of commands in a queue in computer memory.
  • the queue may be dedicated to a particular sync domain. In some implementations, the commands are pre-stored and are not received from the host computer.
  • the commands include sync barrier commands.
  • DIB: device interface board
  • a DIB is the interface between the test system and the DUT, for example, a board to which the DUT is mated and through which signals pass between the DUT and the test system.
  • DIB-visible is a generic name given to any command that could have an effect that could be observed at the DUT or, more precisely, the DIB, or vice-versa, where a result of a measurement could depend on what happens on the DIB.
  • synchronization is not limited to commands that require action or measurement vis-à-vis the DUT.
  • the host computer places a sync barrier command (or simply, "sync barrier”) in a series of commands immediately before a DIB-visible command or before any other type of command that requires synchronization.
  • the sync barrier command is placed in the test flow of an instrument module in advance, for example, by a test engineer who developed the test program.
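The host computer's placement rule - a sync barrier immediately before each DIB-visible command - can be sketched as follows; the command names and representation are illustrative assumptions:

```python
# A "sync barrier" is inserted immediately before every DIB-visible command
# in a series of commands. The command names are illustrative.
SYNC_BARRIER = "sync_barrier"

def insert_sync_barriers(commands, dib_visible):
    """Return the series with a sync barrier before each DIB-visible command."""
    out = []
    for cmd in commands:
        if cmd in dib_visible:
            out.append(SYNC_BARRIER)
        out.append(cmd)
    return out

flow = ["configure", "arm", "source_signal", "measure", "report"]
synced = insert_sync_barriers(flow, dib_visible={"source_signal", "measure"})
# → ['configure', 'arm', 'sync_barrier', 'source_signal',
#    'sync_barrier', 'measure', 'report']
```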
  • the MEP retrieves them from the queue, and executes the command or performs whatever other operation is necessary with respect to the commands.
  • the MEP outputs, via an appropriate endpoint, a "sync barrier reached” command (or simply, “sync barrier reached”) to the sync bus.
  • the sync barrier reached command may be comprised of one or multiple status bits.
  • the MEP suspends execution of commands in the queue.
  • the MEP has completed a portion of the test program and indicates, via the "sync barrier reached" command, that it has completed that portion of the test program.
  • the MEP waits to receive aggregated status information (e.g., aggregated status bits) indicating that all other processing units have completed their respective portions of the test program.
  • On the sync bus - independent of the host computer - the sync bus master combines "sync barrier reached" commands from each of the endpoints of the instrument modules in the same sync domain. For example, the sync bus master may perform a logical "AND" of all received "sync barrier reached" commands, or perform other appropriate processing.
  • when the sync bus master determines that each of the endpoints of the instrument modules in the same sync domain has output the "sync barrier reached" command, the sync bus master outputs a "sync barrier crossed" command (or simply, "sync barrier crossed") over the sync bus.
  • the "sync barrier crossed" command may be a single status bit or may comprise multiple status bits, and may constitute an aggregated status of the received "sync barrier reached” commands.
  • Each sync bus endpoint in that sync domain receives the "sync barrier crossed" command, and generates a trigger signal to trigger operation of the instrument module resources in that domain.
  • the trigger signal triggers operation of the instrument module resources to operate synchronously.
  • the actual operation performed by each of the instrument module resources may be different.
  • the MEP also resumes execution of commands in the queue following receipt of the "sync barrier crossed" command.
  • the example bus synchronization system allows all instrument module resources in the same test/sync domain to operate at the same time. Furthermore, the example bus synchronization system can be used by multiple instrument module resources that operate independently of each other. As such, central coordination, for example by the host computer, among multiple different resources is unnecessary to perform synchronization in some examples.
  • each instrument module may include more than one endpoint.
  • the endpoints in an instrument module include a contributing endpoint.
  • a contributing endpoint includes hardware that is configured to receive, from the instrument module's MEP, a synchronization status, such as the "sync barrier reached" command, and to provide that synchronization status to the sync bus.
  • the status output to the sync bus may be, or include, one or more bits representing pass or fail status of a test performed using MEPs in the test system.
  • the output of data representing pass or fail status or other types of status can be triggered at any time, independent of a sync barrier command.
  • the contributing endpoint is also configured to receive information that indicates when all instrument module resources in a sync domain are "ready", e.g., the "sync barrier crossed” command, and to provide this information to the instrument module resources in the same sync domain.
  • the instrument module may also include zero, one, or multiple non-contributing endpoints.
  • a non-contributing endpoint includes hardware that is configured to receive information that indicates when all instrument module resources in a sync domain are "ready”, e.g., "sync barrier crossed", and to provide this information to the instrument module resources.
  • the non-contributing endpoint does not transmit over the sync bus, nor does it provide information to the MEP.
  • the status provided by the contributing endpoint comprises bits that are encoded in time-division-multiple-access fashion onto a serial data bus - e.g., the sync bus - using a periodic frame comprised of multiple bits.
  • the frame may include optional headers, trailers, cyclic redundancy checks, may use 8b/10b encoding, and may employ any appropriate mechanisms that may be associated with serial data transmission.
  • the frame may be used to transmit or to receive other types of information over the same physical wires, for example, using a frame type indicator in the frame header to specify which type of information is being transmitted.
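A frame of this general shape - header, one status bit per sync domain, and an error check - might be encoded as in the following sketch; the field sizes, frame-type value, and use of CRC-32 are illustrative assumptions (the patent mentions CRCs and 8b/10b encoding only as options):

```python
import zlib

# Sketch of a periodic sync-bus frame: a one-byte header (frame type), a
# payload with one status bit per sync domain, and a CRC-32 trailer. The
# field sizes and frame-type value are illustrative assumptions.
FRAME_TYPE_STATUS = 0x01

def encode_frame(domain_statuses):
    """domain_statuses: list of bools, one 'sync barrier reached' bit per domain."""
    payload = bytearray((len(domain_statuses) + 7) // 8)
    for i, reached in enumerate(domain_statuses):
        if reached:
            payload[i // 8] |= 1 << (i % 8)
    body = bytes([FRAME_TYPE_STATUS]) + bytes(payload)
    crc = zlib.crc32(body).to_bytes(4, "big")
    return body + crc

def decode_frame(frame, n_domains):
    body, crc = frame[:-4], frame[-4:]
    assert zlib.crc32(body).to_bytes(4, "big") == crc, "corrupt frame"
    assert body[0] == FRAME_TYPE_STATUS
    payload = body[1:]
    return [bool(payload[i // 8] & (1 << (i % 8))) for i in range(n_domains)]

statuses = [True, False, True, True]
assert decode_frame(encode_frame(statuses), 4) == statuses
```

Note how the payload grows with the number of sync domains, matching the observation below that more test domains mean a larger payload.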
  • Fig. 3 shows an example order 102 in which commands, including "sync barriers" applied by the bus synchronization system described herein, are encountered.
  • each block represents a time slot in which execution occurs.
  • the "sync barriers" are encountered, and synchronization of instrument module resources in the same sync domain is implemented. This results in the commands executing in order 104 of Fig. 4 (e.g., the commands of row 1 then row 2 then row 3 then row 4 then row 5 then row 6 then row 7).
  • the "sync barrier" is used to control when commands are executed on different resources A, B, and C, enabling the commands in the same sync domain to be executed in the appropriate order across multiple, independent resources.
  • the example of Figs. 3 and 4 shows that all resources will wait for all other resources in their test domain to finish executing commands and to reach a sync barrier before they execute additional commands.
  • Fig. 5 shows, for an instrument module 104, a MEP 105 containing an I/O (input/output) engine 106, a command queue 107, an SRD 108, and shared memory 109.
  • the instrument module contains a sync bus endpoint 110 (e.g., a contributing endpoint) containing a transmitter (TX) 111 and a receiver (RX) 112.
  • Fig. 5 also shows a sync bus 114, which includes a TDM bus and a sync bus master 115.
  • the endpoint may operate on commands for the multiple, different domains.
  • the MEP's I/O engine places commands received from the host computer into the command queue.
  • MEP 105 receives test flow commands from a host computer sent over a communication bus. This communication bus is not the sync bus, but rather may be an Ethernet bus or any other appropriate communications media, including wired and wireless media.
  • commands on an instrument module are pre-stored, e.g., in a command queue, and are not received from a host computer.
  • MEP 105 includes one or more processing devices configured to, e.g., programmed to, control a single instrument module of the test system. Examples of processing devices are described herein.
  • For each of a MEP's test domains there is a separate command queue.
  • One or more SRDs running on the MEP retrieve commands for a sync domain from a command queue and process/execute the commands.
  • the SRD when the next command in a queue to be executed is a "sync barrier", the SRD sets a state of the shared memory to "sync barrier not crossed", since a previous crossing might have left the state set to "sync barrier crossed". The SRD then instructs the sync bus endpoint that a "sync barrier” has been reached, and starts waiting for the shared memory to indicate that the "sync barrier” has been crossed.
  • the sync bus endpoint may delay 116 transmitting this status if it is not ready; otherwise, the sync bus endpoint sets the sync bus endpoint transmit status to "sync barrier reached", and transmits this command onto the sync bus.
  • the sync bus transmits the command to the sync bus master, as described herein.
  • the sync bus master aggregates sync barrier status (e.g., "sync barrier reached” commands) received from multiple, e.g., all, resources in a sync domain.
  • the sync bus master provides information, e.g., the "sync barrier crossed" command, to sync bus receivers in that sync domain.
  • the sync bus master may aggregate (e.g., logically AND) the statuses from all endpoints in the sync domain and produce one result per sync domain.
  • the resulting "sync barrier crossed" status is TRUE if all the sync bus endpoints in the test domain report "sync barrier reached”.
  • the "sync barrier crossed" command is then sent from the sync bus master to each of the sync bus endpoint receivers over the sync bus, which all receive that command.
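The per-domain aggregation can be sketched as a logical AND over the endpoints subscribed to each sync domain; names are illustrative:

```python
# Sketch of the sync bus master's per-domain aggregation: "sync barrier
# crossed" is TRUE for a domain only when every endpoint in that domain has
# reported "sync barrier reached" (a logical AND). Names are illustrative.
def aggregate(reports, domains):
    """
    reports: {endpoint: bool}  -- True means "sync barrier reached"
    domains: {domain: set of endpoints subscribed to it}
    Returns one "sync barrier crossed" result per domain.
    """
    return {d: all(reports.get(ep, False) for ep in eps)
            for d, eps in domains.items()}

reports = {"ep1": True, "ep2": True, "ep3": False}
domains = {"A": {"ep1", "ep2"}, "B": {"ep2", "ep3"}}
assert aggregate(reports, domains) == {"A": True, "B": False}
```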
  • when a sync bus endpoint receiver at an instrument module detects the "sync barrier crossed" command, the sync bus endpoint receiver performs two operations in this example.
  • the sync bus endpoint receiver sets the sync bus endpoint transmitter's status to "sync barrier not reached” and writes “sync barrier crossed” to shared memory.
  • the SRD which has been waiting for this status change, then allows the MEP to process subsequent commands in the queue.
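The SRD/endpoint handshake through shared memory can be sketched with a threading.Event standing in for the shared-memory flag; all names are illustrative assumptions:

```python
import threading

# Sketch of the SRD / endpoint handshake through shared memory, modeled with
# a threading.Event standing in for the shared-memory "crossed" flag.
class SharedMemory:
    def __init__(self):
        self._crossed = threading.Event()

    def set_not_crossed(self):      # SRD clears stale state before waiting
        self._crossed.clear()

    def write_crossed(self):        # endpoint receiver writes "crossed"
        self._crossed.set()

    def wait_crossed(self):
        self._crossed.wait()

def srd_on_sync_barrier(shm, endpoint_tx):
    shm.set_not_crossed()           # a previous crossing may have left it set
    endpoint_tx.append("sync barrier reached")
    shm.wait_crossed()              # block until the receiver reports crossing

shm, tx = SharedMemory(), []
# The endpoint receiver runs concurrently and writes "crossed" when the
# master's "sync barrier crossed" frame arrives.
t = threading.Timer(0.05, shm.write_crossed)
t.start()
srd_on_sync_barrier(shm, tx)        # returns once the barrier is crossed
assert tx == ["sync barrier reached"]
```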
  • MEPs of different instrument modules - all in the same sync domain - are able to synchronize operation.
  • the sync bus endpoint transmitter may hold its "sync barrier reached" status until it is acknowledged by the sync bus master with a "sync barrier crossed" command.
  • Each sync bus endpoint may also be configured to receive sync bus commands, such as "sync barrier status crossed", from the sync bus, and to generate a trigger signal for the resources on the instrument module.
  • each resource may be configured to execute a specific subset of commands in response to a trigger signal.
  • the trigger signal triggers resources on different modules so that they perform actions at the same time, not just in the correct order.
  • a trigger signal may be used to trigger a resource to perform an action for which the resource has been previously armed.
  • a sync domain creates only one trigger signal at a time, although a single trigger signal can be applied to multiple receiver outputs with different time delays.
  • Each of these trigger signals can be applied to one of a number of (e.g., 32) instrument module resources.
  • a trigger signal for a sync domain can be applied to multiple instrument module resources, or a trigger signal for a sync domain can be applied to a single instrument module resource.
  • Each endpoint receiver output may also introduce a unique trigger signal offset delay. Even if two endpoint receiver outputs are associated with the same sync domain, their offsets can be programmed differently to compensate for different paths in instrument module resources.
  • each sync bus frame contains a header that indicates the start of the frame and a type of message represented by the frame. This header may be followed by payload data that may represent the sync barrier status for each of one or more available test domains.
  • the sync frame may be clocked by, and directly referenced to, a clock signal. The size of the payload is determined by the number of available test domains. The more test domains, the larger the payload, and the longer it takes to propagate signals through the test system.
  • the sync bus supports messages other than "sync barrier reached" and "sync barrier crossed".
  • the sync bus master can send a message instructing sync bus endpoints to update their time-of-day (TOD) clocks, and the sync bus endpoints can request the sync bus master to send them that update.
  • TOD clock is a system time alignment signal used to set system clock counters on each instrument to a specified value, such that all instruments set their clocks to the same specified value with exactly repeatable timing relative to the system clocks.
  • a sync bus endpoint transmitter can send a message, rather than sending its status, and at any appropriate time, the sync bus master can send a message rather than sending "sync barrier crossed".
  • the sync bus master when the sync bus master receives a frame without a status, the sync bus master responds with a frame type that also does not contain a status.
  • the sync bus endpoint receiver when a sync bus endpoint receiver receives a frame without a status, the sync bus endpoint receiver maintains its status from the previous frame. As a result, the previous frame's status is preserved.
  • a test system may have multiple MEPs - one per instrument module - or one MEP may serve multiple instrument modules.
  • the MEP may configure one instrument module's sync bus endpoint as a contributing endpoint and use it to provide sync barrier status.
  • the MEP may configure the sync bus endpoints on the other the modules to be non-contributing endpoints.
  • the MEP may configure all the sync bus endpoints to be contributing and communicate sync barrier status with each sync bus endpoint independently.
  • the synchronization system is configurable. It enables the same modules to be used in a lower cost system with slightly less functionality.
  • the example test system may support two related features: concurrent test flows and flow-per-site.
  • concurrent test flows requires that multiple sections of a DUT are independent enough to be tested at the same time.
  • a user of the test system may specify which tester resources are associated with each section of the DUT.
  • a test program may be written with separate flows, each only using resources associated with one section of the DUT. Even though the host computer runs through these flows serially, the MEPs execute commands for the multiple flows in parallel.
  • Flow-per-site is similar to concurrent test flows, but instead of a user writing different flows, a single flow executes differently depending on the DUT's test results.
  • the test program groups sites with the same results, and the test flow is executed serially, once for each group of sites. Commands executed during the flow may differ for each group. As the test program continues, flows may split again or rejoin. Resources executing a unique flow are considered members of a test domain. These resources operate synchronously with each other. Resources in different test domains may have limited, or no, synchronization to each other. A difference between the two features is that the test domains are known before test program execution for concurrent test flows, whereas they are dynamically created for flows-per-site.
  • the bus synchronization system may be configured so that a test engineer can disable automatic synchronization for at least part of a test program.
  • the bus synchronization system is configured to identify, automatically, any portions of a test program that require synchronization to occur. For example, the identification may be made absent test engineer input.
  • the sync bus may be implemented using a wired-OR bus, using point-to-point connections and logic gates, using appropriate non-contact, wireless or optical signaling, or using an any appropriate combination of these transmission media.
  • the sync bus may be implemented using any appropriate data communications pathway configured to communicate status to hardware or software, to aggregate the status, and to transmit the aggregated status or other information to all MEPs via the same pathway or via a different pathway.
  • bus synchronization need not be symmetric.
  • the "sync barrier crossed" signal, or other appropriate synchronization or other signals could be sent to the instrument modules over an Ethernet bus, rather than over the sync bus as described above.
  • the example bus synchronization system thus enables synchronized operation across multiple distributed MEPs, without the need for centralized control. This is possible even though the multiple distributed MEPs may take different amounts of time to execute their portions of a test program, and typically do not know in advance how long such processing will actually take. Thus, each MEP has flexibility in how long it will take to execute its portion of the test program - which might not be knowable in advance.
  • the bus synchronization system also is relatively low latency, which may be advantageous since, for example, some test programs can include thousands of synchronization events per second.
  • each of MEPs may be configured to run its own copy of a test program, to determine where sync barriers should be placed into the command queue among the commands, and to determine the sync domains to which an instrument module containing the MEP should subscribe.
  • This distributed approach could be implemented in lieu of, or in combination, with the approach described above, in which the host computer runs the test program and places the sync barrier commands in the proper positions within a series of commands for a sync domain.
  • the example systems described herein may be implemented by, and/or controlled using, one or more computer systems comprising hardware or a combination of hardware and software.
  • a system like the ones described herein may include various controllers and/or processing devices located at various points in the system to control operation of the automated elements.
  • a central computer may coordinate operation among the various controllers or processing devices.
  • the central computer, controllers, and processing devices may execute various software routines to effect control and coordination of the various automated elements.
  • the example systems described herein can be controlled, at least in part, using one or more computer program products, e.g., one or more computer program tangibly embodied in one or more information carriers, such as one or more non-transitory machine-readable media, for execution by, or to control the operation of, one or more data processing apparatus, e.g., a programmable processor, a computer, multiple computers, and/or programmable logic components.
  • one or more computer program products e.g., one or more computer program tangibly embodied in one or more information carriers, such as one or more non-transitory machine-readable media, for execution by, or to control the operation of, one or more data processing apparatus, e.g., a programmable processor, a computer, multiple computers, and/or programmable logic components.
  • a computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • a computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a network.
  • Actions associated with implementing all or part of the testing can be performed by one or more programmable processors executing one or more computer programs to perform the functions described herein. All or part of the testing can be implemented using special purpose logic circuitry, e.g., an FPGA (field programmable gate array) and/or an ASIC (application-specific integrated circuit).
  • FPGA field programmable gate array
  • ASIC application-specific integrated circuit
  • processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
  • a processor will receive instructions and data from a read-only storage area or a random access storage area or both.
  • Elements of a computer include one or more processors for executing instructions and one or more storage area devices for storing instructions and data.
  • a computer will also include, or be operatively coupled to receive data from, or transfer data to, or both, one or more machine-readable storage media, such as mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks.
  • Machine-readable storage media suitable for embodying computer program instructions and data include all forms of non-volatile storage area, including by way of example, semiconductor storage area devices, e.g., EPROM, EEPROM, and flash storage area devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
  • semiconductor storage area devices e.g., EPROM, EEPROM, and flash storage area devices
  • magnetic disks e.g., internal hard disks or removable disks
  • magneto-optical disks e.g., CD-ROM and DVD-ROM disks.
  • connection may imply a direct physical connection or a wired or wireless connection that includes or does not include intervening components but that nevertheless allows electrical signals to flow between connected components.
  • connection involving electrical circuitry that allows signals to flow, unless stated otherwise, is an electrical connection and not necessarily a direct physical connection regardless of whether the word "electrical” is used to modify "connection”.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Tests Of Electronic Circuits (AREA)
  • Test And Diagnosis Of Digital Computers (AREA)

Description

    TECHNICAL FIELD
  • This specification relates generally to a bus synchronization system.
  • BACKGROUND
  • Automatic test equipment (ATE) includes electronics for sending signals to, and for receiving signals from, a device under test (DUT) in order to test the operation of the DUT. In some examples, testing a DUT involves multiple tests. The ATE may include multiple instrument modules, each of which may contain one or more instrument module resources that may be configured to perform one or more of the tests. In some examples, each test may require different resources to be used to test the same DUT, and different tests may require different combinations of resources. In some examples, multiple DUTs may be tested at the same time. In some examples, multiple resources may be dedicated to testing a particular DUT. In some examples, multiple resources may be shared to perform testing on multiple DUTs.
  • US 2016/0227004 describes semiconductor devices that include a plurality of Serializer-Deserializer interfaces that provide a plurality of serial data paths between them. The plurality of Serializer-Deserializer interfaces and the plurality of serial data interfaces may be clocked from a clock signal derived from the clock circuit. The plurality of independently adjustable calibration circuits may be configured to compensate for timing differences across the plurality of serial data paths.
  • US 5,717,704 describes that a local trigger signal generator is to be provided for each of a plurality of test instruments in a test system.
  • US 2006/0085157 describes a test apparatus that has multiple instruments that are synchronized with respect to one another so that a trigger signal may be generated in response to events occurring at different instruments. The events may correspond to events defined within a test program or events detected at a device under test. A partial trigger signal is generated by each of the different instruments, and the partial trigger signals are used in generating the trigger signal. Different offset delays are applied to the partial trigger signals so that the partial trigger signals generated by the different instruments are synchronized with respect to each other.
  • US 2005/0102592 describes circuit testing with ring-connected test instrument modules. A system for controlling one or more test instruments to test one or more integrated circuits includes a master clock and a controller. The test instruments are connected to form a communication ring. The master clock is connected to each test instrument and provides a clock signal to the one or more test instruments. The controller is connected to the communication ring and is configured to align counters of test instruments to derive a common clock time value from the clock signal. The controller is further configured to generate and send data words that specify a test event to be performed, a common clock time value, and at least one of the test instruments.
  • SUMMARY
  • The invention is defined in the appended independent claim 1, whereas preferred embodiments of the invention are defined in the appended dependent claims.
  • An example of the claimed system is a bus synchronization system that comprises a computer bus, a host computer to execute test flows, and instrument modules. An instrument module comprises resources and a processing device. Resources operated on by a test flow define a domain. The host computer is configured to output commands including a sync command in the test flow to the instrument modules. The sync command is for causing the instrument module to provide a status to the computer bus and to pause the processing device. Statuses from the instrument modules in the domain are aggregated on the computer bus. Information is distributed to the instrument modules based on the statuses aggregated. The processing device is configured to resume executing commands based on the information. The example system may include one or more of the following features, either alone or in combination.
  • The information may be distributed after all instrument modules in the domain have encountered a sync command. The host computer may be programmed to send the commands to the instrument modules via a communication bus that is different from the computer bus. Aggregating the status and distributing the information may be performed independent of the host computer. At least some of the commands may instruct resources in the domain to perform operations.
  • The instrument module may comprise a first type of endpoint device to provide status to the computer bus. The first type of endpoint device may comprise a contributing endpoint device. The contributing endpoint device may be configured to receive the information from the computer bus and to generate a signal based on the information to trigger operation of one or more of the resources.
  • The instrument module may comprise a second type of endpoint device. The second type of endpoint device may comprise a non-contributing endpoint device to receive the information from the computer bus and to generate a signal based on the information to trigger operation of one or more of the resources.
  • The host computer may be programmed to execute a test program that includes multiple, separate instruction flows. The multiple, separate instruction flows may include the test flow. An endpoint device may be configured to subscribe to one or more of the multiple, separate flows. The endpoint device may be configured to generate a signal to provide to resources in the domain. The signal may be used to trigger the resource to perform an action for which the resource has been previously armed. An offset may be added to the signal to control signal timing relative to receipt of the information. The endpoint device may comprise a transmitter to implement output to the computer bus, and a receiver to implement receiving from the computer bus.
  • A status may comprise a pass or fail status of a test performed by the processor. The status may comprise bits that are encoded in time-division-multiple-access fashion onto the computer bus using a periodic frame comprised of multiple bits. The periodic frame may be characterized by one or more of headers, trailers, cyclic redundancy checks, or 8b/10b encoding.
  • At least some of the bits of the information may represent a system time alignment signal to set system clock counters on the instruments to at least one of a specified value or a specified time that is in a payload on the computer bus. The computer bus may comprise at least one of a wired-OR bus; point-to-point connections and logic gates; non-contact, wireless or optical signaling media; or a combination of one or more of: a wired-OR bus; point-to-point connections and logic gates; non-contact, and wireless or optical signaling media.
  • The information may be received over the computer bus. The information may be received over a communication bus that is different than the computer bus. The sync command in the test flow may immediately precede, in the test flow, a command requiring action or measurement vis-à-vis a device under test by the test flow. At least part of the test flow may be controllable not to be synchronized.
  • Advantages of the example systems may include one or more of the following. Providing one or more processors (for example, a single processor) for a small number of resources may allow that group of resources to operate independently and in parallel with all other resources. In addition, providing one or more processors (for example, a single processor) for small number of resources may also reduce communications latency between the processor(s) and resource(s). The synchronization system addresses possible synchronization issues associated with providing one or more processors as described. The synchronization system also allows finer granularity than one group of resources per processor to be synchronized to any other groups of resources in the system. The synchronization system may also eliminate the need for a central controller to implement synchronization as described herein.
  • Any two or more of the features described in this specification, including in this summary section, can be combined to form implementations not specifically described herein.
  • The systems and techniques and processes described herein, or portions thereof, can be implemented as/controlled by a computer program product that includes instructions that are stored on one or more non-transitory machine-readable storage media, and that are executable on one or more processing devices to control (e.g., coordinate) the operations described herein. The systems and techniques and processes described herein, or portions thereof, can be implemented as an apparatus, method, or electronic system that can include one or more processing devices and memory to store executable instructions to implement various operations.
  • The details of one or more implementations are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description and drawings, and from the claims.
  • DESCRIPTION OF THE DRAWINGS
    • Fig. 1 is a block diagram showing an example sequence of commands.
    • Fig. 2 is a block diagram showing an example order in which the sequence of commands of Fig. 1 may be executed.
    • Fig. 3 is a block diagram showing an example order in which commands, including "sync barriers" applied by a bus synchronization system, are encountered.
    • Fig. 4 is a block diagram showing an example order in which the sequence of commands of Fig. 3 may be executed.
    • Fig. 5 is a block diagram of components that may be included in an example bus synchronization system.
  • Like reference numerals in different figures indicate like elements.
  • DETAILED DESCRIPTION
  • Described herein are example implementations of a bus synchronization system and components thereof. In some implementations, the bus synchronization system is incorporated into a test system, such as ATE; however, the bus synchronization system is not limited to use with a test system or to testing in general.
  • An example test system includes multiple instrument modules (or simply, "instruments") for performing testing on DUTs. Each instrument module includes one or more resources, such as radio frequency (RF) signal generators, microwave signal generators, processors, power supplies, memory, and so forth. Generally, an instrument module resource (or simply, "resource") may be, or include, any appropriate type of electronic hardware device or software for receiving, transmitting, processing, storing, or otherwise acting upon digital data, analog signals, or both digital data and analog signals. The resources are each controlled by one or more module embedded processors (MEP or "processing unit") on an instrument module. The use of multiple MEPs allows the example test system to be split into multiple parts executing different test flows in parallel.
  • The use of multiple MEPs can lead to synchronization issues. In this regard, individual MEPs execute commands for a test flow from a command queue. While commands in the queue are executed in order, commands across multiple queues and across multiple resources execute in no defined order. Consider the sequence of commands 100 shown in Fig. 1, which are executable by different system resources A, B, and C that potentially reside on different instrument modules. In Figs. 1 and 2, each block represents a time slot in which execution occurs. In this example, absent synchronization, the commands shown in Fig. 1 could be executed by the resources in order 101 of Fig. 2, which is not the intended order. The example synchronization process described herein addresses this issue, and enables commands in the same test flow to be executed in an appropriate order (e.g., the commands of row 1 then row 2 then row 3 then row 4).
  • As an overview, in an example synchronization process, a host computer on the test system executes a test flow for a DUT. The test flow may include, for example, commands or other instructions for implementing testing on all or part of the DUT. In this regard, an example DUT, such as a semiconductor device, may include multiple components, such as digital components, analog components, wireless components, and so forth. In this example, the test flow is to execute on the different components concurrently. The different components, however, may require different instrument module resources for testing, such as analog testing resources, digital testing resources, wireless testing resources, and so forth. These resources required by the test flow constitute a test domain that that may require synchronization. For example, synchronization may be implemented so that different tests performed on different components of the DUT occur contemporaneously or concurrently.
  • The instrument module resources that comprise a test domain may reside on the same instrument module, on different instrument modules, or on a combination of the same instrument module and different instrument modules The resources that comprise a test domain may be a subset of all instrument module resources in the test system. In any given test system, there may be multiple test flows, e.g., used to test different components of a DUT or DUTs, and thus multiple test domains. Each of these multiple test domains may include different combination(s) of instrument module resources. Different instrument module resources, corresponding to different test domains, may be synchronized independently using the process described herein.
  • A test domain having instrument module resources that are synchronized may map to, and may be referred to as, a "sync" (synchronized) domain. For the examples described herein, the two terms may be used interchangeably. As noted, each instrument module includes at least one MEP. The MEP may be configured to, e.g., programmed to, execute commands to implement testing. Each instrument module may include one endpoint that constitutes an interface, for example, a hardware interface, to a synchronization (sync) bus, and a sync resource driver (SRD) that constitutes an interface, for example, a software interface, to the sync bus.
  • In some implementations, the sync bus may be implemented using one or more time-division multiplexed (TDM) computer buses or other appropriate transmission media. In some implementations, the sync bus is separate from, and independent of, communications media used to transmit commands for the test flows. The sync bus may include logic gates built into its distribution hardware, which may have a tree topology, with a sync bus master at a root of the tree. As described herein, the sync bus master is configured to synchronize instrument module resources in the same sync domain. In some examples, the sync bus is primarily responsible for synchronization. The host computer is not directly involved in synchronization. Rather, the host computer's roles include determining a series of commands that will be executed by instrument modules in a test domain, and placing appropriate synchronization commands - called sync barriers - in the proper positions within each series of commands for a sync domain. In some examples, the position of the synchronization command can be determined by a user of the test system. In some examples, the host computer also inserts commands that tell the endpoints to subscribe instrument modules to and from sync domains. In some examples, all instruments are configured to subscribe, automatically, to all or some sync domains and, therefore, there is no need for the host computer to insert commands that tell the endpoints to subscribe instrument modules to and from sync domain.
  • In an example, the host computer communicates a series of commands for a test flow to each instrument module having resources required by the test flow over communication media other than the sync bus. For each of the instrument modules, its MEP stores the series of commands in a queue in computer memory. The queue may be dedicated to a particular sync domain. Thus, the commands are not received from the host computer. The commands include sync barrier commands.
  • In some implementations, only those commands that require action or measurement vis-à-vis the DUT need be synchronized. Therefore, in some examples, only those command are preceded by a sync barrier command in an instruction stream of a test flow. Examples of such commands are device interface board (DIB) -visible commands. A DIB is the interface between the test system and the DUT, for example, a board to which the DUT is mated and through which signals pass between the DUT and the test system. DIB-visible is a generic name given to any command that could have an effect that could be observed at the DUT or, more precisely the DIB, or vice-versa, where a result of a measurement could depend on what happens on the DIB. Some setup commands may not require synchronization. For example, certain measurement parameters like sample rate or a number of samples to capture, or various parameters of pattern bursts, are not observable from the DIB and do not require synchronization. In some implementations, synchronization is not limited to commands that require action or measurement vis-à-vis the DUT.
  • In some implementations, the host computer places a sync barrier command (or simply, "sync barrier") in a series of commands immediately before a DIB-visible command or before any other type of command that requires synchronization. In some examples, the sync barrier command is placed in the test flow of an instrument module in advance, for example, by a test engineer who developed the test program. To execute the commands, the MEP retrieves them from the queue, and executes the command or performs whatever other operation is necessary with respect to the commands. When the MEP encounters a sync barrier in the queue, the MEP outputs, via an appropriate endpoint, a "sync barrier reached" command (or simply, "sync barrier reached") to the sync bus. The sync barrier reached command may be comprised of one or multiple status bits. At this point, the MEP suspends execution of commands in the queue. Thus, the MEP has completed a portion of the test program and it indicates, via the sync barrier reach command, that it has completed the portion of the test program. Before proceeding to the next portion of the test program, the MEP waits to receive an aggregated status information (e.g., aggregated status bits) indicating that all other processing units have completed their respective portions of the test program.
  • On the sync bus - independent of the host computer - the sync bus master combines "sync barrier reached" commands from each of the endpoints of the instrument modules in the same sync domain. For example, the sync bus master may perform a logical "AND" of all of received "sync barrier reached" commands, or perform other appropriate processing. When the sync bus master determines that each of the endpoints of the instrument modules in the same sync domain have output the "sync barrier reached" command, the sync bus master outputs a "sync barrier crossed" command (or simply, "sync barrier crossed") over the sync bus. The "sync barrier crossed" command may be a single status bit or may comprise multiple status bits, and may constitute an aggregated status of the received "sync barrier reached" commands.
  • Each sync bus endpoint in that sync domain receives the "sync barrier crossed" command, and generates a trigger signal to trigger operation of the instrument module resources in that domain. The trigger signal triggers operation of the instrument module resources to operate synchronously. The actual operation performed by each of the instrument module resources may be different. The MEP also resumes execution of commands in the queue following receipt of the "sync barrier crossed" command.
  • Thus, in this example, the example bus synchronization system allows all instrument module resources in the same test/sync domain to operate at the same time. Furthermore, the example bus synchronization system can be used by multiple instrument module resources that operate independently of each other. As such, central coordination, for example by the host computer, among multiple different resources is unnecessary to perform synchronization in some examples.
  • In the foregoing example, each instrument module may include more than one endpoint. The endpoints in an instrument module include a contributing endpoint. A contributing endpoint includes hardware that is configured to receive, from the instrument module's MEP, a synchronization status, such as the "sync barrier reached" command, and to provide that synchronization status to the sync bus. In some implementations, the status output to the sync bus may be, or include, one or more bits representing pass or fail status of a test performed using MEPs in the test system. In some examples, the output of data representing pass or fail status or other types of status can be triggered at any time, independent of a sync barrier command. The contributing endpoint is also configured to receive information that indicates when all instrument module resources in a sync domain are "ready", e.g., the "sync barrier crossed" command, and to provide this information to the instrument module resources in the same sync domain. The instrument module may also include, zero, one, or multiple non-contributing endpoints. A non-contributing endpoint includes hardware that is configured to receive information that indicates when all instrument module resources in a sync domain are "ready", e.g., "sync barrier crossed", and to provide this information to the instrument module resources. The non-contributing endpoint, however, does not transmit over the sync bus, nor does it provide information to the MEP.
  • In some implementations, the status provided by the contributing endpoint comprises bits that are encoded in time-division-multiple-access fashion onto a serial data bus - e.g., the sync bus - using a periodic frame comprised of multiple bits. In some implementations, the frame may include optional headers, trailers, or cyclic redundancy checks; may use 8b/10b encoding; and may employ any other appropriate mechanisms associated with serial data transmission. In some implementations, the frame may be used to transmit or to receive other types of information over the same physical wires, for example, using a frame type indicator in the frame header to specify which type of information is being transmitted.
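The frame layout described above might be sketched as follows. This is an illustrative model only: the one-byte frame-type header, the bit-per-domain payload, and the CRC-32 trailer are assumptions chosen for the sketch, not field sizes taken from the patent.

```python
import struct
import zlib

def build_sync_frame(frame_type, domain_statuses):
    """Pack per-domain sync barrier statuses into a periodic frame:
    a 1-byte header (frame type), payload bits (one bit per test
    domain, padded to a byte boundary), and a CRC-32 trailer."""
    payload = bytearray((len(domain_statuses) + 7) // 8)
    for i, status in enumerate(domain_statuses):
        if status:  # set bit i when that domain reports "sync barrier reached"
            payload[i // 8] |= 1 << (i % 8)
    body = bytes([frame_type & 0xFF]) + bytes(payload)
    return body + struct.pack(">I", zlib.crc32(body))

def parse_sync_frame(frame, num_domains):
    """Check the CRC trailer and unpack (frame_type, statuses)."""
    body, (crc,) = frame[:-4], struct.unpack(">I", frame[-4:])
    if zlib.crc32(body) != crc:
        raise ValueError("corrupt sync frame")
    statuses = [bool(body[1 + i // 8] >> (i % 8) & 1) for i in range(num_domains)]
    return body[0], statuses
```

Note how the payload grows with the number of test domains, which is consistent with the later observation that more test domains mean a larger payload and longer propagation time.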
  • Referring also to Figs. 1 and 2, Fig. 3 shows an example order 102 in which commands, including "sync barriers" applied by the bus synchronization system described herein, are encountered. In Figs. 3 and 4, as was the case with respect to Figs. 1 and 2, each block represents a time slot in which execution occurs. Using the system described herein, the "sync barriers" are encountered, and synchronization of instrument module resources in the same sync domain is implemented. This results in the commands executing in order 104 of Fig. 4 (e.g., the commands of row 1 then row 2 then row 3 then row 4 then row 5 then row 6 then row 7). As shown, the "sync barrier" is used to control when commands are executed on different resources A, B, and C, enabling the commands in the same sync domain to be executed in the appropriate order across multiple, independent resources. The example of Figs. 3 and 4 shows that all resources will wait for all other resources in their test domain to finish executing commands and to reach a sync barrier before they execute additional commands.
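The wait-at-barrier behavior illustrated by Figs. 3 and 4 is analogous to a conventional thread barrier. A minimal sketch using Python's `threading.Barrier` in place of the sync bus master's aggregation (the resource names and "SYNC" marker are illustrative, not from the patent):

```python
import threading

def run_resources(command_lists):
    """Each resource executes its own command list, but all resources
    wait at every "SYNC" marker until the others arrive, mirroring
    how resources pause at a sync barrier until it is crossed."""
    barrier = threading.Barrier(len(command_lists))
    log, lock = [], threading.Lock()

    def resource(name, commands):
        for cmd in commands:
            if cmd == "SYNC":
                barrier.wait()  # blocks until every resource reaches its SYNC
            else:
                with lock:
                    log.append((name, cmd))

    threads = [threading.Thread(target=resource, args=(n, c))
               for n, c in command_lists.items()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return log

# Resource A and resource B take different numbers of steps to reach
# the barrier, yet no post-barrier command runs before both arrive.
log = run_resources({
    "A": ["a1", "SYNC", "a2"],
    "B": ["b1", "b2", "SYNC", "b3"],
})
```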
  • Fig. 5 shows, for an instrument module 104, a MEP 105 containing an I/O (input/output) engine 106, a command queue 107, an SRD 108, and shared memory 109. The instrument module contains a sync bus endpoint 110 (e.g., a contributing endpoint) containing a transmitter (TX) 111 and a receiver (RX) 112. Fig. 5 also shows a sync bus 114, which includes a TDM bus and a sync bus master 115.
  • In the example of Fig. 5, the endpoint may operate on commands for the multiple, different domains. In this example, MEP 105 receives test flow commands from a host computer over a communication bus, and the MEP's I/O engine places those commands into the command queue. This communication bus is not the sync bus, but rather may be an Ethernet bus or any other appropriate communications media, including wired and wireless media. In some examples, commands on an instrument module are pre-stored, e.g., in a command queue, and are not received from a host computer. In an example, MEP 105 includes one or more processing devices configured to, e.g., programmed to, control a single instrument module of the test system. Examples of processing devices are described herein. In an example, for each of a MEP's test domains, there is a separate command queue. One or more SRDs running on the MEP retrieve commands for a sync domain from a command queue and process/execute the commands.
  • In an example operation, when the next command in a queue to be executed is a "sync barrier", the SRD sets a state of the shared memory to "sync barrier not crossed", since a previous crossing might have left the state set to "sync barrier crossed". The SRD then instructs the sync bus endpoint that a "sync barrier" has been reached, and starts waiting for the shared memory to indicate that the "sync barrier" has been crossed. The sync bus endpoint may delay 116 transmitting this status if it is not ready; otherwise, the sync bus endpoint sets its transmit status to "sync barrier reached", and transmits this command onto the sync bus. The sync bus transmits the command to the sync bus master, as described herein.
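The three SRD steps described above - clear the stale state, notify the endpoint, wait for the crossing - can be sketched as follows. The class and method names are illustrative, and a `threading.Event` stands in for the shared-memory word:

```python
import threading

class SharedMemory:
    """Stand-in for the shared memory between the SRD and the endpoint."""
    def __init__(self):
        self.crossed = threading.Event()  # "sync barrier crossed" flag

class SRD:
    """Sketch of the sync-resource-daemon behavior described above."""
    def __init__(self, shared, endpoint_tx):
        self.shared = shared
        self.endpoint_tx = endpoint_tx  # callable: report "sync barrier reached"

    def execute(self, commands):
        executed = []
        for cmd in commands:
            if cmd == "SYNC_BARRIER":
                # 1. Clear any stale "crossed" state from a previous barrier.
                self.shared.crossed.clear()
                # 2. Tell the sync bus endpoint the barrier has been reached.
                self.endpoint_tx()
                # 3. Block until the endpoint receiver writes "crossed".
                self.shared.crossed.wait()
            else:
                executed.append(cmd)
        return executed

# Single-module demo: with only one endpoint in the domain, reporting
# "reached" immediately yields "crossed", so execution proceeds.
shared = SharedMemory()
srd = SRD(shared, endpoint_tx=shared.crossed.set)
result = srd.execute(["setup", "SYNC_BARRIER", "measure"])
```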
  • The sync bus master aggregates sync barrier status (e.g., "sync barrier reached" commands) received from multiple, e.g., all, resources in a sync domain. When all resources in the sync domain have reported "sync barrier reached", the sync bus master provides information, e.g., the "sync barrier crossed" command, to sync bus receivers in that sync domain. As described, the sync bus master may aggregate (e.g., logically AND) the statuses from all endpoints in the sync domain and produce one result per sync domain. In this example, as noted, the resulting "sync barrier crossed" status is TRUE if all the sync bus endpoints in the test domain report "sync barrier reached". The "sync barrier crossed" command is then sent from the sync bus master over the sync bus to each of the sync bus endpoint receivers.
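The master's per-domain logical AND described above reduces to a few lines; the argument shapes (a status per endpoint, a membership list per domain) are illustrative assumptions:

```python
def aggregate_barrier_status(endpoint_statuses, domain_membership):
    """Sync bus master sketch: for each sync domain, the aggregated
    "sync barrier crossed" result is TRUE only if every member endpoint
    reports "sync barrier reached"."""
    return {
        domain: all(endpoint_statuses[ep] for ep in members)
        for domain, members in domain_membership.items()
    }

# ep3 has not yet reached its barrier, so only domain D0 crosses.
statuses = {"ep1": True, "ep2": True, "ep3": False}
domains = {"D0": ["ep1", "ep2"], "D1": ["ep2", "ep3"]}
result = aggregate_barrier_status(statuses, domains)
```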
  • When a sync bus endpoint receiver at an instrument module detects the "sync barrier crossed" command, the sync bus endpoint receiver performs two operations in this example. The sync bus endpoint receiver sets the sync bus endpoint transmitter's status to "sync barrier not reached" and writes "sync barrier crossed" to shared memory. The SRD, which has been waiting for this status change, then allows the MEP to process subsequent commands in the queue. As a result, MEPs of different instrument modules - all in the same sync domain - are able to synchronize operation.
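The two receiver operations described above - re-arm the transmitter and release the waiting SRD - might be sketched as follows, with plain namespace objects standing in for the transmitter and shared memory (names illustrative):

```python
import threading
import types

class EndpointReceiver:
    """Sketch of the receiver actions on "sync barrier crossed"."""
    def __init__(self, transmitter, shared):
        self.transmitter = transmitter  # object with a .status attribute
        self.shared = shared            # object with a .crossed threading.Event

    def on_frame(self, command):
        if command == "SYNC_BARRIER_CROSSED":
            # 1. Reset the transmitter's status for the next barrier.
            self.transmitter.status = "sync barrier not reached"
            # 2. Write "crossed" to shared memory, unblocking the SRD.
            self.shared.crossed.set()

tx = types.SimpleNamespace(status="sync barrier reached")
shared = types.SimpleNamespace(crossed=threading.Event())
rx = EndpointReceiver(tx, shared)
rx.on_frame("SYNC_BARRIER_CROSSED")
```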
  • In some implementations, after a sync bus endpoint transmitter reaches a sync barrier, the sync bus endpoint transmitter may hold its "sync barrier reached" status until it is acknowledged by the sync bus master with a "sync barrier crossed" command.
  • Each sync bus endpoint may also be configured to receive sync bus commands, such as "sync barrier crossed", from the sync bus, and to generate a trigger signal for the resources on the instrument module. For example, each resource may be configured to execute a specific subset of commands in response to a trigger signal. In some examples, the trigger signal triggers resources on different modules so that they perform actions at the same time, not just in the correct order. A trigger signal may be used to trigger a resource to perform an action for which the resource has been previously armed.
  • In some implementations, a sync domain creates only one trigger signal at a time, although a single trigger signal can be applied to multiple receiver outputs with different time delays. Each of these trigger signals can be applied to one of a number of (e.g., 32) instrument module resources. For example, a trigger signal for a sync domain can be applied to multiple instrument module resources, or a trigger signal for a sync domain can be applied to a single instrument module resource. Each endpoint receiver output may also introduce a unique trigger signal offset delay. Even if two endpoint receiver outputs are associated with the same sync domain, their offsets can be programmed differently to compensate for different paths in instrument module resources.
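The per-output offset programming described above can be sketched as a simple fan-out. The output names, the use of nanoseconds, and the offset values are illustrative assumptions:

```python
def schedule_triggers(trigger_time_ns, receiver_offsets_ns):
    """Fan one sync-domain trigger out to multiple receiver outputs,
    each adding its own programmed offset delay to compensate for
    different signal paths in the instrument module resources."""
    return {out: trigger_time_ns + off
            for out, off in receiver_offsets_ns.items()}

# Two outputs in the same sync domain with different path compensations.
fanout = schedule_triggers(1000, {"source_channel": 0, "capture_channel": 12})
```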
  • In some implementations, each sync bus frame contains a header that indicates the start of the frame and a type of message represented by the frame. This header may be followed by payload data that may represent the sync barrier status for each of one or more available test domains. In some examples, the sync frame may be clocked by, and directly referenced to, a clock signal. The size of the payload is determined by the number of available test domains. The more test domains, the larger the payload, and the longer it takes to propagate signals through the test system.
  • In some examples, the sync bus supports messages other than "sync barrier reached" and "sync barrier crossed". For example, the sync bus master can send a message instructing sync bus endpoints to update their time-of-day (TOD) clocks, and the sync bus endpoints can request the sync bus master to send them that update. A TOD clock is a system time alignment signal used to set system clock counters on each instrument to a specified value, such that all instruments set their clocks to the same specified value with exactly repeatable timing relative to the system clocks.
  • Thus, at any appropriate time, a sync bus endpoint transmitter can send a message, rather than sending its status, and at any appropriate time, the sync bus master can send a message rather than sending "sync barrier crossed". In some implementations, when the sync bus master receives a frame without a status, the sync bus master responds with a frame type that also does not contain a status. In some implementations, when a sync bus endpoint receiver receives a frame without a status, the sync bus endpoint receiver maintains its status from the previous frame. As a result, the previous frame's status is preserved.
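The receiver-side rule above - a frame carrying no status leaves the previously latched status unchanged - might be modeled like this, with the frame represented as a dict whose keys are illustrative:

```python
class StatusLatch:
    """Sketch of a sync bus endpoint receiver that preserves its
    latched barrier status across non-status message frames."""
    def __init__(self):
        self.status = False  # last known "sync barrier crossed" state

    def on_frame(self, frame):
        # A frame without a "status" field is a message frame (e.g.,
        # a TOD update); the previous frame's status is preserved.
        if "status" in frame:
            self.status = frame["status"]
        return self.status

latch = StatusLatch()
latch.on_frame({"type": "status", "status": True})
latch.on_frame({"type": "tod_update"})  # message frame, carries no status
```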
  • In some implementations, a test system may have multiple MEPs - one per instrument module - or one MEP may serve multiple instrument modules. In the latter case, the MEP may configure one instrument module's sync bus endpoint as a contributing endpoint and use it to provide sync barrier status. In this case, the MEP may configure the sync bus endpoints on the other modules to be non-contributing endpoints. In some implementations, the MEP may configure all the sync bus endpoints to be contributing and communicate sync barrier status with each sync bus endpoint independently. Thus, the synchronization system is configurable. It enables the same modules to be used in a lower cost system with slightly less functionality.
  • The example test system may support two related features: concurrent test flows and flow-per-site. In an example, concurrent test flows require that multiple sections of a DUT be independent enough to be tested at the same time. A user of the test system may specify which tester resources are associated with each section of the DUT. A test program may be written with separate flows, each only using resources associated with one section of the DUT. Even though the host computer runs through these flows serially, the MEPs execute commands for the multiple flows in parallel.
  • Flow-per-site is similar to concurrent test flows, but instead of a user writing different flows, a single flow executes differently depending on the DUT's test results. The test program groups sites with the same results, and the test flow is executed serially, once for each group of sites. Commands executed during the flow may differ for each group. As the test program continues, flows may split again or rejoin. Resources executing a unique flow are considered members of a test domain. These resources operate synchronously with each other. Resources in different test domains may have limited, or no, synchronization to each other. A difference between the two features is that the test domains are known before test program execution for concurrent test flows, whereas they are dynamically created for flows-per-site.
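The flow-per-site grouping described above - sites with the same test results execute the flow together - reduces to a grouping step. A sketch, assuming per-site results are available as a simple mapping (site names and result labels are illustrative):

```python
from collections import defaultdict

def group_sites_by_result(site_results):
    """Flow-per-site sketch: sites with identical test results form a
    group, and the flow is then executed serially, once per group.
    Each group of resources would form its own dynamically created
    test domain."""
    groups = defaultdict(list)
    for site, result in sorted(site_results.items()):
        groups[result].append(site)
    return dict(groups)

# Sites 0 and 2 passed, so they continue through the flow together,
# while site 1 takes the failing branch of the flow.
groups = group_sites_by_result({"site0": "pass", "site1": "fail", "site2": "pass"})
```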
  • In some implementations, the bus synchronization system may be configured so that a test engineer can disable automatic synchronization for at least part of a test program. In some implementations, the bus synchronization system is configured to identify, automatically, any portions of a test program that require synchronization to occur. For example, the identification may be made absent test engineer input.
  • In some implementations, the sync bus may be implemented using a wired-OR bus, using point-to-point connections and logic gates, using appropriate non-contact, wireless or optical signaling, or using any appropriate combination of these transmission media. In some implementations, the sync bus may be implemented using any appropriate data communications pathway configured to communicate status to hardware or software, to aggregate the status, and to transmit the aggregated status or other information to all MEPs via the same pathway or via a different pathway.
  • In some implementations, bus synchronization need not be symmetric. For example, in some implementations, the "sync barrier crossed" signal, or other appropriate synchronization or other signals, could be sent to the instrument modules over an Ethernet bus, rather than over the sync bus as described above.
  • As described herein, the example bus synchronization system thus enables synchronized operation across multiple distributed MEPs, without the need for centralized control. This is possible even though the multiple distributed MEPs may take different amounts of time to execute their portions of a test program, and typically do not know in advance how long such processing will actually take. The bus synchronization system also has relatively low latency, which may be advantageous since, for example, some test programs can include thousands of synchronization events per second.
  • In some implementations, each of the MEPs may be configured to run its own copy of a test program, to determine where sync barriers should be placed into the command queue among the commands, and to determine the sync domains to which an instrument module containing the MEP should subscribe. This distributed approach could be implemented in lieu of, or in combination with, the approach described above, in which the host computer runs the test program and places the sync barrier commands in the proper positions within a series of commands for a sync domain.
  • The example systems described herein may be implemented by, and/or controlled using, one or more computer systems comprising hardware or a combination of hardware and software. For example, a system like the ones described herein may include various controllers and/or processing devices located at various points in the system to control operation of the automated elements. A central computer may coordinate operation among the various controllers or processing devices. The central computer, controllers, and processing devices may execute various software routines to effect control and coordination of the various automated elements.
  • The example systems described herein can be controlled, at least in part, using one or more computer program products, e.g., one or more computer programs tangibly embodied in one or more information carriers, such as one or more non-transitory machine-readable media, for execution by, or to control the operation of, one or more data processing apparatus, e.g., a programmable processor, a computer, multiple computers, and/or programmable logic components.
  • A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a network.
  • Actions associated with implementing all or part of the testing can be performed by one or more programmable processors executing one or more computer programs to perform the functions described herein. All or part of the testing can be implemented using special purpose logic circuitry, e.g., an FPGA (field programmable gate array) and/or an ASIC (application-specific integrated circuit).
  • Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only storage area or a random access storage area or both. Elements of a computer (including a server) include one or more processors for executing instructions and one or more storage area devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from, or transfer data to, or both, one or more machine-readable storage media, such as mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. Machine-readable storage media suitable for embodying computer program instructions and data include all forms of non-volatile storage area, including by way of example, semiconductor storage area devices, e.g., EPROM, EEPROM, and flash storage area devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
  • Any "electrical connection" as used herein may imply a direct physical connection or a wired or wireless connection that includes or does not include intervening components but that nevertheless allows electrical signals to flow between connected components. Any "connection" involving electrical circuitry that allows signals to flow, unless stated otherwise, is an electrical connection and not necessarily a direct physical connection regardless of whether the word "electrical" is used to modify "connection".
  • Elements of different implementations described herein may be combined to form other embodiments not specifically set forth above. Elements may be left out of the structures described herein without adversely affecting their operation. Furthermore, various separate elements may be combined into one or more individual elements to perform the functions described herein.

Claims (16)

  1. A system comprising:
    a computer bus (114);
    a host computer to execute test flows; and
    instrument modules, an instrument module comprising:
    resources; and
    a processing device (105);
    wherein resources operated on by a test flow define a domain;
    characterized in that
    the host computer is configured to output commands (102, 107) including a sync command in the test flow to the instrument modules, the sync command for causing the instrument module to provide a status to the computer bus and to pause the processing device;
    wherein statuses from the instrument modules in the domain are aggregated on the computer bus; and
    wherein information is distributed to the instrument modules based on the statuses aggregated; and
    wherein the processing device is configured to resume executing commands based on the information.
  2. The system of claim 1, wherein the information is distributed after all instrument modules in the domain have encountered a sync command.
  3. The system of claim 1, wherein the host computer is programmed to send the commands to the instrument modules via a communication bus that is different from the computer bus.
  4. The system of claim 1, wherein aggregating the status and distributing the information are performed independent of the host computer.
  5. The system of claim 1, wherein at least some of the commands instruct resources in the domain to perform operations.
  6. The system of claim 1, wherein the instrument module comprises an endpoint device (110) to provide status to the computer bus;
    wherein the endpoint device comprises a contributing endpoint device; and
    wherein the contributing endpoint device is configured to receive the information from the computer bus and to generate a signal based on the information to trigger operation of one or more of the resources.
  7. The system of claim 1, wherein the instrument module comprises an endpoint device; and
    wherein the endpoint device comprises a non-contributing endpoint device to receive the information from the computer bus and to generate a signal based on the information to trigger operation of one or more of the resources.
  8. The system of claim 1, wherein the host computer is programmed to execute a test program that includes multiple, separate instruction flows, the multiple, separate instruction flows including the test flow, and
    optionally wherein the instrument module comprises an endpoint device (110) and
    wherein the endpoint device is configured to subscribe to one or more of the multiple, separate flows.
  9. The system of claim 1, wherein the instrument module comprises an endpoint device (110); and
    wherein the endpoint device is configured to generate a signal to provide to resources in the domain, and optionally wherein either:
    a) the signal is to trigger the resource to perform an action for which the resource has been previously armed; or
    b) an offset may be added to the signal to control signal timing relative to receipt of the information.
  10. The system of claim 1, wherein the instrument module comprises an endpoint device (110); and
    wherein the endpoint device comprises a transmitter (111) configured to implement output to the computer bus, and a receiver (112) configured to implement receiving from the computer bus.
  11. The system of claim 1, wherein the status comprises a pass or fail status of a test performed by the processor.
  12. The system of claim 1, wherein the status comprises bits that are encoded in time-division-multiple-access fashion onto the computer bus using a periodic frame comprised of multiple bits, for example, wherein the periodic frame is characterized by one or more of headers, trailers, cyclic redundancy checks, or 8b/10b encoding.
  13. The system of claim 1, wherein at least some of the bits of the information represent a system time alignment signal to set system clock counters on the instruments to at least one of a specified value or a specified time that is in a payload on the bus.
  14. The system of claim 1, wherein the computer bus comprises at least one of a wired-OR bus; point-to-point connections and logic gates; non-contact, wireless or optical signaling; or a combination of one or more of: a wired-OR bus; point-to-point connections and logic gates; non-contact, and wireless or optical signaling.
  15. The system of claim 1, wherein the information is received over either the computer bus or a communication bus that is different than the computer bus.
  16. The system of claim 1, wherein the sync command in the test flow immediately precedes, in the test flow, a command requiring action or measurement vis-à-vis a device under test by the test flow.
EP19799685.3A 2018-05-10 2019-04-19 Bus synchronization system Active EP3791197B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US15/976,407 US10896106B2 (en) 2018-05-10 2018-05-10 Bus synchronization system that aggregates status
PCT/US2019/028247 WO2019217056A1 (en) 2018-05-10 2019-04-19 Bus synchronization system

Publications (3)

Publication Number Publication Date
EP3791197A1 EP3791197A1 (en) 2021-03-17
EP3791197A4 EP3791197A4 (en) 2021-06-30
EP3791197B1 true EP3791197B1 (en) 2022-09-21

Family

ID=68464711

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19799685.3A Active EP3791197B1 (en) 2018-05-10 2019-04-19 Bus synchronization system

Country Status (8)

Country Link
US (1) US10896106B2 (en)
EP (1) EP3791197B1 (en)
JP (2) JP2021523438A (en)
KR (1) KR20200142090A (en)
CN (1) CN112074747A (en)
SG (1) SG11202010328YA (en)
TW (1) TWI834661B (en)
WO (1) WO2019217056A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11853179B1 (en) * 2018-12-28 2023-12-26 Teledyne Lecroy, Inc. Detection of a DMA (direct memory access) memory address violation when testing PCIE devices
US11904890B2 (en) * 2020-06-17 2024-02-20 Baidu Usa Llc Lane change system for lanes with different speed limits

Family Cites Families (68)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4229792A (en) 1979-04-09 1980-10-21 Honeywell Inc. Bus allocation synchronization system
JPH02105961A (en) * 1988-10-14 1990-04-18 Nippon Telegr & Teleph Corp <Ntt> Multiprocessor synchronization system
US5235698A (en) 1989-09-12 1993-08-10 Acer Incorporated Bus interface synchronization control system
GB2257272B (en) 1991-06-29 1995-01-04 Genrad Ltd DC level generator
US5471136A (en) 1991-07-24 1995-11-28 Genrad Limited Test system for calculating the propagation delays in signal paths leading to a plurality of pins associated with a circuit
US5371880A (en) 1992-05-13 1994-12-06 Opti, Inc. Bus synchronization apparatus and method
US5615219A (en) 1995-11-02 1997-03-25 Genrad, Inc. System and method of programming a multistation testing system
US5604751A (en) 1995-11-09 1997-02-18 Teradyne, Inc. Time linearity measurement using a frequency locked, dual sequencer automatic test system
US5717704A (en) * 1996-04-16 1998-02-10 Ltx Corporation Test system including a local trigger signal generator for each of a plurality of test instruments
US5938780A (en) 1997-09-19 1999-08-17 Teradyne, Inc. Method for capturing digital data in an automatic test system
US6028439A (en) * 1997-10-31 2000-02-22 Credence Systems Corporation Modular integrated circuit tester with distributed synchronization and control
JP3953243B2 (en) 1998-12-29 2007-08-08 インターナショナル・ビジネス・マシーンズ・コーポレーション Synchronization method and apparatus using bus arbitration control for system analysis
US6389547B1 (en) 1999-03-19 2002-05-14 Sony Corporation Method and apparatus to synchronize a bus bridge to a master clock
US6550036B1 (en) 1999-10-01 2003-04-15 Teradyne, Inc. Pre-conditioner for measuring high-speed time intervals over a low-bandwidth path
DE60001254T2 (en) * 2000-06-16 2003-07-10 Agilent Technologies Inc Test device for integrated circuits with multi-port test functionality
US6651122B2 (en) * 2000-12-07 2003-11-18 Micron Technology, Inc. Method of detecting a source strobe event using change detection
US7017087B2 (en) 2000-12-29 2006-03-21 Teradyne, Inc. Enhanced loopback testing of serial devices
JP2002342108A (en) * 2001-05-15 2002-11-29 Mitsubishi Electric Corp System for constructing test atmosphere
US6754763B2 (en) * 2001-07-30 2004-06-22 Axis Systems, Inc. Multi-board connection system for use in electronic design automation
US7035755B2 (en) 2001-08-17 2006-04-25 Credence Systems Corporation Circuit testing with ring-connected test instrument modules
DE10157931C2 (en) 2001-11-26 2003-12-11 Siemens Ag Methods and devices for the synchronization of radio stations and time-synchronous radio bus system
US7334065B1 (en) 2002-05-30 2008-02-19 Cisco Technology, Inc. Multiple data bus synchronization
DE60331489D1 (en) * 2002-07-17 2010-04-08 Chronologic Pty Ltd Synchronized multi-channel USB
US6981192B2 (en) 2002-09-27 2005-12-27 Teradyne, Inc. Deskewed differential detector employing analog-to-digital converter
US7949777B2 (en) 2002-11-01 2011-05-24 Avid Technology, Inc. Communication protocol for controlling transfer of temporal data over a bus between devices in synchronization with a periodic reference signal
CN100456043C (en) * 2003-02-14 2009-01-28 爱德万测试株式会社 Method and apparatus for testing integrated circuits
JP2007502434A (en) * 2003-05-22 2007-02-08 テセダ コーポレーション Tester architecture for testing semiconductor integrated circuits.
JP4259390B2 (en) * 2004-04-28 2009-04-30 日本電気株式会社 Parallel processing unit
US7177777B2 (en) 2004-10-01 2007-02-13 Credence Systems Corporation Synchronization of multiple test instruments
US7454681B2 (en) * 2004-11-22 2008-11-18 Teradyne, Inc. Automatic test system with synchronized instruments
US7319936B2 (en) * 2004-11-22 2008-01-15 Teradyne, Inc. Instrument with interface for synchronization in automatic test equipment
US7769932B2 (en) 2005-09-09 2010-08-03 Honeywell International, Inc. Bitwise arbitration on a serial bus using arbitrarily selected nodes for bit synchronization
US7349818B2 (en) 2005-11-10 2008-03-25 Teradyne, Inc. Determining frequency components of jitter
US7668235B2 (en) 2005-11-10 2010-02-23 Teradyne Jitter measurement algorithm using locally in-order strobes
JP2007157303A (en) * 2005-12-08 2007-06-21 Advantest Corp Testing apparatus and testing method
TW200817931A (en) * 2006-07-10 2008-04-16 Asterion Inc System and method for performing processing in a testing system
TW200824693A (en) 2006-08-28 2008-06-16 Jazz Pharmaceuticals Inc Pharmaceutical compositions of clonazepam and methods of use thereof
US7528623B2 (en) 2007-02-02 2009-05-05 Teradyne, Inc. Distributing data among test boards to determine test parameters
US7673084B2 (en) 2007-02-20 2010-03-02 Infineon Technologies Ag Bus system and methods of operation using a combined data and synchronization line to communicate between bus master and slaves
US8037355B2 (en) 2007-06-07 2011-10-11 Texas Instruments Incorporated Powering up adapter and scan test logic TAP controllers
JP2009176116A (en) * 2008-01-25 2009-08-06 Univ Waseda Multiprocessor system and method for synchronizing multiprocessor system
WO2010132945A1 (en) * 2009-05-20 2010-11-25 Chronologic Pty. Ltd. Precision synchronisation architecture for superspeed universal serial bus devices
US8261119B2 (en) * 2009-09-10 2012-09-04 Advantest Corporation Test apparatus for testing device has synchronization module which synchronizes analog test module to digital test module based on synchronization signal received from digital test module
US8423314B2 (en) * 2009-11-18 2013-04-16 National Instruments Corporation Deterministic reconfiguration of measurement modules using double buffered DMA
CN101834664B (en) * 2010-04-29 2013-01-23 西安电子科技大学 SDH (Synchronous Digital Hierarchy) multi-domain comprehensive test device and test method thereof
JP2013531779A (en) * 2010-05-05 2013-08-08 テラダイン、 インコーポレイテッド System for simultaneous testing of semiconductor devices
US8504864B2 (en) 2010-12-01 2013-08-06 GM Global Technology Operations LLC Data sensor coordination using time synchronization in a multi-bus controller area network system
CN102122995A (en) * 2010-12-20 2011-07-13 北京航空航天大学 Wireless distributed automatic test system (WDATS)
CN102857383A (en) * 2011-06-28 2013-01-02 鸿富锦精密工业(深圳)有限公司 Synchronism detection control method and system
US10311010B2 (en) * 2011-10-05 2019-06-04 Analog Devices, Inc. Two-wire communication systems and applications
US10048304B2 (en) * 2011-10-25 2018-08-14 Teradyne, Inc. Test system supporting simplified configuration for controlling test block concurrency
US8914563B2 (en) 2012-02-28 2014-12-16 Silicon Laboratories Inc. Integrated circuit, system, and method including a shared synchronization bus
JP6362277B2 (en) 2012-06-01 2018-07-25 ブラックベリー リミテッドBlackBerry Limited A universal synchronization engine based on a probabilistic method for lock assurance in multi-format audio systems
US8850258B2 (en) 2012-06-20 2014-09-30 Intel Corporation Calibration for source-synchronous high frequency bus synchronization schemes
US8947537B2 (en) 2013-02-25 2015-02-03 Teradyne, Inc. Rotatable camera module testing system
CN103257910B (en) * 2013-04-26 2016-08-03 北京航空航天大学 Can be used for the embedded reconfigurable general-utility test platform of LXI of on-the-spot test
US9830298B2 (en) 2013-05-15 2017-11-28 Qualcomm Incorporated Media time based USB frame counter synchronization for Wi-Fi serial bus
EP2984780A4 (en) 2014-06-10 2016-10-05 Halliburton Energy Services Inc Synchronization of receiver units over a control area network bus
US9397670B2 (en) * 2014-07-02 2016-07-19 Teradyne, Inc. Edge generator-based phase locked loop reference clock generator for automated test system
KR20170010007A (en) * 2014-07-28 2017-01-25 인텔 코포레이션 Semiconductor device tester with dut data streaming
US9577818B2 (en) * 2015-02-04 2017-02-21 Teradyne, Inc. High speed data transfer using calibrated, single-clock source synchronous serializer-deserializer protocol
US10012721B2 (en) * 2015-02-19 2018-07-03 Teradyne, Inc. Virtual distance test techniques for radar applications
CN105242160A (en) * 2015-10-20 2016-01-13 珠海格力电器股份有限公司 Synchronous testing method and testing system of multiple household electric appliances
US10345418B2 (en) 2015-11-20 2019-07-09 Teradyne, Inc. Calibration device for automatic test equipment
US9917667B2 (en) 2015-12-21 2018-03-13 Hamilton Sundstrand Corporation Host-to-host test scheme for periodic parameters transmission in synchronized TTP systems
US10128783B2 (en) 2016-05-31 2018-11-13 Infineon Technologies Ag Synchronization of internal oscillators of components sharing a communications bus
CN106130680B (en) 2016-06-23 2018-03-27 Beijing Dongtu Technology Co., Ltd. Method for implementing clock synchronization over an industrial internet field-layer broadband bus
CN107332749B (en) 2017-07-05 2020-09-22 Beijing Dongtu Technology Co., Ltd. Synchronization method and device based on an industrial internet field-layer broadband bus architecture

Also Published As

Publication number Publication date
CN112074747A (en) 2020-12-11
JP2024073458A (en) 2024-05-29
SG11202010328YA (en) 2020-11-27
TW201947400A (en) 2019-12-16
TWI834661B (en) 2024-03-11
WO2019217056A1 (en) 2019-11-14
KR20200142090A (en) 2020-12-21
EP3791197A1 (en) 2021-03-17
US10896106B2 (en) 2021-01-19
JP2021523438A (en) 2021-09-02
US20190347175A1 (en) 2019-11-14
EP3791197A4 (en) 2021-06-30

Similar Documents

Publication Publication Date Title
US8725489B2 (en) Method for testing in a reconfigurable tester
US7043390B2 (en) Circuit testing with ring-connected test instruments modules
KR101297513B1 (en) General purpose protocol engine
US8805636B2 (en) Protocol aware digital channel apparatus
US8307235B2 (en) Cross controller clock synchronization
JP2007212453A (en) Trigger distribution device
JP2024073458A (en) Bus synchronization system
EP1482395B1 (en) Transfer clocks for a multi-channel architecture
CN108155979A (en) A detection device
TWI245912B (en) Circuit testing with ring-connected test instrument modules
Fraas A comparative study of computer interfacing techniques for military training devices
JP2000338182A (en) Circuit testing system

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20201127

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

A4 Supplementary search report drawn up and despatched

Effective date: 20210528

RIC1 Information provided on ipc code assigned before grant

Ipc: G01R 31/317 20060101AFI20210521BHEP

Ipc: G01R 31/3183 20060101ALI20210521BHEP

Ipc: G06F 13/42 20060101ALI20210521BHEP

Ipc: G06F 11/22 20060101ALI20210521BHEP

Ipc: G06F 11/273 20060101ALI20210521BHEP

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTG Intention to grant announced

Effective date: 20220531

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602019019878

Country of ref document: DE

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 1520191

Country of ref document: AT

Kind code of ref document: T

Effective date: 20221015

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG9D

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20220921

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220921

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220921

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20221221

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220921

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220921

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220921

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1520191

Country of ref document: AT

Kind code of ref document: T

Effective date: 20220921

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220921

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20221222

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220921

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220921

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230123

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220921

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220921

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220921

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220921

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220921

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20230121

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220921

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230412

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602019019878

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220921

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220921

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220921

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20230307

Year of fee payment: 5

26N No opposition filed

Effective date: 20230622

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220921

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20230419

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20230430

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220921


Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20230430

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20230430

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20230430

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20230419


PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20240229

Year of fee payment: 6

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220921

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20240308

Year of fee payment: 6