WO2011061878A1 - Multicore system, multicore system control method and program stored in a non-transient readable medium - Google Patents
- Publication number
- WO2011061878A1 (PCT/JP2010/004911)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- program
- dma transfer
- synthesized
- data
- audio data
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/202—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
- G06F11/2038—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant with a single idle spare processing component
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/202—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
- G06F11/2043—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant where the redundant components share a common memory address space
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/14—Handling requests for interconnection or transfer
- G06F13/20—Handling requests for interconnection or transfer for access to input/output bus
- G06F13/28—Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access DMA, cycle steal
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B20/00—Signal processing not specific to the method of recording or reproducing; Circuits therefor
- G11B20/10—Digital recording or reproducing
- G11B20/10527—Audio or video recording; Data buffering arrangements
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/034—Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/202—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
- G06F11/2023—Failover techniques
- G06F11/2028—Failover techniques eliminating a faulty processor or activating a spare
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/202—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
- G06F11/2023—Failover techniques
- G06F11/2033—Failover techniques switching over of hardware resources
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B20/00—Signal processing not specific to the method of recording or reproducing; Circuits therefor
- G11B20/10—Digital recording or reproducing
- G11B20/10527—Audio or video recording; Data buffering arrangements
- G11B2020/1062—Data buffering arrangements, e.g. recording or playback buffers
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B20/00—Signal processing not specific to the method of recording or reproducing; Circuits therefor
- G11B20/10—Digital recording or reproducing
- G11B20/10527—Audio or video recording; Data buffering arrangements
- G11B2020/1062—Data buffering arrangements, e.g. recording or playback buffers
- G11B2020/10629—Data buffering arrangements, e.g. recording or playback buffers the buffer having a specific structure
- G11B2020/10638—First-in-first-out memories [FIFO] buffers
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B20/00—Signal processing not specific to the method of recording or reproducing; Circuits therefor
- G11B20/10—Digital recording or reproducing
- G11B20/10527—Audio or video recording; Data buffering arrangements
- G11B2020/1062—Data buffering arrangements, e.g. recording or playback buffers
- G11B2020/10675—Data buffering arrangements, e.g. recording or playback buffers aspects of buffer control
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B20/00—Signal processing not specific to the method of recording or reproducing; Circuits therefor
- G11B20/10—Digital recording or reproducing
- G11B20/10527—Audio or video recording; Data buffering arrangements
- G11B2020/1062—Data buffering arrangements, e.g. recording or playback buffers
- G11B2020/10675—Data buffering arrangements, e.g. recording or playback buffers aspects of buffer control
- G11B2020/1074—Data buffering arrangements, e.g. recording or playback buffers aspects of buffer control involving a specific threshold value
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B20/00—Signal processing not specific to the method of recording or reproducing; Circuits therefor
- G11B20/10—Digital recording or reproducing
- G11B20/10527—Audio or video recording; Data buffering arrangements
- G11B2020/1062—Data buffering arrangements, e.g. recording or playback buffers
- G11B2020/10814—Data buffering arrangements, e.g. recording or playback buffers involving specific measures to prevent a buffer underrun
Definitions
- the present invention relates to a multi-core system, a control method for the multi-core system, and a non-transitory readable medium storing a program.
- MPU: Micro Processing Unit
- CPU: Central Processing Unit
- IRQ: Interrupt ReQuest
- the MPU can set which interrupt request is assigned to which CPU core based on the register setting.
- a technique is known in which audio data is transferred from a CPU in the MPU 50 to an ADAC (Audio DAC) 51 via an I2S (Inter-IC Sound) bus.
- the sound data is, for example, PCM (Pulse Code Modulation) sound.
- I2S is a serial communication format developed by PHILIPS (registered trademark) and used as an interface for audio data.
- Audio data includes PCM audio and compressed audio (μ-law, ADPCM, etc.).
- an I2C (Inter-Integrated Circuit) bus is a device control serial bus developed by PHILIPS (registered trademark).
- the ADAC 51 converts audio data into stereo audio.
- The DAC is a D/A converter. Analog sound (stereo) output from the ADAC 51 is reproduced on the speaker.
- An I2S device having a FIFO (First In First Out) buffer is known.
- Such an I2S device dequeues the audio data stored in the FIFO buffer and outputs it to the ADAC via the I2S bus.
- There is also an I2S device that can generate an interrupt (hereinafter referred to as a "FIFO boundary interrupt"). Generally, this interrupt is used for PIO (Programmed Input/Output) transfer.
- the applicant of the present application discloses a technique for operating a plurality of OSs on a single multi-core CPU in Japanese Patent Application No. 2009-190103 filed earlier.
- With this technique, when an I2S device is shared by the plurality of OSs, even if the OS that controls the I2S device becomes inoperable due to a kernel panic or freeze, another OS can control the I2S device. This makes it possible to prevent sound interruption with a simple configuration.
- FIG. 9A shows the flow of audio data and processing.
- The OS on the main system side operates using a DMA transfer completion interrupt as a trigger, and performs the audio mixing processing of the audio data and the DMA transfer request processing. The audio data after the audio mixing processing is thereby DMA-transferred to the FIFO.
- a plurality of OSs receive a DMA (Direct Memory Access) transfer interrupt.
- the OS on the standby side operates using a DMA transfer completion interrupt as a trigger, and sets an HW (Hardware) timer.
- The OS on the main system side performs the audio mixing processing of the audio data and the DMA transfer request processing, and then cancels the HW timer.
- If the OS on the main system side becomes inoperable, the HW timer times out without being canceled. The processing is therefore switched from the OS on the main system side to the OS on the standby system side, and the audio mixing processing and the DMA transfer request processing are continued. This prevents the FIFO buffer from becoming empty and causing sound interruption.
- The standby-side OS that detects the inoperability of the primary-side OS continuously performs the audio mixing processing and the like on behalf of the primary-side OS.
- A processing method of the DMA transfer interrupt thread that performs the audio mixing processing and the like upon reception of a DMA transfer completion interrupt can be selected.
- One of the processing methods, pattern A or pattern B, is selected according to the magnitude relationship between the audio mixing processing time and the DMA transfer interval (≈ the I2S underrun error occurrence time).
- the audio mixing processing time and the DMA transfer interval can be calculated from the specifications of the mounting environment.
- the specifications of the mounting environment are, for example, the number of FIFO buffers of the I2S device, the audio sampling frequency, the DMA transfer size, and the CPU clock.
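As an illustration, the DMA transfer interval can be sketched as the playback duration of one DMA transfer, derived from the transfer size, the audio format, and the sampling frequency. The sketch below assumes 16-bit stereo PCM; the concrete figures are illustrative, not fixed by the patent.

```python
# Sketch: deriving the DMA transfer interval from the mounting-environment
# specifications. One DMA transfer carries a fixed number of audio frames,
# and the FIFO drains them at the sampling frequency, so the interval is
# the playback time of one transfer. 16-bit stereo PCM is assumed.

def dma_transfer_interval_sec(transfer_size_bytes, sampling_hz,
                              channels=2, bytes_per_sample=2):
    frames = transfer_size_bytes // (channels * bytes_per_sample)
    return frames / sampling_hz

# 4096-byte transfers of 16-bit stereo audio sampled at 48000 Hz:
# 1024 frames per transfer, i.e. an interval of 1024/48000 s (about 21.3 ms).
```

The audio mixing processing time would similarly be bounded from the CPU clock and the per-sample mixing cost, which depend on the concrete implementation.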
- FIG. 9B shows the relationship between the audio mixing processing time and the I2S underrun error occurrence time under the selection conditions of patterns A and B.
- pattern A is selected.
- a DMA transfer start request process is performed following the audio mixing process.
- This is because, even if the HW timer is not canceled after the audio mixing processing by the main OS and the OS is switched to the standby OS, the DMA transfer can still be performed before the I2S underrun error occurrence time elapses once the standby OS has performed the audio mixing processing.
- pattern B is selected.
- the audio mixing process is performed following the DMA transfer start request process.
- This is because, when this condition is met, if the HW timer were not canceled after the audio mixing processing by the main OS and the OS were switched to the standby OS, the DMA transfer performed after the standby OS's audio mixing processing would take place only after the I2S underrun error occurrence time has elapsed.
- In pattern B, the DMA transfer start request processing is performed before the audio mixing processing. Therefore, as shown in FIG. 10, the audio data is prepared by prefetching one packet (one DMA transfer size); that is, the audio data is double buffered. Because one packet of audio data is prefetched in pattern B, when a video is played with a video player that performs image-sound synchronization, the audio data actually being played and the audio data that the video player recognizes as being played are shifted by one packet, which lowers the accuracy of the image-sound synchronization.
- FIG. 11 illustrates a case where the OS on the main system side is OS 72 and the OS on the standby system side is OS 71.
- The OS 71 has a mixed sample counter 712.
- The OS 72 has a mixed sample counter 722.
- an application program (hereinafter, abbreviated as “APP” in the figure) 721 that operates on the OS 72 on the main system side enqueues audio data into the sound queue 82.
- the application program 721 reproduces a moving image having a sampling frequency of 48000 Hz and including 16-bit stereo sound.
- the application program 721 sends out audio data, for example, in units of 4096 bytes and at intervals of (1024/48000) seconds.
- the software mixer 723 operating on the OS 72 on the main system side mixes the audio data dequeued from the sound queues 81 and 82.
- the software mixer 723 stores the audio data after mixing in the DMA transfer buffer 73 as audio data for DMA transfer.
- the audio data stored in the DMA transfer buffer 73 is transferred to the FIFO 74 of the I2S device.
- the audio data dequeued from the FIFO 74 is reproduced by the ADAC 75.
- The software mixer 723 converts audio data having different sampling frequencies and quantization bit depths output from a plurality of application programs into a single stream of audio data. Each time it mixes audio data, the software mixer 723 increments the counter value of the mixed sample counter 722 by the number of samples generated by the mixing.
- The application program 721 operating on the OS 72 on the main system side refers to the counter value of the mixed sample counter 722 to calculate the transmission interval of the audio data and to synchronize image and sound. That is, when reproducing a moving image with image-sound synchronization, the application program 721 reproduces the image corresponding to the number of audio data samples that have been mixed. Therefore, with a processing method in which the audio data is prefetched by one packet and mixed, as in pattern B, the reproduced image deviates greatly from the sound actually being output, and the accuracy of the image-sound synchronization is lowered.
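The deviation caused by one-packet prefetching can be sketched numerically. All names and figures below are illustrative assumptions, not values from the patent.

```python
# Sketch of image-sound synchronization via the mixed sample counter.
# With one-packet prefetching (pattern B), the counter runs one DMA transfer
# size ahead of the audio actually being reproduced, so the video position
# derived from the counter deviates by that packet's playback time.

SAMPLING_HZ = 48000      # assumed sampling frequency
PACKET_SAMPLES = 1024    # assumed samples per DMA transfer (one packet)

def video_position_sec(mixed_sample_counter):
    # The video player treats the counter as "samples already played".
    return mixed_sample_counter / SAMPLING_HZ

def actual_audio_position_sec(mixed_sample_counter, prefetched_packets):
    # With prefetching, mixing runs ahead of the audio actually output.
    return (mixed_sample_counter - prefetched_packets * PACKET_SAMPLES) / SAMPLING_HZ

counter = 48000  # one second of audio mixed so far
# Pattern A (no prefetch) gives zero deviation; pattern B (one packet
# prefetched) gives a deviation of 1024/48000 s, roughly 21 ms.
deviation = video_position_sec(counter) - actual_audio_position_sec(counter, 1)
```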
- An object of the present invention is to provide a multi-core system, a control method for a multi-core system, and a program capable of preventing the occurrence of sound interruption while suppressing a decrease in the accuracy of image-sound synchronization.
- A multi-core system according to a first aspect of the present invention includes: a main system program that operates on a first processor core, mixes first and second audio data, and stores the mixed synthesized audio data in a DMA transfer buffer; a standby system program that operates on a second processor core and operates as a backup system of the main system program; and an audio output unit that sequentially stores the synthesized audio data transferred from the DMA transfer buffer and reproduces the stored synthesized audio data. If the storage amount of the synthesized audio data stored in the DMA transfer buffer has not reached a predetermined data amount determined according to the amount of the synthesized audio data stored in the audio output unit, the standby system program takes over and executes the mixing and storage of the synthesized audio data that were being executed by the main system program.
- A control method of a multi-core system according to a second aspect of the present invention is a control method for a multi-core system that includes: a main system program that operates on a first processor core, mixes first and second audio data, and stores the mixed synthesized audio data in a DMA transfer buffer; a standby system program that operates on a second processor core and operates as a standby system of the main system program; and an audio output unit that sequentially stores the synthesized audio data transferred from the DMA transfer buffer and reproduces the stored synthesized audio data. In this method, it is determined whether the storage amount of the synthesized audio data stored in the DMA transfer buffer has reached a predetermined data amount determined according to the amount of the synthesized audio data stored in the audio output unit, and if not, the standby system program takes over and executes the mixing and storage of the synthesized audio data that were being executed by the main system program.
- A program according to a third aspect of the present invention is a standby system program that operates as a standby system of a main system program on a processor core different from that of the main system program, the main system program storing, in a DMA transfer buffer, synthesized audio data obtained by mixing first and second audio data, and the DMA transfer buffer storing the synthesized audio data to be transferred to an audio output unit that sequentially stores the transferred synthesized audio data and reproduces it. The program causes the processor core to execute: determining whether the storage amount of the synthesized audio data stored in the DMA transfer buffer has reached a predetermined data amount determined according to the amount of the synthesized audio data stored in the audio output unit; and, when it is determined that the predetermined data amount has not been reached, taking over and executing the mixing and storage of the synthesized audio data that were being executed by the main system program.
- FIG. 10 is a sequence diagram for explaining audio mixing processing and DMA transfer processing in a state where the main system OS according to the embodiment of the present invention is operating normally.
- FIG. 10 is a sequence diagram for explaining audio mixing processing and DMA transfer processing in a state where the main-system OS according to the embodiment of the present invention becomes inoperable.
- FIG. 1 is a block diagram showing an outline of the hardware configuration of the multi-core system according to the present embodiment.
- the multi-core system 2 includes processor cores 61 and 62, a DMA transfer buffer 63, and an audio output unit 64.
- the processor core 61 operates a program 610 that operates as a main system.
- the processor core 62 operates a program 620 that operates as a standby system.
- The DMA transfer buffer 63 stores the synthesized audio data mixed by the programs 610 and 620.
- The audio output unit 64 sequentially stores the synthesized audio data transferred from the DMA transfer buffer 63 and reproduces the stored synthesized audio data.
- The program 610 mixes the first and second audio data and stores the mixed synthesized audio data in the DMA transfer buffer 63.
- The program 610 is a program that operates as the main system.
- The program 620 operates as a standby system for the main system program.
- The main system program 610 mixes the first and second audio data and stores the mixed synthesized audio data in the DMA transfer buffer 63.
- When the synthesized audio data stored in the DMA transfer buffer 63 reaches a certain amount, it is transferred to the audio output unit 64.
- The audio output unit 64 sequentially stores the synthesized audio data transferred from the DMA transfer buffer 63 and reproduces the stored synthesized audio data.
- The standby system program 620 determines whether the amount of synthesized audio data stored in the DMA transfer buffer 63 has reached a predetermined data amount determined according to the amount of synthesized audio data stored in the audio output unit 64. If the predetermined data amount has not been reached, the standby system program 620 takes over and executes the mixing and storage of the synthesized audio data that were being executed by the main system program 610.
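The takeover decision just described can be sketched as a simple comparison. The names and semantics below are an illustrative reading of the text, not the patent's implementation.

```python
# Sketch of the standby-side takeover decision: the standby program compares
# the amount of synthesized audio data staged in the DMA transfer buffer
# against a threshold determined from the data remaining in the audio output
# unit. Below the threshold, the main program has fallen behind and the
# standby program must take over mixing and storage to avoid an underrun.

def standby_should_take_over(staged_samples, threshold_samples):
    """True when the standby program must take over mixing and storage."""
    return staged_samples < threshold_samples

# If only 100 samples are staged but (say) 512 are needed, the standby
# program takes over; if 1024 are staged, the main program's output suffices.
```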
- FIG. 2 is a block diagram showing a hardware configuration of the multi-core system according to the present embodiment.
- the multi-core system 1 includes a built-in MPU 10, an audio output device 20, and an SDRAM (Synchronous Dynamic Random Access Memory) 30.
- the SDRAM functions as an external shared memory.
- the MPU 10 is an integrated circuit (IC) in which a multi-core CPU and peripheral devices (I2S device 13, I2C device 14, and DMAC 15) are integrated into one chip.
- Each of the CPUs 11 and 12 includes one or a plurality of CPU cores.
- the CPU 11 includes one CPU core
- the CPU 12 includes three CPU cores.
- the CPUs 11 and 12 may be composed of a plurality of multi-core CPUs.
- the CPU core corresponds to the processor cores 61 and 62.
- the MPU 10 operates a plurality of OSs on the multi-core CPU.
- the OS 110 operates on the CPU 11.
- the OS 120 operates on the CPU 12.
- the OS 110 and the OS 120 are different types of OSs.
- For example, the OS 110 is a real-time OS such as μITRON, and the OS 120 is a high-function embedded OS.
- the high function embedded OS is, for example, embedded Linux (registered trademark) or Windows® CE (registered trademark).
- On the OS 110, an application program 111 (abbreviated as "APP" in the drawing) and a sound driver 112, which is a device driver, operate. Audio data output from the application program 111 is input to the sound driver 112.
- On the OS 120, application programs 121, 122, and 123 (each abbreviated as "APP" in the drawing), a sound server 124, and a sound driver 125, which is a device driver, operate.
- the audio data output from the application program 121 and the application program 122 is input to the sound server 124.
- the sound data output from the sound server 124 and the sound data output from the application program 123 are input to the sound driver 125.
- The I2S device 13 transmits audio data to the ADAC & AMP 21 via the I2S bus.
- the I2S device 13 is a one-system device.
- the I2S device 13 includes a FIFO 131.
- the I2S device 13 stores audio data in the FIFO 131.
- the I2S device 13 handles stereo PCM as audio data.
- The I2C device 14 is an interface device for the device-control serial bus.
- the I2C device 14 is used for reading from and writing to a register included in the ADAC.
- A DMAC (DMA Controller) 15 controls DMA transfer between the SDRAM 30 connected to the outside of the MPU 10 and other devices.
- audio data is transferred from the inter-OS shared memory 40 on the SDRAM 30 to the FIFO 131 of the I2S device 13 using one channel of the DMAC 15.
- the sound driver 125 on the OS 120 side normally controls audio mixing processing and peripheral devices (I2S device 13, I2C device 14, and DMAC 15).
- the audio output device 20 includes an ADAC & AMP 21 and a speaker 22.
- the ADAC & AMP 21 constitutes an external interface of the audio output device 20.
- the ADAC & AMP 21 converts and amplifies audio data transmitted via the I2S bus into an analog signal.
- the ADAC & AMP 21 reproduces an analog signal using the speaker 22.
- the I2S device 13 and the audio output device 20 correspond to the audio output unit 64.
- the SDRAM 30 is a volatile memory (RAM) connected to the outside of the MPU 10 via a bus.
- a memory space shared by the OS of the MPU 10 is secured as the OS shared memory 40.
- a sound queue (sound queue 41, sound queue 42, sound queue 43) and a DMA transfer buffer 44 are set.
- The sound queue is a ring buffer that stores audio data output by an application program.
- As many sound queues are created as there are application programs that output audio data.
- Three sound queues (sound queue 41, sound queue 42, and sound queue 43) are created, corresponding to the application program 111 of the OS 110, the sound server 124 of the OS 120, and the application program 123 of the OS 120.
- The audio data of the sound queues (sound queues 41, 42, and 43) are subjected to audio mixing processing and then stored in the DMA transfer buffer 44.
- The sound queue may be configured using queuing means other than a ring buffer.
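A ring-buffer sound queue of the kind described above can be sketched as follows. This is an illustrative structure (fixed capacity, enqueue by the application, dequeue by the mixer), not the patent's implementation.

```python
# Minimal ring-buffer sound queue sketch: a fixed-capacity circular buffer.
# An application program enqueues audio data; the software mixer dequeues it.

class SoundQueue:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0   # index of the next element to dequeue
        self.count = 0  # number of elements currently stored

    def enqueue(self, sample):
        if self.count == self.capacity:
            return False  # queue full; the caller must wait or drop data
        self.buf[(self.head + self.count) % self.capacity] = sample
        self.count += 1
        return True

    def dequeue(self):
        if self.count == 0:
            return None  # queue empty
        sample = self.buf[self.head]
        self.head = (self.head + 1) % self.capacity
        self.count -= 1
        return sample
```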
- FIG. 3 is a block diagram showing a functional configuration of the multi-core system according to the present embodiment.
- the sound driver 112 shown in FIG. 2 is divided into a higher application I / F unit 113 and a driver core unit 114. Further, the sound driver 125 is divided into a higher application I / F unit 126 and a driver core unit 127.
- the upper application I / F unit 113 has a sampling rate conversion function 116.
- the upper application I / F unit 126 has a sampling rate conversion function 130.
- the upper application I / F unit 113 stores the audio data output from the application program 111 in the sound queue 41 in the inter-OS shared memory 40.
- the sampling rate conversion function 116 of the upper application I / F unit 113 performs sampling rate conversion and quantization bit number conversion on the audio data received from the application program 111 as necessary.
- the upper application I / F unit 113 stores the converted audio data in the sound queue 41 in the inter-OS shared memory 40.
- the upper application I / F unit 126 stores the audio data output from the sound server 124 and the application program 123 in the sound queues 42 and 43 in the inter-OS shared memory 40.
- The sampling rate conversion function 130 of the upper application I/F unit 126 performs sampling rate conversion and quantization bit number conversion on the audio data received from the sound server 124 and the application program 123 as necessary.
- the upper application I / F unit 126 stores the converted audio data in the sound queues 42 and 43 in the inter-OS shared memory 40.
- The sampling rate conversion functions 116 and 130 convert, for example, 48 kHz 24-bit sound into 44.1 kHz 16-bit sound.
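The kind of conversion involved can be sketched with linear-interpolation resampling and bit-depth reduction. Real drivers typically use polyphase filtering; this is only a minimal illustration, not the patent's method.

```python
# Sketch of sampling rate conversion (linear interpolation) and quantization
# bit-depth reduction, as performed by a sampling rate conversion function.

def resample_linear(samples, src_hz, dst_hz):
    """Resample a mono sample list from src_hz to dst_hz by linear interpolation."""
    if not samples:
        return []
    out = []
    ratio = src_hz / dst_hz
    n = 0
    while True:
        pos = n * ratio          # fractional source position of output sample n
        i = int(pos)
        if i >= len(samples) - 1:
            break
        frac = pos - i
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        n += 1
    return out

def requantize_24_to_16(sample_24bit):
    # Truncate a 24-bit sample to 16 bits by dropping the low 8 bits
    # (real converters usually dither before truncating).
    return sample_24bit >> 8
```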
- the driver core units 114 and 127 mix audio data stored in each sound queue of the inter-OS shared memory 40.
- the driver core units 114 and 127 transfer the audio data after the audio mixing process to the FIFO 131 of the I2S device 13 using DMA transfer.
- A functional part that is part of the sound drivers 112 and 125 and that controls the audio mixing processing and the I2S device 13 and I2C device 14 is defined as the driver core units 114 and 127.
- The audio mixing processing refers to processing in which audio data output from a plurality of application programs running on each OS is mixed into a single stream of audio data and copied to the DMA transfer area (DMA transfer buffer 44).
- the driver core unit of one OS (main system) among a plurality of OSs operates during system operation.
- the other driver core units of the OS (standby system) are normally in a standby state.
- the standby system operates only when the OS on the main system side performs a kernel panic or freeze.
- the main system refers to a system having a driver core unit that operates in a normal state.
- the standby system refers to a system having a driver core unit that operates when the main system becomes inoperable.
- the main system and the standby system are the main system and the standby system for the sound reproduction function (part of the function of the sound driver), and are not the main system OS and the standby system OS. That is, the OS on the standby system side refers to an OS having a function as a backup system related to the sound reproduction function.
- the software mixer 128 is a function called by the driver core unit 127.
- The software mixer 128 mixes the audio data stored in each sound queue.
- the software mixer 128 stores the audio data after mixing in the DMA transfer buffer 44.
- the common interrupt control unit 16 allocates an interrupt request (IRQ) to each CPU core.
- the common interrupt control unit 16 is basically provided as a hardware function in the MPU 10. If the common interrupt control unit 16 is not installed as a hardware function in the MPU 10, it can be implemented as a software function.
- The interrupt requests input to the common interrupt control unit 16 are a DMA transfer completion interrupt, issued by the DMAC 15 when the DMA transfer of audio data is completed, and a FIFO boundary interrupt, issued by the I2S device 13. If no DMA transfer request is made within a certain time after the DMA transfer completion interrupt occurs, the FIFO 131 of the I2S device 13 becomes empty and an I2S underrun error occurs.
- the driver management unit 115 is a function group called by the interrupt handler of the OS on the standby side.
- the driver management unit 115 performs a switching process for setting the HW timer 17, setting the FIFO boundary interrupt ON / OFF, and switching the operation using the main driver core unit to the operation using the standby driver core unit.
- the HW timer 17 refers to a hardware timer provided in the MPU 10.
- the plurality of OSs 110 and 120 receive a DMA transfer completion interrupt.
- the received DMA transfer completion interrupt reaches the DMA transfer completion interrupt thread 129.
- The DMA transfer completion interrupt thread 129 operates using the DMA transfer completion interrupt of the audio data as a trigger, and performs the audio mixing processing of the audio data and the DMA transfer request processing.
- the audio data after the audio mixing process stored in the DMA transfer buffer 44 is DMA-transferred from the SDRAM 30 to the FIFO 131.
- Either pattern A or pattern B is selected based on the relationship between the audio mixing processing time and the I2S underrun error occurrence time in the OS 120 on the main system side.
- pattern A is selected.
- The processing is performed in the order of the audio mixing processing followed by the DMA transfer start request processing. That is, in this case, the audio mixing processing is performed prior to the DMA transfer start request processing.
- pattern B is selected.
- the processing is performed in the order of performing the audio mixing processing following the DMA transfer start request processing. That is, in this case, a DMA transfer start request process is performed prior to the audio mixing process.
- a double buffering process for audio data is required.
- When the selection condition "audio mixing processing time < DMA transfer interval ≤ (audio mixing processing time × 2)" is satisfied, pattern B is selected; when the selection condition "DMA transfer interval > (audio mixing processing time × 2)" is satisfied, pattern A is selected. That is, the selection conditions for pattern B are exactly "audio mixing processing time < DMA transfer interval" and "DMA transfer interval ≤ (audio mixing processing time × 2)".
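The pattern selection rule described above can be sketched as a small function. The reading of the conditions here is a reconstruction from the text; the time units are arbitrary.

```python
# Sketch of pattern selection. Pattern B needs double buffering, so it is
# chosen only when one DMA transfer interval cannot accommodate two mixing
# passes; pattern A is chosen when it can.

def select_pattern(mixing_time, dma_interval):
    """dma_interval approximates the I2S underrun error occurrence time."""
    if dma_interval > 2 * mixing_time:
        return "A"  # mix first, then request the DMA transfer
    if mixing_time < dma_interval:
        return "B"  # request the DMA transfer first, then mix (double buffered)
    raise ValueError("mixing cannot keep up with the DMA transfer interval")

# e.g. mixing 5 ms against a 11 ms interval -> "A"; against 9 ms -> "B".
```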
- the monitoring function of the OS 120 on the main system side of the multi-core system according to the embodiment of the present invention will be described.
- the case where pattern A is selected under the selection condition "DMA transfer interval > (voice mixing processing time × 2)" and the case where pattern B is selected will be described.
- the driver management unit 115 operated by the interrupt handler included in the standby-side OS 110 sets the HW timer 17.
- the DMA transfer completion interrupt reaches the DMA transfer completion interrupt thread 129 of the OS 120 on the primary system side
- the DMA transfer completion interrupt thread 129 releases the set HW timer 17.
- the set HW timer 17 times out without being released. Therefore, it becomes possible to monitor the inoperability of the OS 120 on the main system side based on the presence or absence of timeout of the HW timer 17.
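The timeout-based monitoring described above behaves like a watchdog: the standby side arms the timer, the main side's DMA transfer completion interrupt thread releases it, and an expiry with no release signals that the main-system OS is inoperable. The following is a minimal software model of that protocol; all names are assumptions, not APIs from the patent.

```c
#include <stdbool.h>

/* Illustrative model of the HW timer 17 watchdog. */
typedef struct {
    bool armed;
    bool expired;
} hw_timer;

/* Standby side arms the timer when setting it. */
static void timer_arm(hw_timer *t)     { t->armed = true;  t->expired = false; }

/* Main side's DMA transfer completion interrupt thread releases it. */
static void timer_release(hw_timer *t) { t->armed = false; }

/* Called when the timeout period elapses. */
static void timer_tick_timeout(hw_timer *t)
{
    if (t->armed)
        t->expired = true;   /* never released: main side did not run */
}

static bool main_os_inoperable(const hw_timer *t) { return t->expired; }
```

In the normal case the release happens before the timeout and no expiry is recorded; in the failure case the expiry flag lets the standby side detect the inoperable main-system OS.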
- the driver management unit 115 operated by the interrupt handler included in the OS 110 on the standby side permits reception of the FIFO boundary interrupt of the I2S device 13.
- the driver management unit 115 determines whether the number of audio data samples stored in the DMA transfer buffer 44 has reached a predetermined threshold calculated from the number of audio data samples stored in the FIFO 131. As a result of the determination, if the number of audio data samples has not reached the threshold, the driver management unit 115 performs a switching process that switches the operation from the main driver core unit 127 to the spare driver core unit 114.
- the predetermined threshold is a reference value for determining whether the DMA transfer buffer 44 stores enough audio samples for the remaining audio mixing processing and DMA transfer request processing to complete before all the remaining audio data stored in the FIFO 131 is dequeued. That is, if the number of samples of audio data stored in the DMA transfer buffer 44 has reached the predetermined threshold, an I2S underrun error will not occur even if the remaining audio mixing processing and DMA transfer request processing are executed from that point.
- the number of samples of audio data stored in the FIFO 131 can be specified by referring to a register of the FIFO 131, for example. Also, the size of the audio data dequeued from the FIFO 131 can be determined by the number of times the FIFO boundary interrupt has occurred. Furthermore, the number of samples of audio data dequeued from the FIFO 131 can be known from the size. For this reason, the number of samples of audio data stored in the FIFO 131 may be calculated from the number of times that the FIFO boundary interrupt has occurred.
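The interrupt-count-based calculation above can be sketched as follows, using the concrete parameters of the FIG. 4 example (64 FIFO stages, one sample per stage, a boundary interrupt every 8 stages dequeued). These constants are the example's assumptions, not fixed by the invention.

```c
/* Sketch: derive the samples remaining in the FIFO from the number of
 * FIFO boundary interrupts since the DMA transfer completion interrupt. */
enum {
    FIFO_DEPTH_SAMPLES    = 64, /* 64 stages, one sample per stage */
    BOUNDARY_UNIT_SAMPLES = 8   /* interrupt every 8 stages dequeued */
};

static unsigned fifo_samples_remaining(unsigned boundary_irq_count)
{
    unsigned dequeued = boundary_irq_count * BOUNDARY_UNIT_SAMPLES;
    return dequeued >= FIFO_DEPTH_SAMPLES
               ? 0
               : FIFO_DEPTH_SAMPLES - dequeued;
}
```

Reading the FIFO register directly, as the text also allows, would replace this derivation entirely.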
- the standby driver core unit 114 takes over and executes the audio mixing process performed halfway by the driver core unit 127 on the main system side.
- the spare driver core unit 114 can specify the number of audio data samples mixed by the main driver core unit 127 by referring to the number of audio data samples stored in the DMA transfer buffer 44. As a result, it is possible to specify the audio data from which the takeover should start among the audio data stored in the sound cues 41, 42, and 43.
- the number of samples of audio data stored in the DMA transfer buffer 44 may be specified by having the software mixer 128 store the value in the inter-OS shared memory 40, or by counting the number of samples of the audio data stored in the DMA transfer buffer 44.
- FIG. 4 is a diagram illustrating the relationship between the number of FIFO boundary interrupts and the threshold value.
- FIG. 4 shows an example of the threshold value determined when the audio sampling frequency is approximately 48000 Hz, the I2S underrun error occurrence time is 1200 μs, the audio mixing processing time is 700 μs, and the DMA transfer unit is 1024 samples.
- the value of approximately 48000 Hz is a frequency used for audio output in MPEG (Moving Picture Experts Group) moving images.
- the FIFO 131 has 64 stages, and a case is exemplified where 150 μs is required to transmit 8 stages of data (approximately 18.75 μs for the transmission of one stage).
- one sample of audio data is stored in one stage.
- the size of one sample of the mixed audio data is 4 bytes. That is, the case where the FIFO boundary interrupt occurs every time the audio data stored in the FIFO 131 decreases by 32 bytes (4 bytes × 8 stages) is illustrated.
- the "FIFO boundary interrupt count" in FIG. 4 indicates the number of interrupts counted from the reception of the DMA transfer completion interrupt.
- the "time when the FIFO boundary interrupt occurs" indicates the elapsed time from the reception of the DMA transfer completion interrupt.
- the threshold value can be calculated from the number of samples of audio data stored in the FIFO 131 and the audio sampling frequency.
- the threshold value is “1024”.
- the driver management unit 115 can specify the threshold by referring to, for example, a calculation formula that calculates the threshold from the number of remaining samples in the FIFO 131 and the audio sampling frequency, or a table, as shown in FIG. 4, that maps the number of remaining samples in the FIFO 131 to the threshold.
- this calculation formula or table can be referred to by the driver management unit 115 by being held, for example, in the driver management unit 115 itself or in the inter-OS shared memory 40.
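A table held this way could look like the following sketch. The entry values here are placeholders for illustration only, not the actual FIG. 4 values; the structure simply shows how remaining FIFO samples map to the required DMA-transfer-buffer sample count.

```c
#include <stddef.h>

/* Hypothetical FIG. 4-style threshold table: remaining FIFO samples ->
 * required number of samples in the DMA transfer buffer 44. */
struct threshold_entry {
    unsigned fifo_remaining; /* samples left in the FIFO 131 */
    unsigned threshold;      /* required DMA transfer buffer samples */
};

static const struct threshold_entry table[] = {
    { 64, 0 },    /* FIFO still full: nothing required yet (placeholder) */
    { 56, 0 },    /* placeholder */
    { 48, 1024 }, /* near the underrun time: full DMA unit (placeholder) */
    { 40, 1024 }, /* placeholder */
};

static unsigned threshold_for(unsigned fifo_remaining)
{
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (table[i].fifo_remaining == fifo_remaining)
            return table[i].threshold;
    return 1024; /* default: require a full 1024-sample DMA unit */
}
```

The equivalent formula-based variant would compute the threshold from the remaining sample count and the sampling frequency instead of looking it up.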
- FIG. 5 is a sequence diagram for explaining audio mixing processing and DMA transfer processing in a state where the main system OS 120 is operating normally.
- the FIFO boundary interrupt is output to the driver management unit 115 via the common interrupt control unit 16 as shown in FIG. 3, but this point is omitted in FIGS.
- the common interrupt control unit 16 transmits an interrupt request (IRQ) to a plurality of CPU cores (S101).
- the driver management unit 115 operating on the standby side OS 110 outputs a FIFO boundary interrupt start request to the I2S device 13 (S102).
- the I2S device 13 starts outputting the FIFO boundary interrupt.
- the driver management unit 115 acquires the number of samples of the audio data stored in the DMA transfer buffer 44.
- the driver management unit 115 determines whether or not the acquired number of samples has reached the threshold (S103).
- here, because the main driver core unit 127 is operating normally and performing the audio mixing processing, the driver management unit 115 does not switch the operation to the standby driver core unit 114.
- the driver core unit 127 on the main system side outputs a FIFO boundary interrupt stop request to the I2S device when the audio mixing process ends normally (S104).
- the I2S device 13 stops the output of the FIFO boundary interrupt.
- FIG. 6 is a sequence diagram for explaining the audio mixing processing and the DMA transfer processing in a state where the OS 120 on the main system side has become inoperable. Note that the processing in S201 and S202 shown in FIG. 6 is the same as the processing in S101 and S102 shown in FIG. 5.
- when receiving the output of the FIFO boundary interrupt, the driver management unit 115 acquires the number of samples of the audio data stored in the DMA transfer buffer, and determines whether the acquired number of samples has reached the threshold (S203). In FIG. 6, the OS 120 on the main system side has become inoperable, so the audio mixing processing is not performed and the number of samples does not reach the threshold. If the number of samples has not reached the threshold, the driver management unit 115 switches the operation to the standby driver core unit 114 because the main driver core unit 127 is not operating normally. Specifically, the driver management unit 115 requests the standby driver core unit 114 to start the audio mixing processing, and as a result the standby driver core unit 114 starts the audio mixing processing.
- the standby driver core unit 114 outputs a FIFO boundary interrupt stop request to the I2S device after switching the operation (S204).
- the I2S device 13 stops the output of the FIFO boundary interrupt.
- the spare driver core unit 114 takes over the audio mixing processing of the audio data from the point where the OS 120 on the main system side becomes inoperable.
- a parameter front2 is added to each sound cue (ring buffer) in addition to general ring buffer parameters (front, rear).
- front is a dequeuing parameter used by the driver core unit 127 on the main system side.
- rear is an enqueue parameter used by the upper application I / F units 126 and 113 of each OS.
- front2 is a dequeue parameter used by the driver core unit 114 on the standby side.
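The three parameters above can be sketched as a ring-buffer structure. This is an illustrative sketch, with assumed names and an assumed cue size; only the roles of front, rear, and front2 come from the text.

```c
/* Minimal sketch of a sound cue (ring buffer) with the front2 parameter. */
#define CUE_SIZE 4096 /* samples; assumed, power of two for the modulo */

typedef struct {
    unsigned front;  /* dequeue index used by the main-side driver core */
    unsigned rear;   /* enqueue index used by the upper application I/F */
    unsigned front2; /* dequeue index advanced only after DMA transfer */
    int samples[CUE_SIZE];
} sound_cue;

/* Main side dequeues samples for mixing: advances front only. */
static void cue_dequeue(sound_cue *q, unsigned nsamples)
{
    q->front = (q->front + nsamples) % CUE_SIZE;
}

/* After the DMA transfer of the mixed data actually completes,
 * front2 catches up by the number of transferred samples. */
static void cue_commit_dma(sound_cue *q, unsigned nsamples)
{
    q->front2 = (q->front2 + nsamples) % CUE_SIZE;
}
```

Because front2 lags front until the DMA transfer is confirmed, a crash between mixing and transfer leaves front2 pointing at data that is known to have reached the hardware, which is what makes the takeover position recoverable.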
- the parameter front2 is used to maintain the consistency of the audio data when the main system OS 120 becomes inoperable during the audio mixing processing. For example, while the standby OS 110 is waiting, the main driver core unit 127 counts up front2 by the amount of the DMA transfer, using the first full interrupt of the FIFO 131 of the I2S device 13 after the DMA transfer start request as a trigger.
- the method of counting front2 is not limited to this, and a simpler method may be used. For example, a configuration may be adopted in which the DMA transfer completion interrupt thread on the main system side counts front2 at the exit of the thread. In that case, the full interrupt of the FIFO 131 of the I2S device 13 may be disabled.
- the DMA transfer completion interrupt thread 129 dequeues audio data from the position indicated by using front (portion indicated by hatching in FIG. 7). This dequeue process is counted using the front. That is, every time the audio data is dequeued, the front is counted by the number of samples of the dequeued audio data.
- the driver core unit 127 on the main system side performs audio mixing processing of a plurality of audio data output from the sound queue using the called software mixer function.
- the audio data after the audio mixing process stored in the DMA transfer buffer 44 is DMA-transferred from the SDRAM 30 to the FIFO 131 of the I2S device 13.
- the number of DMA-transferred samples is counted using front2. That is, every time a DMA transfer is performed, front2 is counted up by the number of samples of the audio data transferred by DMA.
- front and front2 start from the same position, and thereafter front2 changes following front.
- when the OS 120 on the main system side becomes inoperable, the dequeuing of the audio data stops halfway and the counting of front stops.
- since the DMA transfer is not performed when the OS 120 on the main system side becomes inoperable, the counting of front2 also stops and the position of front2 is not updated.
- assume that the switched driver core unit 114 on the standby system side, using the called software mixer function, starts the audio data dequeue from the position indicated by front. In this situation, audio data that may have been lost would be dequeued, and the consistency of the audio data could not be guaranteed. Therefore, when the OS 120 on the main system side becomes inoperable and front and front2 have stopped, the switched driver core unit 114 on the standby system side, using the called software mixer function, starts dequeuing the audio data from the position indicated by front2. In this way, dequeuing can be restarted from the audio data processed before the main system OS 120 became inoperable, so the consistency of the audio data before and after switching can be guaranteed.
- in this case, the DMA transfer buffer 44 is referred to when the DMA transfer is performed, and front2 is counted up by the number of audio data samples stored in the DMA transfer buffer 44 minus one. Then, the audio data dequeue is started from the position indicated by front2.
- the audio data that may be missing is the audio data that the driver core unit 127 on the main system side stored after the last audio mixing. Therefore, in this way, the driver core unit 114 on the standby side takes over and executes the audio mixing processing from the audio data that the main driver core unit 127 last mixed and stored, and it is thus possible to guarantee the consistency of the audio data before and after switching.
- the driver core unit 114 on the standby side takes over and executes the voice mixing processing from a position in the sound queues 41, 42, and 43 that is at least one sample before the position indicated by the amount of audio data stored in the DMA transfer buffer 44.
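The takeover-position rule above can be sketched as a small helper: the standby side restarts from front2 advanced by the number of samples already stored in the DMA transfer buffer minus one, so the last, possibly incomplete, sample is re-mixed. Function and parameter names are assumptions.

```c
/* Sketch: compute the sound-queue index from which the standby-side
 * driver core resumes mixing after a takeover. */
static unsigned takeover_position(unsigned front2,
                                  unsigned dma_buffered_samples,
                                  unsigned cue_size)
{
    /* At least one sample before the position implied by the buffer
     * contents, so nothing that might be lost is skipped. */
    unsigned advance = dma_buffered_samples > 0 ? dma_buffered_samples - 1
                                                : 0;
    return (front2 + advance) % cue_size;
}
```

For example, with front2 at index 100 and 512 mixed samples already in the DMA transfer buffer, the standby side would resume from index 611 in a 4096-sample cue.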
- by taking over the voice mixing processing in this way, the total voice mixing processing time before and after switching from the main system to the standby system can be kept to almost that of a single pass. Therefore, even if pattern A is selected when the condition "voice mixing processing time < DMA transfer interval ≤ (voice mixing processing time × 2)" is satisfied, switching to the standby system can be performed without causing an I2S underrun error.
- while the driver core unit 114 on the standby system side, which has become the new main system, is performing the audio mixing processing, the driver core unit that was on the main system side must be prevented from accessing the peripheral registers. For this purpose, the base address of the peripheral device (the I2S device 13 or the DMAC 15) in the shared memory (SDRAM 30) recognized by the OS 120 on the main system side is changed by the standby-side driver management unit 115 to an invalid address, such as an unused area on the SDRAM 30. As a result, the OS 120 that was the main system before the switching can be prohibited from accessing the registers.
- as described above, audio output can be continued from the other OS side, and interruption of the sound can be prevented. At that time, even if "audio mixing processing time < DMA transfer interval ≤ (audio mixing processing time × 2)", the switching from the main system to the standby system is possible with pattern A, which does not reduce the accuracy of the image and sound synchronization. Therefore, it is possible to prevent the occurrence of sound interruption while suppressing a decrease in the accuracy of the image and sound synchronization.
- the determination as to whether switching from the main system to the standby system is necessary can be made simply by referring to the number of samples of audio data stored in the inter-OS shared memory. Therefore, the processing speed is not reduced.
- the period during which the FIFO boundary interrupt is permitted is on the order of several hundred μs within each DMA transfer cycle of several tens of ms. Therefore, there is very little impact on the performance of the standby side.
- the sound driver is separated into the upper application I/F unit and the driver core unit, and the upper application I/F unit has a sampling rate conversion function. Therefore, even if only the driver core unit on the main system side becomes inoperable, the sampling rate conversion processing in the upper application I/F unit on the main system side can be continued. Therefore, by switching the execution of the audio mixing processing to the spare driver core unit, the audio data generated by the OS on the main system side can be continuously reproduced.
- the present invention is not limited to the above-described embodiment, and can be modified as appropriate without departing from the spirit of the present invention.
- the MPU 10 may include three or more OSs. In such a case, there are a plurality of OSs on the standby side.
- Non-transitory computer readable media include various types of tangible storage media.
- Examples of non-transitory computer-readable media include magnetic recording media (e.g., flexible disks, magnetic tapes, hard disk drives), magneto-optical recording media (e.g., magneto-optical disks), CD-ROM (Read Only Memory), CD-R, CD-R/W, and semiconductor memory (e.g., mask ROM, PROM (Programmable ROM), EPROM (Erasable PROM), flash ROM, and RAM (Random Access Memory)).
- the program may also be supplied to the computer by various types of transitory computer-readable media.
- Examples of transitory computer readable media include electrical signals, optical signals, and electromagnetic waves.
- a transitory computer-readable medium can supply the program to the computer via a wired communication path such as an electric wire or an optical fiber, or via a wireless communication path.
- 1 Multi-core system
- 10, 50 MPU
- 11, 12 CPU
- 13 I2S device
- 14 I2C device
- 15 DMA controller
- 16 Common interrupt control unit
- 17 HW timer
- 20 Audio output device
- 21, 51 ADAC & AMP
- 22 Speaker
- 30 SDRAM
- 40 Inter-OS shared memory
- 41, 42, 43, 81, 82 Sound queue
- 44, 63, 73 DMA transfer buffer
- 61, 62 Processor core
- 64 Audio output unit
- 71, 72, 110, 120 OS
- 74, 131 FIFO
- 75 ADAC
- 111, 121, 122, 123, 711, 721 Application
- 112, 125 Sound driver
- 113, 126 Upper application I/F unit
- 114, 127 Driver core unit
- 115 Driver management unit
- 116, 130 Sampling rate conversion function
- 124 Sound server
- 128 Software mixer
- 129 DMA transfer interrupt thread
- 610, 620 Program
- 712, 722 Mixed sample counter
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Quality & Reliability (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Bus Control (AREA)
- Communication Control (AREA)
- Circuit For Audible Band Transducer (AREA)
- Advance Control (AREA)
- Multi Processors (AREA)
Abstract
Description
The processor core 62 runs the program 620, which operates as a standby system.
The DMA transfer buffer 63 stores the synthesized audio data mixed by the programs 610 and 620.
The audio output unit 64 sequentially stores the synthesized audio data transferred from the DMA transfer buffer 63 and reproduces the stored synthesized audio data.
The program 610 mixes the first and second audio data and stores the mixed synthesized audio data in the DMA transfer buffer 63. The program 610 is a program that operates as the main system.
The program 620 operates as the standby system of the main program.
Next, an outline of the processing of the multi-core system according to the embodiment of the present invention will be described. The main program 610 mixes the first and second audio data and stores the mixed synthesized audio data in the DMA transfer buffer 63. When the synthesized audio data stored in the DMA transfer buffer 63 reaches a certain amount, it is transferred to the audio output unit 64. The audio output unit 64 sequentially stores the synthesized audio data transferred from the DMA transfer buffer 63 and reproduces the stored synthesized audio data.
Next, when the driver management unit 115 operating on the standby-side OS 110 receives an interrupt request (IRQ) from the common interrupt control unit 16, it outputs a FIFO boundary interrupt start request to the I2S device 13 (S102). Upon receiving this request, the I2S device 13 starts outputting the FIFO boundary interrupt.
Further, according to the present embodiment, when the standby side receives a FIFO boundary interrupt, the determination as to whether switching from the main system to the standby system is necessary can be made simply by referring to the number of samples of audio data stored in the inter-OS shared memory. Therefore, the processing speed is not reduced.
In addition, in the present embodiment, the period during which the FIFO boundary interrupt is permitted is on the order of several hundred μs within each DMA transfer cycle of several tens of ms. Therefore, there is very little impact on the performance of the standby side.
Claims (11)
- 1. A multicore system comprising: a main program that operates on a first processor core, mixes first and second voice data, and stores the mixed synthesized voice data in a DMA transfer buffer; a standby program that operates on a second processor core and operates as a standby system of the main program; and voice output means for sequentially storing the synthesized voice data transferred from the DMA transfer buffer and reproducing the stored synthesized voice data, wherein the standby program takes over and executes the mixing and storing of the synthesized voice data that the main program was executing when the amount of synthesized voice data stored in the DMA transfer buffer has not reached a predetermined data amount determined according to the amount of synthesized voice data stored in the voice output means.
- 2. The multicore system according to claim 1, wherein, every time the amount of synthesized voice data stored in the voice output means decreases by a predetermined unit, the standby program determines whether the amount of synthesized voice data stored in the DMA transfer buffer has reached a predetermined threshold determined according to the amount of decrease of the synthesized voice data stored in the voice output means, and, when the predetermined threshold has not been reached, takes over and executes the mixing and storing of the synthesized voice data that the main program was executing.
- 3. The multicore system according to claim 1, wherein the voice output means has a FIFO buffer into which the synthesized voice data transferred from the DMA transfer buffer is enqueued and from which the synthesized voice data to be reproduced is dequeued.
- 4. The multicore system according to claim 3, wherein the voice output means outputs a FIFO boundary interrupt to the standby program every time a predetermined unit of the synthesized voice data stored in the FIFO buffer is dequeued, and the standby program, in response to the FIFO boundary interrupt from the voice output means, determines whether the amount of synthesized voice data stored in the DMA transfer buffer has reached a predetermined threshold determined according to the amount of synthesized voice data stored in the FIFO buffer, and, when the predetermined threshold has not been reached, takes over and executes the mixing and storing of the synthesized voice data that the main program was executing.
- 5. The multicore system according to claim 4, further comprising a DMA controller that transfers the synthesized voice data stored in the DMA transfer buffer to the FIFO buffer and outputs a DMA transfer completion interrupt to the standby program when the transfer is completed, wherein the standby program, in response to the DMA transfer completion interrupt from the DMA controller, outputs to the voice output means a FIFO boundary interrupt start request that enables the output of the FIFO boundary interrupt, and the main program, when the mixing and storing of the synthesized voice data are completed, outputs to the voice output means a FIFO boundary interrupt stop request that disables the output of the FIFO boundary interrupt.
- 6. The multicore system according to any one of claims 1 to 5, wherein the main program includes a first operating system and a first sound driver that operates on the first operating system, the standby program includes a second operating system and a second sound driver that operates on the second operating system, and the second sound driver takes over and executes the mixing and storing of the synthesized voice data that the first sound driver was executing.
- 7. The multicore system according to claim 6, further comprising a first ring buffer in which the first voice data is stored and a second ring buffer in which the second voice data is stored, wherein the first sound driver sequentially acquires the first and second voice data from the first and second ring buffers, and the second sound driver, when taking over and executing the mixing and storing of the synthesized voice data, takes over and executes the mixing and storing of the first and second voice data from a position in the first and second ring buffers that is at least one sample before the position indicated by the amount of synthesized voice data stored in the DMA transfer buffer.
- 8. The multicore system according to claim 7, wherein the main program further includes a first application program that generates the first voice data, the standby program further includes a second application program that generates the second voice data, the first sound driver includes interface means for converting the sampling rate of the first voice data generated by the first application program and storing the converted data in the first ring buffer, and driver core means for executing the mixing and storing of the synthesized voice data, and the second sound driver includes interface means for converting the sampling rate of the second voice data generated by the second application program and storing the converted data in the second ring buffer, and driver core means for executing the mixing and storing of the synthesized voice data.
- 9. The multicore system according to claim 1, wherein the voice output means includes an I2S device and a voice output device including a DA converter, the I2S device has a FIFO buffer into which the synthesized voice data transferred from the DMA transfer buffer is enqueued and from which the synthesized voice data to be reproduced is dequeued, and the voice output device converts the synthesized voice data dequeued from the FIFO buffer into an analog signal and reproduces it.
- 10. A control method of a multicore system, comprising: mixing, by a main program operating on a first processor core, first and second voice data and storing the mixed synthesized voice data in a DMA transfer buffer; sequentially storing, by voice output means, the synthesized voice data transferred from the DMA transfer buffer and reproducing the stored synthesized voice data; determining, by a standby program that operates on a second processor core and operates as a standby system of the main program, whether the amount of synthesized voice data stored in the DMA transfer buffer has reached a predetermined data amount determined according to the amount of synthesized voice data stored in the voice output means; and, when it is determined that the predetermined data amount has not been reached, taking over and executing, by the standby program, the mixing and storing of the synthesized voice data that the main program was executing.
- 11. A non-transitory readable medium storing a program that operates, on a processor core different from that of a main program, as a standby system of the main program, the program causing the processor core to execute: a process of determining whether the amount of synthesized voice data stored in a DMA transfer buffer has reached a predetermined data amount determined according to the amount of synthesized voice data stored in voice output means, the DMA transfer buffer storing synthesized voice data obtained by the main program mixing first and second voice data, the synthesized voice data being transferred to the voice output means, which sequentially stores the transferred synthesized voice data and reproduces the stored synthesized voice data; and a process of taking over and executing, when it is determined that the predetermined data amount has not been reached, the mixing and storing of the synthesized voice data that the main program was executing.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/502,601 US8892230B2 (en) | 2009-11-18 | 2010-08-04 | Multicore system, control method of multicore system, and non-transitory readable medium storing program |
CN201080052247.8A CN102667745B (en) | 2009-11-18 | 2010-08-04 | Multicore system, multicore system control method and program stored in a non-transient readable medium |
JP2011541787A JP5382133B2 (en) | 2009-11-18 | 2010-08-04 | Multi-core system, control method and program for multi-core system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2009-262545 | 2009-11-18 | ||
JP2009262545 | 2009-11-18 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011061878A1 true WO2011061878A1 (en) | 2011-05-26 |
Family
ID=44059366
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2010/004911 WO2011061878A1 (en) | 2009-11-18 | 2010-08-04 | Multicore system, multicore system control method and program stored in a non-transient readable medium |
Country Status (4)
Country | Link |
---|---|
US (1) | US8892230B2 (en) |
JP (1) | JP5382133B2 (en) |
CN (1) | CN102667745B (en) |
WO (1) | WO2011061878A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2016537717A (en) * | 2013-12-23 | 2016-12-01 | インテル・コーポレーション | System-on-chip (SoC) with multiple hybrid processor cores |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5382133B2 (en) * | 2009-11-18 | 2014-01-08 | 日本電気株式会社 | Multi-core system, control method and program for multi-core system |
US9236064B2 (en) * | 2012-02-15 | 2016-01-12 | Microsoft Technology Licensing, Llc | Sample rate converter with automatic anti-aliasing filter |
JP6122135B2 (en) * | 2012-11-21 | 2017-04-26 | コーヒレント・ロジックス・インコーポレーテッド | Processing system with distributed processor |
CN104516779B (en) * | 2013-09-27 | 2020-03-24 | 联想(北京)有限公司 | System switching method and chip |
US10467696B1 (en) * | 2015-07-31 | 2019-11-05 | Integral Development Corp. | Timing mechanisms to enhance the security of online networks |
US11074921B2 (en) * | 2017-03-28 | 2021-07-27 | Sony Corporation | Information processing device and information processing method |
TWI643185B (en) * | 2017-04-26 | 2018-12-01 | 瑞昱半導體股份有限公司 | Audio processing device and method |
CN109313566B (en) * | 2017-12-27 | 2022-06-07 | 深圳前海达闼云端智能科技有限公司 | Audio playing method and device of virtual machine and mobile terminal |
JP6695955B1 (en) | 2018-11-27 | 2020-05-20 | レノボ・シンガポール・プライベート・リミテッド | Signal processing device, control method, and program |
DE102019203130A1 (en) * | 2019-03-07 | 2020-09-10 | Continental Automotive Gmbh | Seamless audio delivery in a multiprocessor audio system |
CN111258937B (en) * | 2020-01-23 | 2021-08-03 | 烽火通信科技股份有限公司 | Transmission method and system of ring type linked list DMA |
CN111338998B (en) * | 2020-02-20 | 2021-07-02 | 深圳震有科技股份有限公司 | FLASH access processing method and device based on AMP system |
CN111427817B (en) * | 2020-03-23 | 2021-09-24 | 深圳震有科技股份有限公司 | Method for sharing I2C interface by dual cores of AMP system, storage medium and intelligent terminal |
CN111427806A (en) * | 2020-03-23 | 2020-07-17 | 深圳震有科技股份有限公司 | Method for sharing serial port by dual-core AMP system, storage medium and intelligent terminal |
CN115696173A (en) * | 2022-09-14 | 2023-02-03 | 合肥杰发科技有限公司 | Chip, vehicle sound source playing method, vehicle-mounted equipment and storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006085386A (en) * | 2004-09-15 | 2006-03-30 | Sony Corp | Information processor and method, and program |
JP2006146937A (en) * | 2004-11-24 | 2006-06-08 | Toshiba Corp | Method and system for performing real-time processing of data |
Family Cites Families (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5357511A (en) * | 1993-03-22 | 1994-10-18 | Peak Audio, Inc. | Distributed processing in a digital audio mixing network |
JP3193525B2 (en) * | 1993-05-31 | 2001-07-30 | キヤノン株式会社 | Information processing device |
WO1996012255A1 (en) * | 1994-10-12 | 1996-04-25 | Technical Maintenance Corporation | Intelligent digital audiovisual playback system |
US5850628A (en) * | 1997-01-30 | 1998-12-15 | Hasbro, Inc. | Speech and sound synthesizers with connected memories and outputs |
JPH11196386A (en) * | 1997-10-30 | 1999-07-21 | Toshiba Corp | Computer system and closed caption display method |
FR2849327A1 (en) * | 2002-12-20 | 2004-06-25 | St Microelectronics Sa | Audio and video data decoding process for set-top box, involves loading portions of flow of audio and video data in buffer memories, and supplying audio and video data to audio decoder and video decoder respectively for decoding data |
US7529467B2 (en) * | 2004-02-28 | 2009-05-05 | Samsung Electronics Co., Ltd. | Storage medium recording text-based subtitle stream, reproducing apparatus and reproducing method for reproducing text-based subtitle stream recorded on the storage medium |
JP4605700B2 (en) | 2004-07-28 | 2011-01-05 | 武蔵精密工業株式会社 | Correction method of tooth trace on tooth surface of gear |
US7774512B2 (en) * | 2005-02-08 | 2010-08-10 | Sony Computer Entertainment Inc. | Methods and apparatus for hybrid DMA queue and DMA table |
CA2615471C (en) * | 2005-07-15 | 2014-10-21 | Mattel Inc. | Interactive electronic device with digital and analog data links |
US7590772B2 (en) * | 2005-08-22 | 2009-09-15 | Apple Inc. | Audio status information for a portable electronic device |
US7814166B2 (en) * | 2006-01-27 | 2010-10-12 | Sony Computer Entertainment Inc. | Methods and apparatus for virtualizing an address space |
DE102006055930A1 (en) | 2006-11-27 | 2008-05-29 | Siemens Ag | Medical image processing system for image data set of e.g. heart, of patient, has processing device with memory for storing image-data sets, where processing units of device are connected with memory indirectly and access memory |
US8037221B2 (en) * | 2008-01-16 | 2011-10-11 | International Business Machines Corporation | Dynamic allocation of DMA buffers in input/output adaptors |
JP2009190103A (en) | 2008-02-13 | 2009-08-27 | Hitachi High-Tech Control Systems Corp | Semiconductor conveyor |
US20090248300A1 (en) * | 2008-03-31 | 2009-10-01 | Sony Ericsson Mobile Communications Ab | Methods and Apparatus for Viewing Previously-Recorded Multimedia Content from Original Perspective |
CN100562864C (en) | 2008-06-03 | 2009-11-25 | 浙江大学 | A kind of implementation method of chip-on communication of built-in isomerization multicore architecture |
JP4683116B2 (en) * | 2008-11-12 | 2011-05-11 | ソニー株式会社 | Information processing apparatus, information processing method, information processing program, and imaging apparatus |
JP5099090B2 (en) * | 2009-08-19 | 2012-12-12 | 日本電気株式会社 | Multi-core system, multi-core system control method, and multi-processor |
WO2011027302A1 (en) * | 2009-09-02 | 2011-03-10 | Plurality Ltd. | Associative distribution units for a high flow-rate synchronizer/scheduler |
JP5382133B2 (en) * | 2009-11-18 | 2014-01-08 | 日本電気株式会社 | Multi-core system, control method and program for multi-core system |
2010
- 2010-08-04 JP JP2011541787A patent/JP5382133B2/en not_active Expired - Fee Related
- 2010-08-04 CN CN201080052247.8A patent/CN102667745B/en not_active Expired - Fee Related
- 2010-08-04 US US13/502,601 patent/US8892230B2/en not_active Expired - Fee Related
- 2010-08-04 WO PCT/JP2010/004911 patent/WO2011061878A1/en active Application Filing
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006085386A (en) * | 2004-09-15 | 2006-03-30 | Sony Corp | Information processor and method, and program |
JP2006146937A (en) * | 2004-11-24 | 2006-06-08 | Toshiba Corp | Method and system for performing real-time processing of data |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2016537717A (en) * | 2013-12-23 | 2016-12-01 | インテル・コーポレーション | System-on-chip (SoC) with multiple hybrid processor cores |
Also Published As
Publication number | Publication date |
---|---|
US20120221134A1 (en) | 2012-08-30 |
US8892230B2 (en) | 2014-11-18 |
CN102667745A (en) | 2012-09-12 |
JPWO2011061878A1 (en) | 2013-04-04 |
JP5382133B2 (en) | 2014-01-08 |
CN102667745B (en) | 2015-04-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5382133B2 (en) | Multi-core system, control method and program for multi-core system | |
US8719628B2 (en) | Multi-core system, method of controlling multi-core system, and multiprocessor | |
US6898723B2 (en) | Method for verifying clock signal frequency of computer sound interface that involves checking whether count value of counter is within tolerable count range | |
EP1903447B1 (en) | Audio processor, input/output processing apparatus, and information processing apparatus | |
WO2013185636A1 (en) | Method for controlling interruption in data transmission process | |
US20210109681A1 (en) | Nvme-based data writing method, apparatus, and system | |
JP2008090375A (en) | Interrupt control system and storage control system using the same | |
WO2007099613A1 (en) | Command selecting method and device, and command inputting method and device | |
CN109599133B (en) | Language audio track switching method and device, computer equipment and storage medium | |
US6427181B1 (en) | Method of and apparatus for processing information, and providing medium | |
JP2011524574A (en) | Method and system for measuring task load | |
US20060095637A1 (en) | Bus control device, arbitration device, integrated circuit device, bus control method, and arbitration method | |
US7861012B2 (en) | Data transmitting device and data transmitting method | |
JP5375650B2 (en) | Multi-core system, control method and program for multi-core system | |
US7321945B2 (en) | Interrupt control device sending data to a processor at an optimized time | |
EP1366421B1 (en) | Digital signal processor interrupt accelerator | |
TW591510B (en) | Control method for data transfer control unit | |
JP2006302343A (en) | Information recording and reproducing device | |
JP2010092493A (en) | Interface device and packet transfer method | |
JP2018520398A (en) | Improved transmission of multimedia streams | |
JP2011095966A (en) | Access controller | |
JP2008108126A (en) | Data transfer control device and bus access arbitration system therefor | |
CN113455011B (en) | System and method for data management in media devices | |
JP2005292375A (en) | Audio-reproducing device and clock-frequency control method | |
US20220374375A1 (en) | Method of operating audio subsystem for usb module, system-on-chip performing the same and method of operating system-on-chip using the same |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201080052247.8 Country of ref document: CN |
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 10831278 Country of ref document: EP Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase |
Ref document number: 13502601 Country of ref document: US |
WWE | Wipo information: entry into national phase |
Ref document number: 2011541787 Country of ref document: JP |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 10831278 Country of ref document: EP Kind code of ref document: A1 |