Specific Embodiments
To address the low hardware resource utilization and slow processing speed of the task processing and scheduling schemes used in wideband CDMA preamble detection systems of the prior art, the present invention provides a preamble detection task processing and scheduling method and device. Tasks are processed in a three-stage pipeline, so that the hardware resources for preamble loading, preamble detection computation, and detection result reporting can each serve a different task within the same time interval, raising the utilization of hardware resources and the processing capability of the preamble detection hardware. By applying a timeout-discard mechanism to tasks, the automatic fault recovery capability and robustness of the preamble detection system hardware are also improved. The present invention is elaborated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein merely explain the present invention and do not limit it.
Method Embodiment
According to an embodiment of the present invention, a preamble detection task processing and scheduling method is provided. Fig. 4 is a structural schematic diagram of the preamble detection task processing and scheduling method of the embodiment of the present invention. As shown in Fig. 4, the task request FIFO stores the task requests configured by software; the task response FIFO stores the response information that the hardware returns to the software for a specified task. The task parsing unit parses the task requests, extracts the task information, monitors task configuration errors, and saves the task state information into the task status table, which is mainly used to hold the running state information of each task. The task processing control unit generates the task-parsing enable signal and the task timing-check enable signal, thereby controlling preamble loading and preamble detection computation, and distributes or merges the signals by which the task parsing unit and the task timing check unit read and write the task status table. The task timing check unit mainly checks whether the activation time of each task is consistent with the current system time, updates the next activation time of each task whose activation time has arrived, generates the task completion deadline and other task execution information, and saves this information into the task execution queue, which holds the execution information of tasks to be executed.
The discard task FIFO stores the information of tasks discarded during the preamble loading stage or the preamble detection computation stage. The preamble loading control unit controls the loading of the 4096-chip preamble; the task pipeline control unit handles the pipelining of tasks; the preamble detection control unit handles the search window data loading and operation control during preamble detection computation; the preamble detection arithmetic unit performs the computations of preamble detection, such as correlation despreading, frequency offset compensation, and coherent accumulation. Inside the preamble detection arithmetic unit there are two preamble buffers used in ping-pong fashion, which allow preamble loading and preamble detection computation to be pipelined, as well as a search window buffer that caches the search window loaded from the antenna data storage area on each occasion. The task parameter table stores all the software-configured task parameters required by the preamble detection computation. The antenna data storage area is a block of memory, shared by different antenna data request sources, that stores the antenna data. The result reporting unit packs, for each frequency offset of each signature, up to 16 maximum ADP values together with one noise value per frequency offset, and reports the packed data to software for processing.
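By way of illustration only, one entry of the task status table described above might be modeled as follows; the field names are assumptions of this sketch, not taken from the specification.

```python
from dataclasses import dataclass

@dataclass
class TaskStatusEntry:
    """One row of the task status table (illustrative field names)."""
    task_id: int          # identifier of the preamble detection task
    activation_time: int  # system time at which the task next becomes due
    param_base: int       # base address of the task's entry in the task parameter table
    run_flag: bool        # task run flag
    aich_tx_timing: int   # AICH transmission timing, used to derive the completion deadline
    run_count: int = 0    # cumulative number of times this task has executed
```

The task parsing unit would fill in such an entry when a "create" or "update" request is parsed, and the task timing check unit would later read and rewrite the `activation_time` and `run_count` fields.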
Fig. 2 is a flow chart of the preamble detection task processing and scheduling method of the embodiment of the present invention. As shown in Fig. 2, the preamble detection task processing and scheduling method according to the embodiment of the present invention includes the following processing:
Step 201: the preamble detection task is parsed and timing-checked, the parsed task information is stored into the task status table, and each preamble detection task whose timing has arrived is sent to the task execution queue. The task execution queue is a FIFO (First In First Out) data buffer.
Specifically, step 201 includes the following processing:
Step 2011: as shown in Fig. 4, the preamble detection task is configured into the task request queue, the task request queue being a FIFO data buffer.
Step 2012: within a predetermined first period, the preamble detection tasks in the task request queue are parsed, and the resulting preamble detection task information is stored into the task status table, the preamble detection task information including: the task activation time, the task parameter storage area, the task run flag, and the AICH (Acquisition Indication Channel) transmission timing. It should be noted that the operation of step 2012, i.e. parsing the preamble detection tasks in the task request queue, can only be executed within the above first period; if a preamble detection task in the task request queue has not been parsed by the end of the current first period, processing continues when the next first period arrives. In practical applications, a guard time may be left between the first period and the second period; this time allows a task request that is still being parsed when the first period expires to have its parsing completed.
It should be noted that the first period and the second period can be understood with reference to Fig. 6. As shown in Fig. 6, m is the number of clock cycles of the task request parsing enable stage (the first period above), and n is the number of clock cycles of the task timing check enable stage (the second period above). A certain guard time is left between the m clock cycles and the n clock cycles; this time allows a preamble detection task whose parsing is still in progress when the m clock cycles expire to have its parsing completed. This guard time is omitted from Fig. 6.
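The alternation between the m-clock parsing stage, the guard time, and the n-clock timing check stage can be sketched as a simple phase selector; the cycle lengths and the explicit guard parameter here are illustrative assumptions, not values from the specification.

```python
def phase_in_cycle(sys_time, m, n, guard):
    """Map a system time to the phase of the (m + guard + n)-clock task cycle.

    The first m clocks enable task-request parsing, the guard interval lets
    a parse that straddles the boundary finish, and the last n clocks enable
    the task timing check.  All lengths are illustrative.
    """
    t = sys_time % (m + guard + n)
    if t < m:
        return "parse"
    if t < m + guard:
        return "guard"
    return "timing_check"
```

For example, with m = 8, n = 4, and a 2-clock guard, clock 8 falls in the guard interval and clock 10 starts the timing check stage.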
Step 2013: within a predetermined second period, the task activation times of the preamble detection tasks saved in the task status table are timing-checked, and the task execution information of each preamble detection task whose activation time has arrived is sent to the task execution queue. The task execution information includes the task ID, the cumulative number of executions of the task, the task completion deadline, and the task parameter storage area. It should be noted that the operation of step 2013, i.e. timing-checking the task activation times in the task status table, can only be executed within the above second period.
Step 2013 specifically includes the following processing:
Step 1: the task activation time of the corresponding preamble detection task is obtained from the task status table in a predefined order.
Step 2: whether the task activation time is consistent with the current system time is judged; if yes, step 3 is executed, otherwise step 4 is executed.
Step 3: when the task activation time is determined to be consistent with the current system time, the next activation time of the preamble detection task and the cumulative number of executions of the task are calculated, the task completion deadline is determined according to AICH_Transmission_timing, the next activation time, the cumulative number of executions, the task run flag, and the task parameter storage area are updated in the task status table, and the task execution information is sent to the task execution queue. The task execution information includes the task ID, the cumulative number of executions of the task, the task completion deadline, and the task parameter storage area.
Step 4: when the task activation time is determined to be inconsistent with the current system time, the above steps are repeated starting from step 1 until the timing check of all preamble detection tasks in the task status table is finished.
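Steps 1 to 4 above can be sketched as a single scan over the task status table; the dictionary field names and the AICH_OFFSET deadline offsets are illustrative assumptions of this sketch, not values from the specification.

```python
def timing_check(status_table, exec_queue, now, period):
    """Scan the task status table in a predefined order (steps 1-4 above).

    For each task whose activation time matches the current system time,
    bump the cumulative run count, compute the next activation time,
    derive the completion deadline from the AICH transmission timing,
    and push the task's execution info into the task execution queue.
    """
    AICH_OFFSET = {0: 7680, 1: 12800}            # hypothetical deadline offsets
    for task in status_table:                    # predefined (ascending-ID) order
        if task["activation_time"] != now:
            continue                             # step 4: check the next task
        task["run_count"] += 1                   # step 3: cumulative execution count
        task["activation_time"] = now + period   # next time the task becomes due
        exec_queue.append({
            "task_id": task["task_id"],
            "run_count": task["run_count"],
            "deadline": now + AICH_OFFSET[task["aich_tx_timing"]],
            "param_base": task["param_base"],
        })
```

Tasks whose activation time does not match are simply skipped, matching step 4's "continue with the next task" behavior.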
Step 202: a pending preamble detection task is obtained from the task execution queue, and the corresponding access preamble is loaded.
Specifically, step 202 includes the following processing:
1. When the preamble loading control unit shown in Fig. 4 is idle and at least one of the two preset ping-pong preamble buffers is idle, the task execution information of a pending preamble detection task is obtained from the task execution queue.
2. The task parameters are read from the task parameter table according to the task execution information, the base address in the antenna data storage area from which the 4096-chip preamble is to be loaded is calculated from the task parameters, and the corresponding access preamble is loaded into the idle preamble buffer.
Step 202 also includes the following processing: while the pending preamble detection task is being obtained from the task execution queue and while the corresponding access preamble is being loaded, whether the current system time is greater than or equal to the task completion deadline is judged. If yes, the current preamble detection task and access preamble are discarded, and loading of the access preamble of the next preamble detection task in the task execution queue is started; otherwise, loading of the corresponding access preamble according to the task execution information and the task parameters continues.
To summarize, in the embodiment of the present invention, the task activation time in the task status table is first compared against the current system time for consistency. If the two are consistent, the next activation time of the task and the cumulative number of executions of the task are calculated, the task completion deadline is determined according to AICH_Transmission_timing, and the deadline is sent to the task execution queue together with the remaining task execution information. If a pending task exists in the task execution queue and the preamble loading hardware resource is available, one task is read on a first-in-first-out basis and its 4096-chip preamble is loaded. If the task times out during preamble loading, the task is discarded and the preamble of the next task is loaded.
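The first-in-first-out loading with its timeout-discard rule can be sketched as follows; the dictionary field names and the buffer representation are assumptions of this sketch, and the actual 4096-chip transfer from the antenna data storage area is left as a comment.

```python
from collections import deque

def load_next_preamble(exec_queue, discard_fifo, free_buffers, now):
    """Claim an idle ping-pong preamble buffer for the first queued task
    that has not timed out (step 202 plus the timeout-discard rule).

    Tasks leave the execution queue first-in-first-out; a task whose
    completion deadline has already passed is written to the discard FIFO
    instead of being loaded.  Returns the (task, buffer) pair that was
    loaded, or None if nothing could be loaded.
    """
    while exec_queue and free_buffers:
        task = exec_queue.popleft()          # first in, first out
        if now >= task["deadline"]:          # task has already timed out
            discard_fifo.append((task["task_id"], task["run_count"]))
            continue                         # try the next queued task
        buf = free_buffers.pop()             # claim an idle preamble buffer
        # ...the hardware would now load the 4096-chip preamble into buf...
        return task, buf
    return None
```

Note that a timed-out head-of-queue task does not block the queue: it is dropped and the next task is tried within the same call.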
Step 203: after the access preamble has been loaded, the search window data of the preamble detection task is read and the preamble detection computation is carried out; at the same time, step 202 is executed, i.e. loading of the access preamble of the next pending preamble detection task in the task execution queue is started.
Specifically, step 203 includes the following processing: after the access preamble has been loaded, if the preamble detection control unit and the preamble detection arithmetic unit shown in Fig. 4 are idle, the search window data is loaded in batches, starting from the base address of the search window of this task in the antenna data storage area, into the search window buffer in the preamble detection arithmetic unit, and data is read from the preamble buffer and search window buffer corresponding to this task to carry out the preamble detection computation. Meanwhile, if the preamble loading control unit is idle and one of the two preset ping-pong preamble buffers is idle, the gaps between the batched search window loads are used to execute step 202, i.e. to start loading the access preamble of the next pending preamble detection task in the task execution queue.
Step 203 also includes the following processing: while the search window data of the preamble detection task is being read and while the preamble detection computation is being carried out, whether the current system time is greater than or equal to the task completion deadline is judged. If yes, the current preamble detection computation is stopped, the preamble detection task is discarded, and the preamble detection computation of the next preamble detection task is started; otherwise, the current preamble detection computation continues.
In other words, if a task whose 4096-chip preamble loading has completed exists, the search window data of this task is read and the preamble detection computation is started. Meanwhile, if the preamble loading control unit and one of the two preset ping-pong preamble buffers are idle, the gaps between the batched search window loads are used to load the preamble of the next task to be loaded into the idle preamble buffer. If the task undergoing preamble detection computation times out during this process, the task is discarded and the preamble detection computation of the next task whose preamble has been loaded is started.
Step 204: after the preamble detection computation has completed, the result of the preamble detection computation is reported; at the same time, steps 202 and 203 are executed, i.e. the preamble detection computation of the next task whose preamble loading has completed is started, and loading of the access preamble of the next pending preamble detection task in the task execution queue is also started. It should be noted that the tasks whose preamble loading has completed are held in the task pipeline control unit.
In other words, while the preamble detection computation result is being reported after the preamble detection computation completes, the preamble detection computation of the next task whose preamble loading has completed and the preamble loading of the next task to be loaded are started.
The technical scheme of the embodiment of the present invention divides the processing of a preamble detection task into three stages: preamble loading, preamble detection computation, and result reporting; the hardware resources of the three stages can be time-shared by different tasks. Fig. 3 is a schematic diagram of the task processing flow of the embodiment of the present invention. As shown in Fig. 3, tasks are processed in a three-stage pipeline: while task A starts its preamble detection computation, task B simultaneously has its preamble loaded; while task A starts its result reporting, task B can begin its preamble detection computation and task C can have its preamble loaded. In this way, different hardware resources can work simultaneously within the same time interval, for example during the time periods t3, t4, t6, and t7 in Fig. 3; in particular, during the t3 time period the hardware resources of all three stages work simultaneously, processing different tasks. In addition, the technical scheme of the embodiment applies a timeout-discard mechanism during the preamble loading and preamble detection computation stages of a task, discarding stale left-over tasks in time, which ensures that the preamble detection system hardware can promptly recover normal operation on its own once an abnormal condition clears.
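The three-stage time-sharing of Fig. 3 can be illustrated with a toy schedule, assuming for simplicity that each stage takes exactly one time slot; with tasks A, B, and C, the third slot (the analogue of t3) has all three stages busy on different tasks.

```python
def pipeline_schedule(tasks):
    """Schedule tasks through the three stages (load -> detect -> report)
    as a three-stage pipeline: in any time slot each stage may serve a
    different task, so k tasks take k + 2 slots rather than 3 * k.
    """
    slots = []
    for t in range(len(tasks) + 2):
        slot = {}
        if t < len(tasks):
            slot["load"] = tasks[t]            # stage 1: preamble loading
        if 1 <= t < len(tasks) + 1:
            slot["detect"] = tasks[t - 1]      # stage 2: detection computation
        if t >= 2:
            slot["report"] = tasks[t - 2]      # stage 3: result reporting
        slots.append(slot)
    return slots
```

Real stage durations differ (Fig. 3's slots are not uniform), but the overlap pattern is the same: the fully occupied middle slots are where all three hardware resources work at once.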
The above technical scheme of the embodiment of the present invention is illustrated below with reference to the accompanying drawings.
Fig. 5 is a detailed flow chart of the task processing and scheduling of the embodiment of the present invention. As shown in Fig. 5, the flow includes the following processing:
Step 501: according to the maximum number of tasks supported by the preamble detection system, the task processing cycle is set to (m + n) clock cycles (i.e. the first period plus the second period above). As shown in Fig. 6, m is the number of clock cycles of the task request parsing enable stage (the processing cycle of step 503, the first period above), and n is the number of clock cycles of the task timing check enable stage (the processing cycle of step 504, the second period above). It should be noted that a certain guard time is left between the m clock cycles and the n clock cycles; this time allows a task request whose parsing is still in progress when the m clock cycles expire to have its parsing completed. This guard time is omitted from Fig. 6.
Step 502: the task requests configured by the embedded software are cached in the task request FIFO; the types of preamble detection task requests include create, update, delete, and read task response state.
Step 503: in the task request parsing stage, the task requests read from the task request FIFO are parsed, and the task information required by the task timing check stage and by subsequent task processing is saved into the task status table. It should be noted that if the parsed task type is "read task status", the specified task status information is written directly into the task response FIFO for software to read, and the processing of this task request ends.
Step 504: in the task timing check stage, in order of ascending task ID, whether the activation time of each task is consistent with the current system time is checked. If it is consistent, the next activation time of the task and the cumulative number of executions of the task are calculated, the deadline of this execution of the task is calculated according to AICH_transmission_timing, the next activation time and the cumulative number of executions are written into the task status table, and the task execution information, such as the task ID, the deadline of this execution, the cumulative number of executions, and the task parameter storage area, is written into the task execution queue. If it is inconsistent, the activation time of the next task is checked, until all tasks have been checked.
Step 505: if the preamble loading hardware resource is idle, and at least one of the two ping-pong preamble buffers in the preamble detection arithmetic unit is available, the 4096-chip preamble is loaded into the available preamble buffer; otherwise, the preamble of the next task is loaded after the task currently undergoing preamble detection computation has completed its computation. While the execution information of the preamble-loading task is read out of the task execution queue, and throughout the preamble loading, the current system time is continuously compared against the execution deadline of the task to determine whether the task has timed out.
Step 506: if the current system time is greater than or equal to the execution deadline of the task, the current task has timed out; the loading of the preamble of this task is abandoned, the cumulative number of executions of the task and the task ID are written into the discard task FIFO, and loading of the preamble of the next task is started. Otherwise, loading of the preamble of this task continues.
Step 507: if the hardware resource for preamble detection computation is idle and a task whose preamble loading has completed exists, the search window data is read and the preamble detection computation is started. Reading the search window data in batches ensures that the preamble detection computation proceeds continuously, and the gaps between the search window data reads are used to load the preamble of the next task to be loaded. Throughout the preamble detection computation, the execution deadline of the task is compared against the current system time to determine whether the task has timed out.
Step 508: if the current system time is greater than or equal to the execution deadline of the task, the preamble detection computation of the task is stopped, the task is discarded, the cumulative number of executions of the task and the task ID are written into the discard task FIFO, and the processing of the next task awaiting preamble detection computation is started.
Step 509: if the task does not time out during the preamble detection computation, the task obtains a final processing result. In this stage, the result of the task is reported to software for further processing. Meanwhile, the processing of the next task awaiting preamble detection computation and the preamble loading of the next task to be loaded can be started.
In summary, by means of the technical scheme of the embodiment of the present invention, the utilization of hardware resources is raised, the hardware processing speed is increased, the problem of the preamble detection system continuing to waste hardware resources running invalid tasks after an abnormal condition clears is avoided, and the automatic recovery capability and robustness of the preamble detection system hardware are improved.
Device Embodiment
According to an embodiment of the present invention, a preamble detection task processing and scheduling device is provided. Fig. 7 is a structural schematic diagram of the preamble detection task processing and scheduling device of the embodiment of the present invention. As shown in Fig. 7, the preamble detection task processing and scheduling device according to the embodiment of the present invention includes: a parsing and timing check module 70, a preamble loading module 72, a preamble detection computation module 74, and a reporting module 76. The modules of the embodiment of the present invention are described in detail below.
The parsing and timing check module 70 parses and timing-checks the preamble detection tasks and sends each preamble detection task whose timing has arrived to the task execution queue, the task execution queue being a FIFO data buffer.
The parsing and timing check module 70 specifically includes:
a configuration submodule, which configures the preamble detection task into the task request queue, the task request queue being a FIFO data buffer;
a parsing submodule, which, within a predetermined first period, parses the preamble detection tasks in the task request queue and stores the resulting preamble detection task information into the task status table, the preamble detection task information including: the task activation time, the task parameter storage area, the task run flag, and the AICH transmission timing. It should be noted that the operation of the parsing submodule, i.e. parsing the preamble detection tasks in the task request queue, can only be executed within the above first period; if a preamble detection task in the task request queue has not been parsed by the end of the current first period, processing continues when the next first period arrives. In practical applications, a guard time may be left between the first period and the second period; this time allows a task request whose parsing is still in progress when the first period expires to have its parsing completed; and
a timing check submodule, which, within a predetermined second period, timing-checks the task activation times of the preamble detection tasks saved in the task status table and sends the task execution information of each preamble detection task whose activation time has arrived to the task execution queue, the task execution information including the task ID, the cumulative number of executions of the task, the task completion deadline, and the task parameter storage area. It should be noted that the timing check of the task activation times in the task status table can only be executed within the above second period.
The timing check submodule specifically includes:
a first acquisition submodule, which obtains the activation time and the number of executions of the corresponding preamble detection task from the task status table in a predefined order;
a first judging submodule, which judges whether the task activation time is consistent with the current system time; and
a first processing submodule, which, when the task activation time is determined to be consistent with the current system time, calculates the next task activation time of the preamble detection task and the cumulative number of executions of the task, determines the task completion deadline according to AICH_Transmission_timing, writes the next activation time and the cumulative number of executions into the task status table, and writes the task execution information, such as the task ID, the deadline of this execution, the cumulative number of executions, and the task parameter storage area, into the task execution queue. When the task activation time is determined to be inconsistent with the current system time, the first acquisition submodule, the first judging submodule, and the first processing submodule are invoked in turn until the timing check of all preamble detection tasks in the task status table is finished.
The preamble loading module 72 obtains a pending preamble detection task from the task execution queue and loads the corresponding access preamble.
The preamble loading module 72 specifically includes:
a second acquisition submodule, which, when the preamble loading hardware resource is idle and at least one of the two preset ping-pong preamble buffers is idle, obtains the task execution information of a pending preamble detection task from the task execution queue; and
a second processing submodule, which reads the task parameters from the task parameter table according to the task execution information, calculates from the task parameters the base address in the antenna data storage area from which the 4096-chip preamble is to be loaded, and loads the corresponding access preamble into the idle preamble buffer.
The preamble loading module 72 also includes:
a second judging submodule, which, while the pending preamble detection task is being obtained from the task execution queue and while the corresponding access preamble is being loaded, judges whether the current system time is greater than or equal to the task completion deadline. If yes, the preamble detection task and access preamble are discarded, and loading of the access preamble of the next preamble detection task in the task execution queue is started; otherwise, the second processing submodule is invoked to continue loading the corresponding access preamble according to the task execution information and the task parameters.
To summarize, according to the above technical scheme of the embodiment of the present invention, the first judging submodule first compares the task activation time in the task status table against the current system time for consistency. If the two are consistent, the first processing submodule calculates the next task activation time and the cumulative number of executions of the task, determines the task completion deadline according to AICH_Transmission_timing, updates the next activation time, the cumulative number of executions, the task run flag, and the task parameter storage area in the task status table, and sends the task completion deadline and the remaining task execution information to the task execution queue. If a pending task exists in the task execution queue and the preamble loading hardware resource is available, the second acquisition submodule reads one task on a first-in-first-out basis and the second processing submodule loads its 4096-chip preamble. If the second judging submodule determines during preamble loading that the task has timed out, the task is discarded and the preamble of the next task is loaded.
The preamble detection computation module 74, after the access preamble has been loaded, reads the search window data of the preamble detection task, carries out the preamble detection computation, and at the same time invokes the preamble loading module 72.
The preamble detection computation module 74 specifically includes:
a third processing submodule, which, after the access preamble has been loaded, loads the search window data in batches, starting from the base address of the search window of this task in the antenna data storage area, into the search window buffer in the preamble detection arithmetic unit, reads data from the preamble buffer and search window buffer corresponding to this task to carry out the preamble detection computation, and invokes the preamble loading module 72 in the gaps between the search window data reads.
The preamble detection computation module 74 also includes:
a third judging submodule, which, while the search window data of the preamble detection task is being read and while the preamble detection computation is being carried out, judges whether the current system time is greater than or equal to the task completion deadline. If yes, the current preamble detection computation is stopped, the preamble detection task is discarded, and the preamble detection computation of the next preamble detection task is started; otherwise, the third processing submodule is invoked to continue the current preamble detection computation.
In other words, if a task whose 4096-chip preamble loading has completed exists, the third processing submodule reads the search window data of this task and starts the preamble detection computation. Meanwhile, the preamble loading module 72 starts the preamble loading of the next task to be loaded. If the third judging submodule determines during this process that the task undergoing preamble detection computation has timed out, the task is discarded and the preamble detection computation of the next task whose preamble has been loaded is started.
The reporting module 76, after the preamble detection computation has completed, reports the result of the preamble detection computation and at the same time invokes the preamble loading module 72 and the preamble detection computation module 74.
In other words, while the reporting module 76 reports the preamble detection computation result after the preamble detection computation completes, the preamble detection computation of the next task whose preamble loading has completed and the preamble loading of the next task to be loaded are started.
The technical scheme of the embodiment of the present invention divides the processing of a preamble detection task into three stages: preamble loading, preamble detection computation, and result reporting; the hardware resources of the three stages can be time-shared by different tasks. Fig. 3 is a schematic diagram of the task processing flow of the embodiment of the present invention. As shown in Fig. 3, tasks are processed in a three-stage pipeline: while task A starts its preamble detection computation, task B simultaneously has its preamble loaded; while task A starts its result reporting, task B can begin its preamble detection computation and task C can have its preamble loaded. In this way, different hardware resources can work simultaneously within the same time interval, for example during the time periods t3, t4, t6, and t7 in Fig. 3; in particular, during the t3 time period the hardware resources of all three stages work simultaneously, processing different tasks. In addition, the technical scheme of the embodiment applies a timeout-discard mechanism during the preamble loading and preamble detection computation stages of a task, discarding stale left-over tasks in time, which ensures that the preamble detection system hardware can promptly recover normal operation on its own once an abnormal condition clears.
The above technical scheme of the embodiment of the present invention is illustrated below with reference to the accompanying drawings.
Fig. 4 is a schematic structural diagram of the task processing and scheduling of the embodiment of the present invention. As shown in Fig. 4, the task request FIFO is used for storing the task requests configured by software; the task response FIFO is used for storing the response information that the hardware returns for a task specified by software; the task resolution unit is used for parsing task requests, extracting task information, monitoring task configuration errors, and saving task status information into the task status table, where the task status table is mainly used for preserving the status information of task runs; the task processing control unit is used for producing the task parsing enable signal and the task timing check enable signal, and for distributing or merging the signals of the task resolution unit and the task timing check unit that relate to reading and writing the task status table. The task timing check unit mainly performs a consistency check between each task's effective time and the current system time, updates, for a task whose effective time has arrived, the task's next effective time and the accumulated count of its executions, produces the task run deadline and other task execution information, updates the task status table, and sends the task execution information into the task execution queue, where the task execution queue is used for preserving the execution information of tasks to be executed.
The discarded task FIFO is used for preserving the information of tasks dropped in the preamble loading stage and the preamble detection computation stage; the preamble loading control unit is used for completing the loading control of the 4096-chip preamble; the task flow control unit is responsible for the pipeline control of tasks; the preamble detection control unit is responsible for the loading of search window data during the preamble detection computation and for preamble detection computation control; the preamble detection arithmetic unit is responsible for the various computations of preamble detection, such as correlation, frequency offset compensation, and coherent accumulation. Inside the preamble detection arithmetic unit there are two preamble buffers used for ping-pong operation, which enable the pipelined operation between preamble loading and preamble detection computation; in addition there is a search window buffer for caching the search window loaded each time from the antenna data memory. The task parameter table is used for preserving all task parameters, configured by software, that are needed for the preamble detection computation. The antenna data memory is a block of memory for storing antenna data that can be shared and accessed by different antenna data request sources. The result reporting unit is used for packing, for each frequency offset, the 16 maximum ADPs of each signature together with one noise value per frequency offset, and reporting the packed data to software for processing.
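The two-buffer ping-pong arrangement inside the preamble detection arithmetic unit can be sketched as follows. This is an illustrative model only (buffer contents and method names are assumptions, not from the patent): while the detection unit consumes one preamble buffer, the loader fills the other, and the roles then swap.

```python
class PingPongBuffers:
    """Two preamble buffers alternating between loading and detection."""

    def __init__(self):
        self.buffers = [None, None]
        self.load_idx = 0  # index of the buffer currently being filled

    def load(self, preamble):
        """Fill the buffer on the loading side."""
        self.buffers[self.load_idx] = preamble

    def swap(self):
        """Hand the filled buffer to detection; start filling the other."""
        compute_idx = self.load_idx
        self.load_idx ^= 1  # toggle 0 <-> 1
        return self.buffers[compute_idx]
```

Because loading always targets the buffer that detection is not reading, the preamble loading of the next task can overlap the preamble detection computation of the current one, which is the pipelining the patent relies on.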
Fig. 5 is a detailed flowchart of the task processing and scheduling of the embodiment of the present invention. As shown in Fig. 5, the flow includes the following steps:
Step 501: according to the maximum number of tasks supported by the preamble detection system, the task processing period is set to (m + n) clock cycles (the above-mentioned first period plus the above-mentioned second period), as shown in Fig. 6, where m is the number of clock cycles for which the task request parsing enable stage lasts (that is, the period of the processing in step 503, the above-mentioned first period), and n is the number of clock cycles for which the task timing check enable stage lasts (that is, the period of the processing in step 504, the above-mentioned second period). It should be noted that a certain guard time is left between the m clock cycles and the n clock cycles; this time can be used, in the case where the parsing of a task request has not yet finished when the m clock cycles have elapsed, to let the parsing of that task request run to completion. This guard time is omitted in Fig. 6.
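The fixed (m + n)-cycle period can be sketched as a simple mapping from a clock count to the enabled phase. This is an illustrative sketch only; m and n are example values, and the guard time between the two phases is omitted for simplicity, as it is in Fig. 6:

```python
def enabled_phase(clock, m, n):
    """Map an absolute clock count to the phase enabled in that cycle.

    Cycles 0 .. m-1 of each period enable task request parsing
    (step 503); cycles m .. m+n-1 enable the task timing check
    (step 504). The period then repeats.
    """
    pos = clock % (m + n)
    return "parse" if pos < m else "timing_check"
```

For example, with m = 4 and n = 3, cycles 0 through 3 of each 7-cycle period enable parsing, and cycles 4 through 6 enable the timing check.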
Step 502: the task requests configured by the embedded software are buffered in the task request FIFO, where the types of preamble detection tasks include create, update, delete, and read task response status.
Step 503: in the task request parsing stage, the task requests read out of the task request FIFO are parsed, and the task information required by the task timing check stage and by subsequent task processing is saved into the task status table. It should be noted that if the parsed task is a read-task-status request, the specified task status information is written directly into the task response FIFO to be read by software, and the processing of this task request ends.
Step 504: in the task timing check stage, each task is checked, in ascending order of task ID, for whether its effective time is consistent with the current system time. If they are consistent, the time at which the task next takes effect and the accumulated count of the task's executions are calculated, the deadline of this execution of the task is calculated according to AICH_transmission_timing, the next effective time and the accumulated execution count of the task are written into the task status table, and the task execution information, such as the task ID, the deadline of this execution, the accumulated execution count, and the task parameter memory block, is written together into the task execution queue. If they are inconsistent, the effective time of the next task is checked, until all tasks have been checked.
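The per-task timing check of step 504 can be sketched as follows. The field names and the deadline offset are illustrative assumptions; in the patent the deadline is derived from AICH_transmission_timing, which is abstracted here into a single offset parameter:

```python
def timing_check(task, now, deadline_offset):
    """If the task's effective time has arrived, advance its schedule
    and emit an execution record; otherwise leave it unchanged.

    `task` stands in for one row of the task status table; the
    returned record stands in for one entry of the task execution
    queue.
    """
    if task["effective_time"] != now:
        return None  # effective time inconsistent: check next task
    task["run_count"] += 1                      # accumulate this execution
    task["effective_time"] += task["period"]    # next time the task takes effect
    deadline = now + deadline_offset            # stand-in for AICH-derived deadline
    return {"task_id": task["id"],
            "deadline": deadline,
            "run_count": task["run_count"]}
```

The deadline carried in the record is what steps 505 through 508 later compare against the current system time to decide whether the task has timed out.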
Step 505: if the hardware resource for preamble loading is idle, and at least one of the two preamble buffers working in ping-pong fashion inside the preamble detection arithmetic unit is available, the 4096-chip preamble is loaded into the available preamble buffer; otherwise, after the task currently performing the preamble detection computation finishes its computation, the preamble of the next task is loaded. When the execution information of the task whose preamble is to be loaded is read out of the task execution queue, and throughout the preamble loading, the current system time is continuously compared with the execution deadline of the task to determine whether the task has timed out.
Step 506: if the current system time is greater than or equal to the execution deadline of the task, the current task has timed out; the loading of this task's preamble is abandoned, the accumulated execution count and the task ID of this task are written into the discarded task FIFO, and loading of the next task's preamble is started. Otherwise, the loading of this task's preamble continues.
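The timeout drop of steps 505 and 506 amounts to a single comparison before and during loading. The sketch below uses illustrative names; the same check is applied again during the preamble detection computation in steps 507 and 508:

```python
def load_or_drop(task, system_time, discard_fifo):
    """Return True if preamble loading may proceed for this task,
    or False if the task timed out and was diverted to the
    discarded task FIFO.
    """
    if system_time >= task["deadline"]:
        # Timed out: record (task ID, accumulated execution count)
        # in the discarded task FIFO and move on to the next task.
        discard_fifo.append((task["task_id"], task["run_count"]))
        return False
    return True  # continue loading this task's preamble
```

Because the comparison runs continuously while loading is in progress, a task whose deadline passes mid-load is dropped immediately rather than occupying the loading hardware to completion.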
Step 507: if the hardware resource for the preamble detection computation is idle, and there is a task whose preamble loading has completed, reading of the search window data is started and the preamble detection computation is carried out. Reading the search window data intermittently ensures that the preamble detection computation can proceed continuously; in the gaps between reads of the search window data, the preamble loading of the next preamble task to be loaded is carried out. Throughout the preamble detection computation, the execution deadline of the task is compared with the current system time to determine whether the task has timed out.
Step 508: if the current system time is greater than or equal to the execution deadline of the task, the preamble detection computation of this task is stopped, the task is discarded, the accumulated execution count and the task ID of this task are written into the discarded task FIFO, and the processing of the next task awaiting preamble detection computation is started.
Step 509: if the task did not time out during the preamble detection computation, the task obtains its final processing result. At this stage, the result of the task is reported to software for further processing. Meanwhile, the processing of the next task awaiting preamble detection computation and the preamble loading of the next preamble task to be loaded can be started.
In summary, by means of the technical scheme of the embodiment of the present invention, the utilization of hardware resources is improved, the hardware processing speed is increased, the problem that the preamble detection system continues to waste hardware resources running invalid tasks after an abnormal condition is cleared is avoided, and the automatic recovery capability and robustness of the preamble detection hardware are improved.
Although the preferred embodiments of the present invention have been disclosed for purposes of example, those skilled in the art will recognize that various improvements, additions and substitutions are also possible; therefore, the scope of the present invention should not be limited to the above-described embodiments.