CN103458527B - Preamble detection task processing and dispatching method and device


Info

Publication number
CN103458527B
Authority
CN
China
Prior art date
Legal status
Active
Application number
CN201210176890.2A
Other languages
Chinese (zh)
Other versions
CN103458527A (en)
Inventor
江仁清
石义军
Current Assignee
Sanechips Technology Co Ltd
Original Assignee
ZTE Corp
Shenzhen ZTE Microelectronics Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by ZTE Corp, Shenzhen ZTE Microelectronics Technology Co Ltd
Priority to CN201210176890.2A
Publication of CN103458527A
Application granted
Publication of CN103458527B


Abstract

The invention discloses a preamble detection task processing and dispatching method and device. The method comprises the following steps: step 1, preamble detection tasks are parsed and checked at regular intervals, and the preamble detection tasks whose effective time has arrived are sent to a task execution queue; step 2, a preamble detection task to be processed is obtained from the task execution queue, and the corresponding access preamble is loaded; step 3, after the loading of the access preamble is completed, the search window data of the preamble detection task are read and the preamble detection operation is performed, while step 2 is executed at the same time; step 4, after the preamble detection operation is completed, the result of the preamble detection operation is reported, and steps 2 and 3 are executed at the same time, so that the preamble detection operation is started on the next task in the task execution queue whose preamble has already been loaded, while the access preamble of the next preamble detection task to be processed in the task execution queue is loaded.

Description

Preamble detection task processing and dispatching method and device
Technical field
The present invention relates to the field of mobile communication, and more particularly to a preamble detection task processing and dispatching method and device.
Background technology
The random access procedure is a process that a user equipment (User Equipment, UE for short) must go through to access a Wideband Code Division Multiple Access (WCDMA) system.
The uplink random access signal of a UE includes one or more access preambles (Preamble), each 4096 chips long, and a message of 10 ms or 20 ms. When the random access procedure starts, the UE sends an access preamble on the Physical Random Access Channel (PRACH). After sending the access preamble, if the UE does not receive any response from the base station, it randomly selects an available signature (Signature) and tries again in the next available access slot with increased preamble transmission power, until it receives a response from the base station allowing or refusing access, or the maximum allowed number of attempts is reached, or the preamble transmission power exceeds the maximum allowed power. If the UE receives a response from the base station allowing access, it sends the message part on the PRACH channel after a specific time interval.
The base station is responsible for detecting the preamble signal to determine whether a UE is requesting access. The preamble detection process can be managed by means of preamble detection tasks. Corresponding to the establishment, reconfiguration and deletion procedures of a PRACH channel, the basic preamble detection task types include creation, update and deletion. According to the task identity (Identity, ID for short), task type, task effective time and task parameters configured by higher layers, a preamble detection task performs preamble detection on every possible position of a UE in the cell, and obtains, for each frequency offset, the 256 largest amplitude delay profile (Amplitude Delay Profile, ADP for short) values of the 16 signatures (16 values per signature) and one noise value per frequency offset.
Fig. 1 is a schematic diagram of the PRACH and AICH timing relationship at the UE side in the prior art. As shown in Fig. 1, according to the WCDMA protocol, the time interval τp-a between the start of the 4096-chip preamble and the moment the base station sends the acquisition indicator (Acquisition Indicator, AI for short) is 7680 chips when the AICH transmission timing (AICH_Transmission_timing) equals 0, and 12800 chips when the AICH transmission timing equals 1. Therefore, preamble detection must be completed a certain time before the base station sends the AI.
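For illustration only, the following is a minimal sketch of how the detection deadline implied by the timing relationship above could be computed; the function name and the safety margin are assumptions and are not taken from the patent.

```c
#include <stdint.h>

#define TAU_PA_TIMING_0  7680u   /* chips, AICH_Transmission_timing == 0 */
#define TAU_PA_TIMING_1 12800u   /* chips, AICH_Transmission_timing == 1 */

/* Latest time (in chips) by which preamble detection must finish:
 * the AI transmission moment minus an assumed processing margin. */
static uint32_t detection_deadline(uint32_t preamble_start,
                                   unsigned aich_transmission_timing,
                                   uint32_t margin)
{
    uint32_t tau = (aich_transmission_timing == 0) ? TAU_PA_TIMING_0
                                                   : TAU_PA_TIMING_1;
    return preamble_start + tau - margin;
}
```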
In the existing preamble detection task processing and scheduling scheme, after a task request is parsed, the task execution time is determined according to the precedence of the task effective times in the task status table and the idle/busy status of the preamble detection hardware resources. Only after the previous task has, in sequence, completed preamble loading, search window loading and the preamble detection itself is the same processing performed for the next task. This kind of preamble detection task processing is simple and easy to implement in hardware, but has the following two problems:
Problem 1: while the 4096-chip preamble is being loaded, the preamble detection hardware resources are idle, and during the preamble detection computation the hardware resources for loading the 4096-chip preamble are idle, so the limited hardware resources cannot be fully utilized and the processing speed is slow;
Problem 2: in abnormal cases, for example when there is a sudden large delay in reading antenna data, or the preamble detection tasks configured by software exceed the hardware processing capability, tasks that have timed out and should be discarded may still be computing, or may remain in the task execution queue waiting for the preamble detection computation, after the base station has already issued the AI. On the one hand, the timed-out, useless tasks continue to be executed in turn and report wrong or invalid results; on the other hand, because new tasks continue to be sent into the task execution queue while the leftover historical tasks occupy its storage space, the task execution queue overflows and new tasks are lost, so that the preamble detection system hardware cannot recover normal operation by itself after the abnormal condition is removed.
Summary of the invention
The present invention provides a preamble detection task processing and dispatching method and device, to solve the problems of low hardware resource utilization and slow processing speed existing in the task processing and scheduling scheme of the prior-art WCDMA preamble detection system.
The present invention provides a preamble detection task processing and dispatching method, including:
Step 1: parsing preamble detection tasks and checking them at regular intervals, and sending the preamble detection tasks whose effective time has arrived to a task execution queue;
Step 2: obtaining a preamble detection task to be processed from the task execution queue, and loading the corresponding access preamble;
Step 3: after the loading of the access preamble is completed, reading the search window data of the preamble detection task and performing the preamble detection computation, and at the same time executing step 2 to start loading the access preamble of the next preamble detection task to be processed in the task execution queue;
Step 4: after the preamble detection computation is completed, reporting the result of the preamble detection computation, and at the same time executing steps 2 and 3 to start the preamble detection computation on the next preamble detection task in the task execution queue whose preamble has already been loaded, and to start loading the access preamble of the next preamble detection task to be processed in the task execution queue.
The present invention also provides a preamble detection task processing and dispatching device, including:
a parsing and regular check module, configured to parse preamble detection tasks and check them at regular intervals, and to send the preamble detection tasks whose effective time has arrived to a task execution queue;
a preamble loading module, configured to obtain a preamble detection task to be processed from the task execution queue and to load the corresponding access preamble;
a preamble detection computation module, configured to read the search window data of the preamble detection task and perform the preamble detection computation after the loading of the access preamble is completed;
a reporting module, configured to report the result of the preamble detection computation after the preamble detection computation is completed.
The present invention has the following beneficial effects:
By dividing task processing into three stages, namely preamble loading, preamble detection computation and result reporting, the hardware resources of the three stages can be time-shared by different tasks. This solves the problems of low hardware resource utilization and slow processing speed existing in the task processing and scheduling scheme of the prior-art WCDMA preamble detection system, improves the utilization of the hardware resources, and at the same time improves the processing speed and processing capability of the preamble detection hardware.
Brief description of the drawings
Fig. 1 is a schematic diagram of the PRACH and AICH timing relationship at the UE side in the prior art;
Fig. 2 is a flow chart of the preamble detection task processing and dispatching method of an embodiment of the present invention;
Fig. 3 is a schematic diagram of the task processing flow of an embodiment of the present invention;
Fig. 4 is a structural schematic diagram of task processing and scheduling of an embodiment of the present invention;
Fig. 5 is a detailed flow chart of task processing and scheduling of an embodiment of the present invention;
Fig. 6 is a schematic diagram of the task processing cycle of an embodiment of the present invention;
Fig. 7 is a structural schematic diagram of the preamble detection task processing and dispatching device of an embodiment of the present invention.
Specific embodiment
In order to solve the problems of low hardware resource utilization and slow processing speed existing in the task processing and scheduling scheme of the prior-art WCDMA preamble detection system, the present invention provides a preamble detection task processing and dispatching method and device, in which tasks are processed in a three-stage pipeline, so that the hardware resource for preamble loading, the hardware resource for the preamble detection computation and the hardware resource for reporting the preamble detection result can be used by different tasks in the same time period. This improves the utilization of the hardware resources and the processing capability of the preamble detection hardware. By applying a timeout discard mechanism to tasks, the automatic recovery capability and robustness of the preamble detection system hardware under abnormal conditions can also be improved. The present invention is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only intended to explain the present invention and do not limit the present invention.
Embodiment of the method
According to an embodiment of the present invention, a preamble detection task processing and dispatching method is provided.
Fig. 4 is a structural schematic diagram of task processing and scheduling of an embodiment of the present invention. As shown in Fig. 4, the task request FIFO is used for storing the task requests configured by software; the task response FIFO is used for storing the response information that the hardware returns to software for a specified task; the task parsing unit is used for parsing task requests, extracting task information, monitoring task configuration errors, and saving task status information into the task status table, where the task status table is mainly used for saving the status information of running tasks; the task processing control unit is used for generating the task parsing enable signal and the task regular check enable signal, i.e. controlling preamble loading and the preamble detection computation, and for distributing or merging the signals from the task parsing unit and the task regular check unit related to reading and writing the task status table. The task regular check unit mainly performs the consistency check of whether the effective time of a task matches the current system time, updates the effective time of the next run for the tasks whose effective time has arrived, generates the task execution deadline and other task execution information, and saves this information into the task execution queue, where the task execution queue is used for saving the execution information of the tasks to be executed.
The discard task FIFO is used for saving the task information of tasks discarded in the preamble loading stage and the preamble detection computation stage; the preamble loading control unit is used for controlling the loading of the 4096-chip preamble; the task pipeline control unit is responsible for the pipeline control of tasks; the preamble detection control unit is responsible for the search window data loading and the preamble detection computation control during the preamble detection computation; the preamble detection computation unit is responsible for the computations of preamble detection, such as correlation, frequency offset compensation and coherent accumulation, and contains two preamble buffers used in ping-pong fashion, which allow the pipelining of preamble loading and the preamble detection computation, as well as one search window buffer for caching the search window loaded each time from the antenna data memory; the task parameter table is used for saving all the task parameters configured by software that are needed for the preamble detection computation; the antenna data memory is a memory area shared by different antenna data request sources for storing the antenna data; the result reporting unit is used for packing the 16 largest ADP values of each signature at each frequency offset and one noise value per frequency offset, and reporting them to software for processing.
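To make the roles of these units concrete, the following is a non-authoritative data-structure sketch; the field names and sizes are assumed from the task information listed in this embodiment (task ID, effective time, accumulated execution count, completion deadline, parameter storage area, run flag, AICH transmission timing) and are not taken from the patent itself.

```c
#include <stdbool.h>
#include <stdint.h>

#define SEARCH_WINDOW_LEN 256   /* assumed size; not specified in the patent */

/* One entry of the task status table (assumed layout). */
typedef struct {
    uint16_t task_id;
    uint32_t effective_time;   /* time at which the task next takes effect */
    uint32_t exec_count;       /* accumulated number of executions         */
    uint32_t param_area;       /* task parameter storage area (index)      */
    bool     running;          /* task run flag                            */
    uint8_t  aich_tx_timing;   /* AICH_Transmission_timing, 0 or 1         */
} task_status_entry_t;

/* One entry pushed into the task execution queue (a FIFO). */
typedef struct {
    uint16_t task_id;
    uint32_t exec_count;
    uint32_t deadline;         /* task completion deadline                 */
    uint32_t param_area;
} task_exec_entry_t;

/* Buffers inside the preamble detection computation unit: two preamble
 * buffers used in ping-pong fashion plus one search window buffer
 * (sample format simplified for brevity). */
typedef struct {
    int16_t preamble[2][4096];
    bool    preamble_busy[2];
    int16_t search_window[SEARCH_WINDOW_LEN];
} detection_buffers_t;
```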
Fig. 2 is a flow chart of the preamble detection task processing and dispatching method of an embodiment of the present invention. As shown in Fig. 2, the preamble detection task processing and dispatching method according to the embodiment of the present invention includes the following processing:
Step 201: parsing preamble detection tasks and checking them at regular intervals, storing the parsed task information in the task status table, and sending the preamble detection tasks whose effective time has arrived to the task execution queue; the above task execution queue is a first-in first-out (First Input First Output, FIFO for short) data buffer.
Specifically, step 201 includes the following processing:
Step 2011: as shown in Fig. 4, a preamble detection task is configured into the task request queue, where the task request queue is a FIFO data buffer.
Step 2012: within a predetermined first period, the preamble detection tasks in the task request queue are parsed, and the obtained preamble detection task information is stored in the task status table, where the preamble detection task information includes: the task effective time, the task parameter storage area, the task run flag, and the AICH transmission timing. It should be noted that the operation of step 2012, i.e. parsing the preamble detection tasks in the task request queue, can only be executed within the above first period; if the preamble detection tasks in the task request queue have not all been parsed within the current period, processing continues in the next first period. In practical applications, a guard time can be left between the first period and the second period; this time can be used to finish parsing a task request that has not yet been fully parsed when the first period has already ended.
It should be noted that the above first period and second period can be understood with reference to Fig. 6. As shown in Fig. 6, m is the number of clock cycles of the task request parsing enable stage (the above first period), and n is the number of clock cycles of the task regular check enable stage (the above second period). A certain guard time is left between the m clock cycles and the n clock cycles; this time can be used to finish parsing a preamble detection task that has not yet been fully parsed when the m clock cycles have already ended. This guard time is omitted in Fig. 6.
Step 2013: within a predetermined second period, the task effective times of the preamble detection tasks saved in the task status table are checked, and the task execution information of the preamble detection tasks whose effective time has arrived is sent to the task execution queue, where the task execution information includes the task ID, the accumulated number of executions of the task, the task completion deadline and the task parameter storage area. It should be noted that the operation of step 2013, i.e. checking the task effective times in the task status table, can only be executed within the above second period.
Step 2013 specifically includes the following processing (an illustrative sketch of this check loop is given after step 4 below):
Step 1: the task effective time of the corresponding preamble detection task is obtained from the task status table in a predefined order;
Step 2: it is judged whether the task effective time is consistent with the current system time; if yes, step 3 is executed; otherwise, step 4 is executed;
Step 3: when the task effective time is consistent with the current system time, the time at which the preamble detection task comes into effect next time and the accumulated number of executions of this task are calculated, the task completion deadline is determined according to AICH_Transmission_timing, the next effective time of the task, the accumulated number of executions of this task, the task run flag and the task parameter storage area are updated into the task status table, and the task execution information is sent to the task execution queue, where the task execution information includes the task ID, the accumulated number of executions of the task, the task completion deadline and the task parameter storage area;
Step 4: when the task effective time is not consistent with the current system time, the above steps are repeated from step 1 until the regular check of all preamble detection tasks in the task status table is finished.
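As a hedged illustration of steps 1 to 4 above only, the check loop might look as follows; the queue type and the helpers next_effective_time(), exec_queue_push() and SAFETY_MARGIN_CHIPS are assumptions, while detection_deadline() and task_status_entry_t refer to the earlier sketches.

```c
#define SAFETY_MARGIN_CHIPS 1024u          /* assumed processing margin */

typedef struct task_exec_queue task_exec_queue_t;   /* assumed FIFO type */
extern void     exec_queue_push(task_exec_queue_t *q, const task_exec_entry_t *e);
extern uint32_t next_effective_time(const task_status_entry_t *t);

/* Walk the task status table in a predefined order; for every task whose
 * effective time equals the current system time, accumulate the execution
 * count, derive the completion deadline from AICH_Transmission_timing,
 * update the status table and push the execution information into the
 * task execution queue. */
void timing_check(task_status_entry_t *table, int n_tasks,
                  uint32_t now, task_exec_queue_t *exec_q)
{
    for (int i = 0; i < n_tasks; i++) {
        task_status_entry_t *t = &table[i];
        if (!t->running || t->effective_time != now)
            continue;                                  /* step 4 */

        t->exec_count++;                               /* step 3 */
        uint32_t deadline = detection_deadline(now, t->aich_tx_timing,
                                               SAFETY_MARGIN_CHIPS);
        t->effective_time = next_effective_time(t);

        task_exec_entry_t e = {
            .task_id    = t->task_id,
            .exec_count = t->exec_count,
            .deadline   = deadline,
            .param_area = t->param_area,
        };
        exec_queue_push(exec_q, &e);
    }
}
```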
Step 202: obtaining a preamble detection task to be processed from the task execution queue, and loading the corresponding access preamble.
Specifically, step 202 includes the following processing:
1. When the preamble loading control unit shown in Fig. 4 is idle and at least one of the two preset preamble buffers used in ping-pong fashion is idle, the task execution information of a preamble detection task to be processed is obtained from the task execution queue;
2. The task parameters are read from the task parameter table according to the corresponding task execution information, the base address of the 4096-chip preamble in the antenna data memory is calculated according to the task parameters, and the corresponding access preamble is loaded into the idle preamble buffer.
Step 202 also includes the following processing: when a preamble detection task to be processed is obtained from the task execution queue, and while the corresponding access preamble is being loaded, it is judged whether the current system time is greater than or equal to the task completion deadline; if yes, the current preamble detection task and access preamble are discarded, and loading of the access preamble of the next preamble detection task in the task execution queue is started; otherwise, the corresponding access preamble continues to be loaded according to the corresponding task execution information and task parameters.
In summary, in the embodiment of the present invention, the consistency between the task effective time in the task status table and the current system time is compared first; if the two are consistent, the time at which this task comes into effect next time and the accumulated number of executions of this task are calculated, the task completion deadline is determined according to AICH_Transmission_timing, and this deadline together with the remaining task execution information is sent to the task execution queue. If there are tasks pending processing in the task execution queue and the hardware resource for preamble loading is available, a task is read according to the first-in first-out principle and its 4096-chip preamble is loaded. During preamble loading, if the task times out, the task is discarded and the preamble of the next task is loaded.
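Again as an illustrative sketch only (all helper names are assumed, and the types come from the sketches above), the loading stage with the timeout discard described in step 202 could be written roughly as:

```c
extern bool exec_queue_pop(task_exec_queue_t *q, task_exec_entry_t *e);
extern void discard_fifo_push(uint16_t task_id, uint32_t exec_count);
extern int  free_preamble_slot(const detection_buffers_t *buf);   /* -1 if none */
extern void load_preamble_from_antenna_memory(const task_exec_entry_t *e,
                                               int16_t *dst);

/* Pop tasks from the execution queue; discard any task that has already
 * passed its completion deadline, otherwise load its 4096-chip preamble
 * into the idle ping-pong buffer. Returns true when a preamble was loaded. */
bool load_next_preamble(task_exec_queue_t *exec_q, detection_buffers_t *buf,
                        uint32_t now, task_exec_entry_t *out)
{
    int slot = free_preamble_slot(buf);
    if (slot < 0)
        return false;                        /* no idle preamble buffer */

    task_exec_entry_t e;
    while (exec_queue_pop(exec_q, &e)) {
        if (now >= e.deadline) {             /* timed out: drop the task */
            discard_fifo_push(e.task_id, e.exec_count);
            continue;
        }
        load_preamble_from_antenna_memory(&e, buf->preamble[slot]);
        buf->preamble_busy[slot] = true;
        *out = e;
        return true;
    }
    return false;                            /* execution queue empty */
}
```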
Step 203: after the loading of the access preamble is completed, the search window data of the preamble detection task are read and the preamble detection computation is performed; at the same time, step 202 is executed to start loading the access preamble of the next preamble detection task to be processed in the task execution queue.
Specifically, step 203 includes the following processing: after the loading of the access preamble is completed, if the preamble detection control unit and the preamble detection computation unit shown in Fig. 4 are idle, the search window data are loaded intermittently, starting from the base address of the search window of this task in the antenna data memory, into the search window buffer in the preamble detection computation unit, and data are read from the preamble buffer and the search window buffer corresponding to this task to perform the preamble detection computation. At the same time, if the preamble loading control unit is idle and one of the two preset ping-pong preamble buffers is idle, step 202 is executed using the time slots between the intermittent search window loading operations, to start loading the access preamble of the next preamble detection task to be processed in the task execution queue.
Step 203 also includes the following processing: while the search window data of the preamble detection task are being read and the preamble detection computation is being performed, it is judged whether the current system time is greater than or equal to the task completion deadline; if yes, the current preamble detection computation is stopped, the preamble detection task is discarded, and the preamble detection computation of the next preamble detection task is started; otherwise, the current preamble detection computation continues.
That is, if there is a task whose 4096-chip preamble loading has been completed, the search window data of this task are read and the preamble detection computation is started. At the same time, if the preamble loading control unit and one of the two preset ping-pong preamble buffers are idle, the preamble of the next preamble detection task to be loaded is loaded into this idle preamble buffer using the time slots between the intermittent search window loading operations. In this process, if the task undergoing the preamble detection computation times out, this task is discarded, and the preamble detection computation of the next task whose preamble has been loaded is started.
Step 204: after the preamble detection computation is completed, the result of the preamble detection computation is reported; at the same time, steps 202 and 203 are executed, the preamble detection computation is started on the next preamble detection task whose preamble loading has been completed, and loading of the access preamble of the next preamble detection task to be processed in the task execution queue is started. It should be noted that the tasks whose preamble loading has been completed are kept in the task pipeline control unit.
That is, after the preamble detection computation is completed, while the result of the preamble detection computation is being reported, the preamble detection computation of the next task whose preamble loading has been completed and the preamble loading of the next preamble task to be loaded are started.
The technical solution of the embodiment of the present invention divides preamble detection task processing into three stages: preamble loading, preamble detection computation and result reporting, and the hardware resources of the three stages can be time-shared by different tasks. Fig. 3 is a schematic diagram of the task processing flow of the embodiment of the present invention. As shown in Fig. 3, task processing adopts a three-stage pipeline: when task A starts the preamble detection computation, task B performs preamble loading at the same time; while task A starts result reporting, task B can start the preamble detection computation and task C can load its preamble. In this way, different hardware resources can work simultaneously in the same time period, for example in the time periods t3, t4, t6 and t7 in Fig. 3; in particular, in the time period t3 the hardware resources of the above three stages work simultaneously, processing different tasks. In addition, the technical solution of the embodiment of the present invention uses a timeout discard mechanism in the preamble loading and preamble detection computation stages of the same task, so that useless tasks left over by history are discarded in time, which guarantees that the preamble detection system hardware can recover normal operation by itself in time when the abnormal condition is removed.
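The three-stage overlap of Fig. 3 can be pictured as a scheduling loop of the following shape; this is a sketch under the same assumptions as the snippets above (the scheduler state and the stage helpers are invented for illustration), not the hardware implementation described by the patent.

```c
typedef struct {
    task_exec_queue_t  *exec_q;
    detection_buffers_t bufs;
    task_exec_entry_t   loaded;     /* task whose preamble is loaded     */
    task_exec_entry_t   detecting;  /* task currently under detection    */
    bool loaded_ready, detect_busy, detect_done;
} scheduler_t;

extern void report_result(const task_exec_entry_t *t);  /* stage 3 helper */
extern void start_detection(scheduler_t *s, const task_exec_entry_t *t);
extern void abort_detection(scheduler_t *s);

void pipeline_tick(scheduler_t *s, uint32_t now)
{
    /* Stage 3: report the result of the task that just finished detection. */
    if (s->detect_done) {
        report_result(&s->detecting);
        s->detect_done = false;
        s->detect_busy = false;
    }

    /* Stage 2: abort a detection that has passed its deadline, otherwise
     * start detection on the task whose preamble has been loaded. */
    if (s->detect_busy && now >= s->detecting.deadline) {
        abort_detection(s);
        discard_fifo_push(s->detecting.task_id, s->detecting.exec_count);
        s->detect_busy = false;
    }
    if (!s->detect_busy && s->loaded_ready) {
        s->detecting    = s->loaded;
        start_detection(s, &s->detecting);  /* reads the search window */
        s->detect_busy  = true;
        s->loaded_ready = false;
    }

    /* Stage 1: in the gaps left by the intermittent search window reads,
     * load the next task's preamble into the idle ping-pong buffer. */
    if (!s->loaded_ready)
        s->loaded_ready = load_next_preamble(s->exec_q, &s->bufs, now,
                                             &s->loaded);
}
```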
The above technical solution of the embodiment of the present invention is described below with reference to the accompanying drawings.
Fig. 5 is a detailed flow chart of task processing and scheduling of an embodiment of the present invention. As shown in Fig. 5, the flow includes the following processing:
Step 501: according to the maximum number of tasks supported by the preamble detection system, the task processing cycle is set to (m+n) clock cycles (i.e. the above first period plus the above second period), as shown in Fig. 6, where m is the number of clock cycles of the task request parsing enable stage (i.e. the period of the processing in step 503, the above first period), and n is the number of clock cycles of the task regular check enable stage (i.e. the period of the processing in step 504, the above second period). It should be noted that a certain guard time is left between the m clock cycles and the n clock cycles; this time can be used to finish parsing a task request whose parsing has not been completed when the m clock cycles have ended; this guard time is omitted in Fig. 6 (a brief sketch of this alternation is given after this flow);
Step 502: the task requests configured by the embedded software are cached in the task request FIFO, where the types of preamble detection task include creation, update, deletion and reading of the task response status;
Step 503: in the task request parsing stage, the task requests read from the task request FIFO are parsed, and the task information required by the task regular check stage and by subsequent task processing is saved in the task status table. It should be noted that if the parsed task type is reading of the task status, the specified task status information is written directly into the task response FIFO, where it is provided to software for reading, and the processing of this task request ends;
Step 504: in the task regular check stage, in the order of task ID from small to large, it is checked whether the effective time of each task is consistent with the current system time; if they are consistent, the time at which this task comes into effect next time and the accumulated number of executions of this task are calculated, the deadline of this execution of this task is calculated according to AICH_Transmission_timing, the next effective time of this task and the accumulated number of executions of this task are written into the task status table, and task execution information such as the task ID, the deadline of this execution of this task, the accumulated number of executions of this task and the task parameter storage area is written into the task execution queue; if they are not consistent, the effective time of the next task continues to be checked, until all tasks have been checked;
Step 505: if the hardware resource for preamble loading is idle and at least one of the two preamble buffers working in ping-pong fashion in the preamble detection computation unit is available, the 4096-chip preamble is loaded into the available preamble buffer; otherwise, after the task currently undergoing the preamble detection computation finishes its computation, the preamble of the next task is loaded. When the task execution information of the task whose preamble is to be loaded is read out from the task execution queue, and during the loading of the preamble, the current system time is continuously compared with the execution deadline of this task, to determine whether the task has timed out;
Step 506: if the current system time is greater than or equal to the execution deadline of this task, the current task has timed out; the preamble of this task and its loading are then abandoned, the accumulated number of executions of this task and the task ID are written into the discard task FIFO, and loading of the preamble of the next task is started; otherwise, loading of the preamble of this task continues;
Step 507: if the hardware resource for the preamble detection computation is idle and there is a task whose preamble loading has been completed, reading of the search window data is started and the preamble detection computation is performed. Reading the search window data intermittently ensures that the preamble detection computation can proceed continuously; in the gaps between search window data reads, the preamble of the next preamble task to be loaded is loaded. During the whole preamble detection computation, the execution deadline of this task is compared with the current system time at the same time, to determine whether the task has timed out;
Step 508: if the current system time is greater than or equal to the execution deadline of this task, the preamble detection computation of this task is stopped, the task is discarded, the accumulated number of executions of this task and the task ID are written into the discard task FIFO, and the processing of the next task awaiting the preamble detection computation is started;
Step 509: if the task does not time out during the preamble detection computation, the task obtains a final processing result. In this stage, the result of this task is reported to software for further processing. At the same time, the processing of the next task awaiting the preamble detection computation and the preamble loading of the next preamble task to be loaded can be started.
In summary, by means of the technical solution of the embodiment of the present invention, the utilization of hardware resources is improved, the hardware processing speed is improved, the problem that the preamble detection system keeps running invalid tasks and wasting hardware resources after the abnormal condition has been removed is avoided, and the automatic recovery capability and robustness of the preamble detection system hardware are improved.
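As a final toy sketch (the values of m and n are configuration dependent and assumed here), the alternation of step 501 between the task request parsing phase and the task regular check phase within the (m+n)-cycle task processing period could be expressed as:

```c
enum proc_phase { PHASE_PARSE, PHASE_TIMING_CHECK };

/* The first m clock cycles of each (m+n)-cycle period enable task request
 * parsing (step 503); the remaining n cycles enable the task regular check
 * (step 504). The guard time between the phases is omitted, as in Fig. 6. */
static enum proc_phase current_phase(uint32_t cycle, uint32_t m, uint32_t n)
{
    uint32_t pos = cycle % (m + n);
    return (pos < m) ? PHASE_PARSE : PHASE_TIMING_CHECK;
}
```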
Device embodiment
According to an embodiment of the present invention, a preamble detection task processing and dispatching device is provided. Fig. 7 is a structural schematic diagram of the preamble detection task processing and dispatching device of an embodiment of the present invention. As shown in Fig. 7, the preamble detection task processing and dispatching device according to the embodiment of the present invention includes: a parsing and regular check module 70, a preamble loading module 72, a preamble detection computation module 74, and a reporting module 76. The modules of the embodiment of the present invention are described in detail below.
The parsing and regular check module 70 is configured to parse preamble detection tasks and check them at regular intervals, and to send the preamble detection tasks whose effective time has arrived to the task execution queue; the above task execution queue is a FIFO data buffer.
The parsing and regular check module 70 specifically includes:
a configuration submodule, configured to configure a preamble detection task into the task request queue, where the above task request queue is a FIFO data buffer;
a parsing submodule, configured to parse, within a predetermined first period, the preamble detection tasks in the task request queue, and to store the obtained preamble detection task information in the task status table, where the preamble detection task information includes: the task effective time, the task parameter storage area, the task run flag, and the AICH transmission timing; it should be noted that the operation of the parsing submodule, i.e. parsing the preamble detection tasks in the task request queue, can only be executed within the above first period; if the preamble detection tasks in the task request queue have not all been parsed within the current period, processing continues in the next first period; in practical applications, a guard time can be left between the first period and the second period, and this time can be used to finish parsing a preamble detection task that has not yet been fully parsed when the first period has already ended;
a timing check submodule, configured to check, within a predetermined second period, the task effective times of the preamble detection tasks saved in the task status table, and to send the task execution information of the preamble detection tasks whose effective time has arrived to the task execution queue, where the task execution information includes the task ID, the accumulated number of executions of the task, the task completion deadline and the task parameter storage area; it should be noted that the operation of the timing check submodule, i.e. checking the task effective times in the task status table, can only be executed within the above second period.
The timing check submodule specifically includes:
a first obtaining submodule, configured to obtain, in a predefined order, the effective time of the corresponding preamble detection task and the number of times this task has been executed from the task status table;
a first judging submodule, configured to judge whether the task effective time is consistent with the current system time;
a first processing submodule, configured to, when the task effective time is consistent with the current system time, calculate the next effective time of the preamble detection task and the accumulated number of executions of this task, determine the task completion deadline according to AICH_Transmission_timing, write the next effective time of the task and the accumulated number of executions of this task into the task status table, and write the task execution information, such as the task ID, the deadline of this execution of this task, the accumulated number of executions of this task and the task parameter storage area, into the task execution queue; and, when the task effective time is not consistent with the current system time, to call the first obtaining submodule, the first judging submodule and the first processing submodule in turn until the regular check of all preamble detection tasks in the task status table is finished.
The preamble loading module 72 is configured to obtain a preamble detection task to be processed from the task execution queue and to load the corresponding access preamble.
The preamble loading module 72 specifically includes:
a second obtaining submodule, configured to obtain the task execution information of a preamble detection task to be processed from the task execution queue when the hardware resource for preamble loading is idle and at least one of the two preset ping-pong preamble buffers is idle;
a second processing submodule, configured to read the task parameters from the task parameter table according to the corresponding task execution information, calculate the base address of the 4096-chip preamble in the antenna data memory according to the task parameters, and load the corresponding access preamble into the idle preamble buffer.
The preamble loading module 72 also includes:
a second judging submodule, configured to judge, when a preamble detection task to be processed is obtained from the task execution queue and while the corresponding access preamble is being loaded, whether the current system time is greater than or equal to the task completion deadline; if yes, to discard the preamble detection task and the access preamble and start loading the access preamble of the next preamble detection task in the task execution queue; otherwise, to call the second processing submodule to continue loading the corresponding access preamble according to the corresponding task execution information and task parameters.
In summary, according to the above technical solution of the embodiment of the present invention, the first judging submodule first compares the consistency between the task effective time in the task status table and the current system time; if the two are consistent, the first processing submodule calculates the next task effective time and the accumulated number of executions of this task, determines the task completion deadline according to AICH_Transmission_timing, updates the next effective time of the task, the accumulated number of executions of this task, the task run flag and the task parameter storage area into the task status table, and sends the task completion deadline and the remaining task execution information to the task execution queue. If there are tasks pending processing in the task execution queue and the hardware resource for preamble loading is available, the second obtaining submodule reads a task according to the first-in first-out principle, and the second processing submodule loads the 4096-chip preamble. During preamble loading, if the second judging submodule determines that the task has timed out, this task is discarded and the preamble of the next task is loaded.
The preamble detection computation module 74 is configured to read the search window data of the preamble detection task and perform the preamble detection computation after the loading of the access preamble is completed, while calling the preamble loading module 72 at the same time.
The preamble detection computation module 74 specifically includes:
a third processing submodule, configured to, after the loading of the access preamble is completed, load the search window data intermittently, starting from the base address of the search window of this task in the antenna data memory, into the search window buffer in the preamble detection computation unit, read data from the preamble buffer and the search window buffer corresponding to this task to perform the preamble detection computation, and call the preamble loading module 72 in the gaps between search window data reads.
The preamble detection computation module 74 also includes:
a third judging submodule, configured to judge, while the search window data of the preamble detection task are being read and the preamble detection computation is being performed, whether the current system time is greater than or equal to the task completion deadline; if yes, to stop the current preamble detection computation, discard the preamble detection task, and start the preamble detection computation of the next preamble detection task; otherwise, to call the third processing submodule to continue the current preamble detection computation.
That is, if there is a task whose 4096-chip preamble loading has been completed, the third processing submodule reads the search window data of this task and starts the preamble detection computation. At the same time, the preamble loading module 72 starts the preamble loading of the next preamble task to be loaded. In this process, if the third judging submodule determines that the task undergoing the preamble detection computation has timed out, this task is discarded and the preamble detection computation of the next task whose preamble has been loaded is started.
The reporting module 76 is configured to report the result of the preamble detection computation after the preamble detection computation is completed, while calling the preamble loading module 72 and the preamble detection computation module 74 at the same time.
That is, after the preamble detection computation is completed, while the reporting module 76 reports the result of the preamble detection computation, the preamble detection computation of the next task whose preamble loading has been completed and the preamble loading of the next preamble task to be loaded are started.
The technical solution of the embodiment of the present invention divides preamble detection task processing into three stages: preamble loading, preamble detection computation and result reporting, and the hardware resources of the three stages can be time-shared by different tasks. Fig. 3 is a schematic diagram of the task processing flow of the embodiment of the present invention. As shown in Fig. 3, task processing adopts a three-stage pipeline: when task A starts the preamble detection computation, task B performs preamble loading at the same time; while task A starts result reporting, task B can start the preamble detection computation and task C can load its preamble. In this way, different hardware resources can work simultaneously in the same time period, for example in the time periods t3, t4, t6 and t7 in Fig. 3; in particular, in the time period t3 the hardware resources of the above three stages work simultaneously, processing different tasks. In addition, the technical solution of the embodiment of the present invention uses a timeout discard mechanism in the preamble loading and preamble detection computation stages of the same task, so that useless tasks left over by history are discarded in time, which guarantees that the preamble detection system hardware can recover normal operation by itself in time when the abnormal condition is removed.
The above technical solution of the embodiment of the present invention is described below with reference to the accompanying drawings.
Fig. 4 is a structural schematic diagram of task processing and scheduling of an embodiment of the present invention. As shown in Fig. 4, the task request FIFO is used for storing the task requests configured by software; the task response FIFO is used for storing the response information that the hardware returns to software for a specified task; the task parsing unit is used for parsing task requests, extracting task information, monitoring task configuration errors, and saving the task status information into the task status table, where the task status table is mainly used for saving the status information of running tasks; the task processing control unit is used for generating the task parsing enable signal and the task regular check enable signal, and for distributing or merging the signals from the task parsing unit and the task regular check unit related to reading and writing the task status table. The task regular check unit mainly performs the consistency check of whether the effective time of a task matches the current system time, updates, for the tasks whose effective time has arrived, the effective time of the next run and the accumulated number of executions of the task, generates the task execution deadline and other task execution information, updates the task status table, and sends the task execution information into the task execution queue, where the task execution queue is used for saving the execution information of the tasks to be executed.
The discard task FIFO is used for saving the task information of tasks discarded in the preamble loading stage and the preamble detection computation stage; the preamble loading control unit is used for controlling the loading of the 4096-chip preamble; the task pipeline control unit is responsible for the pipeline control of tasks; the preamble detection control unit is responsible for the search window data loading and the preamble detection computation control during the preamble detection computation; the preamble detection computation unit is responsible for the computations of preamble detection, such as correlation, frequency offset compensation and coherent accumulation, and contains two preamble buffers used in ping-pong fashion, which allow the pipelining of preamble loading and the preamble detection computation, as well as one search window buffer for caching the search window loaded each time from the antenna data memory; the task parameter table is used for saving all the task parameters configured by software that are needed for the preamble detection computation; the antenna data memory is a memory area shared by different antenna data request sources for storing the antenna data; the result reporting unit is used for packing the 16 largest ADP values of each signature at each frequency offset and one noise value per frequency offset, and reporting them to software for processing.
Fig. 5 is a detailed flow chart of task processing and scheduling of an embodiment of the present invention. As shown in Fig. 5, the flow includes the following processing:
Step 501: according to the maximum number of tasks supported by the preamble detection system, the task processing cycle is set to (m+n) clock cycles (the above first period plus the above second period), as shown in Fig. 6, where m is the number of clock cycles of the task request parsing enable stage (i.e. the period of the processing in step 503, the above first period), and n is the number of clock cycles of the task regular check enable stage (i.e. the period of the processing in step 504, the above second period). It should be noted that a certain guard time is left between the m clock cycles and the n clock cycles; this time can be used to finish parsing a task request whose parsing has not been completed when the m clock cycles have ended; this guard time is omitted in Fig. 6;
Step 502: the task requests configured by the embedded software are cached in the task request FIFO, where the types of preamble detection task include creation, update, deletion and reading of the task response status;
Step 503: in the task request parsing stage, the task requests read from the task request FIFO are parsed, and the task information required by the task regular check stage and by subsequent task processing is saved in the task status table. It should be noted that if the parsed task is a request to read the task status, the specified task status information is written directly into the task response FIFO, where it is provided to software for reading, and the processing of this task request ends;
Step 504: in the task regular check stage, in the order of task ID from small to large, it is checked whether the effective time of each task is consistent with the current system time; if they are consistent, the time at which this task comes into effect next time and the accumulated number of executions of this task are calculated, the deadline of this execution of this task is calculated according to AICH_Transmission_timing, the next effective time of this task and the accumulated number of executions of this task are written into the task status table, and task execution information such as the task ID, the deadline of this execution of this task, the accumulated number of executions of this task and the task parameter storage area is written into the task execution queue; if they are not consistent, the effective time of the next task continues to be checked, until all tasks have been checked;
Step 505: if the hardware resource for preamble loading is idle and at least one of the two preamble buffers working in ping-pong fashion in the preamble detection computation unit is available, the 4096-chip preamble is loaded into the available preamble buffer; otherwise, after the task currently undergoing the preamble detection computation finishes its computation, the preamble of the next task is loaded. When the task execution information of the task whose preamble is to be loaded is read out from the task execution queue, and during the loading of the preamble, the current system time is continuously compared with the execution deadline of this task, to determine whether the task has timed out;
Step 506: if the current system time is greater than or equal to the execution deadline of this task, the current task has timed out; the preamble of this task and its loading are then abandoned, the accumulated number of executions of this task and the task ID are written into the discard task FIFO, and loading of the preamble of the next task is started; otherwise, loading of the preamble of this task continues;
Step 507: if the hardware resource for the preamble detection computation is idle and there is a task whose preamble loading has been completed, reading of the search window data is started and the preamble detection computation is performed. Reading the search window data intermittently ensures that the preamble detection computation can proceed continuously; in the gaps between search window data reads, the preamble of the next preamble task to be loaded is loaded. During the whole preamble detection computation, the execution deadline of this task is compared with the current system time at the same time, to determine whether the task has timed out;
Step 508: if the current system time is greater than or equal to the execution deadline of this task, the preamble detection computation of this task is stopped, the task is discarded, the accumulated number of executions of this task and the task ID are written into the discard task FIFO, and the processing of the next task awaiting the preamble detection computation is started;
Step 509: if the task does not time out during the preamble detection computation, the task obtains a final processing result. In this stage, the result of this task is reported to software for further processing. At the same time, the processing of the next task awaiting the preamble detection computation and the preamble loading of the next preamble task to be loaded can be started.
In summary, by means of the technical solution of the embodiment of the present invention, the utilization of hardware resources is improved, the hardware processing speed is improved, the problem that the preamble detection system keeps running invalid tasks and wasting hardware resources after the abnormal condition has been removed is avoided, and the automatic recovery capability and robustness of the preamble detection system hardware are improved.
Although the preferred embodiments of the present invention have been disclosed for the purpose of illustration, those skilled in the art will recognize that various improvements, additions and substitutions are also possible; therefore, the scope of the present invention should not be limited to the above embodiments.

Claims (9)

1. A preamble detection task processing and dispatching method, characterized by comprising:
step 1: parsing preamble detection tasks and checking them at regular intervals, and sending the preamble detection tasks whose effective time has arrived to a task execution queue;
step 2: obtaining a preamble detection task to be processed from the task execution queue, and loading the corresponding access preamble;
step 3: after the loading of the access preamble is completed, reading the search window data of the preamble detection task and performing the preamble detection computation, and at the same time executing step 2 to start loading the access preamble of the next preamble detection task to be processed in the task execution queue;
step 4: after the preamble detection computation is completed, reporting the result of the preamble detection computation, and at the same time executing steps 2 and 3 to start the preamble detection computation on the next preamble detection task whose preamble loading has been completed, and to start loading the access preamble of the next preamble detection task to be processed in the task execution queue;
wherein step 1 specifically includes:
step 11: configuring a preamble detection task into a task request queue;
step 12: within a predetermined first period, parsing the preamble detection tasks in the task request queue, and storing the obtained preamble detection task information in a task status table, wherein the preamble detection task information includes: the task effective time, the task parameter storage area, the task run flag and the AICH transmission timing;
step 13: within a predetermined second period, checking the task effective times saved in the task status table, and sending the task execution information of the preamble detection tasks whose effective time has arrived to the task execution queue, wherein the task execution information includes the task ID, the accumulated number of executions of the task, the task completion deadline and the task parameter storage area.
2. The method as claimed in claim 1, characterized in that step 13 specifically includes the following processing:
step 131: obtaining the task effective time of the corresponding preamble detection task from the task status table in a predefined order;
step 132: judging whether the task effective time is consistent with the current system time; if yes, executing step 133; otherwise, executing step 134;
step 133: when the task effective time is consistent with the current system time, calculating the time at which the preamble detection task comes into effect next time, the accumulated number of executions of this task and the task completion deadline, obtaining the task execution information from the task status table, sending the task execution information to the task execution queue, and updating the next effective time of the task, the accumulated number of executions of this task, the task run flag and the task parameter storage area into the task status table;
step 134: when the task effective time is not consistent with the current system time, executing step 131, until the regular check of all preamble detection tasks in the task status table is finished.
3. The method as claimed in claim 2, characterized in that step 2 specifically includes:
when the hardware resource for preamble loading is idle and one of the two preset ping-pong preamble buffers is idle, obtaining the task execution information of a preamble detection task to be processed from the task execution queue;
reading the task parameters from a task parameter table according to the task execution information, calculating the base address of the 4096-chip preamble in the antenna data memory according to the task parameters, and loading the corresponding access preamble into the idle preamble buffer;
step 2 also includes:
when the preamble detection task to be processed is obtained from the task execution queue and while the corresponding access preamble is being loaded, judging whether the current system time is greater than or equal to the task completion deadline; if yes, discarding the current preamble detection task and the access preamble, and starting to load the access preamble of the next preamble detection task in the task execution queue; otherwise, continuing to load the corresponding access preamble according to the corresponding task execution information and task parameters.
4. The method according to claim 3, characterized in that step 3 specifically comprises:
after the loading of the access preamble is completed, loading the search window data intermittently, starting from the base address of the search window of the task in the antenna data storage area, into the search window buffer in the preamble detection operation module, reading data from the preamble buffer and the search window buffer corresponding to the task to perform the preamble detection operation, and executing step 2 during the gaps in reading the search window data, so as to start loading the access preamble of the next pending preamble detection task in the task execution queue;
step 3 further comprises:
when reading the search window data of the preamble detection task and performing the preamble detection operation, judging whether the current system time is greater than or equal to the task completion deadline; if so, stopping the current preamble detection operation, discarding the preamble detection task, and starting the preamble detection operation of the next preamble detection task; otherwise, continuing the current preamble detection operation.
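Claim 4 interleaves three activities: streaming search window data into the detector, running the detection operation, and using the gaps between window reads to start loading the next task's preamble. A simplified, software-only illustration of this interleaving and of the deadline abort is given below; in practice these steps would drive dedicated DMA and correlator hardware, and every name in the sketch is an assumption.

#include <stdbool.h>
#include <stdint.h>

#define WINDOW_CHUNK 512u   /* assumed size of one intermittent window read */

/* Assumed hooks; in hardware these would be DMA and correlator engines. */
extern uint32_t system_time_now(void);
extern bool     window_read_chunk(uint32_t task_id, uint32_t offset, uint32_t len);
extern void     correlate_chunk(uint32_t task_id, uint32_t offset, uint32_t len);
extern bool     load_next_preamble(void);   /* step 2 sketch, reused in the gaps */
extern void     report_result(uint32_t task_id);

/* Step 3 of the method: run preamble detection for one task whose preamble is
 * already loaded, using the read gaps to prefetch the next task's preamble. */
void run_detection(uint32_t task_id, uint32_t window_len, uint32_t deadline)
{
    for (uint32_t off = 0; off < window_len; off += WINDOW_CHUNK) {
        if (system_time_now() >= deadline)       /* abort: deadline missed        */
            return;                              /* discard task, caller picks next */

        uint32_t len = (window_len - off < WINDOW_CHUNK) ? window_len - off
                                                         : WINDOW_CHUNK;
        window_read_chunk(task_id, off, len);    /* fetch one slice of the window */
        load_next_preamble();                    /* use the read gap for prefetch */
        correlate_chunk(task_id, off, len);      /* preamble detection operation  */
    }
    report_result(task_id);                      /* step 4: report after detection */
}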
5. The method according to any one of claims 1 to 4, characterized in that the task request queue and the task execution queue are data buffers in first-in first-out (FIFO) form.
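Claim 5 only requires that both queues behave as first-in first-out buffers. For completeness, a small fixed-size ring-buffer FIFO in C is sketched below; the element type and depth are arbitrary choices for illustration.

#include <stdbool.h>
#include <stdint.h>

#define FIFO_DEPTH 8u   /* arbitrary power-of-two depth */

typedef struct {
    uint32_t entries[FIFO_DEPTH];
    uint32_t head, tail;            /* monotonically increasing indices */
} fifo;

static bool fifo_push(fifo *q, uint32_t v)
{
    if (q->tail - q->head == FIFO_DEPTH) return false;   /* full */
    q->entries[q->tail % FIFO_DEPTH] = v;
    q->tail++;
    return true;
}

static bool fifo_pop(fifo *q, uint32_t *out)
{
    if (q->tail == q->head) return false;                /* empty */
    *out = q->entries[q->head % FIFO_DEPTH];
    q->head++;
    return true;
}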
6. A preamble detection task processing and dispatching device, characterized by comprising:
a parsing and timed-check module, configured to parse and periodically check preamble detection tasks, and to send the preamble detection tasks whose timing has arrived to a task execution queue;
a preamble loading module, configured to obtain a pending preamble detection task from the task execution queue and to load the corresponding access preamble;
a preamble detection operation module, configured to, after the loading of the access preamble is completed, read the search window data of the preamble detection task and perform a preamble detection operation;
a reporting module, configured to report the result of the preamble detection operation after the preamble detection operation is completed;
wherein the parsing and timed-check module specifically includes:
a configuration submodule, configured to configure the preamble detection tasks into a task request queue;
a parsing submodule, configured to, within a predetermined first period, parse the preamble detection tasks in the task request queue and store the obtained preamble detection task information in a task status table, wherein the preamble detection task information includes: a task effective time, a task parameter storage area, a task running flag and an Acquisition Indicator Channel (AICH) transmission timing;
a timed-check submodule, configured to, within a predetermined second period, perform a timed check on the task effective times in the task status table, and send the task execution information of each preamble detection task whose task effective time has arrived to the task execution queue; wherein the task execution information includes a task identifier, the accumulated number of executions of the task, a task completion deadline and the task parameter storage area.
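Claim 6 restates the same pipeline as an apparatus of four cooperating modules. Purely as an illustration of how such a decomposition might be wired together in software, the sketch below models each module as a function pointer on a device structure and drives them from a trivial polling loop; the names and the loop itself are assumptions, not the claimed implementation.

#include <stdbool.h>

/* One function pointer per module of the claimed device (hypothetical API). */
typedef struct {
    void (*parse_and_check)(void);     /* parsing and timed-check module      */
    bool (*load_preamble)(void);       /* preamble loading module             */
    bool (*detect)(void);              /* preamble detection operation module */
    void (*report)(void);              /* reporting module                    */
} preamble_detector;

/* A trivial scheduling pass: check timing, keep one preamble load in flight,
 * run detection, and report whenever a detection operation completes. */
void detector_run_once(preamble_detector *d)
{
    d->parse_and_check();    /* promote due tasks to the execution queue */
    d->load_preamble();      /* fill an idle ping-pong buffer, if any    */
    if (d->detect())         /* returns true when an operation finishes  */
        d->report();
}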
7. The device according to claim 6, characterized in that the timed-check submodule specifically includes:
a first obtaining submodule, configured to obtain the task effective time of a corresponding preamble detection task from the task status table according to a predefined order;
a first judging submodule, configured to judge whether the task effective time is consistent with the current system time;
a first processing submodule, configured to, in the case that the task effective time is determined to be consistent with the current system time, calculate the next start time of the preamble detection task, the accumulated number of executions of the task and the task completion deadline, obtain the task execution information from the task status table, send the task execution information to the task execution queue, and update the next start time of the task, the accumulated number of executions of the task, the task running flag and the task parameter storage area into the task status table; and, in the case that the task effective time is determined to be inconsistent with the current system time, call the first obtaining submodule, the first judging submodule and the first processing submodule in turn until all the preamble detection tasks in the task status table have been checked.
8. The device according to claim 7, characterized in that
the preamble loading module specifically includes:
a second obtaining submodule, configured to, in the case that the preamble loading hardware resource is idle and one of two preset preamble buffers operating in a ping-pong mode is idle, obtain the task execution information of a pending preamble detection task from the task execution queue;
a second processing submodule, configured to read task parameters from a task parameter table according to the task execution information, calculate, according to the task parameters, the base address of the 4096-chip preamble to be loaded in the antenna data storage area, and load the corresponding access preamble into the idle preamble buffer;
the preamble loading module further includes:
a second judging submodule, configured to, when the pending preamble detection task is obtained from the task execution queue and the corresponding access preamble is being loaded, judge whether the current system time is greater than or equal to the task completion deadline; if so, discard the current preamble detection task and the access preamble, and start to load the access preamble of the next preamble detection task in the task execution queue; otherwise, call the second processing submodule to continue loading the corresponding access preamble according to the corresponding task execution information and task parameters;
the preamble detection operation module specifically includes:
a third processing submodule, configured to, after the loading of the access preamble is completed, load the search window data intermittently, starting from the base address of the search window of the task in the antenna data storage area, into the search window buffer in the preamble detection operation module, read data from the preamble buffer and the search window buffer corresponding to the task to perform the preamble detection operation, and call the preamble loading module during the gaps in reading the search window data;
the preamble detection operation module further includes:
a third judging submodule, configured to, when the search window data of the preamble detection task is being read and the preamble detection operation is being performed, judge whether the current system time is greater than or equal to the task completion deadline; if so, stop the current preamble detection operation, discard the preamble detection task, and start the preamble detection operation of the next preamble detection task; otherwise, call the third processing submodule to continue the current preamble detection operation.
9. The device according to claim 8, characterized in that the task request queue and the task execution queue are data buffers in first-in first-out (FIFO) form.
CN201210176890.2A 2012-06-01 2012-06-01 Preamble detection task processing and dispatching method and device Active CN103458527B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210176890.2A CN103458527B (en) 2012-06-01 2012-06-01 Preamble detection task processing and dispatching method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210176890.2A CN103458527B (en) 2012-06-01 2012-06-01 Preamble detection task processing and dispatching method and device

Publications (2)

Publication Number Publication Date
CN103458527A CN103458527A (en) 2013-12-18
CN103458527B CN103458527B (en) 2017-02-08

Family

ID=49740369

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210176890.2A Active CN103458527B (en) 2012-06-01 2012-06-01 Preamble detection task processing and dispatching method and device

Country Status (1)

Country Link
CN (1) CN103458527B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108228327B (en) * 2017-12-29 2021-01-15 北京奇虎科技有限公司 Task processing method and device
CN109240810B (en) * 2018-08-03 2021-02-23 腾讯科技(深圳)有限公司 Task processing method and device and storage medium
CN110895484A (en) * 2018-09-12 2020-03-20 北京奇虎科技有限公司 Task scheduling method and device
CN111459981B (en) * 2019-01-18 2023-06-09 阿里巴巴集团控股有限公司 Query task processing method, device, server and system
CN114945817A (en) * 2020-10-30 2022-08-26 京东方科技集团股份有限公司 Task processing method, device and equipment based on defect detection and storage medium
CN113568731B (en) * 2021-09-24 2021-12-28 苏州浪潮智能科技有限公司 Task scheduling method, chip and electronic equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20000014424A (en) * 1998-08-17 2000-03-15 윤종용 Apparatus and method for transmitting preamble of access channel
CN101136689B (en) * 2006-12-05 2011-05-11 中兴通讯股份有限公司 Antenna data access scheduling method for preguiding detection
CN101489306A (en) * 2008-01-15 2009-07-22 中兴通讯股份有限公司 Initial access method based on high speed packet access enhanced technique
WO2010052522A1 (en) * 2008-11-07 2010-05-14 Nokia Corporation Random access channel message bundling

Also Published As

Publication number Publication date
CN103458527A (en) 2013-12-18

Similar Documents

Publication Publication Date Title
CN103458527B (en) Preamble detection task processing and dispatching method and device
US20110246596A1 (en) Load-aware method of optimizing command execution in a cloud environment
CN109901926A (en) Method, server and storage medium based on big data behavior scheduling application task
US9940177B2 (en) Traffic control method and system
CN107247651B (en) Cloud computing platform monitoring and early warning method and system
CN101290668B (en) Time sharing operation dynamic dispatching method and device
CN106293919A (en) The built-in tasks dispatching device of a kind of Time Triggered and method
CN109783157B (en) Method and related device for loading algorithm program
CN104536835B (en) Self-adapting task scheduling method in a kind of Hot Spare real-time control system
CN102984029B (en) Heartbeat detection device and method applied to distributed system and network equipment
CN106681811A (en) Multi-thread scheduling method and device based on thread pool
CN103699440A (en) Method and device for cloud computing platform system to distribute resources to task
EP2447841B1 (en) Scheduling policy for efficient parallelization of software analysis in a distributed computing environment
CN101788949A (en) Method and device for realizing embedded type system function monitoring
CN103593232B (en) The method for scheduling task and device of a kind of data warehouse
CN113268334B (en) Scheduling method, device, equipment and storage medium of RPA robot
CN109324983A (en) A kind of method, storage medium, equipment and the system of automatic cleaning cache file
CN103473120A (en) Acceleration-factor-based multi-core real-time system task partitioning method
CN106610870A (en) Method and device for adjusting quantity of processing nodes
CN115269108A (en) Data processing method, device and equipment
CN110347427A (en) The optimization method and device of web page code
CN106656675B (en) Detection method and device for transmission node cluster
CN110704206B (en) Real-time computing method, computer storage medium and electronic equipment
CN106980463A (en) The method for controlling quality of service and device of storage system
US11847041B2 (en) System and method for probe injection for code coverage

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20151116

Address after: 518057 Legal Affairs Department, ZTE Building, South Science and Technology Road, Hi-tech Industrial Park, Nanshan District, Shenzhen, Guangdong

Applicant after: ZTE Corp.

Applicant after: SANECHIPS TECHNOLOGY Co.,Ltd.

Address before: 518057 Legal Affairs Department, ZTE Building, South Science and Technology Road, Hi-tech Industrial Park, Nanshan District, Shenzhen, Guangdong

Applicant before: ZTE Corp.

C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20221201

Address after: 518055 Zhongxing Industrial Park, Liuxian Avenue, Xili street, Nanshan District, Shenzhen City, Guangdong Province

Patentee after: SANECHIPS TECHNOLOGY Co.,Ltd.

Address before: 518057 Legal Affairs Department, ZTE Building, South Science and Technology Road, Hi-tech Industrial Park, Nanshan District, Shenzhen, Guangdong

Patentee before: ZTE Corp.

Patentee before: SANECHIPS TECHNOLOGY Co.,Ltd.