CN103164198A - Method and device of cutting linguistic model - Google Patents


Info

Publication number
CN103164198A
Authority
CN
China
Prior art keywords
ngram
language model
expression
log
relative entropy
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2011104169744A
Other languages
Chinese (zh)
Inventor
周杨
肖镜辉
李露
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Shiji Guangsu Information Technology Co Ltd
Original Assignee
Shenzhen Tencent Computer Systems Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Tencent Computer Systems Co Ltd filed Critical Shenzhen Tencent Computer Systems Co Ltd
Priority to CN2011104169744A priority Critical patent/CN103164198A/en
Publication of CN103164198A publication Critical patent/CN103164198A/en
Pending legal-status Critical Current

Abstract

The invention provides a method and a device for pruning a language model. The method comprises: performing Ngram statistics on training corpus data to form an Ngram list of an original Ngram language model, the Ngram list comprising all Ngrams of the original language model; for each Ngram in the Ngram list, calculating the relative entropy between the probability distribution of the Ngram language model with that Ngram pruned and the probability distribution of the original Ngram language model; and deleting at least one Ngram with small relative entropy from the Ngram list to obtain the pruned Ngram language model. The method and device can reduce the impact of the pruning process on the performance of the Ngram language model.

Description

Method and apparatus for pruning a language model
Technical field
The present invention relates to the technical field of language modeling, and in particular to a method and apparatus for pruning a language model.
Background technology
With the continuous improvement of computer hardware performance and the increasing intelligence of software, people increasingly expect computers to provide more natural modes of human-computer interaction, which is reflected in: (1) more intelligent Chinese character input methods; (2) continuous speech input; and (3) continuous handwriting input. The realization of all three interaction modes relies on language modeling technology at the bottom layer, and the performance of the language model directly determines the intelligence and usability of such interactive software.
Statistical language modeling is the mainstream language modeling technology at present, and the Ngram language model is the most successful statistical language model. An Ngram is a sequence of N words that occur consecutively in a corpus; bigrams (sequences of 2 words) and trigrams (sequences of 3 words) are the most commonly used, and an Ngram language model is composed of a large number of Ngrams. The Ngram language model calculates the probability of a candidate Chinese sentence from the conditional probabilities between words, and selects the candidate Chinese sentence with the highest probability as the output of the interactive software. According to the Ngram language model, for a Chinese sentence S = W_1 W_2 ... W_m containing m words, its probability is:
P(S) = P(W_1 W_2 \ldots W_m) = \prod_{i=1}^{m} P(W_i \mid W_{i-n+1} \ldots W_{i-1}) = \prod_{i=1}^{m} \frac{C(W_{i-n+1} \ldots W_{i-1} W_i)}{C(W_{i-n+1} \ldots W_{i-1})}
wherein,
P(W_i | W_{i-n+1} ... W_{i-1}) denotes the conditional probability that word W_i appears given that the word sequence W_{i-n+1} ... W_{i-1} has appeared;
C(W_{i-n+1} ... W_{i-1} W_i) denotes the number of times the word sequence W_{i-n+1} ... W_{i-1} W_i occurs in the corpus;
C(W_{i-n+1} ... W_{i-1}) denotes the number of times the word sequence W_{i-n+1} ... W_{i-1} occurs in the corpus;
n is a predefined integer.
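For readers who want to connect the formula above to an implementation, the following Python sketch computes a sentence probability from raw corpus counts by maximum likelihood estimation. It assumes word-segmented input; the function and corpus names are illustrative and not part of the patent.

```python
from collections import Counter

def sentence_probability(sentence, corpus_sentences, n=2):
    """Maximum-likelihood sentence probability under an n-gram model.

    P(S) = prod_i C(w_{i-n+1}..w_i) / C(w_{i-n+1}..w_{i-1}),
    where C(.) counts occurrences of the word sequence in the corpus.
    """
    # Count all word sequences of length n and n-1 in the training corpus.
    ngram_counts, history_counts = Counter(), Counter()
    for words in corpus_sentences:
        padded = ["<s>"] * (n - 1) + words
        for i in range(n - 1, len(padded)):
            ngram_counts[tuple(padded[i - n + 1:i + 1])] += 1
            history_counts[tuple(padded[i - n + 1:i])] += 1

    prob = 1.0
    padded = ["<s>"] * (n - 1) + sentence
    for i in range(n - 1, len(padded)):
        history = tuple(padded[i - n + 1:i])
        ngram = tuple(padded[i - n + 1:i + 1])
        if history_counts[history] == 0 or ngram_counts[ngram] == 0:
            return 0.0  # the "zero probability" problem discussed next
        prob *= ngram_counts[ngram] / history_counts[history]
    return prob

corpus = [["we", "love", "nlp"], ["we", "love", "python"]]
print(sentence_probability(["we", "love", "nlp"], corpus, n=2))  # 0.5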
An Ngram language model obtained by maximum likelihood estimation cannot be applied directly in an input method engine, because the original Ngram language model also faces the "zero probability" problem: when some word combination in the test corpus does not occur in the Ngram language model, the sentence probability calculated by the original Ngram language model is zero, which causes serious problems for most applications. To solve the "zero probability" problem, the probabilities in the original Ngram language model need to be adjusted so that a non-zero probability is produced when an unseen Ngram is encountered; the specific probability adjustment methods are called smoothing algorithms for the Ngram model. Smoothing algorithms fall into two broad classes. One class is interpolation smoothing, which adopts the idea of model merging and combines the lower-order model with the higher-order model by linear interpolation; the specific formula is as follows:
\tilde{P}(w \mid h) = \lambda \times P(w \mid h) + (1 - \lambda) \times P(w \mid h'), wherein,
h denotes the history words in the Ngram;
w denotes the current word in the Ngram;
\tilde{P}(w \mid h) denotes the smoothed Ngram probability;
P(w \mid h) denotes the original Ngram probability;
P(w \mid h') denotes the lower-order Ngram probability;
h' denotes the history words in the lower-order Ngram;
λ is the interpolation coefficient, whose value usually lies in [0, 1].
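As a minimal illustration of the interpolation formula (not part of the patent text), a Python sketch follows; it assumes the higher-order and lower-order probabilities have already been estimated and are passed in as dictionaries, and all names are illustrative.

```python
def interpolated_prob(w, h, p_high, p_low, lam=0.7):
    """Interpolation smoothing: P~(w|h) = lam * P(w|h) + (1 - lam) * P(w|h').

    p_high maps (h, w) to the original higher-order Ngram probability;
    p_low maps (h', w) to the lower-order probability, where h' drops the
    oldest history word; lam is the interpolation coefficient in [0, 1].
    """
    h_prime = h[1:]  # back off to the shorter history
    return lam * p_high.get((h, w), 0.0) + (1.0 - lam) * p_low.get((h_prime, w), 0.0)

# Example: a trigram probability interpolated with a bigram probability.
p_high = {(("we", "love"), "nlp"): 0.4}
p_low = {(("love",), "nlp"): 0.2}
print(interpolated_prob("nlp", ("we", "love"), p_high, p_low, lam=0.7))  # 0.34
```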
The other class is backoff smoothing: when the higher-order model suffers from the zero-probability problem, the more reliable lower-order model is used instead; the specific formula is as follows:
\tilde{P}(w \mid h) = \begin{cases} P_d(w \mid h), & C(h, w) > 0 \\ \alpha(h) \times P(w \mid h'), & C(h, w) = 0 \end{cases}
wherein,
h denotes the history words in the Ngram;
w denotes the current word in the Ngram;
\tilde{P}(w \mid h) denotes the smoothed Ngram probability;
P_d(w \mid h) denotes the probability value after Good-Turing smoothing;
C(h, w) denotes the number of times w and h occur together in the corpus;
α(h) is an adjustment coefficient and is a function of h;
P(w \mid h') denotes the lower-order Ngram probability;
h' denotes the history words in the lower-order Ngram.
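A corresponding Python sketch of the backoff rule, under the assumption that the Good-Turing discounted probabilities P_d, the counts C(h, w) and the backoff coefficients α(h) have been precomputed; again, all names are illustrative.

```python
def backoff_prob(w, h, counts, p_discounted, p_low, alpha):
    """Backoff smoothing:
       P~(w|h) = P_d(w|h)            if C(h, w) > 0
               = alpha(h) * P(w|h')  if C(h, w) = 0
    """
    if counts.get((h, w), 0) > 0:
        return p_discounted[(h, w)]  # seen Ngram: use the discounted probability
    h_prime = h[1:]                  # unseen Ngram: fall back to the shorter history
    return alpha.get(h, 1.0) * p_low.get((h_prime, w), 0.0)

counts = {(("we", "love"), "nlp"): 3}
p_discounted = {(("we", "love"), "nlp"): 0.35}
p_low = {(("love",), "python"): 0.1}
alpha = {("we", "love"): 0.5}
print(backoff_prob("nlp", ("we", "love"), counts, p_discounted, p_low, alpha))     # 0.35
print(backoff_prob("python", ("we", "love"), counts, p_discounted, p_low, alpha))  # 0.05
```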
When the vocabulary size is K, the parameter space of an Ngram language model is theoretically O(K^n). In commercial input software, K is usually on the order of 100,000 to 1,000,000. In actual use, because computer memory is limited, a complete Ngram model cannot be loaded; the Ngram model usually has to be pruned before it can be used. The quality of the pruning strategy directly affects the practical usability of the Ngram model, and choosing a pruning strategy is a key step in language modeling.
The current standard model pruning approach is to prune according to the frequency of the Ngram parameters, i.e., remove low-frequency Ngram parameters and keep high-frequency ones. The shortcoming of this method is that it does not consider the impact of pruning on the performance of the Ngram language model: many low-frequency Ngram parameters that are helpful to the performance of the Ngram language model are pruned by mistake, so the performance of the pruned language model is relatively low.
Summary of the invention
The present invention provides a method and apparatus for pruning a language model, which can reduce the impact of the pruning process on the performance of the Ngram language model.
The technical solution of the present invention is achieved as follows:
A method for pruning a language model comprises:
performing Ngram statistics on training corpus data to form an Ngram list of an original Ngram language model, the Ngram list comprising all Ngrams in the original Ngram language model;
for each Ngram in the Ngram list, calculating the relative entropy between the probability distribution of the Ngram language model after pruning that Ngram and the probability distribution of the original Ngram language model;
according to actual requirements, deleting at least one Ngram with small relative entropy from the Ngram list to obtain the pruned Ngram language model.
A device for pruning a language model comprises:
a statistics module, configured to perform Ngram statistics on training corpus data to form an Ngram list of an original Ngram language model, the Ngram list comprising all Ngrams in the original Ngram language model;
a calculation module, configured to calculate, for each Ngram in the Ngram list, the relative entropy between the probability distribution of the Ngram language model after pruning that Ngram and the probability distribution of the original Ngram language model;
a pruning module, configured to delete, according to actual requirements, at least one Ngram with small relative entropy from the Ngram list to obtain the pruned Ngram language model.
It can be seen that the method and apparatus for pruning a language model proposed by the present invention calculate, for every Ngram in the Ngram language model, the relative entropy between the probability distribution of the Ngram language model after pruning that Ngram and the probability distribution of the original Ngram language model, and prune the Ngrams with small relative entropy. Because a smaller relative entropy indicates a smaller difference between the two language-model probability distributions, the present invention can reduce the impact of the pruning process on the performance of the Ngram language model.
Description of drawings
Fig. 1 is a flowchart of the method for pruning a language model proposed by the present invention.
Embodiment
The present invention proposes a method for pruning a language model. As shown in the flowchart of Fig. 1, the method for pruning a language model proposed by the present invention comprises:
Step 101: performing Ngram statistics on training corpus data to form an Ngram list of an original Ngram language model, the Ngram list comprising all Ngrams in the original Ngram language model;
Step 102: for each Ngram in the Ngram list, calculating the relative entropy between the probability distribution of the Ngram language model after pruning that Ngram and the probability distribution of the original Ngram language model;
Step 103: deleting at least one Ngram with small relative entropy from the Ngram list to obtain the pruned Ngram language model.
Relative entropy is a measure of the difference between two probability distributions. For an Ngram language model, when a certain Ngram is pruned, the probability distribution of the Ngram language model changes from before the pruning to after it; the relative entropy between these two probability distributions is calculated by the following formula:
D_{KL} = \sum_{h,w} P(h, w) \times \{\log[P(w \mid h)] - \log[P'(w \mid h)]\}, wherein,
D_{KL} denotes the relative entropy;
h denotes the history words in the Ngram;
w denotes the current word in the Ngram;
P(h, w) denotes the joint probability that h and w appear;
P(w \mid h) denotes the conditional probability, given by the Ngram language model before this Ngram is pruned, that w appears under the condition that h appears;
P'(w \mid h) denotes the conditional probability, given by the Ngram language model through the smoothing algorithm after this Ngram is pruned, that w appears under the condition that h appears.
It can be seen from the above formula that when the language model adopts different smoothing algorithms, the probability P'(w \mid h) is computed differently, and accordingly the calculation of the relative entropy also differs.
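To make the criterion concrete, a Python sketch of the general formula is given below; it scores a single candidate Ngram by summing over the (h, w) pairs whose conditional probability changes when that Ngram is pruned. Passing the pruned model as a callable is an illustrative choice, not something prescribed by the patent; the smoothing-specific closed forms follow in the text.

```python
import math

def relative_entropy(p_joint, p_orig, p_pruned):
    """D_KL = sum_{h,w} P(h, w) * (log P(w|h) - log P'(w|h)).

    p_joint  maps (h, w) to the joint probability P(h, w);
    p_orig   maps (h, w) to P(w|h) in the original model;
    p_pruned is a function (h, w) -> P'(w|h) in the model with one Ngram removed.
    Only pairs with non-zero probabilities on both sides contribute.
    """
    d_kl = 0.0
    for (h, w), p_hw in p_joint.items():
        p_before = p_orig[(h, w)]
        p_after = p_pruned(h, w)
        if p_before > 0.0 and p_after > 0.0:
            d_kl += p_hw * (math.log(p_before) - math.log(p_after))
    return d_kl
```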
According to the two different smoothing-algorithm formulas above, when the backoff smoothing algorithm is adopted, the above relative entropy is calculated by the following formula:
D_{KL} = \frac{C(w, h)}{N} \times \{\log P(w \mid h) - \log[\alpha(h) \times P(w \mid h')]\}, wherein,
C(w, h) denotes the number of times h and w occur together in the corpus;
N denotes the sum of the occurrence counts of all Ngrams;
α(h) is an adjustment coefficient and is a function of h;
P(w \mid h') denotes the conditional probability, given by the lower-order Ngram language model, that w appears under the condition that h' appears;
h' denotes the history words in the lower-order Ngram language model.
When the interpolation smoothing algorithm is adopted, the above relative entropy is calculated by the following formula:
D_{KL} = \frac{C(w, h)}{N} \times \{\log[\lambda \times P(w \mid h) + (1 - \lambda) \times P(w \mid h')] - \log[(1 - \lambda) \times P(w \mid h')]\}, wherein,
C(w, h) denotes the number of times w and h occur together in the corpus;
N denotes the sum of the occurrence counts of all Ngrams;
λ denotes the interpolation coefficient;
P(w \mid h') denotes the conditional probability, given by the lower-order Ngram language model, that w appears under the condition that h' appears;
h' denotes the history words in the lower-order Ngram language model.
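The two closed-form expressions above restrict the sum to the single (h, w) pair being pruned and approximate P(h, w) by C(w, h)/N. A Python sketch of both cases follows; the parameter names mirror the symbols in the formulas and are otherwise illustrative.

```python
import math

def d_kl_backoff(c_hw, n_total, p_w_given_h, alpha_h, p_w_given_h_prime):
    """Backoff case: D_KL = C(w,h)/N * (log P(w|h) - log[alpha(h) * P(w|h')])."""
    return (c_hw / n_total) * (
        math.log(p_w_given_h) - math.log(alpha_h * p_w_given_h_prime))

def d_kl_interpolation(c_hw, n_total, p_w_given_h, lam, p_w_given_h_prime):
    """Interpolation case:
    D_KL = C(w,h)/N * (log[lam*P(w|h) + (1-lam)*P(w|h')] - log[(1-lam)*P(w|h')])."""
    smoothed = lam * p_w_given_h + (1.0 - lam) * p_w_given_h_prime
    return (c_hw / n_total) * (
        math.log(smoothed) - math.log((1.0 - lam) * p_w_given_h_prime))
```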
In the Ngram language model, each Ngram is scored with the above formulas, and all Ngrams can be sorted according to their relative entropy; according to the sorted order, the Ngrams with smaller relative entropy are pruned, thereby obtaining a language model close to the original Ngram model.
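Putting the pieces together, a minimal sketch (assuming the Ngrams are hashable keys and a scoring function such as the ones above is available) of the scoring-and-pruning step just described: score every Ngram, sort in ascending order of relative entropy, and delete from the bottom of the list until the desired model size is reached. The target_size parameter is an illustrative stand-in for the actual requirement, e.g. a memory budget.

```python
def prune_language_model(ngram_list, score_fn, target_size):
    """Keep the target_size Ngrams whose removal would change the model most.

    ngram_list is an iterable of Ngrams; score_fn(ngram) returns the relative
    entropy between the model with that Ngram pruned and the original model.
    Ngrams with the smallest relative entropy are deleted first, so the pruned
    model stays as close as possible to the original distribution.
    """
    scored = sorted(ngram_list, key=score_fn)   # ascending relative entropy
    num_to_delete = max(0, len(scored) - target_size)
    deleted, kept = scored[:num_to_delete], scored[num_to_delete:]
    return kept, deleted
```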
The present invention also proposes a device for pruning a language model, comprising:
a statistics module, configured to perform Ngram statistics on training corpus data to form an Ngram list of an original Ngram language model, the Ngram list comprising all Ngrams in the original Ngram language model;
a calculation module, configured to calculate, for each Ngram in the Ngram list, the relative entropy between the probability distribution of the Ngram language model after pruning that Ngram and the probability distribution of the original Ngram language model;
a pruning module, configured to delete at least one Ngram with small relative entropy from the Ngram list to obtain the pruned Ngram language model.
The above calculation module may calculate the relative entropy using the following formula:
D_{KL} = \sum_{h,w} P(h, w) \times \{\log[P(w \mid h)] - \log[P'(w \mid h)]\}, wherein,
D_{KL} denotes the relative entropy;
h denotes the history words in the Ngram;
w denotes the current word in the Ngram;
P(h, w) denotes the joint probability that w and h appear;
P(w \mid h) denotes the conditional probability, given by the Ngram language model before this Ngram is pruned, that w appears under the condition that h appears;
P'(w \mid h) denotes the conditional probability, given by the Ngram language model through the smoothing algorithm after this Ngram is pruned, that w appears under the condition that h appears.
When the backoff smoothing algorithm is adopted, the above calculation module may calculate the relative entropy using the following formula:
D_{KL} = \frac{C(w, h)}{N} \times \{\log P(w \mid h) - \log[\alpha(h) \times P(w \mid h')]\}, wherein,
C(w, h) denotes the number of times h and w occur together in the corpus;
N denotes the sum of the occurrence counts of all Ngrams;
α(h) is an adjustment coefficient and is a function of h;
P(w \mid h') denotes the conditional probability, given by the lower-order Ngram language model, that w appears under the condition that h' appears;
h' denotes the history words in the lower-order Ngram language model.
When the interpolation smoothing algorithm is adopted, the above calculation module may calculate the relative entropy using the following formula:
D_{KL} = \frac{C(w, h)}{N} \times \{\log[\lambda \times P(w \mid h) + (1 - \lambda) \times P(w \mid h')] - \log[(1 - \lambda) \times P(w \mid h')]\}, wherein,
C(w, h) denotes the number of times w and h occur together in the corpus;
N denotes the sum of the occurrence counts of all Ngrams;
λ denotes the interpolation coefficient;
P(w \mid h') denotes the conditional probability, given by the lower-order Ngram language model, that w appears under the condition that h' appears;
h' denotes the history words in the lower-order Ngram language model.
In summary, the method and apparatus for pruning a language model proposed by the present invention calculate, for each Ngram, the relative entropy between the probability distribution of the Ngram language model after pruning that Ngram and the probability distribution of the original Ngram language model, and, according to actual requirements, prune the Ngrams with smaller relative entropy, thereby obtaining a language model close to the original Ngram model and reducing the impact of the pruning process on the performance of the Ngram language model. The actual requirement may be the size of the available computer memory: the number of Ngrams to be pruned is determined according to the memory size, and generally the Ngrams are deleted one by one in order of relative entropy from small to large until the requirement is met. With the same number of Ngram model parameters, the Ngram language model pruning method proposed by the present invention obtains an Ngram language model of higher quality. The present invention can be applied to related fields such as speech recognition, handwriting recognition and optical character recognition. On the basis of the present invention, an information retrieval system based on a language model can also be built, improving the performance of the information retrieval system.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A method for pruning a language model, characterized in that the method comprises:
performing Ngram statistics on training corpus data to form an Ngram list of an original Ngram language model, the Ngram list comprising all Ngrams in the original Ngram language model;
for each Ngram in the Ngram list, calculating the relative entropy between the probability distribution of the Ngram language model after pruning that Ngram and the probability distribution of the original Ngram language model;
deleting at least one Ngram with small relative entropy from the Ngram list to obtain the pruned Ngram language model.
2. The method according to claim 1, characterized in that the relative entropy is calculated by the following formula:
D_{KL} = \sum_{h,w} P(h, w) \times \{\log[P(w \mid h)] - \log[P'(w \mid h)]\}, wherein,
D_{KL} denotes the relative entropy;
P(h, w) denotes the joint probability that h and w appear;
P(w \mid h) denotes the conditional probability, given by the Ngram language model before this Ngram is pruned, that w appears under the condition that h appears;
P'(w \mid h) denotes the conditional probability, given by the Ngram language model through the smoothing algorithm after this Ngram is pruned, that w appears under the condition that h appears.
3. The method according to claim 2, characterized in that, when the backoff smoothing algorithm is adopted, the relative entropy is calculated by the following formula:
D_{KL} = \frac{C(w, h)}{N} \times \{\log P(w \mid h) - \log[\alpha(h) \times P(w \mid h')]\}, wherein,
C(w, h) denotes the number of times h and w occur together in the corpus;
N denotes the sum of the occurrence counts of all Ngrams;
α(h) is an adjustment coefficient and is a function of h;
P(w \mid h') denotes the conditional probability, given by the lower-order Ngram language model, that w appears under the condition that h' appears;
h' denotes the history words in the lower-order Ngram language model.
4. The method according to claim 2, characterized in that, when the interpolation smoothing algorithm is adopted, the relative entropy is calculated by the following formula:
D_{KL} = \frac{C(w, h)}{N} \times \{\log[\lambda \times P(w \mid h) + (1 - \lambda) \times P(w \mid h')] - \log[(1 - \lambda) \times P(w \mid h')]\}, wherein,
C(w, h) denotes the number of times h and w occur together in the corpus;
N denotes the sum of the occurrence counts of all Ngrams;
λ denotes the interpolation coefficient;
P(w \mid h') denotes the conditional probability, given by the lower-order Ngram language model, that w appears under the condition that h' appears;
h' denotes the history words in the lower-order Ngram language model.
5. The method according to any one of claims 1 to 4, characterized in that the pruned Ngram language model is applied in an input method engine.
6. A device for pruning a language model, characterized in that the device comprises:
a statistics module, configured to perform Ngram statistics on training corpus data to form an Ngram list of an original Ngram language model, the Ngram list comprising all Ngrams in the original Ngram language model;
a calculation module, configured to calculate, for each Ngram in the Ngram list, the relative entropy between the probability distribution of the Ngram language model after pruning that Ngram and the probability distribution of the original Ngram language model;
a pruning module, configured to delete at least one Ngram with small relative entropy from the Ngram list to obtain the pruned Ngram language model.
7. The device according to claim 6, characterized in that the calculation module calculates the relative entropy using the following formula:
D_{KL} = \sum_{h,w} P(h, w) \times \{\log[P(w \mid h)] - \log[P'(w \mid h)]\}, wherein,
D_{KL} denotes the relative entropy;
h denotes the history words in the Ngram;
w denotes the current word in the Ngram;
P(h, w) denotes the joint probability that h and w appear;
P(w \mid h) denotes the conditional probability, given by the Ngram language model before this Ngram is pruned, that w appears under the condition that h appears;
P'(w \mid h) denotes the conditional probability, given by the Ngram language model through the smoothing algorithm after this Ngram is pruned, that w appears under the condition that h appears.
8. The device according to claim 7, characterized in that, when the backoff smoothing algorithm is adopted, the calculation module calculates the relative entropy using the following formula:
D_{KL} = \frac{C(w, h)}{N} \times \{\log P(w \mid h) - \log[\alpha(h) \times P(w \mid h')]\}, wherein,
C(w, h) denotes the number of times h and w occur together in the corpus;
N denotes the sum of the occurrence counts of all Ngrams;
α(h) is an adjustment coefficient and is a function of h;
P(w \mid h') denotes the conditional probability, given by the lower-order Ngram language model, that w appears under the condition that h' appears;
h' denotes the history words in the lower-order Ngram language model.
9. The device according to claim 7, characterized in that, when the interpolation smoothing algorithm is adopted, the calculation module calculates the relative entropy using the following formula:
D_{KL} = \frac{C(w, h)}{N} \times \{\log[\lambda \times P(w \mid h) + (1 - \lambda) \times P(w \mid h')] - \log[(1 - \lambda) \times P(w \mid h')]\}, wherein,
C(w, h) denotes the number of times h and w occur together in the corpus;
N denotes the sum of the occurrence counts of all Ngrams;
λ denotes the interpolation coefficient;
P(w \mid h') denotes the conditional probability, given by the lower-order Ngram language model, that w appears under the condition that h' appears;
h' denotes the history words in the lower-order Ngram language model.
10. The device according to any one of claims 6 to 9, characterized in that the pruning module is further configured to apply the pruned Ngram language model in an input method engine.
CN2011104169744A 2011-12-14 2011-12-14 Method and device of cutting linguistic model Pending CN103164198A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011104169744A CN103164198A (en) 2011-12-14 2011-12-14 Method and device of cutting linguistic model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2011104169744A CN103164198A (en) 2011-12-14 2011-12-14 Method and device of cutting linguistic model

Publications (1)

Publication Number Publication Date
CN103164198A true CN103164198A (en) 2013-06-19

Family

ID=48587323

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011104169744A Pending CN103164198A (en) 2011-12-14 2011-12-14 Method and device of cutting linguistic model

Country Status (1)

Country Link
CN (1) CN103164198A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105654945A (en) * 2015-10-29 2016-06-08 乐视致新电子科技(天津)有限公司 Training method of language model, apparatus and equipment thereof
CN106257441A (en) * 2016-06-30 2016-12-28 电子科技大学 A kind of training method of skip language model based on word frequency
CN111143518A (en) * 2019-12-30 2020-05-12 北京明朝万达科技股份有限公司 Cross-domain language model training method and device, electronic equipment and storage medium
CN115938351A (en) * 2021-09-13 2023-04-07 北京数美时代科技有限公司 ASR language model construction method, system, storage medium and electronic device

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101271450A (en) * 2007-03-19 2008-09-24 株式会社东芝 Method and device for cutting language model

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101271450A (en) * 2007-03-19 2008-09-24 株式会社东芝 Method and device for cutting language model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Andreas Stolcke: Proceedings of the DARPA Broadcast News Transcription and Understanding Workshop, 31 December 1998 *
Stanley F. Chen et al.: "An Empirical Study of Smoothing Techniques for Language Modeling", Computer Speech and Language *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105654945A (en) * 2015-10-29 2016-06-08 乐视致新电子科技(天津)有限公司 Training method of language model, apparatus and equipment thereof
WO2017071226A1 (en) * 2015-10-29 2017-05-04 乐视控股(北京)有限公司 Training method and apparatus for language model, and device
CN105654945B (en) * 2015-10-29 2020-03-06 乐融致新电子科技(天津)有限公司 Language model training method, device and equipment
CN106257441A (en) * 2016-06-30 2016-12-28 电子科技大学 A kind of training method of skip language model based on word frequency
CN111143518A (en) * 2019-12-30 2020-05-12 北京明朝万达科技股份有限公司 Cross-domain language model training method and device, electronic equipment and storage medium
CN115938351A (en) * 2021-09-13 2023-04-07 北京数美时代科技有限公司 ASR language model construction method, system, storage medium and electronic device
CN115938351B (en) * 2021-09-13 2023-08-15 北京数美时代科技有限公司 ASR language model construction method, system, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
US9336771B2 (en) Speech recognition using non-parametric models
CN1979638A (en) Method for correcting error of voice identification result
CN103280224B (en) Based on the phonetics transfer method under the asymmetric corpus condition of adaptive algorithm
CN105183923A (en) New word discovery method and device
CN106610951A (en) Improved text similarity solving algorithm based on semantic analysis
CN105389349A (en) Dictionary updating method and apparatus
US20130006611A1 (en) Method and system for extracting shadow entities from emails
CN102955857A (en) Class center compression transformation-based text clustering method in search engine
CN103164198A (en) Method and device of cutting linguistic model
CN104182388A (en) Semantic analysis based text clustering system and method
CN112395385A (en) Text generation method and device based on artificial intelligence, computer equipment and medium
CN102999533A (en) Textspeak identification method and system
CN104699797A (en) Webpage data structured analytic method and device
CN109033066A (en) A kind of abstract forming method and device
CN113157903A (en) Multi-field-oriented electric power word stock construction method
CN117271736A (en) Question-answer pair generation method and system, electronic equipment and storage medium
US20200192924A1 (en) Natural language query system
Wang et al. Improving handwritten Chinese text recognition by unsupervised language model adaptation
CN102567322B (en) Text compression method and text compression device
CN116756303A (en) Automatic generation method and system for multi-topic text abstract
CN114266249A (en) Mass text clustering method based on birch clustering
KR102117281B1 (en) Method for generating chatbot utterance using frequency table
Yamron et al. Statistical models of topical content
CN111125299A (en) Dynamic word bank updating method based on user behavior analysis
Yamron et al. Statistical models for tracking and detection

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
ASS Succession or assignment of patent right

Owner name: SHENZHEN SHIJI LIGHT SPEED INFORMATION TECHNOLOGY

Free format text: FORMER OWNER: SHENZHEN TENCENT COMPUTER SYSTEM CO., LTD.

Effective date: 20131021

C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20131021

Address after: 518057 Tencent Building, 16, Nanshan District hi tech park, Guangdong, Shenzhen

Applicant after: Shenzhen Shiji Guangsu Information Technology Co., Ltd.

Address before: Floors 5-10, Fiyta Building, South Road, High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province, 518057

Applicant before: Shenzhen Tencent Computer System Co., Ltd.

C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20130619