CN100365578C - Compiler apparatus and linker apparatus - Google Patents

Compiler apparatus and linker apparatus Download PDF

Info

Publication number
CN100365578C
CN100365578C CNB2004100852667A CN200410085266A
Authority
CN
China
Prior art keywords
mentioned
group
data
grouping
cache memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CNB2004100852667A
Other languages
Chinese (zh)
Other versions
CN1609804A (en)
Inventor
山本康博
小川一
瓶子岳人
道本昌平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Socionext Inc
Original Assignee
Matsushita Electric Industrial Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Matsushita Electric Industrial Co Ltd filed Critical Matsushita Electric Industrial Co Ltd
Publication of CN1609804A publication Critical patent/CN1609804A/en
Application granted granted Critical
Publication of CN100365578C publication Critical patent/CN100365578C/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/40 Transformation of program code
    • G06F 8/41 Compilation
    • G06F 8/44 Encoding
    • G06F 8/443 Optimisation
    • G06F 8/4441 Reducing the execution time required by the program code
    • G06F 8/4442 Reducing the number of cache misses; Data prefetching

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Devices For Executing Special Programs (AREA)

Abstract

A compiler capable of increasing the hit rate of the cache memory is a compiler that targets a computer having a cache memory and converts a source program into an object program, the compiler causing a computer to execute the following steps: a grouping step of analyzing grouping information that is used for grouping data objects included in the source program, and placing said data objects into groups based on a result of said analysis; and an object program generation step of generating the object program based on a result of the grouping performed in the grouping step, said object program not allowing data objects belonging to different groups to be laid out in blocks with the same set number of the cache memory.

Description

Compiler apparatus and linker apparatus
Technical field
The present invention relates to a compiler that converts a source program written in a high-level language such as C or C++ into an executable program written in machine language, and in particular to a compiler that converts such a source program into an executable program that can run on a computer having a cache memory.
Background technology
Various compilers that target computers with cache memories have been proposed. For example, a compiler is known that places data objects accessed at temporally close times (for example, data objects with overlapping live ranges) contiguously in the main memory (see, for example, Japanese Laid-Open Patent Application No. H7-129410). By placing data objects that are accessed close together in time contiguously in the main memory, those data objects can be loaded into the same block of the cache memory at once, which improves the cache hit rate.
However, when data objects accessed at close times are to be placed in the same block and the main-memory addresses of those objects are determined, if the total size of the data objects is larger than the block size, not all of the objects can be written into one block at once. As a result, the objects contend with one another for the same block of the cache, and cache misses occur frequently. This problem is particularly pronounced in a direct-mapped cache, in which only one block corresponds to each set.
Summary of the invention
The present invention has been conceived to solve the above problem, and its object is to provide a compiler apparatus that avoids contention for the same block and improves the cache hit rate.
To achieve this object, the compiler apparatus according to the present invention targets a computer having a cache memory and converts a source program into an object program, and comprises: a grouping unit that analyzes grouping information used for grouping the data objects included in the source program and places those data objects into groups; and an object program generation unit that, based on the grouping result of the grouping unit, generates an object program in which data objects belonging to different groups are not placed in blocks with the same set number of the cache memory.
With this configuration, if, for example, the grouping information indicates that data objects with overlapping live ranges belong to different groups, those objects are placed at different set numbers of the cache memory accordingly. Consequently, when the program is executed, data objects with overlapping live ranges never contend for blocks with the same set number of the cache and never evict one another from it. Cache misses therefore become less likely, and the cache hit rate improves. In this specification, "data object" denotes data such as a variable or an array.
The grouping unit may analyze directives to the compiler apparatus contained in the source program and group the data objects contained in the source program accordingly. Preferably, the directive is a pragma directive that instructs the compiler to group a specified set of data objects by the line size of the cache memory, and the grouping unit groups the data objects specified by that pragma directive by the line size of the cache memory, according to the pragma directive contained in the source program.
When the executable program runs, data objects that the user indicates may be accessed at temporally close times are placed in blocks with different set numbers of the cache memory, according to the pragma directive. Data objects that may be accessed at close times therefore never contend for blocks with the same set number of the cache or evict one another. Cache misses become less likely, and the cache hit rate improves.
The directive may also be a pragma directive that places the specified data objects in blocks with independent set numbers and lets them use those blocks exclusively. In that case, the grouping unit comprises: a grouping processing section that, according to the pragma directive contained in the source program, places each data object specified by the directive into its own group; and a set number setting section that assigns a different set number to each group. The object program generation unit generates an object program in which the data object contained in each group is placed in the block of the cache memory corresponding to the set number of that group and uses that block exclusively.
An object program is thus generated in which the data objects specified by the pragma directive exclusively occupy the blocks of the cache memory with the set numbers assigned by the set number setting section. Frequently used data objects can therefore monopolize the cache, which prevents them from being evicted from the cache and enables high-speed processing.
The grouping unit may also analyze profile information generated when the machine language instruction sequence generated from the source program is executed, and group the data objects contained in the source program accordingly. The profile information includes information on the access frequencies of the data objects, and the grouping unit places each data object whose access frequency is equal to or higher than a predetermined threshold into its own group.
When the executable program runs, data objects with high access frequencies are each placed in blocks with different set numbers of the cache memory. Each such object can therefore use its cache block exclusively, and frequently used objects are unlikely to be evicted from the cache. Cache misses are prevented, and the cache hit rate improves.
The profile information may also include information on the live ranges of the data objects, and the grouping unit may place data objects with overlapping live ranges into different groups.
Data objects with overlapping live ranges are then placed in blocks with different set numbers. Data objects accessed during the same period therefore never contend for blocks with the same set number or evict one another. Cache misses become less likely, and the cache hit rate improves.
Preferably, the grouping unit analyzes, from the source program, the overlap of the live ranges of the data objects contained in the source program, and places data objects with overlapping live ranges into different groups.
Again, data objects with overlapping live ranges are placed in blocks with different set numbers, so data objects accessed during the same period never contend for blocks with the same set number or evict one another. Cache misses become less likely, and the cache hit rate improves.
The present invention can be realized not only as a compiler apparatus that generates such an object program, but also as a compilation method whose steps correspond to the characteristic units of the compiler apparatus, and as a program that causes a computer to operate as the compiler apparatus. Such a program can, of course, be distributed via a recording medium such as a CD-ROM or via a transmission medium such as the Internet.
According to the present invention, the cache hit rate is improved when the executable program runs, and processing can be performed at high speed.
Description of drawings
Fig. 1 is a block diagram showing part of the hardware configuration of the computer targeted by the compiler system according to an embodiment of the present invention.
Fig. 2 is a block diagram showing the hardware configuration of the cache memory.
Fig. 3 is a diagram showing the bit structure of each block contained in the cache memory.
Fig. 4 is a diagram outlining the data placement method used by the compiler system for a source program.
Fig. 5 is a functional block diagram showing the configuration of the compiler system.
Fig. 6 is a functional block diagram showing the configuration of the compiler unit according to Embodiment 1.
Fig. 7 is a flowchart of the processing performed by the pragma analysis unit and the placement set information setting unit shown in Fig. 6.
Fig. 8 is a diagram showing an example of a source program containing the pragma "#pragma_overlap_access_object".
Fig. 9 is a diagram showing the grouped data objects.
Fig. 10 is a diagram showing an example of assembly code generated from the source code shown in Fig. 8.
Fig. 11 is a diagram showing an example of a source program containing the pragma "#pragma_cache_set_number".
Fig. 12 is a diagram showing an example of a source program containing the pragma "#pragma_cache_set_monopoly".
Fig. 13 is a flowchart of the processing performed by the address setting unit of the linker unit shown in Fig. 5.
Fig. 14 is a diagram for explaining the processing performed by the address setting unit of the linker unit shown in Fig. 5.
Fig. 15 is a functional block diagram showing the configuration of the compiler unit according to Embodiment 2.
Fig. 16 is a flowchart of the processing performed by the profile information analysis unit and the placement set information setting unit shown in Fig. 15.
Fig. 17 is a diagram for explaining the generation of assembly code based on access frequency information.
Fig. 18 is a diagram showing an example of profile data on the live ranges of data objects.
Fig. 19 is a graph showing the live ranges of the data objects.
Fig. 20 is a diagram showing the result of grouping the data objects.
Fig. 21 is a diagram showing an example of assembly code generated from the profile data shown in Fig. 18.
Fig. 22 is a functional block diagram showing the configuration of the compiler unit according to Embodiment 3.
Fig. 23 is a diagram for explaining overlapping live ranges of data objects.
Fig. 24 is a diagram for explaining the grouping of data objects and the assignment of cache set numbers.
Fig. 25 is a diagram showing an example of assembly code generated from the overlapping live ranges of the data objects shown in Fig. 23.
Embodiment
(Embodiment 1)
(Hardware configuration)
Fig. 1 is a block diagram showing part of the hardware configuration of the computer targeted by the compiler system according to Embodiment 1 of the present invention. The computer 10 comprises a processor 1, a main memory 2, and a cache memory 3. Since the processor 1 and the main memory 2 are configured in the same way as ordinary processors and main memories, their detailed description is not repeated here.
Fig. 2 is a block diagram showing the hardware configuration of the cache memory 3. The cache memory 3 is a direct-mapped (one-way set-associative) cache and comprises an address register 20, a decoder 30, a memory unit 31, a comparator 32, an AND circuit 33, a control unit 38, and a memory I/F (interface) unit 21.
The address register 20 is a register that holds an access address for the main memory 2. The access address is 32 bits wide. As shown in the figure, the access address contains, from the most significant bit downwards, a 21-bit tag address, a 4-bit set index (SI in the figure), and a further 7-bit value. The tag address associates the main memory 2 with the memory unit 31 of the cache memory 3. The set index (SI) designates a set (line, block) of the memory unit 31.
Since the set index (SI) is 4 bits wide, the memory unit 31 has 16 (= 2^4) sets (because the cache is direct-mapped, this corresponds to 16 blocks). Fig. 3 shows the bit structure of each block contained in the memory unit 31. As shown in the figure, one block contains a valid flag V, a 21-bit tag, 128 bytes of line data, and a dirty flag D.
The valid flag V indicates whether the block is valid. The tag is a copy of the 21-bit tag address. The line data is a copy of the 128 bytes of data in the main memory 2 starting at the address held in the address register 20. The dirty flag D indicates whether the block has been written to, that is, whether the data cached in the block differs from the data in the main memory 2 because of a write and therefore needs to be written back to the main memory 2.
Here, the tag address designates the region of the main memory 2 that is mapped onto the line data of the memory unit 31 (the size of this region is the number of sets multiplied by the line size). The size of this region is determined by the 11 address bits below the tag address, i.e. 2K bytes. The set index (SI) designates one of the 16 sets. The block designated by the tag address and the set index (SI) is the unit of replacement. The line size is determined by the 7 bits below the set index (SI), i.e. 128 bytes. Assuming one word is 4 bytes, one line of data is 32 words.
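As a reading aid, the following is a minimal sketch in C of the address decomposition described above (21-bit tag, 4-bit set index, 7-bit offset, for 16 sets of 128-byte lines). It is our own illustration, not part of the patent, and the names are invented for the example.

    #include <stdint.h>
    #include <stdio.h>

    /* Minimal sketch of the address split: 21-bit tag | 4-bit set index (SI)
     * | 7-bit in-line offset, for 16 sets of 128-byte lines. */
    #define LINE_SIZE 128u   /* 2^7 bytes per line */
    #define NUM_SETS   16u   /* 2^4 sets           */

    static uint32_t cache_offset(uint32_t addr) { return addr & (LINE_SIZE - 1u); }
    static uint32_t cache_set(uint32_t addr)    { return (addr >> 7) & (NUM_SETS - 1u); }
    static uint32_t cache_tag(uint32_t addr)    { return addr >> 11; }

    int main(void) {
        uint32_t addr = 0x90000110u;
        printf("tag=0x%x set=%u offset=%u\n",
               (unsigned)cache_tag(addr), (unsigned)cache_set(addr),
               (unsigned)cache_offset(addr));
        return 0;
    }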
The decoder 30 shown in Fig. 2 decodes the 4 bits of the set index (SI) and selects one of the 16 sets of the memory unit 31.
The comparator 32 compares the tag address in the address register 20 with the tag contained in the set selected by the set index (SI), and determines whether they match.
The AND circuit 33 takes the logical AND of the valid flag (V) and the comparison result of the comparator 32. When the AND is 1, line data corresponding to the tag address and the set index (SI) in the address register 20 is present in the memory unit 31, i.e. a cache hit. When the AND is 0, it is a cache miss.
The control unit 38 controls the cache memory 3 as a whole.
(Outline of the data placement method)
Fig. 4 is a diagram outlining the data placement method used for a source program by the compiler system according to the present embodiment. As shown in Fig. 4(a), assume that among the variables contained in the source program there are three groups of variables (data objects), variable groups A to C, that are accessed at temporally close times. Here, the size of the data in each variable group equals the line size of the cache memory 3, i.e. 128 bytes. The compiler system generates machine language instructions that write these three groups into blocks with different set numbers of the cache memory 3. For example, when variable groups A, B and C are to be placed in the blocks of sets 0, 1 and 15 of the cache memory 3, respectively, they are written into regions of the main memory 2 that map to the blocks of sets 0, 1 and 15 when loaded into the cache memory 3, as shown in Fig. 4(b). As a result, as shown in Fig. 4(c), when variable groups A, B and C are loaded from the main memory 2 into the cache memory 3, they are written into the blocks of sets 0, 1 and 15, respectively.
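To make the mapping that Fig. 4 relies on concrete, here is a small sketch (our own example, with assumed addresses) showing which cache set a few candidate main-memory addresses fall into under the 16-set, 128-byte-line geometry above.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustration only: with 16 sets of 128-byte lines, address bits [10:7]
     * select the cache set, so variable groups intended for different sets
     * must be linked at main-memory addresses whose bits [10:7] differ.
     * The concrete addresses below are our own example values. */
    static unsigned set_of(uint32_t addr) { return (unsigned)((addr >> 7) & 0xFu); }

    int main(void) {
        uint32_t group_a = 0x90000000u;  /* -> set 0  */
        uint32_t group_b = 0x90000080u;  /* -> set 1  */
        uint32_t group_c = 0x90000780u;  /* -> set 15 */
        printf("A->set %u, B->set %u, C->set %u\n",
               set_of(group_a), set_of(group_b), set_of(group_c));
        return 0;
    }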
(Compiler system)
Fig. 5 is a functional block diagram showing the configuration of the compiler system according to the present embodiment. The compiler system 40 is a system that transforms a source program 44 into an executable program 58 written in machine language executable by the computer 10 shown in Fig. 1, and comprises a compiler unit 46, an assembler unit 50, a linker unit 54, a simulator unit 60, and a profiler unit 64. Each processing unit is realized as a program that runs on the processor 1 of the computer 10. The compiler system 40 may, however, also be a cross-compilation system that targets the computer 10 but runs on another computer.
The compiler unit 46 receives as input the source program 44 written in a high-level language such as C or C++, a cache parameter 42 consisting of parameter information about the cache memory 3 (for example, the number of sets and the line size), and profile data 66 representing the results of analyzing an execution of the executable program 58, and transforms the source program 44 into an assembly file 48 written in assembly language based on these data.
The assembler unit 50 generates an object file 52 by translating the assembly file 48, written in assembly language, into machine language.
The linker unit 54 combines one or more object files 52 (only one object file 52 is depicted in the figure) and generates an executable program 58 in executable form. The linker unit 54 is provided with an address setting unit 56, which determines the main-memory addresses at which the data objects are stored so that data objects (data or instruction blocks) accessed at temporally close times are placed in blocks with different set numbers of the cache memory 3.
The simulator unit 60 virtually executes the executable program 58 and outputs an execution log 62.
The profiler unit 64 analyzes the execution log 62 and generates profile data 66 that serves as hints for producing an optimal executable program 58, such as the access frequencies and live ranges of variables.
(Compiler unit)
Fig. 6 is a functional block diagram showing the configuration of the compiler unit 46. The compiler unit 46 according to the present embodiment is a processing unit that transforms the source program 44 into the assembly file 48 based on the cache parameter 42 and the source program 44, and comprises a parser unit 72 and an assembly code conversion unit 76.
The parser unit 72 is a preprocessing unit that extracts reserved words (keywords) and the like from the source program 44 to be compiled and performs syntax analysis. In addition to the analysis functions of an ordinary compiler, it has a pragma analysis unit 74 that analyzes pragma directives.
A "pragma (or pragma directive)" is a directive to the compiler unit 46 that the user can place anywhere in the source program 44; it is a character string beginning with "#pragma".
The assembly code conversion unit 76 is a processing unit that converts each statement of the source program 44 passed from the parser unit 72 into intermediate code, then converts the intermediate code into assembly code and outputs the assembly file 48. In addition to the conversion functions of an ordinary compiler, the assembly code conversion unit 76 has a placement set information setting unit 78, which generates assembly code that places the data objects specified by the pragmas analyzed by the pragma analysis unit 74 into blocks with appropriate set numbers of the cache memory 3.
Here, there are the following three kinds of pragma.
(1) #pragma_overlap_access_object a, b, c
(2) #pragma_cache_set_number=n a
    where n is a set number (0 to 15)
(3) #pragma_cache_set_monopoly a, b
Pragma (1) indicates that data objects a, b and c are accessed at temporally close times. Any number of data objects (one or more) may be specified. The meaning of this pragma is described later. Pragma (2) specifies the set number of the cache memory 3 into whose block data object a is to be placed. Pragma (3) instructs that data objects a and b be placed in blocks with different set numbers of the cache memory 3 and that a and b use those blocks exclusively, that is, no data object other than a or b is placed in those blocks.
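A hypothetical source fragment (our own, not reproduced from the patent's figures) showing how the three pragmas above might appear in C source. The variable names are invented, and a space is inserted after "#pragma" so that a standard C compiler treats the directives as (ignored) unknown pragmas; their semantics are defined by the patent's compiler, not by any existing toolchain.

    /* Illustrative only: names and spacing are our assumptions. */
    int a[32], b[32], c[32];   /* accessed at temporally close times          */
    int i[32], j[32], k[32];   /* to be pinned to specific cache sets         */
    int x[32], y[32];          /* to occupy their own cache sets exclusively  */

    #pragma _overlap_access_object a, b, c   /* pragma (1): group by line size */
    #pragma _cache_set_number=0 i            /* pragma (2): place i in set 0   */
    #pragma _cache_set_number=1 j            /* pragma (2): place j in set 1   */
    #pragma _cache_set_number=2 k            /* pragma (2): place k in set 2   */
    #pragma _cache_set_monopoly x, y         /* pragma (3): exclusive, distinct sets */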
Fig. 7 is a flowchart of the processing performed by the pragma analysis unit 74 and the placement set information setting unit 78 shown in Fig. 6.
The pragma analysis unit 74 analyzes the kind of each pragma written in the source program 44 (S1). When the pragma is pragma (1) above (_overlap_access_object in S1), the set of data objects specified after "#pragma_overlap_access_object" is grouped (S2) so that each group is no larger than the line size of one set of the cache memory 3 (i.e. 128 bytes). The grouping processing (S2) is described in more detail below.
Fig. 8 shows an example of a source program containing pragma (1). With the pragma "#pragma_overlap_access_object a, b, c", the user explicitly states that the integer arrays a[32], b[32] and c[32] are accessed at temporally close times. The placement set information setting unit 78 performs the grouping processing (S2) according to this directive. That is, treating the arrays a[32], b[32] and c[32] as one set of data objects, it divides them into groups of 128 bytes each. Since an integer variable is 4 bytes, each of a[32], b[32] and c[32] is 128 bytes. The set of objects is therefore divided into the three groups shown in Fig. 9 (groups data_a, data_b and data_c), with group data_a containing array a[32], group data_b containing array b[32], and group data_c containing array c[32].
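The following is a minimal sketch, assuming a greedy packing strategy (the patent describes the grouping only in prose), of how step S2 could split the objects named by pragma (1) into groups no larger than one cache line. The function and structure names are our own.

    #include <stdio.h>

    #define LINE_SIZE 128   /* bytes per cache line, from the cache parameter */

    struct object { const char *name; int size; };

    /* Greedy packing of pragma-listed objects into groups of <= LINE_SIZE bytes.
     * group_of[i] receives the group number of objs[i]; returns the group count. */
    static int group_by_line_size(const struct object *objs, int n, int *group_of) {
        int group = 0, used = 0;
        for (int i = 0; i < n; i++) {
            if (used > 0 && used + objs[i].size > LINE_SIZE) { group++; used = 0; }
            group_of[i] = group;
            used += objs[i].size;
        }
        return group + 1;
    }

    int main(void) {
        /* a[32], b[32], c[32] with 4-byte ints: 128 bytes each -> 3 groups */
        struct object objs[] = { {"a", 128}, {"b", 128}, {"c", 128} };
        int g[3];
        int ngroups = group_by_line_size(objs, 3, g);
        for (int i = 0; i < 3; i++)
            printf("%s -> group %d\n", objs[i].name, g[i]);
        printf("%d groups\n", ngroups);
        return 0;
    }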
After the grouping processing (S2), the placement set information setting unit 78 assigns a different set number to each group (S3 in Fig. 7). For example, set numbers 0, 1 and 2 are assigned to groups data_a, data_b and data_c, respectively.
Next, the placement set information setting unit 78 generates assembly code that places the data objects of each group in the block of the cache memory 3 with the set number assigned in the set number assignment processing (S3) (S4).
Fig. 10 shows an example of the assembly code generated from the source code shown in Fig. 8. The first three lines indicate that the data object contained in group data_a is stored in a region of the main memory 2 that maps to set 0 of the cache memory 3. The next three lines indicate that the data object contained in group data_b is stored in a region of the main memory 2 that maps to set 1 of the cache memory 3. The last three lines indicate that the data object contained in group data_c is stored in a region of the main memory 2 that maps to set 2 of the cache memory 3.
Taking the first three lines as an example: the first line declares, with the directive "SECTION", a section whose name is "data_a". The second line indicates that the data object shown on the third line is stored in a region of the main memory 2 that is placed in set 0 of the cache memory 3. The third line declares the data object itself: object a (array data a), 128 bytes in size. The lines from the fourth onward are analogous.
When the pragma is pragma (2) above (_cache_set_number in S1), the data objects are grouped according to the pragma's specification (S5), and set numbers are assigned to the groups (S6). For example, in the source program containing pragma (2) shown in Fig. 11, the pragma "#pragma_cache_set_number=0 i" specifies that the array i[32] is assigned set number 0 of the cache memory 3. The same applies to "#pragma_cache_set_number=1 j" and "#pragma_cache_set_number=2 k".
The placement set information setting unit 78 then generates assembly code that places the data objects of each group in the block of the cache memory 3 with the set number assigned in the set number assignment processing (S6) (S4).
When the pragma is pragma (3) above (_cache_set_monopoly in S1), the placement set information setting unit 78 places each of the data objects specified by the pragma into its own group (S7). It then assigns a different set number to each group (S8). For example, in the source program containing pragma (3) shown in Fig. 12, the pragma "#pragma_cache_set_monopoly x, y" specifies that different set numbers of the cache memory 3 are assigned to the arrays x[32] and y[32].
The placement set information setting unit 78 then generates assembly code that places the data objects of each group in the block of the cache memory 3 with the set number assigned in the preceding steps (S7, S8) (S4). For data objects specified by pragma (3), it generates assembly code that lets each specified data object exclusively occupy the block with its assigned cache set number. In this way, frequently used data objects can monopolize the cache memory 3, which prevents them from being evicted from the cache memory 3 and enables high-speed processing.
The above processing (S1 to S8) is repeated for all pragmas (loop A), and assembly code is generated. Pragma (2) "#pragma_cache_set_number" and pragma (3) "#pragma_cache_set_monopoly" may also be specified simultaneously for the same data object.
(Linker unit)
Fig. 13 is a flowchart of the processing performed by the address setting unit 56 of the linker unit 54 shown in Fig. 5, and Fig. 14 is a diagram for explaining that processing. The processing performed by the address setting unit 56 of the linker unit 54 is described below with reference to Figs. 13 and 14.
The address setting unit 56 reads one or more object files 52 and divides the data objects contained in the object files 52 into data objects whose placement set number in the cache memory 3 has been determined and data objects whose set number is undetermined (S11). For example, they are divided into the objects with determined set numbers shown in Fig. 14(a) and the objects with undetermined set numbers shown in Fig. 14(b).
Next, the address setting unit 56 assigns the data objects to the main memory 2 (S12). More precisely, it places the objects with determined set numbers one by one into regions of the main memory 2 that map those objects to the blocks of the cache memory 3 with their assigned set numbers, and it places no object with an undetermined set number into the regions corresponding to those set numbers of the cache memory 3. At this point, as shown in Fig. 14(c), data objects are stored from address 0x90000000 to address 0x90000FFF of the main memory 2. That is, for the data objects of the two set numbers shown in Fig. 14(a), only one object for each set number has been placed so far.
Next, the address setting unit 56 checks whether all data objects with determined set numbers have been placed in the main memory 2 (S13). If all have been placed (YES in S13), the processing ends. If some remain unplaced (NO in S13), the address setting unit 56 places the second and subsequent data objects in the main memory 2 in the same way as the object placement processing described above (S12). At this time, a region corresponding to a set number for which an object with a determined set number has already been placed once is skipped, even if the region is empty (S14). Thus, as shown in Fig. 14(c), the not-yet-placed data object with set number 4 is placed at address 0x90001000 and onward in the memory, and the regions corresponding to set numbers 0, 1 and 3 are left as empty regions.
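A rough sketch of the address-setting idea of steps S11 to S14, under simplifying assumptions of our own (each object fits in one line, and only objects with determined set numbers are handled; the patent describes the algorithm only in prose): the linker walks the main memory in 2 KB cache-image strides and, within each stride, uses only the 128-byte slot whose set index matches an object's assigned set.

    #include <stdint.h>
    #include <stdio.h>

    #define LINE_SIZE   128u
    #define NUM_SETS    16u
    #define CACHE_IMAGE (LINE_SIZE * NUM_SETS)   /* 2 KB of main memory per stride */
    #define BASE_ADDR   0x90000000u

    struct object { const char *name; int set; };   /* set < 0: undetermined */

    /* Give each object with a fixed set number an address whose set-index bits
     * match that set, moving to the next 2 KB cache image whenever the slot for
     * that set in the current image is already taken (S12-S14). */
    static void assign_addresses(const struct object *objs, int n) {
        int next_image[NUM_SETS] = {0};   /* next free cache image per set */
        for (int i = 0; i < n; i++) {
            if (objs[i].set < 0) continue;   /* handled separately (split of S11) */
            int s = objs[i].set;
            uint32_t addr = BASE_ADDR
                          + (uint32_t)next_image[s] * CACHE_IMAGE
                          + (uint32_t)s * LINE_SIZE;
            next_image[s]++;
            printf("%s (set %2d) -> 0x%08X\n", objs[i].name, s, (unsigned)addr);
        }
    }

    int main(void) {
        struct object objs[] = { {"A", 0}, {"B", 1}, {"C", 15}, {"D", 4}, {"E", -1} };
        assign_addresses(objs, 5);
        return 0;
    }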
As described above, in the present embodiment, data objects that the user indicates, through pragmas, are likely to be accessed at temporally close times are placed in blocks with different set numbers of the cache memory 3 when the executable program runs. Data objects that may be accessed at close times therefore never contend for blocks with the same set number of the cache or evict one another. Cache misses become less likely, and the cache hit rate improves.
(Embodiment 2)
Part of the hardware configuration of the target computer of the compiler system according to Embodiment 2 of the present invention is the same as that shown in Figs. 1 to 3, and the configuration of the compiler system according to the present embodiment is the same as that shown in Fig. 5. Their detailed description is therefore not repeated.
Fig. 15 is a functional block diagram showing the configuration of the compiler unit 46 according to the present embodiment. The compiler unit 46 according to the present embodiment is a processing unit that transforms the source program 44 into the assembly file 48 based on the cache parameter 42, the source program 44 and the profile data 66, and comprises a parser unit 82 and an assembly code conversion unit 86.
The parser unit 82 is a preprocessing unit that extracts reserved words (keywords) and the like from the source program 44 to be compiled and performs syntax analysis. In addition to the analysis functions of an ordinary compiler, it has a profile information analysis unit 84 that analyzes the profile data 66. As described in Embodiment 1, the profile data 66 is information that serves as hints for producing an optimal executable program 58, such as the access frequencies and live ranges of data objects (variables and so on).
The assembly code conversion unit 86 is a processing unit that converts each statement of the source program 44 passed from the parser unit 82 into intermediate code, then converts the intermediate code into assembly code and outputs the assembly file 48. In addition to the conversion functions of an ordinary compiler, it has a placement set information setting unit 88, which generates assembly code that places data objects in blocks with appropriate set numbers of the cache memory 3 based on the analysis results of the profile information analysis unit 84.
Fig. 16 is a flowchart of the processing performed by the profile information analysis unit 84 and the placement set information setting unit 88 shown in Fig. 15.
The profile information analysis unit 84 analyzes the kind of information described in the profile data 66 (S21). When the information described in the profile data 66 concerns the access frequencies of data objects (access frequency information in S21), the placement set information setting unit 88 places each data object whose access frequency is equal to or higher than a predetermined threshold into its own group (S22). It places the data objects whose access frequencies are below the threshold into a single group (S23). The placement set information setting unit 88 then assigns a different set number of the cache memory 3 to each group obtained by the grouping processing (S22 and S23) (S24). Finally, the placement set information setting unit 88 generates assembly code that stores these data objects in regions of the main memory 2 such that the data objects of each group are placed in the block of the cache memory 3 with the set number assigned in the set number assignment processing (S24) (S25).
The assembly code generation based on access frequency information (S22 to S25) is described in more detail below with a concrete example. Fig. 17 is a diagram for explaining the generation of assembly code from access frequency information. Assume the access frequency profile shown in Fig. 17(a). Here the access frequency is the ratio of the number of accesses to each data object to the total number of accesses, but the total access count or the access count per unit time, for example, could also be used as the access frequency. Fig. 17(b) expresses the chart of Fig. 17(a) in numbers: data objects a to e (arrays a[32] to e[32]) have access frequencies of 72%, 25%, 2%, 2% and 1%, respectively.
For example, when the threshold is set to 10%, data objects a and b, whose access frequencies are 10% or more, are classified into separate groups A and B (S22 in Fig. 16), and data objects c to e, whose access frequencies are below 10%, are classified into a single group C (S23 in Fig. 16), as shown in Fig. 17(c). Groups A to C are assigned set numbers 0 to 2, respectively (S24 in Fig. 16). Finally, assembly code is generated that stores data objects a to e in regions of the main memory 2 such that object a is placed in the block with set number 0 of the cache memory 3, object b in the block with set number 1 of the cache memory 3, and objects c to e in the block with set number 2 of the cache memory 3 (S25 in Fig. 16).
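A minimal sketch of the frequency-based grouping of steps S22 to S24, using the example values above; the data structures and function names are our own, not the patent's.

    #include <stdio.h>

    #define THRESHOLD 10.0   /* percent; the example value used in the text */

    struct object { const char *name; double freq; };

    /* Each hot object gets its own group (S22), all cold objects share one
     * group (S23), and group numbers double as cache set numbers (S24). */
    static void group_by_frequency(const struct object *objs, int n, int *set_of) {
        int next_set = 0, cold_set = -1;
        for (int i = 0; i < n; i++) {
            if (objs[i].freq >= THRESHOLD) {
                set_of[i] = next_set++;
            } else {
                if (cold_set < 0) cold_set = next_set++;
                set_of[i] = cold_set;
            }
        }
    }

    int main(void) {
        struct object objs[] = { {"a", 72}, {"b", 25}, {"c", 2}, {"d", 2}, {"e", 1} };
        int set_of[5];
        group_by_frequency(objs, 5, set_of);
        for (int i = 0; i < 5; i++)
            printf("%s -> set %d\n", objs[i].name, set_of[i]);  /* a->0 b->1 c,d,e->2 */
        return 0;
    }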
When the information described in the profile data 66 concerns the live ranges of data objects (live range information in S21), the placement set information setting unit 88 examines which live ranges overlap (S26). It then groups the data objects so that objects with overlapping live ranges fall into different groups (S27). Next, the placement set information setting unit 88 assigns a different set number of the cache memory 3 to each group obtained by the grouping processing (S26 and S27) (S28). It then performs the assembly code generation processing described above (S25).
The assembly code generation based on live range information (S26 to S28 and S25) is described in more detail below with a concrete example. Fig. 18 shows an example of profile data 66 on the live ranges of data objects. Fig. 18 shows the live ranges of five data objects a to e; for example, the first row concerns object a, where the value indicating the start of the live range is "0x80000010" and the value indicating its end is "0x800001ff". The same applies to the second and subsequent rows.
Fig. 19(a) shows this live range information as a chart. When it is expressed as an undirected graph whose nodes are the data objects and whose edges indicate overlapping live ranges, the result is as shown in Figs. 19(b) and 19(c). That is, objects a, b and d have mutually overlapping live ranges, and objects c and e have mutually overlapping live ranges (S26 in Fig. 16).
Based on these live range overlaps, the grouping shown in Fig. 20 is performed (S27 in Fig. 16). That is, objects a, b and d, whose live ranges overlap one another, are classified into the different groups A, B and C, respectively; likewise, objects c and e are classified into the different groups B and C, respectively. Groups A to C are then assigned set numbers 0 to 2, respectively (S28 in Fig. 16). Finally, as shown in Fig. 21, assembly code is generated that stores data objects a to e in regions of the main memory 2 such that object a is placed in the block with set number 0 of the cache memory 3, objects b and c in the block with set number 1 of the cache memory 3, and objects d and e in the block with set number 2 of the cache memory 3 (S25 in Fig. 16). Here objects b and c end up in the same group and objects d and e end up in the same group, but these objects could also each be placed in separate groups.
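A small sketch of the live-range grouping of steps S26 to S28. We assume greedy graph colouring over the interference graph built in S26 (the patent specifies only the constraint, not an algorithm), so the concrete assignment may differ from the particular grouping of Fig. 20 while still keeping overlapping objects apart.

    #include <stdio.h>

    #define N 5   /* data objects a..e from the Fig. 18 example */

    /* interference[i][j] = 1 if the live ranges of objects i and j overlap.
     * From Fig. 19: a-b, a-d, b-d overlap, and c-e overlap. */
    static const int interference[N][N] = {
        /* a  b  c  d  e */
        {  0, 1, 0, 1, 0 },   /* a */
        {  1, 0, 0, 1, 0 },   /* b */
        {  0, 0, 0, 0, 1 },   /* c */
        {  1, 1, 0, 0, 0 },   /* d */
        {  0, 0, 1, 0, 0 },   /* e */
    };

    int main(void) {
        const char *names = "abcde";
        int group[N];
        for (int i = 0; i < N; i++) {
            int used[N] = {0};
            for (int j = 0; j < i; j++)
                if (interference[i][j]) used[group[j]] = 1;
            int g = 0;
            while (used[g]) g++;   /* lowest group not used by a neighbour */
            group[i] = g;
        }
        for (int i = 0; i < N; i++)   /* group numbers double as set numbers */
            printf("%c -> group %d -> cache set %d\n", names[i], group[i], group[i]);
        return 0;
    }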
According to the present embodiment described above, when the executable program runs, data objects with high access frequencies are each placed in blocks with different set numbers of the cache memory, and data objects with low access frequencies are placed in a block with yet another set number. Each data object with a high access frequency can thus use its cache block exclusively. Frequently used data objects are therefore unlikely to be evicted from the cache, which prevents cache misses and improves the cache hit rate.
In addition, data objects with overlapping live ranges are placed in blocks with different set numbers. Data objects accessed during the same period therefore never contend for blocks with the same set number or evict one another. Cache misses become less likely, and the cache hit rate improves.
(Embodiment 3)
Part of the hardware configuration of the target computer of the compiler system according to Embodiment 3 of the present invention is the same as that shown in Figs. 1 to 3, and the configuration of the compiler system according to the present embodiment is the same as that shown in Fig. 5. Their detailed description is therefore not repeated.
Fig. 22 is a functional block diagram showing the configuration of the compiler unit 46 according to the present embodiment. The compiler unit 46 according to the present embodiment is a processing unit that transforms the source program 44 into the assembly file 48 based on the cache parameter 42 and the source program 44, and comprises a parser unit 92 and an assembly code conversion unit 86.
The parser unit 92 is a preprocessing unit that extracts reserved words (keywords) and the like from the source program 44 to be compiled and performs syntax analysis. In addition to the analysis functions of an ordinary compiler, it has a live range overlap analysis unit 94 that analyzes the overlap of the live ranges of data objects (variables and so on). The configuration of the assembly code conversion unit 86 is the same as in Embodiment 2, so its detailed description is not repeated.
The live range overlap analysis unit 94 analyzes the source program 44 and examines the overlap of the live ranges of the data objects. For example, when the live ranges of data objects a to f in the source program 44 shown in Fig. 23(a) are analyzed, the chart shown in Fig. 23(b) is obtained. From the live range overlaps in Fig. 23(b), the live range overlap analysis unit 94 constructs the undirected graph shown in Fig. 23(c), whose nodes are the data objects and whose edges indicate overlapping live ranges. That is, objects a, b, e and f have mutually overlapping live ranges, and objects a, c and d have mutually overlapping live ranges. From this live range overlap information, the data objects are grouped and cache set numbers are assigned by the same processing as in Embodiment 2, as shown in Fig. 24. Finally, the assembly code shown in Fig. 25 is generated.
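An illustrative C fragment of our own (the patent's Fig. 23(a) is not reproduced on this page) in which a compile-time live range analysis of the kind described above would find that a, b, e and f overlap in one region while a, c and d overlap in another.

    /* Hypothetical example, not the patent's Fig. 23(a): local arrays whose
     * live ranges a static analysis can determine from definition/use points. */
    int compute(void) {
        int a[32], b[32], e[32], f[32];
        int sum = 0;

        for (int i = 0; i < 32; i++) {        /* a, b, e, f live together here */
            a[i] = i;
            b[i] = 2 * i;
            e[i] = a[i] + b[i];
            f[i] = a[i] - b[i];
            sum += e[i] + f[i];
        }
        /* b, e, f are dead from here on; only a stays live. */

        int c[32], d[32];
        for (int i = 0; i < 32; i++) {        /* a, c, d live together here */
            c[i] = a[i] * a[i];
            d[i] = c[i] + a[i];
            sum += d[i];
        }
        return sum;
    }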
As described above, according to the present embodiment, data objects with overlapping live ranges are placed in blocks with different set numbers. Data objects accessed during the same period therefore never contend for blocks with the same set number or evict one another. Cache misses become less likely, and the cache hit rate improves.
The present invention has been described above through embodiments, but it is not limited to these embodiments.
For example, an n-way set-associative cache may be used as the cache memory.
(Industrial applicability)
The present invention is applicable to compilers, and in particular to compilers that target a computer having a cache memory.

Claims (20)

1. A compiler apparatus that targets a computer having a cache memory and converts a source program into an object program, characterized by comprising:
a grouping unit that analyzes grouping information used for grouping the data included in the source program and groups said data; and
an object program generation unit that, based on the grouping result of said grouping unit, generates an object program in which data belonging to different groups are not placed in blocks with the same set number of said cache memory.
2. The compiler apparatus according to claim 1, characterized in that:
said grouping unit analyzes a directive to the compiler apparatus contained in said source program and groups the data contained in said source program.
3. The compiler apparatus according to claim 2, characterized in that:
said directive is a pragma directive that groups a specified set of data by the line size of said cache memory; and
said grouping unit, according to said pragma directive contained in said source program, groups the set of data specified by the pragma directive by the line size of said cache memory.
4. The compiler apparatus according to claim 2, characterized in that:
said directive is a pragma directive that places specified data in a block with a specified set number of said cache memory;
said grouping unit, according to said pragma directive contained in said source program, groups said data by each specified set number; and
said object program generation unit generates an object program in which the data belonging to each group are placed in the block of said cache memory with the set number specified by said pragma directive.
5. The compiler apparatus according to claim 2, characterized in that:
said directive is a pragma directive that places specified data in blocks with independent set numbers and has the data use those blocks exclusively;
said grouping unit comprises:
a grouping processing section that, according to said pragma directive contained in said source program, places each datum specified by the pragma directive into its own group; and
a set number setting section that assigns a different set number to each group; and
said object program generation unit generates an object program in which the data contained in each group are placed in the block of said cache memory with the set number corresponding to said group and use that block exclusively.
6. The compiler apparatus according to claim 1, characterized in that:
said grouping unit analyzes profile information that is generated when a machine language instruction sequence generated from the source program is executed and that serves as hints for optimizing said object program with respect to the data contained in said source program, and groups the data contained in said source program.
7. The compiler apparatus according to claim 6, characterized in that:
said profile information includes information on the access frequency of said data; and
said grouping unit places each datum whose access frequency is equal to or higher than a predetermined threshold into its own group.
8. The compiler apparatus according to claim 6, characterized in that:
said profile information includes information on the live ranges of said data; and
said grouping unit places data with overlapping live ranges into different groups.
9. The compiler apparatus according to claim 1, characterized in that:
said grouping unit analyzes, from said source program, the overlap of the live ranges of the data contained in said source program, and places data with overlapping live ranges into different groups.
10. A linker apparatus that combines one or more object programs generated by a compiler apparatus and generates an executable program in executable form, characterized in that:
said compiler apparatus targets a computer having a cache memory and converts a source program into an object program, and comprises:
a grouping unit that analyzes grouping information used for grouping the data included in the source program and groups said data; and
an object program generation unit that, based on the grouping result of said grouping unit, generates an object program in which data belonging to different groups are not placed in blocks with the same set number of said cache memory; and
said linker apparatus comprises:
a first address setting unit that, for data whose set number of the block in said cache memory has been determined, sets an address of the main memory of said computer such that said data are placed in the block with that set number; and
a second address setting unit that, for data whose set number of the block in said cache memory is undetermined, sets an address of said main memory such that the data with undetermined set numbers are placed in blocks with set numbers other than the set numbers of the data whose set numbers have been determined.
11. A compilation method that targets a computer having a cache memory and converts a source program into an object program, characterized by comprising:
a grouping step of analyzing grouping information used for grouping the data included in the source program and grouping said data; and
an object program generation step of generating, based on the grouping result of said grouping step, an object program in which data belonging to different groups are not placed in blocks with the same set number of said cache memory.
12. The compilation method according to claim 11, characterized in that:
said grouping step analyzes a directive concerning the compilation method contained in said source program and groups the data contained in said source program.
13. The compilation method according to claim 12, characterized in that:
said directive is a pragma directive that groups a specified set of data by the line size of said cache memory; and
said grouping step, according to said pragma directive contained in said source program, groups the set of data specified by the pragma directive by the line size of said cache memory.
14. The compilation method according to claim 12, characterized in that:
said directive is a pragma directive that places specified data in a block with a specified set number of said cache memory;
said grouping step, according to said pragma directive contained in said source program, groups said data by each specified set number; and
said object program generation step generates an object program in which the data belonging to each group are placed in the block of said cache memory with the set number specified by said pragma directive.
15. The compilation method according to claim 12, characterized in that:
said directive is a pragma directive that places specified data in blocks with independent set numbers and has the data use those blocks exclusively;
said grouping step comprises:
a grouping processing substep of placing, according to said pragma directive contained in said source program, each datum specified by the pragma directive into its own group; and
a set number setting substep of assigning a different set number to each group; and
said object program generation step generates an object program in which the data contained in each group are placed in the block of said cache memory with the set number corresponding to said group and use that block exclusively.
16. The compilation method according to claim 11, characterized in that:
said grouping step analyzes profile information that is generated when a machine language instruction sequence generated from the source program is executed and that serves as hints for optimizing said object program with respect to the data contained in said source program, and groups the data contained in said source program.
17. The compilation method according to claim 16, characterized in that:
said profile information includes information on the access frequency of said data; and
said grouping step places each datum whose access frequency is equal to or higher than a predetermined threshold into its own group.
18. The compilation method according to claim 16, characterized in that:
said profile information includes information on the live ranges of said data; and
said grouping step places data with overlapping live ranges into different groups.
19. The compilation method according to claim 11, characterized in that:
said grouping step analyzes, from said source program, the overlap of the live ranges of the data contained in said source program, and places data with overlapping live ranges into different groups.
20. A linking method that combines one or more object programs generated by a compilation method and generates an executable program in executable form, characterized in that:
said compilation method targets a computer having a cache memory and converts a source program into an object program, and comprises:
a grouping step of analyzing grouping information used for grouping the data included in the source program and grouping said data; and
an object program generation step of generating, based on the grouping result of said grouping step, an object program in which data belonging to different groups are not placed in blocks with the same set number of said cache memory; and
said linking method comprises:
a first address setting step of setting, for data whose set number of the block in said cache memory has been determined, an address of the main memory of said computer such that said data are placed in the block with that set number; and
a second address setting step of setting, for data whose set number of the block in said cache memory is undetermined, an address of said main memory such that the data with undetermined set numbers are placed in blocks with set numbers other than the set numbers of the data whose set numbers have been determined.
CNB2004100852667A 2003-10-16 2004-10-18 Compiler apparatus and linker apparatus Active CN100365578C (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2003356921A JP4047788B2 (en) 2003-10-16 2003-10-16 Compiler device and linker device
JP356921/2003 2003-10-16

Publications (2)

Publication Number Publication Date
CN1609804A CN1609804A (en) 2005-04-27
CN100365578C true CN100365578C (en) 2008-01-30

Family

ID=34509811

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2004100852667A Active CN100365578C (en) 2003-10-16 2004-10-18 Compiler apparatus and linker apparatus

Country Status (3)

Country Link
US (1) US7689976B2 (en)
JP (1) JP4047788B2 (en)
CN (1) CN100365578C (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070083783A1 (en) * 2005-08-05 2007-04-12 Toru Ishihara Reducing power consumption at a cache
US7647514B2 (en) * 2005-08-05 2010-01-12 Fujitsu Limited Reducing power consumption at a cache
US7616470B2 (en) * 2006-06-16 2009-11-10 International Business Machines Corporation Method for achieving very high bandwidth between the levels of a cache hierarchy in 3-dimensional structures, and a 3-dimensional structure resulting therefrom
JP2010026851A (en) * 2008-07-22 2010-02-04 Panasonic Corp Complier-based optimization method
JP2011170439A (en) * 2010-02-16 2011-09-01 Nec Corp Compiler, compile method, and compile execution program
US20120089774A1 (en) * 2010-10-12 2012-04-12 International Business Machines Corporation Method and system for mitigating adjacent track erasure in hard disk drives
US8572315B2 (en) 2010-11-05 2013-10-29 International Business Machines Corporation Smart optimization of tracks for cloud computing
JP5597584B2 (en) * 2011-03-29 2014-10-01 三菱電機株式会社 Instruction execution analysis apparatus, instruction execution analysis method, and program
JP5687603B2 (en) * 2011-11-09 2015-03-18 株式会社東芝 Program conversion apparatus, program conversion method, and conversion program
JP2014002557A (en) * 2012-06-18 2014-01-09 Fujitsu Ltd Test data generation method, test method, test data generation device, and test data generation program
JP6191240B2 (en) 2013-05-28 2017-09-06 富士通株式会社 Variable update device, variable update system, variable update method, variable update program, conversion program, and program change verification system
JP6171816B2 (en) * 2013-10-04 2017-08-02 富士通株式会社 Data management program, data management apparatus, and data management method


Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07129410A (en) 1993-11-05 1995-05-19 Fujitsu Ltd Memory allocating method for compiler
US5649137A (en) * 1994-10-20 1997-07-15 Advanced Micro Devices, Inc. Method and apparatus for store-into-instruction-stream detection and maintaining branch prediction cache consistency
US6301652B1 (en) * 1996-01-31 2001-10-09 International Business Machines Corporation Instruction cache alignment mechanism for branch targets based on predicted execution frequencies
US6530075B1 (en) * 1998-12-03 2003-03-04 International Business Machines Corporation JIT/compiler Java language extensions to enable field performance and serviceability
US6438655B1 (en) * 1999-04-20 2002-08-20 Lucent Technologies Inc. Method and memory cache for cache locking on bank-by-bank basis
US7254806B1 (en) * 1999-08-30 2007-08-07 Ati International Srl Detecting reordered side-effects
US6574682B1 (en) * 1999-11-23 2003-06-03 Zilog, Inc. Data flow enhancement for processor architectures with cache
JP2001273138A (en) * 2000-03-24 2001-10-05 Fujitsu Ltd Device and method for converting program
US6708330B1 (en) * 2000-06-13 2004-03-16 Cisco Technology, Inc. Performance improvement of critical code execution
US7107583B2 (en) * 2001-02-16 2006-09-12 Hewlett-Packard Development Company, L.P. Method and apparatus for reducing cache thrashing
US6704833B2 (en) * 2002-01-04 2004-03-09 Hewlett-Packard Development Company, L.P. Atomic transfer of a block of data

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0784799A (en) * 1993-09-10 1995-03-31 Hitachi Ltd Compiling method for reducing cache competition
US5862385A (en) * 1993-09-10 1999-01-19 Hitachi, Ltd. Compile method for reducing cache conflict
JPH08212081A (en) * 1995-02-08 1996-08-20 Hitachi Ltd Memory allocation method, compiling method and compiler
US5848275A (en) * 1996-07-29 1998-12-08 Silicon Graphics, Inc. Compiler having automatic common blocks of memory splitting
CN1228558A (en) * 1998-02-16 1999-09-15 日本电气株式会社 Program transformation method and program transformation system

Also Published As

Publication number Publication date
US20050086651A1 (en) 2005-04-21
CN1609804A (en) 2005-04-27
JP4047788B2 (en) 2008-02-13
JP2005122481A (en) 2005-05-12
US7689976B2 (en) 2010-03-30

Similar Documents

Publication Publication Date Title
CN100365578C (en) Compiler apparatus and linker apparatus
CN110569979B (en) Logical-physical bit remapping method for noisy medium-sized quantum equipment
EP1145105B1 (en) Determining destinations of a dynamic branch
US5107418A (en) Method for representing scalar data dependences for an optimizing compiler
CN104423929B (en) A kind of branch prediction method and relevant apparatus
JP4709933B2 (en) Program code conversion method
CN1804803B (en) Software tool with modeling of asynchronous program flow
US20020013938A1 (en) Fast runtime scheme for removing dead code across linked fragments
US20100050163A1 (en) Caching run-time variables in optimized code
US6829760B1 (en) Runtime symbol table for computer programs
US5940621A (en) Language independent optimal size-based storage allocation
CN103955354B (en) Method for relocating and device
JP2007525727A (en) Block modeling I / O buffer
CN101311901A (en) Program re-writing apparatus
US8910135B2 (en) Structure layout optimizations
CN106055343A (en) Program evolution model-based object code reverse engineering system
KR101224788B1 (en) Software tool with modeling of asynchronous program flow
Bergmann et al. Improving coverage analysis and test generation for large designs
CN1894674A (en) Memory access instruction vectorization
CN110059378B (en) Automatic manufacturing system Petri network state generation method based on GPU parallel computing
CN102360306A (en) Method for extracting and optimizing information of cyclic data flow charts in high-level language codes
JPH096646A (en) Program simulation device
EP1943589A2 (en) Method for generating a simulation program which can be executed on a host computer
WO2007131089A2 (en) Code translation and pipeline optimization
CN111309329B (en) Instruction address self-adaptive repositioning method and program compiling method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20151027

Address after: Kanagawa

Patentee after: Socionext Inc.

Address before: Osaka Japan

Patentee before: Matsushita Electric Industrial Co., Ltd.