CN103020589A - Face recognition method for single training sample - Google Patents

Face recognition method for single training sample

Info

Publication number
CN103020589A
Authority
CN
China
Prior art keywords
sample
training
training sample
per person
photo
Prior art date
Legal status
Granted
Application number
CN2012104646292A
Other languages
Chinese (zh)
Other versions
CN103020589B (en)
Inventor
许野平
方亮
张传峰
曹杰
刘辰飞
Current Assignee
Synthesis Electronic Technology Co Ltd
Original Assignee
SHANDONG SYNTHESIS ELECTRONIC TECHNOLOGY Co Ltd
Priority date
Filing date
Publication date
Application filed by SHANDONG SYNTHESIS ELECTRONIC TECHNOLOGY Co Ltd filed Critical SHANDONG SYNTHESIS ELECTRONIC TECHNOLOGY Co Ltd
Priority to CN201210464629.2A
Publication of CN103020589A
Application granted
Publication of CN103020589B
Legal status: Active
Anticipated expiration


Abstract

The invention discloses a face recognition method for a single training sample. The method comprises the steps of: 1) inputting face sub-feature training sample material; 2) constructing a set of training samples; 3) extracting P sub-features from each training sample; 4) for any given training sample, calculating the difference between its two images for each of the P sub-features with a metric module and constructing a P-dimensional feature data vector v for the sample, where the response value r of v is 1 when the two photos in the training sample show the same person and 0 otherwise; 5) obtaining a machine-learning training result data set from step 4); and 6) inputting two face photos to be recognized and compared, and then performing recognition. The method acquires the capability to recognize face sub-features by constructing, in advance, a multi-sample training set for each face sub-feature, and achieves single-training-sample face recognition by fusing the sub-feature recognition results.

Description

A single-training-sample face recognition method
Technical field
The present invention relates to a single-training-sample face recognition method, and belongs to the technical field of face recognition technology (Face Recognition Technique, FRT).
Background technology
Face recognition is one of the most representative and challenging technical directions in the current field of biometrics. Face recognition refers to identifying one or more faces in static or dynamic scenes, based on a known library of face samples, using image processing and/or pattern recognition techniques.
Face recognition encounters two kinds of situations: one in which training samples are relatively plentiful, and another in which they are scarce. Some face recognition methods have difficulty achieving a satisfactory recognition effect when samples are scarce. In applications such as ID card verification and passport verification, only one face image per person is available for training the recognition system, and many face recognition methods cannot deliver satisfactory recognition results under this constraint.
For applications where training samples are scarce, single-training-sample face recognition has been proposed. The single-training-sample face recognition problem refers to the case where only one photo is stored for each enrolled person in the face feature library; to decide whether a person under test is the enrolled person, the photo captured at the scene can only be compared against the single stored photo in the face feature library.
Several methods are commonly adopted for the single-training-sample face recognition problem:
Method one: expand the single sample in the face feature library into multiple samples by a mapping algorithm, then perform classification with a learning algorithm, converting the problem into a multi-training-sample face recognition problem. The problem with this method is that deriving multiple samples from a single sample distorts the sample content, so the training effect is well below that of a genuine multi-training-sample face recognition algorithm.
Method two: build a three-dimensional face model from the single face sample, converting the two-dimensional image recognition problem into a three-dimensional model recognition problem. Such methods are still immature; an accurate three-dimensional model cannot yet be established from a two-dimensional image.
Method three: divide the face region into several regions of equal size, and use a large number of face photo samples independent of the application to train a classifier for each region. During actual recognition, each region of the image to be recognized is classified separately, and the comparison result is judged from the similarity of corresponding regions in the two images. Such methods have some effect, but the recognition rate is low and they lack practical value.
Summary of the invention
The present invention proposes a new single-training-sample face recognition method: by constructing in advance multi-sample training sets for facial sub-features, it achieves the capability to recognize facial sub-features and, combined with a sub-feature recognition fusion technique, achieves single-training-sample face recognition.
The present invention adopts the following technical solution:
A single-training-sample face recognition method comprises the following steps:
1) Input face sub-feature training sample material: prepare a group of face photos of capacity M = m[1] + m[2] + ... + m[N], where N is the number of people photographed for the training samples and m[i] (1 ≤ i ≤ N, m[i] ≥ 1) is the total number of photos of the i-th person under the given different shooting conditions;
2) Construct training samples: pair the M training materials in twos, producing M × M face photo pairs as training samples;
3) Extract P sub-features from each said training sample, and obtain P sub-feature metric modules that each measure the difference between the corresponding sub-features of the two photos in a training sample;
4) For any given training sample, calculate the differences between its two images with the P sub-feature metric modules and construct the P-dimensional sample feature data vector v of the sample; the response of v is r = 1 if the two photos in the training sample show the same person, and r = 0 otherwise;
5) From step 4), for the M × M training vectors and corresponding responses, obtain the training result data set of machine learning by a machine learning method;
6) Input the two face photos to be recognized and compared, call the P sub-feature metric modules to compute P distances in the sense of a topological metric space, and form the vector v' to be tested; according to the machine learning algorithm and training result data set of step 5), predict the value r' corresponding to v'; when r' = 1, judge that the two photos correspond to the same person; when r' = 0, judge that they correspond to different people.
According to the above single-training-sample face recognition method of the present invention, a moderate amount of face sub-feature training material is used to construct the sub-feature metric modules and generate the P-dimensional sample data vectors v, and a machine learning algorithm forms the training result data set; step 6) then judges the single stored training photo against the input photo to be identified using the adopted machine learning algorithm and the training result data set. This approach greatly improves the recognition rate and gives the single-training-sample face recognition method an industrial application prospect. In step 6), the difference between two sub-feature vectors generally refers to their distance in the metric space under the topology in which they are defined.
Preferably, the above single-training-sample face recognition method further comprises, before step 2), a step of scale calibration of the sample material: unifying the mean pupil coordinates across all photos, unifying the interpupillary distance on each photo, and normalizing said photos to the same size.
The above single-training-sample face recognition method further comprises, after the scale calibration of the sample material, a step of converting the sample material to grayscale.
Preferably, the above single-training-sample face recognition method further comprises a step of luminance standardization of the obtained grayscale photos, which can reduce the computation of subsequent steps.
Further, in order to reduce the computation of subsequent steps, in the above single-training-sample face recognition method luminance standardization consists of performing face detection, cropping the face region, and then normalizing the mean luminance and contrast of the face.
Preferably, in the above single-training-sample face recognition method, the standard for mean facial luminance is 127 and the standard for contrast normalization is a luminance standard deviation of 32, which gives good discrimination.
In the above single-training-sample face recognition method, the photos in said step 2) are normalized to a size of 240 × 320 pixels with an interpupillary distance of 64 pixels, which keeps computation relatively low while satisfying recognition requirements.
In the above single-training-sample face recognition method, an RGB color photo is converted to a grayscale image by reading the luminance values of the 3 channels of each pixel and applying Y = ((R*299) + (G*587) + (B*114)) / 1000.
In the above single-training-sample face recognition method, the number of said sub-features is no less than 6 and no more than 38; a suitable number of sub-features is selected to match the processing and storage capacity of the relevant equipment.
In the above single-training-sample face recognition method, the machine learning method is selected from artificial neural network algorithms, support vector machine algorithms, Bayesian classification algorithms and decision tree algorithms.
Embodiment
The recognition rate of current single-training-sample face recognition methods is generally not high, mostly around 65%, which gives them no market prospects. The inventors believe that only a recognition rate above 90% has industrial application value.
According to the present invention, a single-training-sample face recognition method achieves single-training-sample face recognition by effectively fusing multiple sub-recognition features. The concrete steps are described below in the form of a tree structure:
1. Obtain the sample material: its capacity is M = m[1] + m[2] + ... + m[N], where N is the number of people photographed for the training samples and m[i] (1 ≤ i ≤ N, m[i] ≥ 1) is the number of photos of the i-th person under different shooting conditions (such as illumination, pose, and expression). The larger this quantity, the larger the domain finally covered, but the computation also increases correspondingly.
2. Scale-calibrate the sample material to facilitate subsequent steps: the collected portrait photos are standardized in size according to a unified standard.
2-1. According to 2, uniformly scale, rotate, translate, and crop the sample material so that each photo is 240 × 320 pixels, both pupils have an ordinate of 160, the mean pupil abscissa is 120, and the interpupillary distance is 64 pixels. The scaling, rotation, and translation act on the original image elements of the photo itself; for example, if the angle is off, rotating it into place suffices (an alignment sketch follows the note below).
Note: in image processing, rows and columns are indexed automatically by pixel; the abscissa and ordinate correspond to these pixel indices.
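As an illustration only (the patent does not prescribe an implementation), the normalization of 2-1 can be realized as a similarity transform once the pupil centers are known. The following Python/OpenCV sketch assumes pupil coordinates are supplied by a separate detector; the canonical pupil positions (88, 160) and (152, 160) follow from a mean abscissa of 120, an ordinate of 160, and an interpupillary distance of 64:

```python
import numpy as np
import cv2

OUT_W, OUT_H = 240, 320          # target photo size from step 2-1
EYE_MID_X, EYE_Y, IPD = 120.0, 160.0, 64.0

def align_face(img, left_pupil, right_pupil):
    """Scale, rotate and translate img so the detected pupils land on the
    canonical positions; pupil detection is assumed to happen elsewhere."""
    lp = np.array(left_pupil, dtype=np.float64)
    rp = np.array(right_pupil, dtype=np.float64)
    d = rp - lp
    scale = IPD / np.hypot(d[0], d[1])          # enforce the 64-pixel eye distance
    angle = np.degrees(np.arctan2(d[1], d[0]))  # rotate the eye line horizontal
    mid = (lp + rp) / 2.0
    M = cv2.getRotationMatrix2D((float(mid[0]), float(mid[1])), angle, scale)
    M[0, 2] += EYE_MID_X - mid[0]               # move the eye midpoint to (120, 160)
    M[1, 2] += EYE_Y - mid[1]
    return cv2.warpAffine(img, M, (OUT_W, OUT_H))
```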
3. Convert the sample material to grayscale: the RGB color images are converted to grayscale images.
3-1. According to 3, the RGB color image can be converted to a grayscale image with the formula Y = ((R*299) + (G*587) + (B*114)) / 1000.
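A minimal sketch of 3-1, assuming OpenCV's BGR channel order; the integer arithmetic mirrors the formula as written:

```python
import numpy as np

def to_gray(img_bgr):
    """Y = ((R*299) + (G*587) + (B*114)) / 1000, computed per pixel."""
    b, g, r = (img_bgr[:, :, i].astype(np.int32) for i in range(3))
    return ((r * 299 + g * 587 + b * 114) // 1000).astype(np.uint8)
```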
4. Luminance standardization: normalize the mean facial luminance and contrast.
4-1. According to 4, set the mean facial luminance of each photo to 127 with a luminance standard deviation of 32.
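A sketch of 4-1 under the assumption that the face region has already been cropped; it shifts and scales the grayscale values to the stated mean and standard deviation:

```python
import numpy as np

def normalize_luminance(gray, target_mean=127.0, target_std=32.0):
    """Map the face crop to mean 127 and standard deviation 32."""
    g = gray.astype(np.float64)
    std = g.std()
    if std < 1e-6:                      # guard: a flat image has no contrast to rescale
        return np.full_like(gray, int(target_mean))
    out = (g - g.mean()) / std * target_std + target_mean
    return np.clip(out, 0, 255).astype(np.uint8)  # keep values in the 8-bit range
```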
5. Construct training samples: pair the M training materials in twos, producing M × M face photo pairs; these pairs are the training samples.
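A sketch of this pairing, with `photos` as a hypothetical list of (person_id, image) tuples (the names are illustrative, not from the patent); every photo is paired with every photo, including itself, and same-person pairs get the response r = 1:

```python
def build_pairs(photos):
    """Return the M*M training samples as ((img_a, img_b), r) tuples."""
    return [((img_a, img_b), 1 if pid_a == pid_b else 0)
            for pid_a, img_a in photos
            for pid_b, img_b in photos]
```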
6. From the M × M training samples, construct P (P ≥ 1) sub-feature metric modules; each sub-feature metric module can calculate, for a training sample, the difference between the corresponding features of its two photos.
Below are sub-feature metric modules that have passed verification and can be selected; there are 7 of them, and verification shows that up to 38 sub-feature metric modules can be constructed. A sketch combining these modules into the feature vector follows item 6-7.
6-1. According to 6, one kind of sub-feature metric module is implemented by calculating the difference between the chin ordinates of the faces in the two photos of a sample.
6-2. According to 6, one kind of sub-feature metric module is implemented by calculating the difference between the face widths in the two photos of a sample.
6-3. According to 6, one kind of sub-feature metric module is implemented by calculating the difference between the lower-lip ordinates of the faces in the two photos of a sample.
6-4. According to 6, one kind of sub-feature metric module is implemented by calculating the area (in pixels) of the region where the eyebrow areas of the two photos of a sample differ.
6-5. According to 6, one kind of sub-feature metric module is implemented by calculating the gender difference of the faces in the two photos of a sample: the difference is 0 for the same gender and 1 for different genders.
6-6. According to 6, one kind of sub-feature metric module is implemented by calculating the difference between the mouth widths of the faces in the two photos of a sample.
6-7. According to 6, one kind of sub-feature metric module is implemented by calculating the sum of the distances between corresponding nodes of the ASM skeleton models of the faces in the two photos of a sample.
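A sketch of how modules 6-1 through 6-7 can be assembled into the P-dimensional vector v of step 7. The measurement functions (chin_y, face_width, and so on) are hypothetical placeholders for whatever landmark and ASM extraction is actually used; the patent does not define them:

```python
import numpy as np

def make_metric_modules(chin_y, face_width, lower_lip_y, brow_diff_area,
                        gender, mouth_width, asm_nodes):
    """Each module maps a photo pair to one scalar difference."""
    return [
        lambda a, b: abs(chin_y(a) - chin_y(b)),              # 6-1 chin ordinate
        lambda a, b: abs(face_width(a) - face_width(b)),      # 6-2 face width
        lambda a, b: abs(lower_lip_y(a) - lower_lip_y(b)),    # 6-3 lower-lip ordinate
        lambda a, b: brow_diff_area(a, b),                    # 6-4 differing brow area (pixels)
        lambda a, b: 0.0 if gender(a) == gender(b) else 1.0,  # 6-5 gender agreement
        lambda a, b: abs(mouth_width(a) - mouth_width(b)),    # 6-6 mouth width
        lambda a, b: float(np.linalg.norm(                    # 6-7 sum of ASM node distances,
            asm_nodes(a) - asm_nodes(b), axis=1).sum()),      #     asm_nodes -> (n, 2) array
    ]

def feature_vector(modules, img_a, img_b):
    """The P-dimensional sample feature data vector v."""
    return np.array([m(img_a, img_b) for m in modules])
```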
7. For any given training sample, calculate the differences between its two images with the P sub-feature metric modules and construct a P-dimensional sample feature data vector v. When the two photos in the training sample show the same person, the response corresponding to v is r = 1; otherwise r = 0.
8. For the M × M training samples, M × M training vectors and corresponding responses are obtained; by means of a machine learning algorithm, the machine learning training result data set is obtained.
8-1. According to 8, the machine learning algorithm can be an artificial neural network algorithm.
8-2. According to 8, the machine learning algorithm can be a support vector machine algorithm.
8-3. According to 8, the machine learning algorithm can be a Bayesian classification algorithm.
8-4. According to 8, the machine learning algorithm can be a decision tree algorithm (stand-in sketches for these four options follow below).
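The options 8-1 through 8-4 can be sketched with scikit-learn stand-ins; the patent names the algorithm families but not an implementation, so these particular classes and settings are assumptions:

```python
from sklearn.neural_network import MLPClassifier   # 8-1 artificial neural network
from sklearn.svm import SVC                        # 8-2 support vector machine
from sklearn.naive_bayes import GaussianNB         # 8-3 Bayesian classification
from sklearn.tree import DecisionTreeClassifier    # 8-4 decision tree

CLASSIFIERS = {
    "ann": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000),
    "svm": SVC(),
    "bayes": GaussianNB(),
    "tree": DecisionTreeClassifier(),
}
```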
9. Construct the sample to be tested: given the two face photos to be compared, call the P sub-feature metric modules to compute P differences and form the vector v' to be tested. According to the machine learning algorithm and training result data set of 8, predict the value r' corresponding to v'. When r' = 1, judge that the two photos correspond to the same person; when r' = 0, judge that they correspond to different people.
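Steps 8 and 9 together, sketched with the Bayesian option and the helpers introduced above (build_pairs, feature_vector); again an illustration under stated assumptions, not the patent's prescribed implementation:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def train(samples, modules):
    """Step 8: fit a classifier on the M*M (vector, response) pairs."""
    X = np.array([feature_vector(modules, a, b) for (a, b), _ in samples])
    y = np.array([r for _, r in samples])
    return GaussianNB().fit(X, y)   # the fitted model plays the role of the
                                    # "training result data set"

def same_person(clf, modules, img_a, img_b):
    """Step 9: form v' for the test pair and predict r'."""
    v_test = feature_vector(modules, img_a, img_b).reshape(1, -1)
    return int(clf.predict(v_test)[0]) == 1   # r' = 1 -> same person
```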
The machine learning algorithms above are commonly used in image processing and are not described further here.
Verification shows that the recognition rate of the above recognition method is 92.5%–96%.
An embodiment:
1. Create the sample material: create sample material of capacity M = N × 10 = 200 × 10 = 2000, where N = 200 is the number of people photographed for the training samples, with 10 photos per person.
2. Uniformly scale, rotate, translate, and crop the sample material so that each photo is 240 × 320 pixels, both pupils have an ordinate of 160, the mean pupil abscissa is 120, and the interpupillary distance is 64 pixels.
3. Convert the sample material to grayscale: with the formula Y = ((R*299) + (G*587) + (B*114)) / 1000, convert the RGB color images to grayscale images.
4. Luminance standardization: set the mean facial luminance of each photo to 127 with a luminance standard deviation of 32.
5. Construct training samples: pair the M = 2000 training materials in twos, producing M × M = 4,000,000 face photo pairs; these pairs are the training samples.
6. From the M × M = 4,000,000 training samples, construct P = 12 sub-feature metric modules; each module calculates, for a training sample, the difference between the corresponding features of its two photos. The 12 sub-feature modules measure the following features respectively:
(1) eyebrow density;
(2) eyebrow width;
(3) nostril ordinate;
(4) nostril spacing;
(5) mouth center-point ordinate;
(6) upper-lip ordinate;
(7) an ASM model with 68 nodes;
(8) the distribution region of the eyebrows;
(9) the binarized shape of the eyes;
(10) the shape type of the mouth (classified by a clustering algorithm);
(11) the shape type of the nose (classified by a clustering algorithm);
(12) gender.
7. For any given training sample, calculate the differences between its two images with the P = 12 sub-feature metric modules and construct a P = 12 dimensional sample feature data vector v. When the two photos in the sample show the same person, the response corresponding to v is r = 1; otherwise r = 0.
8. For the M × M = 4,000,000 training samples, M × M = 4,000,000 training vectors and corresponding responses are obtained; by means of a Bayes classifier, the machine learning training result data set is obtained.
9. Construct the sample to be tested: given the two face photos to be compared, call the P = 12 sub-feature metric modules to compute P = 12 differences and form the 12-dimensional vector v' to be tested. According to the machine learning algorithm and training result data set of 8, predict the value r' corresponding to v'. When r' = 1, judge that the two photos correspond to the same person; when r' = 0, judge that they correspond to different people.
Verification shows that the recognition rate of this method is 95%.

Claims (10)

1. A single-training-sample face recognition method, characterized in that it comprises the following steps:
1) inputting face sub-feature training sample material: preparing a group of face photos of capacity M = m[1] + m[2] + ... + m[N], where N is the number of people photographed for the training samples and m[i] (1 ≤ i ≤ N, m[i] ≥ 1) is the total number of photos of the i-th person under the given different shooting conditions;
2) constructing training samples: pairing the M training materials in twos to produce M × M face photo pairs as training samples;
3) extracting P sub-features from each said training sample, and obtaining P sub-feature metric modules that each measure the difference between the corresponding sub-features of the two photos in a training sample;
4) for any given training sample, calculating the differences between its two images with the P sub-feature metric modules and constructing the P-dimensional sample feature data vector v of the sample, the response of v being r = 1 if the two photos in the training sample show the same person and r = 0 otherwise;
5) from step 4), for the M × M training vectors and corresponding responses, obtaining the training result data set of machine learning by a machine learning method;
6) inputting the two face photos to be recognized and compared, calling the P sub-feature metric modules to compute P distances in the sense of a topological metric space and form the vector v' to be tested, and predicting the value r' corresponding to v' according to the machine learning algorithm and training result data set of step 5); when r' = 1, judging that the two photos correspond to the same person; when r' = 0, judging that they correspond to different people.
2. The single-training-sample face recognition method according to claim 1, characterized in that it further comprises, before step 2), a step of scale calibration of the sample material: unifying the mean pupil coordinates across all photos, unifying the interpupillary distance on each photo, and normalizing said photos to the same size.
3. The single-training-sample face recognition method according to claim 2, characterized in that it further comprises, after the scale calibration of the sample material, a step of converting the sample material to grayscale.
4. The single-training-sample face recognition method according to claim 3, characterized in that it further comprises a step of luminance standardization of the obtained grayscale photos.
5. The single-training-sample face recognition method according to claim 4, characterized in that luminance standardization consists of performing face detection, cropping the face region, and then normalizing the mean luminance and contrast of the face.
6. The single-training-sample face recognition method according to claim 5, characterized in that the standard for mean facial luminance is 127 and the standard for contrast normalization is a luminance standard deviation of 32.
7. The single-training-sample face recognition method according to claim 1, characterized in that in said step 2) the photos are normalized to a size of 240 × 320 pixels with an interpupillary distance of 64 pixels.
8. The single-training-sample face recognition method according to claim 1, characterized in that, for an RGB color photo, the step of converting to a grayscale image is to read the luminance values of the 3 channels of each pixel and apply Y = ((R*299) + (G*587) + (B*114)) / 1000.
9. The single-training-sample face recognition method according to claim 1, characterized in that the number of said sub-features is no less than 6 and no more than 38.
10. The single-training-sample face recognition method according to claim 1, characterized in that the machine learning method is selected from artificial neural network algorithms, support vector machine algorithms, Bayesian classification algorithms and decision tree algorithms.
CN201210464629.2A 2012-11-19 2012-11-19 Single-training-sample face recognition method Active CN103020589B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210464629.2A CN103020589B (en) 2012-11-19 2012-11-19 Single-training-sample face recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210464629.2A CN103020589B (en) 2012-11-19 2012-11-19 Single-training-sample face recognition method

Publications (2)

Publication Number Publication Date
CN103020589A true CN103020589A (en) 2013-04-03
CN103020589B CN103020589B (en) 2017-01-04

Family

ID=47969180

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210464629.2A Active CN103020589B (en) 2012-11-19 2012-11-19 Single-training-sample face recognition method

Country Status (1)

Country Link
CN (1) CN103020589B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101089874A (en) * 2006-06-12 2007-12-19 华为技术有限公司 Identify recognising method for remote human face image
CN102194131A (en) * 2011-06-01 2011-09-21 华南理工大学 Fast human face recognition method based on geometric proportion characteristic of five sense organs

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SUMIT CHOPRA et al.: "Learning a Similarity Metric Discriminatively, with Application to Face Verification", Computer Vision and Pattern Recognition, 2005 (CVPR 2005), IEEE Computer Society Conference on *
LI Wenge (李文革): "Face Recognition Based on Principal Component Analysis" (基于主成分分析的人脸识别), China Master's Theses Full-text Database, Information Science and Technology *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103927560A (en) * 2014-04-29 2014-07-16 苏州大学 Feature selection method and device
CN103927560B (en) * 2014-04-29 2017-03-29 苏州大学 A kind of feature selection approach and device
WO2015180101A1 (en) * 2014-05-29 2015-12-03 Beijing Kuangshi Technology Co., Ltd. Compact face representation
CN106056074A (en) * 2016-05-27 2016-10-26 广东顺德中山大学卡内基梅隆大学国际联合研究院 Single training sample face identification method based on area sparse
CN106407966A (en) * 2016-11-28 2017-02-15 南京理工大学 Face identification method applied to checking attendance
CN106407966B (en) * 2016-11-28 2019-10-18 南京理工大学 A kind of face identification method applied to attendance
CN108038948A (en) * 2017-12-26 2018-05-15 杭州数梦工场科技有限公司 Verification method and device, the computer-readable recording medium of passenger identity
CN110619945A (en) * 2018-06-19 2019-12-27 西门子医疗有限公司 Characterization of quantities of training for input to machine learning networks
CN110619945B (en) * 2018-06-19 2024-04-26 西门子医疗有限公司 Characterization of the amount of training for the input of a machine learning network
CN110008934A (en) * 2019-04-19 2019-07-12 上海天诚比集科技有限公司 A kind of face identification method
CN110008934B (en) * 2019-04-19 2023-03-24 上海天诚比集科技有限公司 Face recognition method
CN110967678A (en) * 2019-12-20 2020-04-07 安徽博微长安电子有限公司 Data fusion algorithm and system for multiband radar target identification

Also Published As

Publication number Publication date
CN103020589B (en) 2017-01-04

Similar Documents

Publication Publication Date Title
KR102596897B1 (en) Method of motion vector and feature vector based fake face detection and apparatus for the same
US20210027048A1 (en) Human face image classification method and apparatus, and server
CN106548165B (en) A kind of face identification method of the convolutional neural networks based on image block weighting
CN103020589B (en) Single-training-sample face recognition method
CN105335722B (en) Detection system and method based on depth image information
CN105512624B (en) A kind of smiling face's recognition methods of facial image and its device
CN103530599B (en) The detection method and system of a kind of real human face and picture face
WO2018028546A1 (en) Key point positioning method, terminal, and computer storage medium
CN104091176B (en) Portrait comparison application technology in video
TWI439951B (en) Facial gender identification system and method and computer program products thereof
CN105740779B (en) Method and device for detecting living human face
CN110532970B (en) Age and gender attribute analysis method, system, equipment and medium for 2D images of human faces
CN109711281A (en) A kind of pedestrian based on deep learning identifies again identifies fusion method with feature
CN111768336B (en) Face image processing method and device, computer equipment and storage medium
CN105138954A (en) Image automatic screening, query and identification system
CN109598242B (en) Living body detection method
CN103020655B (en) Remote identity authentication method based on single-training-sample face recognition
TW201627917A (en) Method and device for face in-vivo detection
CN104951773A (en) Real-time face recognizing and monitoring system
Reese et al. A comparison of face detection algorithms in visible and thermal spectrums
CN107886507B (en) A kind of salient region detecting method based on image background and spatial position
CN107316029A (en) A kind of live body verification method and equipment
CN103544478A (en) All-dimensional face detection method and system
CN105184771A (en) Adaptive moving target detection system and detection method
CN108647621A (en) A kind of video analysis processing system and method based on recognition of face

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C56 Change in the name or address of the patentee
CP01 Change in the name or title of a patent holder

Address after: No. 699 Shunhua West Road, High-tech Zone, Ji'nan City, Shandong Province 250101, China

Patentee after: SYNTHESIS ELECTRONIC TECHNOLOGY CO., LTD.

Address before: No. 699 Shunhua West Road, High-tech Zone, Ji'nan City, Shandong Province 250101, China

Patentee before: Shandong Synthesis Electronic Technology Co., Ltd.