CN103902992A - Human face recognition method - Google Patents

Human face recognition method

Info

Publication number
CN103902992A
CN103902992A (application CN201410173445.XA); granted publication CN103902992B
Authority
CN
China
Prior art keywords
human face
face
face recognition
generate
model based
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410173445.XA
Other languages
Chinese (zh)
Other versions
CN103902992B (en)
Inventor
李俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Yisheng Intelligent Technology Co.,Ltd.
Original Assignee
ZHUHAI YISHENG ELECTRONICS TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZHUHAI YISHENG ELECTRONICS TECHNOLOGY Co Ltd filed Critical ZHUHAI YISHENG ELECTRONICS TECHNOLOGY Co Ltd
Priority to CN201410173445.XA priority Critical patent/CN103902992B/en
Publication of CN103902992A publication Critical patent/CN103902992A/en
Priority to PCT/CN2014/089652 priority patent/WO2015165227A1/en
Application granted granted Critical
Publication of CN103902992B publication Critical patent/CN103902992B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Geometry (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a human face recognition method comprising the following steps: (S1) a face elastic bunch graph is generated; (S2) an appearance-based face recognition model is generated, and the cosine similarity between the vector of the appearance-based face recognition model and the vector of an existing face model in a database is calculated; (S3) a geometric-feature-based face recognition model is generated, and the cosine similarity between the vector of the geometric-feature-based face recognition model and the vector of the existing face model in the database is calculated; (S4) the similarity scores of (S2) and (S3) are mixed using logistic regression; (S5) the face recognition result is decided based on the result of (S4). Because the method mixes the appearance-based face model and the geometric-feature-based face model at the similarity level, it can be applied satisfactorily in real-life environments.

Description

Human face recognition method
Technical field
The present invention relates to a face recognition method.
Background technology
Face recognition technology has developed rapidly in recent years, but it still cannot cope satisfactorily with real-life environments such as outdoor scenes and is mainly used indoors. The difficulties of face recognition remain illumination change, pose change, age change, occlusion, and the like; these affect the algorithms adopted by face recognition systems to different degrees. The classes of face recognition methods, their relative merits, and the recognition difficulties that affect them are as follows:
Appearance-based face recognition methods use the pixel values of the face image to generate a face template; geometric-feature-based methods do not rely on pixels but generate the face template from the geometric relationships between facial feature points (eyes, nose, mouth, ears, ...). Compared with geometric-feature-based methods, which depend on only a few feature points, appearance-based methods can extract very rich facial features from every pixel of the image and therefore achieve higher recognition performance; most successful face recognition methods today are appearance-based.
However, appearance-based methods still cannot cope well with the illumination changes that affect pixel values, whereas geometric-feature-based methods rely on geometric relationships, are not constrained by illumination change, and can compensate for this shortcoming of appearance-based methods.
Because they depend on facial feature points, geometric-feature-based methods require the feature points to be extracted accurately in an earlier stage. Depending on whether the face template is generated by examining the whole face image or only local regions, methods are divided into global face recognition methods and local face recognition methods. Global methods, which examine the whole face image, exhibit both local and global facial features, but have the shortcoming that they cannot cope with pose change; conversely, local methods are more robust to pose change than global methods and have the advantage of reflecting local facial characteristics well.
Elastic Bunch Graph Matching (EBGM), a feature-point-based and therefore local face recognition method, has been among the most successful face recognition methods, but the drawback of local methods is that they cannot reflect the global features of the face. To overcome this, methods combining global and local face recognition have been proposed and bring some performance improvement; however, both are appearance-based and cannot overcome the shortcomings of appearance-based recognition.
In real-life environments, face images vary with illumination, pose, age, occlusion, and the like, which makes recognition difficult. Face recognition technology therefore remains unsatisfactory in such environments and is the subject of intensive research. Much work has been done in this field in recent years, with great progress, but the results still do not meet requirements.
Summary of the invention
The technical problem to be solved by the present invention is to provide a face recognition method that finds facial feature points effectively, is independent of illumination change, and is stable under pose change.
The invention provides a face recognition method comprising the following steps:
S1: generate a face elastic bunch graph;
S2: generate an appearance-based face recognition model and calculate the cosine similarity between its vector and the vector of an existing face model in a database;
S3: generate a geometric-feature-based face recognition model and calculate the cosine similarity between its vector and the vector of the existing face model in the database;
S4: mix the similarity scores of steps S2 and S3 using logistic regression;
S5: decide the face recognition result based on the result of step S4.
Further, when generating the face elastic bunch graph, pattern detection according to Haar features is performed in the detected face region to extract the facial feature points.
Further, generating the face elastic bunch graph comprises: first extracting four points in the detected face region, namely the midpoints of the left and right eyeballs, the mouth midpoint and the lower-jaw point, which together form an initial partial face model; analyzing, in a template graph having 30 feature points, the relationship between each feature point and the four points of the initial partial face model; generating a two-dimensional affine transformation and applying it to the 30 feature points of the template graph to obtain the corresponding positions of the 30 feature points, which yields the initial global face model; and then seeking the correct position of each of the 30 feature points of the initial global face model and generating the face elastic bunch graph with these as its feature points.
Further, when generating the appearance-based face recognition model, a Gabor Jet is extracted at each of the 30 feature points of the face elastic bunch graph and the jets are concatenated to obtain a vector that serves as the initial model of the appearance-based face model; the magnitudes of the complex Gabor Jet coefficients are taken, and a vector whose elements are the 40 magnitudes is formed; PCA and LDA are applied to the initial face model to obtain the appearance-based face recognition model.
Further, when generating the geometric-feature-based face recognition model, the distances between the extracted facial feature points are calculated, feature vectors whose elements are the ratios of the horizontal-axis and vertical-axis components are formed, and PCA and LDA are applied to these feature vectors to obtain the geometric-feature-based face recognition model.
The present invention mixes an appearance-based face recognition method and a geometric-feature-based face recognition method at the similarity level, so that it can be applied satisfactorily in real-life environments; it further proposes a more effective method of locating facial feature points, as well as a geometric-feature-based face recognition method that is independent of illumination change and stable under pose change.
Brief description of the drawings
The accompanying drawings described herein are provided for a further understanding of the present invention and form a part of this application; the schematic embodiments of the present invention and their description are used to explain the present invention and do not constitute an undue limitation of the present invention. In the drawings:
Fig. 1 schematically illustrates a flow chart of the face recognition method provided by an embodiment of the present invention.
Detailed description of the embodiments
The present invention is described in detail below with reference to the accompanying drawings and in conjunction with embodiments.
An embodiment of the present invention provides a face recognition method comprising the following steps:
1. Generating the face elastic bunch graph
In the present invention, four points are first extracted in the detected face region, namely the midpoints of the left and right eyeballs, the mouth midpoint and the lower-jaw point, which together form an initial partial face model. In a template graph having 30 feature points, the relationship between each feature point and the four points of the initial partial face model is analyzed; a two-dimensional affine transformation is generated and applied to the 30 feature points of the template graph to obtain the corresponding positions of the 30 feature points, which yields the initial global face model.
The transformation formula is as follows. Without loss of generality, the required transformation can be written as

    x' = A·x + t

where the first term is the rotation-and-scaling component and the second term is the translation component.
The translation component is computed simply as the gap between the centroids of the two partial models:

    t = c_F − c_T

c_F: the centroid of the initial partial face model formed by the 4 points obtained in the initial stage;
c_T: the centroid of the partial face model formed by the corresponding 4 points taken from the prepared (template) face model.
The rotation-and-scaling matrix A is obtained from the relationship between the corresponding points of the two 4-point partial face models: each partial face model consists of 4 points, and the matrix that brings the 4 corresponding point pairs as close together as possible (with least error) is found by linear regression.
Applying the obtained affine transformation to the template graph generates the initial global face model:

    G = A·G_T + t

G: the face elastic bunch graph (initial global face model);
G_T: the template graph.
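For illustration only, the fit described above may be sketched with NumPy as follows (a minimal sketch, not the patent's implementation; the function names and the example coordinates are illustrative assumptions). The translation is taken as the centroid gap of the two 4-point partial models, and A is fitted by least squares on the centroid-centered points.

```python
# Minimal sketch: 2-D affine transform from 4 corresponding points.
import numpy as np

def fit_partial_model_transform(template_pts, face_pts):
    """template_pts, face_pts: (4, 2) arrays of corresponding points
    (eyeball midpoints, mouth midpoint, lower-jaw point)."""
    c_t = template_pts.mean(axis=0)        # centroid of the template partial model
    c_f = face_pts.mean(axis=0)            # centroid of the detected partial model
    src = template_pts - c_t               # centroid-centered coordinates
    dst = face_pts - c_f
    # Least-squares fit of the 2x2 rotation-and-scaling matrix A: dst ~= src @ A.T
    A_T, *_ = np.linalg.lstsq(src, dst, rcond=None)
    return A_T.T, c_t, c_f

def map_template_points(points, A, c_t, c_f):
    """Apply the fitted transform to the 30 template feature points to obtain
    the initial global face model."""
    return (points - c_t) @ A.T + c_f

# Example with hypothetical pixel coordinates
template4 = np.array([[30.0, 40.0], [70.0, 40.0], [50.0, 80.0], [50.0, 100.0]])
face4 = np.array([[120.0, 95.0], [165.0, 90.0], [145.0, 140.0], [147.0, 162.0]])
A, c_t, c_f = fit_partial_model_transform(template4, face4)
# initial_global_model = map_template_points(template_graph_points, A, c_t, c_f)
```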
After the initial global face model is obtained, a confidence value is computed for each feature point over a certain neighborhood centered on it; the points in that neighborhood are scored as well, and the feature point is updated to the point with the higher confidence.
If no point has a higher confidence, the search for that point stops.
This is repeated until no feature point is updated any more.
In this way the correct positions of all 30 feature points of the initial global face model are sought, and the face elastic bunch graph is generated with these as its feature points.
In the present invention, Haar features are used instead of Gabor features: pattern detection based on Haar features is performed in the detected face region to extract the facial feature points. Instead of examining the pixel value at each individual point, a Haar feature examines sums of pixel values over neighborhoods; the feature detects the sums or differences of the pixel values of various patterns within a candidate region. To improve object detection performance such Haar features must be enriched, which is achieved by training a cascade of classifiers.
Detectors based on Haar features are faster than other detectors and highly accurate, and are widely used for object detection; the Viola-Jones detector, which takes Haar features as its basis, is the most successful face detector. In the present invention, for each feature point, patches centered on that point are extracted from a large face database and a Viola-Jones detector is trained for it.
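A hedged sketch of how such a per-landmark detector could be used to refine a feature point with OpenCV is shown below. The cascade file name is a hypothetical placeholder; the per-landmark cascades themselves would be trained offline (for example with opencv_traincascade) on the patches described above, and the search radius is an illustrative value.

```python
# Sketch: refine one landmark with a per-landmark Viola-Jones cascade.
import cv2
import numpy as np

def refine_landmark(gray_image, point, cascade, search_radius=16):
    """Scan the neighborhood of an initial landmark estimate and move the
    point to the most confident detection; keep the original point if the
    detector fires nowhere in the neighborhood (i.e., no higher confidence)."""
    x, y = int(point[0]), int(point[1])
    x0, y0 = max(x - search_radius, 0), max(y - search_radius, 0)
    roi = gray_image[y0:y + search_radius, x0:x + search_radius]
    detections, _, weights = cascade.detectMultiScale3(
        roi, scaleFactor=1.05, minNeighbors=2, outputRejectLevels=True)
    if len(detections) == 0:
        return point                              # no higher-confidence point: stop
    best = int(np.argmax(weights))                # keep the highest-confidence hit
    bx, by, bw, bh = detections[best]
    return (x0 + bx + bw / 2.0, y0 + by + bh / 2.0)

# Hypothetical per-landmark cascade trained offline (file name is illustrative)
eye_corner_cascade = cv2.CascadeClassifier("cascade_left_eye_corner.xml")
```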
2. Generating the appearance-based face recognition model and matching
A Gabor Jet is extracted at each of the 30 feature points of the face elastic bunch graph, and the jets are concatenated; the resulting vector serves as the initial model of the appearance-based face model.
A Gabor Jet is obtained by convolving the image with Gabor filters at the pixel of interest.
The convolution of a Gabor filter with the image I is calculated with the following formula:

    J_j(x0) = Σ_x I(x) · ψ_j(x0 − x)

The Gabor filter is as follows:

    ψ_j(x) = (‖k_j‖²/σ²) · exp(−‖k_j‖²·‖x‖²/(2σ²)) · [exp(i·k_j·x) − exp(−σ²/2)]

The wave vector k_j determines the type of Gabor filter; in the invention, 5 frequencies and 8 orientations are combined to form 40 Gabor filters in total.
The Gabor Jet is therefore a set of 40 complex coefficients:

    J = {J_j}, j = 1, …, 40
In the present invention the magnitudes of the complex Gabor Jet coefficients are taken, and a vector whose elements are the 40 magnitudes is formed.
PCA and LDA are applied to the initial face model to obtain the appearance-based face recognition model.
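A minimal sketch of such a jet extraction with OpenCV is given below; the filter parameters (kernel size, sigma, the five wavelengths) are illustrative assumptions, not values specified by the patent.

```python
# Sketch: 40 Gabor responses (5 frequencies x 8 orientations), jet magnitudes.
import cv2
import numpy as np

def gabor_responses(gray, ksize=31, sigma=4.0):
    """Return the 40 complex Gabor responses of the whole image."""
    img = gray.astype(np.float32) / 255.0
    responses = []
    for wavelength in (4.0, 4 * 2 ** 0.5, 8.0, 8 * 2 ** 0.5, 16.0):  # 5 frequencies
        for k in range(8):                                            # 8 orientations
            theta = k * np.pi / 8
            # real (cosine) and imaginary (sine) parts of the complex filter
            kr = cv2.getGaborKernel((ksize, ksize), sigma, theta, wavelength, 1.0, 0.0)
            ki = cv2.getGaborKernel((ksize, ksize), sigma, theta, wavelength, 1.0, np.pi / 2)
            re = cv2.filter2D(img, cv2.CV_32F, kr)
            im = cv2.filter2D(img, cv2.CV_32F, ki)
            responses.append(re + 1j * im)
    return responses

def appearance_initial_model(gray, feature_points):
    """Concatenate the 40 jet magnitudes of each of the 30 feature points."""
    responses = gabor_responses(gray)
    jets = []
    for (x, y) in feature_points:
        jets.append([abs(r[int(y), int(x)]) for r in responses])  # magnitudes
    return np.asarray(jets).ravel()                               # 30 x 40 = 1200 values
```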
In the matching stage for two face models, the cosine similarity between the vector of the appearance-based face recognition model obtained above and the vector of an existing face model in the database is calculated:

    sim(a, b) = (a · b) / (‖a‖ · ‖b‖)
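The PCA + LDA reduction and the cosine matching could be sketched, for example, with scikit-learn as follows; the training matrix X and the labels y are random placeholders standing in for a real labelled training set, and the dimensions are illustrative.

```python
# Sketch: PCA + LDA projection of appearance vectors, cosine matching.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1200))          # placeholder appearance vectors
y = rng.integers(0, 20, size=200)         # placeholder identity labels

pca = PCA(n_components=100).fit(X)
lda = LinearDiscriminantAnalysis().fit(pca.transform(X), y)

def project(appearance_vec):
    """Appearance-based face recognition model: PCA followed by LDA."""
    return lda.transform(pca.transform(appearance_vec.reshape(1, -1)))[0]

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Matching: compare a probe model with a model already stored in the database
probe = project(rng.normal(size=1200))
gallery = project(rng.normal(size=1200))
print(cosine_similarity(probe, gallery))
```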
3. Generating the geometric-feature-based face recognition model and matching
Distance-based and ratio-based methods can be mentioned as representative geometric-feature-based face recognition methods of the past. Ratio-based methods originally used the ratios of the distances between facial feature points, but for an image of a head rotated in depth the corresponding distance ratios change, so such methods have the shortcoming of being unstable under pose change.
To overcome this shortcoming, the present invention examines the horizontal component and the vertical component of each distance independently.
For two line segments in the same plane, when that plane rotates, the ratio of the components of the two segments along the rotation direction remains the same, and the lengths of their components orthogonal to the rotation direction, and hence their ratio, also remain consistent.
A person's head generally rotates only about the vertical or the horizontal direction.
Line segments connecting feature points that lie approximately in the same plane are extracted from the face model, and the face template is extracted on the basis of the fact that, for the same person, the ratios of the horizontal-axis and vertical-axis components of corresponding line segments are constant across two face models.
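A small numerical check of this invariance is shown below, with illustrative, made-up coordinates: a yaw rotation about the vertical axis scales every horizontal component by the same factor, so the per-axis ratios used by the method do not change.

```python
# Illustrative check: yaw rotation scales all horizontal components by cos(theta),
# so axis-wise ratios between segments are unchanged.
import numpy as np

pts3d = np.array([[-30.0,  20.0, 0.0],    # coplanar 3-D feature points (x, y, z)
                  [ 30.0,  20.0, 0.0],
                  [  0.0, -25.0, 0.0],
                  [  0.0, -45.0, 0.0]])

def project_after_yaw(points, theta):
    """Rotate about the vertical (y) axis and project orthographically."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    return (points @ rot.T)[:, :2]        # (x, y) image coordinates

def axis_ratio(points2d, pair_a, pair_b, axis):
    d_a = abs(points2d[pair_a[0], axis] - points2d[pair_a[1], axis])
    d_b = abs(points2d[pair_b[0], axis] - points2d[pair_b[1], axis])
    return d_a / d_b

frontal = project_after_yaw(pts3d, 0.0)
rotated = project_after_yaw(pts3d, np.radians(25))
# Horizontal ratio of segment (0,1) to segment (0,2) is the same in both views
print(axis_ratio(frontal, (0, 1), (0, 2), axis=0),
      axis_ratio(rotated, (0, 1), (0, 2), axis=0))
```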
The face template generation phase is as follows:
Among the feature points obtained above, the points of deep curvature, chiefly the nose tip and the ear points, are excluded and the points lying almost in the same plane are selected. Without regard to their order they are formed into all possible pairs (combinations), and for each pair the distances along the vertical axis and along the horizontal axis are computed:

    d_i^h = |x_j − x_k|,    d_i^v = |y_j − y_k|

d_i^h: the i-th horizontal distance;
d_i^v: the i-th vertical distance;
(x_j, y_j): the coordinates of the j-th node;
N: the number of feature points.
Then, separately for each axis of direction and without regard to order, the distances are formed into all possible pairs (combinations), and for each pair the ratio of the two distances it contains is computed:

    r_i^h = d_j^h / d_k^h,    r_i^v = d_j^v / d_k^v

r_i^h: the i-th horizontal ratio;
r_i^v: the i-th vertical ratio;
M: the number of distances on one axis.
The ratios obtained on each axis of direction form a vector, and the two vectors are concatenated to produce the primary template:

    v = (r_1^h, …, r_{M(M−1)/2}^h, r_1^v, …, r_{M(M−1)/2}^v)
The primary template vector obtained in this way contains many redundant features, and its discriminative power is not high.
PCA is applied (separately for each axis of direction) to remove the redundant components and reduce the dimensionality of the vector.
LDA is then applied to the reduced vector to generate a geometric-feature-based face recognition model with high discriminative power.
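For illustration, the construction of the primary template could be sketched as follows (a minimal sketch; the landmark coordinates and the epsilon guard against zero-length distances are assumptions, and the per-axis PCA and LDA would follow as described above).

```python
# Sketch: primary geometric template from pairwise per-axis distances and ratios.
from itertools import combinations
import numpy as np

def primary_geometric_template(points, eps=1e-6):
    """points: (N, 2) array of near-coplanar feature point coordinates (x, y)."""
    pts = np.asarray(points, dtype=float)
    dh = [abs(a[0] - b[0]) for a, b in combinations(pts, 2)]   # horizontal distances
    dv = [abs(a[1] - b[1]) for a, b in combinations(pts, 2)]   # vertical distances
    rh = [di / (dj + eps) for di, dj in combinations(dh, 2)]   # horizontal ratios
    rv = [di / (dj + eps) for di, dj in combinations(dv, 2)]   # vertical ratios
    return np.array(rh + rv)                                   # primary template

# Example with hypothetical coordinates of a few near-coplanar points
landmarks = [(120, 95), (165, 90), (145, 140), (147, 162), (132, 118)]
template = primary_geometric_template(landmarks)
print(template.shape)   # M*(M-1)/2 ratios per axis, with M = N*(N-1)/2 distances
```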
In the matching stage for two face models, the cosine similarity between the vector of the geometric-feature-based face recognition model obtained above and the vector of an existing face model in the database is calculated; the cosine similarity is computed in the same way as for the appearance-based face recognition method.
4. Mixing
The similarity scores of the appearance-based face recognition method and of the geometric-feature-based face recognition method are mixed using logistic regression, with the following formula:

    P = 1 / (1 + exp(−(β_0 + β_1·s_1 + β_2·s_2)))

s_1, s_2: the cosine similarities obtained from the appearance-based and the geometric-feature-based models;
β_0, β_1, β_2: logistic regression coefficients.
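For example, the coefficients could be learned from labelled genuine/impostor comparison pairs, as in the following hedged scikit-learn sketch; the training similarities here are synthetic placeholders, not data from the patent.

```python
# Sketch: similarity-level fusion of the two scores with logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
genuine = np.column_stack([rng.normal(0.8, 0.10, 500), rng.normal(0.75, 0.10, 500)])
impostor = np.column_stack([rng.normal(0.3, 0.15, 500), rng.normal(0.35, 0.15, 500)])
S = np.vstack([genuine, impostor])                 # columns: [s_appearance, s_geometric]
labels = np.concatenate([np.ones(500), np.zeros(500)])

fusion = LogisticRegression().fit(S, labels)       # learns beta_0, beta_1, beta_2
print(fusion.intercept_, fusion.coef_)             # logistic regression coefficients

def fused_match_probability(s_appearance, s_geometric):
    """Step S4: mixed similarity; step S5 thresholds this probability."""
    return float(fusion.predict_proba([[s_appearance, s_geometric]])[0, 1])

print(fused_match_probability(0.72, 0.81))
```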
The present invention mixes an appearance-based face recognition method and a geometric-feature-based face recognition method at the similarity level, so that it can be applied satisfactorily in real-life environments; it further proposes a more effective method of locating facial feature points, as well as a geometric-feature-based face recognition method that is independent of illumination change and stable under pose change.
The foregoing describes only preferred embodiments of the present invention and is not intended to limit the present invention; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (5)

1. A face recognition method, characterized in that it comprises the following steps:
S1: generating a face elastic bunch graph;
S2: generating an appearance-based face recognition model, and calculating the cosine similarity between the vector of the appearance-based face recognition model and the vector of an existing face model in a database;
S3: generating a geometric-feature-based face recognition model, and calculating the cosine similarity between the vector of the geometric-feature-based face recognition model and the vector of the existing face model in the database;
S4: mixing the similarities of step S2 and step S3 using logistic regression;
S5: deciding the face recognition result based on the result of step S4.
2. The face recognition method as claimed in claim 1, characterized in that generating the face elastic bunch graph comprises performing pattern detection according to Haar features in the detected face region to extract the facial feature points.
3. The face recognition method as claimed in claim 2, characterized in that generating the face elastic bunch graph comprises: first extracting four points in the detected face region, namely the midpoints of the left and right eyeballs, the mouth midpoint and the lower-jaw point, which together form an initial partial face model; analyzing, in a template graph having 30 feature points, the relationship between each feature point and the four points of the initial partial face model; generating a two-dimensional affine transformation and applying it to the 30 feature points of the template graph to obtain the corresponding positions of the 30 feature points, which yields an initial global face model; and seeking the correct position of each of the 30 feature points of the initial global face model and generating the face elastic bunch graph with these as its feature points.
4. The face recognition method as claimed in claim 2, characterized in that generating the appearance-based face recognition model comprises: extracting a Gabor Jet at each of the 30 feature points of the face elastic bunch graph and concatenating them to obtain a vector serving as the initial model of the appearance-based face model; taking the magnitudes of the complex Gabor Jet coefficients and forming a vector whose elements are the 40 magnitudes; and applying PCA and LDA to the initial face model to obtain the appearance-based face recognition model.
5. The face recognition method as claimed in claim 4, characterized in that generating the geometric-feature-based face recognition model comprises: calculating the distances between the extracted facial feature points, forming feature vectors whose elements are the ratios of the horizontal-axis and vertical-axis components, and applying PCA and LDA to these feature vectors to obtain the geometric-feature-based face recognition model.
CN201410173445.XA 2014-04-28 2014-04-28 Human face recognition method Active CN103902992B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201410173445.XA CN103902992B (en) 2014-04-28 2014-04-28 Human face recognition method
PCT/CN2014/089652 WO2015165227A1 (en) 2014-04-28 2014-10-28 Human face recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410173445.XA CN103902992B (en) 2014-04-28 2014-04-28 Human face recognition method

Publications (2)

Publication Number Publication Date
CN103902992A true CN103902992A (en) 2014-07-02
CN103902992B CN103902992B (en) 2017-04-19

Family

ID=50994304

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410173445.XA Active CN103902992B (en) 2014-04-28 2014-04-28 Human face recognition method

Country Status (2)

Country Link
CN (1) CN103902992B (en)
WO (1) WO2015165227A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015165227A1 (en) * 2014-04-28 2015-11-05 珠海易胜电子技术有限公司 Human face recognition method
CN105069448A (en) * 2015-09-29 2015-11-18 厦门中控生物识别信息技术有限公司 True and false face identification method and device
CN105160331A (en) * 2015-09-22 2015-12-16 镇江锐捷信息科技有限公司 Hidden Markov model based face geometrical feature identification method
CN105631039A (en) * 2016-01-15 2016-06-01 北京邮电大学 Picture browsing method
CN109214352A (en) * 2018-09-26 2019-01-15 珠海横琴现联盛科技发展有限公司 Dynamic human face retrieval method based on 2D camera 3 dimension imaging technology
CN111783699A (en) * 2020-07-06 2020-10-16 周书田 Video face recognition method based on efficient decomposition convolution and time pyramid network

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2610682C1 (en) * 2016-01-27 2017-02-14 Общество с ограниченной ответственностью "СТИЛСОФТ" Face recognition method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102495999A (en) * 2011-11-14 2012-06-13 深圳市奔凯安全技术有限公司 Face recognition method
CN103440510A (en) * 2013-09-02 2013-12-11 大连理工大学 Method for positioning characteristic points in facial image

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103902992B (en) * 2014-04-28 2017-04-19 珠海易胜电子技术有限公司 Human face recognition method


Also Published As

Publication number Publication date
CN103902992B (en) 2017-04-19
WO2015165227A1 (en) 2015-11-05

Similar Documents

Publication Publication Date Title
CN108921100B (en) Face recognition method and system based on visible light image and infrared image fusion
CN103902992A (en) Human face recognition method
CN107194341B (en) Face recognition method and system based on fusion of Maxout multi-convolution neural network
CN104517104B (en) A kind of face identification method and system based under monitoring scene
CN109101865A (en) A kind of recognition methods again of the pedestrian based on deep learning
CN101819628B (en) Method for performing face recognition by combining rarefaction of shape characteristic
Vijayan et al. Twins 3D face recognition challenge
CN105740779B (en) Method and device for detecting living human face
CN105139000B (en) A kind of face identification method and device removing glasses trace
WO2012077286A1 (en) Object detection device and object detection method
CN101833654B (en) Sparse representation face identification method based on constrained sampling
CN104850825A (en) Facial image face score calculating method based on convolutional neural network
CN105138954A (en) Image automatic screening, query and identification system
CN103942577A (en) Identity identification method based on self-established sample library and composite characters in video monitoring
Ming Robust regional bounding spherical descriptor for 3D face recognition and emotion analysis
CN107066969A (en) A kind of face identification method
CN102521575A (en) Iris identification method based on multidirectional Gabor and Adaboost
Geng et al. Fully automatic face recognition framework based on local and global features
Araujo et al. Fast eye localization without a face model using inner product detectors
CN110796101A (en) Face recognition method and system of embedded platform
CN104008364A (en) Face recognition method
CN111259739A (en) Human face pose estimation method based on 3D human face key points and geometric projection
CN105760815A (en) Heterogeneous human face verification method based on portrait on second-generation identity card and video portrait
Galdámez et al. Ear recognition using a hybrid approach based on neural networks
CN105701486B (en) A method of it realizing face information analysis in video camera and extracts

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP03 Change of name, title or address

Address after: No.306, complex building, 99 University Road, Tangjiawan Town, high tech Zone, Zhuhai City, Guangdong Province, 519000

Patentee after: Zhuhai Yisheng Intelligent Technology Co.,Ltd.

Address before: 519000 Room 102, unit 2, building 31, No. 1288, Tangqi Road, Tangjiawan Town, Zhuhai City, Guangdong Province

Patentee before: ZHUHAI YISHENG ELECTRONICS TECHNOLOGY Co.,Ltd.

CP03 Change of name, title or address