CN110895672B - Face recognition method based on artificial fish swarm algorithm - Google Patents

Face recognition method based on artificial fish swarm algorithm

Info

Publication number
CN110895672B
CN110895672B (application CN201811640291.5A; also published as CN110895672A)
Authority
CN
China
Prior art keywords
face
artificial fish
similarity
face contour
state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811640291.5A
Other languages
Chinese (zh)
Other versions
CN110895672A (en)
Inventor
王晓丹
杜永贵
黄梓楠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Yanxiang Smart Technology Co ltd
Original Assignee
EVOC Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by EVOC Intelligent Technology Co Ltd filed Critical EVOC Intelligent Technology Co Ltd
Priority to CN201811640291.5A priority Critical patent/CN110895672B/en
Publication of CN110895672A publication Critical patent/CN110895672A/en
Application granted granted Critical
Publication of CN110895672B publication Critical patent/CN110895672B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a face recognition method based on the artificial fish swarm algorithm. The method comprises the following steps: extracting a face contour from a grayscale face image by using the artificial fish swarm algorithm; extracting a plurality of face feature values from the grayscale face image from which the face contour has been extracted; calculating a face feature similarity from the extracted face feature values and the face feature values stored in a database; and comparing the calculated face feature similarity with a preset threshold, where a similarity greater than the preset threshold indicates that face recognition has succeeded and a similarity not greater than the threshold indicates that it has failed. By extracting the face contour with the artificial fish swarm algorithm and performing recognition with single-template matching, the invention can adaptively extract the most suitable face contour without requiring a large database capacity, thereby improving both the speed and the accuracy of face recognition.

Description

Face recognition method based on artificial fish swarm algorithm
Technical Field
The invention relates to the technical field of face recognition, and in particular to a face recognition method based on the artificial fish swarm algorithm.
Background
In everyday life, identity authentication is required almost everywhere, and the most common authentication methods are card-swiping authentication and fingerprint authentication. In the past two years, with the surge of interest in artificial intelligence, face recognition technology has entered the public eye. At present, template-based face detection mainly builds different templates for different face shapes and then extracts the face contour by multi-template matching: once the environmental conditions are satisfied, the person to be detected must stand in a specified area so that a camera can capture a picture, and the captured picture is then matched many times against a multi-template library to extract that person's face contour.
In the course of implementing the invention, the inventors found at least the following technical problems in the prior art: because the face contour is extracted by multi-template matching, a large database capacity is required; moreover, the existing multi-template matching approach places high demands on the surrounding environment and on facial factors, so the available templates may not match the actual conditions, which leaves the face recognition system with a low discrimination speed and low recognition accuracy.
Disclosure of Invention
The face recognition method based on the artificial fish swarm algorithm provided by the invention extracts the face contour with the artificial fish swarm algorithm and performs face recognition by single-template matching. It can adaptively extract the most suitable face contour without requiring a large database capacity, and can therefore improve both the speed and the accuracy of face recognition.
In a first aspect, the present invention provides a face recognition method based on the artificial fish swarm algorithm, comprising:
extracting a face contour from a grayscale face image by using the artificial fish swarm algorithm;
extracting a plurality of face feature values from the grayscale face image from which the face contour has been extracted;
calculating a face feature similarity from the extracted face feature values and the face feature values stored in a database;
and comparing the calculated face feature similarity with a preset threshold, where a similarity greater than the preset threshold indicates that face recognition has succeeded, and otherwise indicates that face recognition has failed.
The face recognition method based on the artificial fish swarm algorithm provided by this embodiment of the invention extracts the face contour with the artificial fish swarm algorithm and performs face recognition by single-template matching. That is, the artificial fish swarm algorithm compares the grid-partitioned grayscale face image with a single template, judges whether a grid sub-image contains the face contour from the similarity between that sub-image and the template, and, by iteratively searching and comparing, finally obtains the face contour image with the highest similarity to the template.
Drawings
FIG. 1 is a flowchart of the face recognition method based on the artificial fish swarm algorithm according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the process of calculating the contour similarity between a grid sub-image and the face contour template in the above embodiment.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art from the embodiments given herein without creative effort fall within the protection scope of the present invention.
The invention provides a face recognition method based on the artificial fish swarm algorithm. As shown in FIG. 1, the method comprises the following steps:
S11, extracting a face contour from the grayscale face image by using the artificial fish swarm algorithm.
S12, extracting a plurality of face feature values from the grayscale face image from which the face contour has been extracted.
S13, calculating a face feature similarity from the extracted face feature values and the face feature values stored in the database.
S14, comparing the calculated face feature similarity with a preset threshold; if the face feature similarity is greater than the preset threshold, face recognition has succeeded; otherwise, face recognition has failed.
The face recognition method based on the artificial fish swarm algorithm provided by this embodiment extracts the face contour with the artificial fish swarm algorithm and performs face recognition by single-template matching: the artificial fish swarm algorithm compares the grid-partitioned grayscale face image with a single template, judges whether a grid sub-image contains the face contour from the similarity between that sub-image and the template, and, by iteratively searching and comparing, finally obtains the face contour image with the highest similarity to the template.
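For orientation, the following Python sketch strings the four steps together. The helper callables are hypothetical placeholders for the procedures detailed later in this description, not names used in the patent.

def recognize_face(gray_image, database_features, threshold,
                   extract_face_contour, extract_feature_values, feature_similarity):
    contour_image = extract_face_contour(gray_image)              # S11: swarm-based contour search
    features = extract_feature_values(contour_image)              # S12: K geometric feature values
    similarity = feature_similarity(features, database_features)  # S13: ratio-based similarity
    return similarity > threshold                                 # S14: success if above threshold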
Optionally, the step S11 includes:
1) Randomly partition the grayscale face image into grids to generate an artificial fish swarm.
The artificial fish swarm comprises a plurality of artificial fish in different states; each artificial fish corresponds to the grayscale face image under one grid partition, and the state information of an artificial fish comprises the grid size and the horizontal and vertical coordinate values corresponding to that fish. The plurality of artificial fish in different states covers two cases: either the grid sub-images have the same size but occupy different coordinate positions in the original grayscale image, or the grid sub-images differ in size.
2) Randomly select an artificial fish from the swarm, calculate the face contour similarity of the current artificial fish, and record the calculated face contour similarity on the bulletin board.
3) Perform the clustering behavior on the current artificial fish: determine the number of artificial fish partners within the visual range of the current artificial fish and the center position of those partners, and calculate the face contour similarity of the center position.
4) If the face contour similarity of the center position is greater than the face contour similarity recorded on the bulletin board, update the state of the current artificial fish once towards the center position with the set step length, and calculate the face contour similarity of the current artificial fish after the state update.
5) If the face contour similarity of the current artificial fish after the state update is greater than the face contour similarity recorded on the bulletin board, replace the similarity on the bulletin board with it; otherwise, perform the following (tail-chasing) behavior on the current artificial fish after the state update: search for the optimal position within its visual range and calculate the face contour similarity of that optimal position.
6) If the face contour similarity of the optimal position is greater than the face contour similarity recorded on the bulletin board, update the state of the current artificial fish (whose state has already been updated) once more towards the optimal position with the set step length, and calculate the face contour similarity of the current artificial fish after the state update.
7) If the face contour similarity of the current artificial fish after the state update is greater than the face contour similarity recorded on the bulletin board, replace the similarity on the bulletin board with it; otherwise, perform the foraging behavior on the current artificial fish after the state update: randomly select a state within its visual range and calculate the face contour similarity of the current artificial fish in the randomly selected state.
8) If the face contour similarity of the current artificial fish in the randomly selected state is greater than the face contour similarity recorded on the bulletin board, replace the similarity on the bulletin board with it; otherwise, perform the random behavior on the current artificial fish after the state update: randomly update its state within its visual range according to the step length, and calculate the face contour similarity of the current artificial fish in the randomly updated state.
9) If the face contour similarity of the current artificial fish in the randomly updated state is greater than the face contour similarity recorded on the bulletin board, replace the similarity recorded on the bulletin board with it; otherwise, select another artificial fish from the swarm and repeat the above steps until every artificial fish in the swarm has been traversed, so that the grayscale face image with the highest face contour similarity is found and the grayscale face image with the extracted face contour is obtained.
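The nine steps above can be summarised by the following Python sketch. It is a condensed illustration under assumptions (the visual range VISUAL, the step length STEP, the tuple encoding of a fish state and the simplified cascade of behaviors are all illustrative choices), not the patented implementation; contour_similarity stands for the template-matching score discussed below.

import random

VISUAL, STEP = 40, 8   # assumed visual range and step length

def neighbours(fish, swarm):
    # partners within the visual range of the current fish
    return [f for f in swarm
            if f is not fish and abs(f[0] - fish[0]) + abs(f[1] - fish[1]) <= VISUAL]

def move_towards(state, target):
    # update the state one step towards the target position
    return tuple(s + STEP * (1 if t > s else -1 if t < s else 0)
                 for s, t in zip(state, target))

def swarm_search(swarm, contour_similarity):
    board = max(swarm, key=contour_similarity)          # bulletin board: best state so far
    for fish in swarm:                                  # fish state: (x, y, grid_size)
        mates = neighbours(fish, swarm)
        if mates:
            # clustering behavior: move towards the partners' centre if it beats the board
            centre = tuple(sum(c) // len(mates) for c in zip(*mates))
            if contour_similarity(centre) > contour_similarity(board):
                fish = move_towards(fish, centre)
            else:
                # following behavior: move towards the best partner if it beats the board
                best_mate = max(mates, key=contour_similarity)
                if contour_similarity(best_mate) > contour_similarity(board):
                    fish = move_towards(fish, best_mate)
        # foraging behavior: try a randomly selected state in the visual range
        trial = tuple(c + random.randint(-VISUAL, VISUAL) for c in fish)
        if contour_similarity(trial) > contour_similarity(board):
            fish = trial
        else:
            # random behavior: random step within the visual range
            fish = tuple(c + random.randint(-STEP, STEP) for c in fish)
        # update the bulletin board whenever the current fish improves on it
        if contour_similarity(fish) > contour_similarity(board):
            board = fish
    return board                                        # grid partition with the highest similarity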
Optionally, the contour similarity is calculated as follows:
when an artificial fish is in its current state, calculate the contour similarity between the corresponding grid sub-image in the artificial fish swarm and the face contour template to obtain the contour similarity of that artificial fish.
For example, the contour similarity between a grid sub-image and the face contour template can be calculated with the following formula:
[contour similarity formula, rendered as an image in the original publication]
where T(m, n) is the face contour template, which is overlaid on the grayscale face image S(W, H) and translated across it; S_ij(m, n) is the grid sub-image covered by T(m, n); and i, j are the horizontal and vertical coordinates of the lower-left corner of the grid sub-image in the original grayscale image, with 1 ≤ i ≤ W − n and 1 ≤ j ≤ H − m.
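Since the similarity formula itself is reproduced only as an image, the sketch below uses the normalized cross-correlation between the grid sub-image S_ij(m, n) and the template T(m, n), one common template-matching measure consistent with the symbol definitions above; it is an assumption, not a reproduction of the published formula.

import numpy as np

def contour_similarity(gray_image, template, i, j):
    m, n = template.shape                               # template height m, width n
    sub = gray_image[j:j + m, i:i + n].astype(float)    # S_ij(m, n): region covered by T
    t = template.astype(float)
    denom = np.sqrt((sub ** 2).sum() * (t ** 2).sum())
    # normalized cross-correlation (assumed similarity measure)
    return float((sub * t).sum() / denom) if denom else 0.0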
Optionally, before step 1) of randomly partitioning the grayscale face image into grids to generate the artificial fish swarm, the method further comprises: converting a captured color face image into a grayscale face image.
For example, the captured color face image is converted into the grayscale face image by the averaging method.
The conversion formula is:
I(x, y) = [I_R(x, y) + I_G(x, y) + I_B(x, y)] / 3
where I(x, y) is the gray value in the grayscale face image, and I_R(x, y), I_G(x, y), I_B(x, y) are the values of the red, green and blue (RGB) channels of the color face image.
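A minimal sketch of the averaging-method conversion, assuming the input is an H x W x 3 array with channels in R, G, B order:

import numpy as np

def to_grayscale(color_image):
    # each gray value is the mean of the red, green and blue channel values
    return color_image.astype(float).mean(axis=2)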
Optionally, the plurality of face feature values are the geometric lengths of the facial features (eyes, mouth, ears, nose, and the like).
Optionally, calculating the face feature similarity from the extracted face feature values and the face feature values stored in the database comprises:
calculating the ratio of each extracted face feature value to the corresponding face feature value in the database to obtain a plurality of feature value ratios, and calculating the average of the feature value ratios;
calculating the standard deviation of the feature value ratios from the feature value ratios and their average, and storing the feature value ratios corresponding to the minimum standard deviation;
and calculating the feature value similarity from the plurality of feature value ratios corresponding to the minimum standard deviation and the corresponding average.
Here, K face feature values (T_1, T_2, …, T_K) are extracted from the grayscale face image from which the face contour has been extracted. The extraction converts that image to the frequency domain with a Fourier transform, selects a band of frequencies with a band-pass filter, and selects features such as the eyes, mouth, ears and nose when the frequency-domain image is converted back to the time domain; the K face feature values (T_1, T_2, …, T_K) are the geometric lengths of the eyes, mouth, ears, nose, and so on.
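A rough sketch of the band-pass filtering step described above, with assumed band radii r_low and r_high; measuring the geometric lengths of the eyes, mouth, ears and nose from the filtered image is not shown.

import numpy as np

def band_pass(gray_image, r_low=10, r_high=60):
    # transform to the frequency domain and centre the spectrum
    spectrum = np.fft.fftshift(np.fft.fft2(gray_image))
    h, w = gray_image.shape
    y, x = np.ogrid[:h, :w]
    r = np.hypot(y - h / 2, x - w / 2)                 # distance from the spectrum centre
    mask = (r >= r_low) & (r <= r_high)                # keep only the selected frequency band
    # convert back to the time (spatial) domain
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))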
With the above K face feature values, the face feature similarity is calculated as follows:
First, compute the ratios (x_1, x_2, …, x_K) of the extracted K face feature values (T_1, T_2, …, T_K) to the corresponding feature values (D_1, D_2, …, D_K) stored in the database, and compute the average μ of the feature value ratios (x_1, x_2, …, x_K).
Next, compute the standard deviation of the feature value ratios:
σ = sqrt( (1/K) · Σ_{k=1..K} (x_k − μ)² )
Select the feature values in the database that give the minimum standard deviation σ and store the corresponding feature value ratios (x_1, x_2, …, x_K).
Finally, compute the feature value similarity d with the following formula:
[feature value similarity formula, rendered as an image in the original publication]
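The following sketch walks through the ratio and standard-deviation matching steps. Because the final similarity formula d is reproduced only as an image, the value returned here (which rewards ratios that cluster tightly around their mean, i.e. a uniform scaling between the probe and the stored face) is an assumed stand-in, not the patented formula.

import numpy as np

def feature_similarity(probe_values, database_entries):
    best = None                                               # (sigma, ratios, mu) of best match
    for stored in database_entries:                           # one tuple (D_1, ..., D_K) per person
        ratios = np.asarray(probe_values, float) / np.asarray(stored, float)
        mu = ratios.mean()
        sigma = np.sqrt(((ratios - mu) ** 2).mean())          # standard deviation of the ratios
        if best is None or sigma < best[0]:
            best = (sigma, ratios, mu)
    sigma, _, mu = best
    return 1.0 / (1.0 + sigma / mu)                           # assumed similarity measure d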
it will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (7)

1. A face recognition method based on an artificial fish swarm, characterized by comprising:
extracting a face contour from a grayscale face image by using an artificial fish swarm algorithm;
extracting a plurality of face feature values from the grayscale face image from which the face contour has been extracted;
calculating a face feature similarity from the extracted face feature values and the face feature values stored in a database;
comparing the calculated face feature similarity with a preset threshold, where a similarity greater than the preset threshold indicates that face recognition has succeeded, and otherwise indicates that face recognition has failed;
the method for extracting the face contour of the face gray level image by using the artificial fish swarm algorithm comprises the following steps: carrying out random meshing on the face gray level image to generate an artificial fish school; randomly selecting an artificial fish from the artificial fish group, calculating the face contour similarity of the current artificial fish, and recording the face contour similarity obtained by calculation on a bulletin board; performing clustering behavior on the current artificial fish, determining the number of artificial fish partners in the visual field range of the current artificial fish and the central position of the artificial fish partners, and calculating the face contour similarity of the central position; if the face contour similarity of the central position is larger than the face contour similarity recorded on the bulletin board, updating the state of the current artificial fish once towards the direction of the central position based on the set step length, and calculating the face contour similarity of the current artificial fish after the state is updated; if the face contour similarity of the current artificial fish after the state update is larger than the face contour similarity recorded on the bulletin board, updating the face contour similarity on the bulletin board by using the face contour similarity of the current artificial fish after the state update; otherwise, executing rear-end collision behavior on the current artificial fish after the state is updated, searching an optimal position in the visual field range of the current artificial fish after the state is updated, and calculating the face contour similarity of the optimal position; if the face contour similarity of the optimal position is larger than the face contour similarity recorded on the bulletin board, updating the state of the current artificial fish after the state updating towards the optimal position direction based on the set step length, and calculating the face contour similarity of the current artificial fish after the state updating; if the face contour similarity of the current artificial fish after the state update is larger than the face contour similarity recorded on the bulletin board, updating the face contour similarity on the bulletin board by using the face contour similarity of the current artificial fish after the state update; otherwise, carrying out foraging behavior on the current artificial fish after the state is updated, randomly selecting a state in the visual field range of the current artificial fish after the state is updated, and calculating the face contour similarity of the current artificial fish in the randomly selected state; if the face contour similarity of the current artificial fish in the random selection state is larger than the face contour similarity recorded on the bulletin board, updating the face contour similarity on the bulletin board by using the face contour similarity of the current artificial fish in the random selection state; otherwise, executing random behavior on the current artificial fish after the state is updated, randomly updating the state according to the step length in the visual field range of the current artificial fish after the state is updated, and calculating the face contour similarity of the current artificial fish in the randomly updated state; if the face contour similarity of the current artificial fish in the random updating state is larger than the face contour similarity 
recorded on the bulletin board, updating the face contour similarity recorded on the bulletin board by using the face contour similarity of the current artificial fish in the random updating state; otherwise, selecting another artificial fish from the artificial fish group and repeatedly executing the steps until all the artificial fishes in the artificial fish group are traversed to find the face gray level image with the highest face contour similarity so as to obtain the face gray level image with the face contour extracted;
the calculating the similarity of the facial features according to the extracted facial feature values and the facial feature values stored in the database comprises: respectively calculating the ratios of the extracted facial feature values to the facial feature values in the database to obtain a plurality of feature value ratios, and calculating the average value of the feature value ratios; calculating the standard deviation of the characteristic value ratios according to the characteristic value ratios and the average value of the characteristic value ratios, and storing the characteristic value ratios corresponding to the minimum standard deviation; and calculating the similarity of the characteristic values according to the plurality of characteristic value ratios corresponding to the minimum standard deviation and the corresponding average value.
2. The method of claim 1, wherein the artificial fish swarm comprises a plurality of artificial fish in different states, each artificial fish corresponds to the grayscale face image under one grid partition, and the state information of an artificial fish comprises the grid size and the horizontal and vertical coordinate values corresponding to that artificial fish.
3. The method of claim 2, wherein the plurality of artificial fish in different states covers two cases:
one in which the grid sub-images have the same size but occupy different coordinate positions in the original grayscale image;
and the other in which the grid sub-images differ in size.
4. The method according to claim 1, wherein the contour similarity is calculated by:
when an artificial fish is in its current state, calculating the contour similarity between the grid sub-image corresponding to the artificial fish and the face contour template to obtain the contour similarity of that artificial fish.
5. The method of claim 1, wherein the plurality of face feature values are the geometric lengths of the facial features.
6. The method of claim 1, further comprising, before randomly partitioning the grayscale face image into grids to generate the artificial fish swarm:
converting a captured color face image into the grayscale face image.
7. The method of claim 6, wherein converting the captured color face image into the grayscale face image comprises:
converting the captured color face image into the grayscale face image by the averaging method.
CN201811640291.5A 2018-12-29 2018-12-29 Face recognition method based on artificial fish swarm algorithm Active CN110895672B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811640291.5A CN110895672B (en) 2018-12-29 2018-12-29 Face recognition method based on artificial fish swarm algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811640291.5A CN110895672B (en) 2018-12-29 2018-12-29 Face recognition method based on artificial fish swarm algorithm

Publications (2)

Publication Number Publication Date
CN110895672A CN110895672A (en) 2020-03-20
CN110895672B true CN110895672B (en) 2022-05-17

Family

ID=69785694

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811640291.5A Active CN110895672B (en) 2018-12-29 2018-12-29 Face recognition method based on artificial fish swarm algorithm

Country Status (1)

Country Link
CN (1) CN110895672B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105654095A (en) * 2015-12-22 2016-06-08 浙江宇视科技有限公司 Feature selection method and device
CN107784196A (en) * 2017-09-29 2018-03-09 陕西师范大学 Method based on Artificial Fish Swarm Optimization Algorithm identification key protein matter
CN109003450A (en) * 2018-08-06 2018-12-14 江苏师范大学 A kind of vehicle early warning method identified based on driver's age and gender

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110099130A1 (en) * 2003-07-16 2011-04-28 Massachusetts Institute Of Technology Integrated learning for interactive synthetic characters
US20150201591A1 (en) * 2014-01-23 2015-07-23 Michael McCamy Barrett Hrdrodynamic Artificial Soft-Plastic Fisshing Bait

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105654095A (en) * 2015-12-22 2016-06-08 浙江宇视科技有限公司 Feature selection method and device
CN107784196A (en) * 2017-09-29 2018-03-09 陕西师范大学 Method based on Artificial Fish Swarm Optimization Algorithm identification key protein matter
CN109003450A (en) * 2018-08-06 2018-12-14 江苏师范大学 A kind of vehicle early warning method identified based on driver's age and gender

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Facial expression recognition using RBF neural network based on improved artificial fish swarm algorithm";Wang Ye等;《2008 27th Chinese Control Conference》;20080718;第416-420页 *
"LBP特征和改进Fisher准则的人脸识别";刘斌等;《计算机工程与应用》;20170815;第53卷(第16期);第155-160页 *

Also Published As

Publication number Publication date
CN110895672A (en) 2020-03-20

Similar Documents

Publication Publication Date Title
US11830230B2 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN109214403B (en) Image recognition method, device and equipment and readable medium
CN112016402B (en) Self-adaptive method and device for pedestrian re-recognition field based on unsupervised learning
WO2018100668A1 (en) Image processing device, image processing method, and image processing program
CN112836625A (en) Face living body detection method and device and electronic equipment
JP6448212B2 (en) Recognition device and recognition method
CN111476289A (en) Fish shoal identification method, device, equipment and storage medium based on feature library
JP2008251039A (en) Image recognition system, recognition method thereof and program
CN114299363A (en) Training method of image processing model, image classification method and device
JP2017102622A (en) Image processing device, image processing method and program
CN113963295A (en) Method, device, equipment and storage medium for recognizing landmark in video clip
CN112016437B (en) Living body detection method based on face video key frame
CN110895672B (en) Face recognition method based on artificial fish swarm algorithm
CN112131984A (en) Video clipping method, electronic device and computer-readable storage medium
KR101521136B1 (en) Method of recognizing face and face recognition apparatus
CN116229528A (en) Living body palm vein detection method, device, equipment and storage medium
CN115527168A (en) Pedestrian re-identification method, storage medium, database editing method, and storage medium
CN112257666B (en) Target image content aggregation method, device, equipment and readable storage medium
CN114038030A (en) Image tampering identification method, device and computer storage medium
CN113095147A (en) Skin area detection method, system, image processing terminal and storage medium
CN110490027B (en) Face feature extraction training method and system
Akbas et al. Low-level image segmentation based scene classification
KR101884874B1 (en) Method and apparatus for distinguishing object based on partial image
CN110599517A (en) Target feature description method based on local feature and global HSV feature combination
JP4543759B2 (en) Object recognition method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230620

Address after: 518057 1701, Yanxiang science and technology building, 31 Gaoxin middle Fourth Road, Maling community, Yuehai street, Nanshan District, Shenzhen City, Guangdong Province

Patentee after: Shenzhen Yanxiang Smart Technology Co.,Ltd.

Address before: No.1, Yanxiang Zhigu chuangxiangdi, No.11, Gaoxin Road, Guangming New District, Shenzhen, Guangdong 518107

Patentee before: EVOC INTELLIGENT TECHNOLOGY Co.,Ltd.
