CN103115613B - Three-dimensional space positioning method - Google Patents

Three-dimensional space positioning method

Info

Publication number
CN103115613B
Authority
CN
China
Prior art keywords
camera
scaling board
image
target object
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310057543.2A
Other languages
Chinese (zh)
Other versions
CN103115613A (en)
Inventor
刘政怡
李炜
吴建国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changzhou Yutian Electronics Co., Ltd.
Original Assignee
Anhui University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui University filed Critical Anhui University
Priority to CN201310057543.2A priority Critical patent/CN103115613B/en
Publication of CN103115613A publication Critical patent/CN103115613A/en
Application granted granted Critical
Publication of CN103115613B publication Critical patent/CN103115613B/en

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a three-dimensional space positioning method comprising a setup process and a positioning process. In the setup process, a correspondence is established between image pixel coordinates and the spatial coordinates of mapping points on a calibration board, where a mapping point is defined as the intersection with the calibration board of the line joining the camera and a real space point imaged by that camera. In the positioning process, the two-dimensional coordinates of the target object are identified by image processing, and the three-dimensional coordinates of the mapping points on the calibration boards are calculated from the correspondence; each camera is then joined to the mapping point on its calibration board to form two space lines, whose intersection gives the position of the target object. The method provided by the invention is applicable where high accuracy is not required. Because the correspondence is established by manually moving the calibration board and the target is located with a few measured distances, a complicated camera calibration procedure is avoided; the positioning method is simple, direct, and easy to implement.

Description

A spatial three-dimensional positioning method
Technical field
The present invention relates to the technical fields of image processing and machine vision, and in particular to a spatial three-dimensional positioning method realized with two cameras.
Background art
Spatial three-dimensional positioning of an object with two cameras usually adopts binocular stereo vision: two images of the measured object are acquired by imaging devices at different positions, and the three-dimensional geometric information of the object is obtained by computing the position deviation between corresponding image points. For example, 200910108185.7 "An object three-dimensional positioning method and camera" and 201210075073.8 "A space three-dimensional positioning technology operating system" describe positioning under ideal conditions: the two cameras are assumed identical, with the same focal length, perfectly parallel optical axes, and lenses free of distortion. Then, by similar triangles, given the image coordinates of the object in the two imaging planes, p1(xl, yl) and p2(xr, yr), the camera focal length f, and the distance b between the two cameras, the three-dimensional coordinates (xw, yw, zw) of the object can be obtained:
$$x_w = \frac{b\,x_l}{x_l - x_r},\qquad y_w = \frac{b\,y_l}{x_l - x_r},\qquad z_w = \frac{b\,f}{x_l - x_r}$$
In practice, the optical axes of two cameras can never be made perfectly parallel, so positioning under the ideal assumption carries a large error. To improve accuracy, binocular stereo vision techniques usually calibrate the cameras first, i.e. determine the correspondence between the position of an object in the world coordinate system and its image point on each camera's imaging plane; the two cameras then simultaneously capture images of the target object, and the intrinsic and extrinsic camera parameters obtained by calibration are used to complete the three-dimensional positioning, as in the Huazhong University of Science and Technology master's thesis by Liu Jingjing, "Research on three-dimensional positioning techniques based on binocular stereo vision", and 201110390863.0 "Space positioning method based on binocular stereo vision". Camera calibration is the key link in obtaining three-dimensional information from two-dimensional images, and its accuracy is high, but the algorithms are complicated and the operation is cumbersome.
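The ideal parallel-axis triangulation formula quoted above can be sketched in a few lines; this is only an illustration of the prior-art formula, with hypothetical function and parameter names:

```python
def triangulate_ideal(xl, yl, xr, f, b):
    """Ideal parallel-axis binocular triangulation (prior-art formula).

    xl, yl: image coordinates of the object in the left camera;
    xr:     x image coordinate of the object in the right camera;
    f:      common focal length of the two identical cameras;
    b:      baseline distance between the two cameras.
    Returns the 3D world coordinates (xw, yw, zw).
    """
    d = xl - xr          # disparity; must be non-zero for a finite depth
    return (b * xl / d, b * yl / d, b * f / d)
```

The formula breaks down exactly where the patent says it does: any deviation from perfectly parallel optical axes makes the disparity model, and hence the result, inaccurate.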
Summary of the invention
To avoid the shortcomings of the prior art described above, the present invention provides a method of spatial three-dimensional object positioning suitable for occasions where high accuracy is not required. It avoids the complicated camera calibration procedure: the correspondence between image pixel coordinates and the spatial coordinates of mapping points on the calibration board is determined by manually moving the calibration board, and this correspondence, together with a few measured distances, is used to locate the measured object in three-dimensional space.
The present invention solves its technical problem with the following technical scheme:
The spatial three-dimensional positioning method of the present invention comprises two parts, a setup process and a positioning process. The setup process comprises the following steps:
(1) prepare a calibration board of alternating black and white squares whose aspect ratio equals the aspect ratio of the image resolution of the cameras;
(2) install two cameras with their optical axes substantially parallel, and measure and record the distance between the two cameras;
(3) for each camera in turn: switch the camera on, move the calibration board along the camera's optical axis while observing the captured image, keeping the board's centre at the centre of the image; move the board until it completely fills the captured image, then measure and record the distance between the board and the camera;
(4) for each camera, establish the correspondence between pixel coordinates on the captured image and the spatial coordinates of the mapping points on the corresponding calibration board, a mapping point being defined as the intersection with the calibration board of the line joining the camera and a real space point imaged by that camera.
The positioning process comprises the following steps:
(1) switch on both cameras and simultaneously capture images containing the target object to be located;
(2) identify the target object in each of the two images by image processing, and compute the two-dimensional coordinates of the target in each image;
(3) using the correspondences established in the setup process, convert the target's two-dimensional coordinates in the two images into the three-dimensional coordinates of the mapping points on the two calibration boards;
(4) join each camera to the mapping point on its calibration board to form two space lines;
(5) compute the intersection of the two space lines as the three-dimensional coordinates of the target object.
Compared with the prior art, the beneficial effects of the present invention are:
1. The spatial three-dimensional positioning method of the present invention locates the target object through the correspondence between image pixel coordinates and the spatial coordinates of mapping points on the calibration board; the positioning method is simple, direct, and easy to understand.
2. The method avoids the complicated and cumbersome camera calibration procedure: the correspondence between image pixel coordinates and mapping-point spatial coordinates is determined by manually moving the calibration board, which is easy to perform and implement.
Brief description of the drawings
Fig. 1 shows the calibration board used by the spatial three-dimensional positioning method of the present invention.
Fig. 2 shows the three-dimensional space coordinate system set up by the method.
Fig. 3 shows the calibration-board image captured when the board has been moved until it completely fills the image captured by the camera.
Fig. 4 is a schematic diagram of moving the calibration board.
Fig. 5 is a schematic diagram of the perspective projection model used by the method.
Fig. 6 is a schematic diagram of the two space lines formed in the positioning process.
Symbols in the figures:
Camera C1; camera C2; spacing d_cam1_cam2 between cameras C1 and C2; distances d_cb1 and d_cb2 between the calibration board and cameras C1 and C2, respectively, when the board has been moved until it completely fills the captured image; target object P; imaging plane V1 of camera C1; imaging plane V2 of camera C2; calibration-board plane B1 corresponding to camera C1; calibration-board plane B2 corresponding to camera C2; the line through P and C1 meets imaging plane V1 at point P′1 and calibration-board plane B1 at point P″1; the line through P and C2 meets imaging plane V2 at point P′2 and calibration-board plane B2 at point P″2.
The invention is further described below through an embodiment with reference to the accompanying drawings, but embodiments of the present invention are not limited thereto.
Embodiment
The spatial three-dimensional positioning method of the present embodiment comprises two parts, a setup process and a positioning process. The setup process comprises the following steps:
(1) Prepare a calibration board of alternating black and white squares whose aspect ratio equals the aspect ratio of the image resolution of the cameras.
Let the resolution of the image captured by the camera be M × N, and let the width and height of the calibration board be BoardWidth and BoardHeight; then it is required that
$$\frac{BoardWidth}{BoardHeight} = \frac{M}{N}\qquad(1)$$
In this embodiment the resolution of the captured image is 1024 × 768, and the calibration board (see Fig. 1) is a cardboard sheet of 8 × 6 alternating black and white squares, 40 cm wide and 30 cm high, satisfying formula (1).
(2) Install two cameras with their optical axes substantially parallel; measure and record the distance d_cam1_cam2 between the two cameras.
For convenience, set up a spatial rectangular coordinate system (see Fig. 2): the horizontal direction is the Z axis, the vertical direction is the Y axis, and the X axis is perpendicular to the YZ plane. Taking the position of camera C1 as the origin O(0, 0, 0), the position of camera C2 has coordinates (0, d_cam1_cam2, 0).
(3) For each camera in turn: switch the camera on, move the calibration board along the camera's optical axis while observing the captured image, keeping the board's centre at the centre of the image; move the board until it completely fills the captured image (see Fig. 3), then measure and record the distances d_cb1 and d_cb2 between the board and the cameras (see Fig. 4).
(4) For each camera, establish the correspondence between pixel coordinates on the captured image and the spatial coordinates of the mapping points on the corresponding calibration board, a mapping point being defined as the intersection with the calibration board of the line joining the camera and a real space point imaged by that camera.
Taking camera C1 as an example: the camera adopts a perspective projection imaging model similar to the human visual system. Referring to Fig. 5, the basic perspective projection model consists of two parts, a viewpoint and a view plane; the viewpoint can be regarded as the position where the camera is placed, and there are infinitely many view planes. The imaging plane V1 of the camera is one view plane; during the setup process, every plane occupied by the calibration board as it moves along the optical axis is also a view plane, including the plane B1 occupied by the board when it completely fills the captured image. For any point P in the real world, the line joining P and the viewpoint C1 meets the view plane V1 (the imaging plane) at a point P′1, the image of P on the imaging plane; the same line meets the view plane B1 (the calibration-board plane) at a point P″1, the mapping point on the calibration board.
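With the viewpoint C1 placed at the origin and the board plane perpendicular to the optical axis at distance d_cb1, the mapping point is simply the point where the ray from the viewpoint through P crosses the plane z = d_cb1. A minimal sketch of this ray-plane intersection (function and parameter names are illustrative, not from the patent):

```python
def mapping_point(p, d_cb):
    """Intersect the ray from the viewpoint (origin) through the real
    space point p = (x, y, z) with the calibration-board plane z = d_cb.
    Returns the mapping point P'' on the board plane."""
    x, y, z = p
    t = d_cb / z          # scale factor along the ray (requires z != 0)
    return (x * t, y * t, d_cb)
```

For example, a point at (2, 4, 8) seen from a viewpoint at the origin maps to (0.5, 1.0, 2) on a board plane at z = 2.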
For camera C1, choose the pixel at the lower left of the image as the origin O1, with the horizontal edge of the image as the x1 axis and the vertical edge as the y1 axis. By means of a comparison table, record the correspondence between pixel coordinates on the image captured by camera C1 and the spatial coordinates of the mapping points on the calibration board in the absence of lens distortion, where P′1 denotes a pixel two-dimensional coordinate on the image captured by C1 and P″1 the corresponding mapping-point three-dimensional coordinate on the board:
P′1(0, 0) → P″1(−BoardWidth/2, −BoardHeight/2, d_cb1)
P′1(M/8, 0) → P″1(−BoardWidth/2 + BoardWidth/8, −BoardHeight/2, d_cb1)
P′1(M/4, 0) → P″1(−BoardWidth/2 + BoardWidth/4, −BoardHeight/2, d_cb1)
P′1(3M/8, 0) → P″1(−BoardWidth/2 + 3·BoardWidth/8, −BoardHeight/2, d_cb1)
P′1(M/2, 0) → P″1(0, −BoardHeight/2, d_cb1)
P′1(0, N/6) → P″1(−BoardWidth/2, −BoardHeight/2 + BoardHeight/6, d_cb1)
P′1(M/8, N/6) → P″1(−BoardWidth/2 + BoardWidth/8, −BoardHeight/2 + BoardHeight/6, d_cb1)
In this embodiment, with an image resolution of 1024 × 768 and a board 40 cm wide and 30 cm high, the comparison table between pixel two-dimensional coordinates on the image captured by camera C1 and mapping-point three-dimensional coordinates on the board, in the absence of lens distortion, is as follows (board coordinates in cm):
P′1(0, 0) → P″1(−20, −15, d_cb1)
P′1(128, 0) → P″1(−15, −15, d_cb1)
P′1(256, 0) → P″1(−10, −15, d_cb1)
P′1(384, 0) → P″1(−5, −15, d_cb1)
P′1(512, 0) → P″1(0, −15, d_cb1)
P′1(0, 128) → P″1(−20, −10, d_cb1)
P′1(128, 128) → P″1(−15, −10, d_cb1)
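The distortion-free comparison table above can be generated programmatically; a minimal sketch, assuming an M × N image, a cols × rows checkerboard, and a board coordinate frame centred on the optical axis (all names hypothetical):

```python
def ideal_table(M, N, board_w, board_h, d_cb, cols, rows):
    """Build the distortion-free lookup table from corner pixel
    coordinates to mapping-point 3D coordinates on the board.
    The board has cols x rows squares, so corners lie every M/cols
    pixels horizontally and N/rows pixels vertically; the image
    origin is the lower-left pixel, the board frame is centred."""
    table = {}
    for i in range(cols + 1):
        for j in range(rows + 1):
            px, py = i * M // cols, j * N // rows   # corner pixel coords
            bx = -board_w / 2 + i * board_w / cols  # board x (centred)
            by = -board_h / 2 + j * board_h / rows  # board y (centred)
            table[(px, py)] = (bx, by, d_cb)
    return table
```

With M = 1024, N = 768, a 40 × 30 board, and 8 × 6 squares, the generated entries match the table above, e.g. pixel (512, 0) maps to (0, −15, d_cb1).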
In practice, however, images captured by ordinary cameras always exhibit some distortion; the actual corner coordinates P′1(x, y) can be obtained with a corner detection algorithm from image processing. The correspondence between corner two-dimensional coordinates on the image captured by camera C1 and mapping-point three-dimensional coordinates on the calibration board is denoted f1: P′1(x, y) → P″1(x, y, z).
The positioning process comprises the following steps:
(1) Switch on both cameras and simultaneously capture images containing the target object to be located.
(2) Identify the target object in each of the two images by image processing, and compute the two-dimensional coordinates of the target in each image.
(3) Using the correspondences established in the setup process, convert the target's two-dimensional coordinates in the two images into the three-dimensional coordinates of the mapping points on the two calibration boards.
Using formula (2) and the correspondence f1, established in the setup process, between pixel coordinates on the image captured by camera C1 and mapping-point spatial coordinates on its calibration board, obtain the three-dimensional coordinates P″1(x″, y″, z″) of the mapping point on the board corresponding to the target object in the image captured by C1. Here (x′, y′) are the two-dimensional coordinates of the target in the image captured by C1, (x′near, y′near) is the corner coordinate nearest to the detected target position (x′, y′), f1x(x′near, y′near) is the x coordinate of that corner's mapping point on the board, and f1y(x′near, y′near) is its y coordinate.
$$\begin{cases} x'' = f_{1x}(x'_{near},\,y'_{near}) + (x' - x'_{near})\times(BoardWidth/M)\\ y'' = f_{1y}(x'_{near},\,y'_{near}) + (y' - y'_{near})\times(BoardHeight/N)\\ z'' = f_{1z}(x'_{near},\,y'_{near}) = d_{cb1} \end{cases}\qquad(2)$$
Similarly, using formula (3) and the correspondence f2, established in the setup process, between pixel coordinates on the image captured by camera C2 and mapping-point spatial coordinates on its calibration board, obtain the three-dimensional coordinates P″2(x″, y″, z″) of the mapping point on the board corresponding to the target object in the image captured by C2:
$$\begin{cases} x'' = f_{2x}(x'_{near},\,y'_{near}) + (x' - x'_{near})\times(BoardWidth/M)\\ y'' = f_{2y}(x'_{near},\,y'_{near}) + d_{cam1\_cam2} + (y' - y'_{near})\times(BoardHeight/N)\\ z'' = f_{2z}(x'_{near},\,y'_{near}) = d_{cb2} \end{cases}\qquad(3)$$
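Formulas (2) and (3) are a nearest-corner offset interpolation: the target's board coordinates equal the nearest corner's board coordinates plus the pixel offset scaled by the board-to-image ratio. A minimal sketch (names hypothetical; `y_offset` stands in for d_cam1_cam2 in formula (3), and is 0 for camera C1):

```python
def board_point(xp, yp, near, f_near, M, N, board_w, board_h, y_offset=0.0):
    """Interpolate the mapping-point 3D coordinates for a target at
    image coordinates (xp, yp), given the nearest detected corner
    near = (x_near, y_near) and its board coordinates
    f_near = (fx, fy, fz) from the lookup table."""
    fx, fy, fz = f_near
    xn, yn = near
    x2 = fx + (xp - xn) * (board_w / M)             # formula (2)/(3), x
    y2 = fy + y_offset + (yp - yn) * (board_h / N)  # y, offset for camera C2
    z2 = fz                                          # z equals d_cb
    return (x2, y2, z2)
```

For example, with the embodiment's 1024 × 768 image and 40 × 30 board, a target at pixel (520, 10) whose nearest corner is (512, 0) with board coordinates (0, −15, d_cb1) interpolates to (0.3125, −14.609375, d_cb1).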
(4) Join each camera to the mapping point on its calibration board to form two space lines.
Referring to Fig. 6, joining C1 with P″1 and C2 with P″2 forms two space lines.
(5) intersection point of two space lines is calculated, as the three-dimensional coordinate of target object.
Article two, space line intersection point is object position, and due to the existence of error, two space lines are possible without intersection point, therefore using the perpendicular bisector mid point of two space lines as the three-dimensional coordinate of target object.Described two space line perpendicular bisector mid-point computation methods are space multistory geometry category, no longer describe in detail.
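The midpoint of the common perpendicular of two (possibly skew) lines can be computed with standard vector algebra; a minimal sketch using the classic closest-points parametrization (names hypothetical):

```python
def closest_midpoint(c1, p1, c2, p2):
    """Midpoint of the common perpendicular segment between the line
    through c1 and p1 and the line through c2 and p2 -- the estimated
    target position when the two viewing rays do not exactly intersect."""
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    u, v = sub(p1, c1), sub(p2, c2)          # direction vectors
    w = sub(c1, c2)
    a, b, c = dot(u, u), dot(u, v), dot(v, v)
    d, e = dot(u, w), dot(v, w)
    denom = a * c - b * b                    # zero only for parallel lines
    s = (b * e - c * d) / denom              # parameter on line 1
    t = (a * e - b * d) / denom              # parameter on line 2
    q1 = tuple(ci + s * ui for ci, ui in zip(c1, u))  # closest point on line 1
    q2 = tuple(ci + t * vi for ci, vi in zip(c2, v))  # closest point on line 2
    return tuple((x + y) / 2 for x, y in zip(q1, q2))
```

For two skew lines along the x and y axes separated by one unit in z, the midpoint is (0, 0, 0.5); for two genuinely intersecting lines, the midpoint coincides with the intersection.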
In the setup process, the more black-and-white squares the calibration board has, the higher the positioning accuracy.

Claims (1)

1. A spatial three-dimensional positioning method comprising two parts, a setup process and a positioning process, the setup process comprising the steps of:
(1) preparing a calibration board of alternating black and white squares whose aspect ratio equals the aspect ratio of the image resolution of the cameras;
(2) installing two cameras with their optical axes substantially parallel, and measuring and recording the distance between the two cameras;
(3) for each camera in turn: switching the camera on, moving the calibration board along the camera's optical axis while observing the captured image, keeping the board's centre at the centre of the image, moving the board until it completely fills the captured image, and then measuring and recording the distance between the board and the camera;
(4) for each camera, establishing the correspondence between pixel coordinates on the captured image and the spatial coordinates of the mapping points on the corresponding calibration board, a mapping point being defined as the intersection with the calibration board of the line joining the camera and a real space point imaged by that camera;
the positioning process comprising the steps of:
(1) switching on both cameras and simultaneously capturing images containing the target object to be located;
(2) identifying the target object in each of the two images by image processing, and computing the two-dimensional coordinates of the target in each image;
(3) using the correspondences established in the setup process, converting the target's two-dimensional coordinates in the two images into the three-dimensional coordinates of the mapping points on the two calibration boards;
(4) joining each camera to the mapping point on its calibration board to form two space lines;
(5) computing the intersection of the two space lines as the three-dimensional coordinates of the target object.
CN201310057543.2A 2013-02-04 2013-02-04 Three-dimensional space positioning method Active CN103115613B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310057543.2A CN103115613B (en) 2013-02-04 2013-02-04 Three-dimensional space positioning method


Publications (2)

Publication Number Publication Date
CN103115613A CN103115613A (en) 2013-05-22
CN103115613B true CN103115613B (en) 2015-04-08

Family

ID=48414033

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310057543.2A Active CN103115613B (en) 2013-02-04 2013-02-04 Three-dimensional space positioning method

Country Status (1)

Country Link
CN (1) CN103115613B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110148173A (en) * 2019-05-21 2019-08-20 北京百度网讯科技有限公司 The method and apparatus of target positioning, electronic equipment, computer-readable medium

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103471500B (en) * 2013-06-05 2016-09-21 江南大学 A kind of monocular camera machine vision midplane coordinate and the conversion method of 3 d space coordinate point
CN103593658A (en) * 2013-11-22 2014-02-19 中国电子科技集团公司第三十八研究所 Three-dimensional space positioning system based on infrared image recognition
CN103940407B (en) * 2014-02-13 2016-09-07 鲁东大学 One extracts coombe method based on landform and Remote Sensing Image Fusion technology
CN105865327A (en) * 2015-01-22 2016-08-17 成都飞机工业(集团)有限责任公司 Zoom photograph-based relative position measurement method
CN104776832B (en) * 2015-04-16 2017-02-22 浪潮软件集团有限公司 Method, set top box and system for positioning objects in space
CN108616753B (en) * 2016-12-29 2020-08-04 深圳超多维科技有限公司 Naked eye three-dimensional display method and device
CN108257182A (en) * 2016-12-29 2018-07-06 深圳超多维光电子有限公司 A kind of scaling method and device of three-dimensional camera module
CN108257181A (en) * 2016-12-29 2018-07-06 深圳超多维光电子有限公司 A kind of space-location method and device
CN108731644B (en) * 2017-09-12 2020-08-11 武汉天际航信息科技股份有限公司 Oblique photography mapping method and system based on vertical auxiliary line
CN109751992B (en) * 2017-11-03 2021-07-20 北京凌宇智控科技有限公司 Indoor three-dimensional space-oriented positioning correction method, positioning method and equipment thereof
CN108648237B (en) * 2018-03-16 2022-05-03 中国科学院信息工程研究所 Space positioning method based on vision
CN109341530B (en) * 2018-10-25 2020-01-21 华中科技大学 Object point positioning method and system in binocular stereo vision
CN110670860B (en) * 2019-10-15 2021-03-09 广东博智林机器人有限公司 Laying method, laying robot and storage medium
CN112066950B (en) * 2020-07-24 2022-10-14 北京空间机电研究所 Multi-optical-axis parallel mapping camera single-center projection conversion method
WO2022188733A1 (en) * 2021-03-08 2022-09-15 Hangzhou Taro Positioning Technology Co., Ltd. Scenario triggering and interaction based on target positioning and identification

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5694483A (en) * 1994-06-13 1997-12-02 Kabushiki Kaisha Toshiba Measurement face extraction apparatus and method
US6637016B1 (en) * 2001-04-25 2003-10-21 Lsi Logic Corporation Assignment of cell coordinates
CN102567989A (en) * 2011-11-30 2012-07-11 重庆大学 Space positioning method based on binocular stereo vision
DE102011015893A1 (en) * 2011-04-01 2012-10-04 NUMENUS GmbH Method for visualizing freeform surfaces by means of ray tracing




Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20170309

Address after: 213000 Changzhou, Jiangsu new North Road, No. 8

Patentee after: Changzhou Yutian Electronics Co., Ltd.

Address before: 230039 Hefei West Road, Anhui, No. 3

Patentee before: Anhui University