CN106295512B - Marker-based multiple-correction-line visual database construction method and indoor localization method - Google Patents

Marker-based multiple-correction-line visual database construction method and indoor localization method

Info

Publication number
CN106295512B
CN106295512B (application CN201610599708.2A)
Authority
CN
China
Prior art keywords
image
point
matrix
coordinate
reference frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610599708.2A
Other languages
Chinese (zh)
Other versions
CN106295512A (en)
Inventor
马琳
林英男
崔扬
徐玉滨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Virtual Reality Digital Technology Research Institute Harbin Co ltd
Original Assignee
Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN201610599708.2A priority Critical patent/CN106295512B/en
Publication of CN106295512A publication Critical patent/CN106295512A/en
Application granted granted Critical
Publication of CN106295512B publication Critical patent/CN106295512B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/35: Categorising the entire scene, e.g. birthday party or wedding scene
    • G06V20/36: Indoor scenes
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20: Instruments for performing navigational calculations
    • G01C21/206: Instruments for performing navigational calculations specially adapted for indoor navigation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50: Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using metadata automatically derived from the content

Abstract

The present invention relates to a marker-based multiple-correction-line visual database construction method and indoor localization method, belonging to the field of indoor scene localization technology. To overcome the drawback that the prior art cannot perform position calculation without the camera lens parameters, the present invention obtains an image containing a marker, extracts a feature-point matrix from the image by Speeded-Up Robust Features (SURF), computes the nearest-neighbor Euclidean distances between feature-point matrices to retrieve the database image bearing the same marker, matches the two images by SURF to obtain m pairs of matched points, rejects mismatched points using the RANSAC algorithm, obtains the homography matrix, solves the parametric equations corresponding to the two correction lines, derives the transformation relation from the reference coordinate system to the image physical coordinate system, solves the position coordinates and steering angle of the camera in the reference coordinate system, and finally solves the position coordinates of the user in the real world. The present invention also provides a marker-based multiple-correction-line visual database construction method. The present invention is applicable to pedestrian indoor positioning and navigation systems.

Description

Marker-based multiple-correction-line visual database construction method and indoor localization method
Technical field
The present invention relates to a marker-based multiple-correction-line visual database construction method and indoor localization method, and belongs to the field of indoor scene localization technology.
Background art
In recent years, owing to broad advances in related devices and technologies and to the need for seamless location-based services in practical application scenarios, indoor positioning systems have received extensive attention and research; these systems open up an entirely new technical field of automatic target detection and localization. In outdoor environments, the positioning results of Global Navigation Satellite Systems (GNSS) are among the most reliable sources for accurately estimating a user's location. Indoors or in enclosed environments, however, satellite signals suffer severe attenuation, causing a serious loss of positioning accuracy, so GNSS is infeasible.
At present, research on indoor positioning technology focuses mainly on WiFi positioning and Bluetooth positioning. WiFi positioning benefits from the wide deployment and continuing spread of wireless access points, and estimates the user's location from the signal strengths of the access points detected by the user's smartphone. However, the technique depends heavily on the number of access points, and its accuracy drops sharply in areas such as the entrances and exits of real environments like shopping malls. Bluetooth positioning estimates the user's location from the strength of the Bluetooth signals the user receives and can achieve 1-meter accuracy, but the high latency of the Bluetooth signal-acquisition stage limits its application in practice. Weighing the advantages and disadvantages of the above indoor positioning technologies, a positioning system is needed that is highly scalable, cheap to deploy, low in latency, high in accuracy, and stable. Because images contain rich information, and given the high penetration and portability of smartphones with image sensors together with great advances in computer processing technology, vision-based indoor positioning has become a new research hotspot.
Vision-based indoor positioning requires no additional deployment: only the indoor scene pictures taken by the user terminal, matched against an established indoor scene database, are needed to complete a more accurate and more robust estimate of the user's location. Moreover, because images contain rich scene information, vision-based indoor positioning can also provide better visualization services for the user, which other positioning methods cannot match. The vision-based indoor positioning techniques proposed so far mainly use location-fingerprint algorithms: a database is built jointly from scene images taken by the terminal and the corresponding user locations, and a fast retrieval algorithm then finds the best-matching scene image and its position in the database to complete the positioning service. However, the traditional epipolar-geometry-based location-fingerprint method has an obvious defect: the positioning result depends heavily on the density of the location fingerprints. When the fingerprints are dense, accuracy improves, but database storage grows, causing greater retrieval latency, and vice versa.
For the above reasons, a marker-based visual indoor localization method is needed that replaces the location fingerprints and image information used in traditional algorithms with a few marker images in the scene and their location information, and that completes the estimation of the user's location using the Speeded-Up Robust Features (SURF) algorithm and a multiple-correction-line localization algorithm. The multiple-correction-line localization algorithm provides additional constraint relations to reduce the algorithm's dependence on prior information, so that the algorithm can be applied in changeable real-world scenes.
Summary of the invention
The purpose of the present invention is to overcome the shortcoming that the prior art uses only one correction line and cannot perform position calculation without the camera lens parameters, and to propose a marker-based multiple-correction-line visual database construction method and indoor localization method.
A marker-based multiple-correction-line visual database construction method, characterized by comprising:
Step 1-1: capture an image of the marker target with a camera sensor; denote the obtained image as image I_k (k = 1, 2, 3);
Step 1-2: establish two correction lines l_k1 and l_k2 on the marker target; select n sampling points on each of l_k1 and l_k2, and obtain the line equations of l_k1 and l_k2 from the sampling points. The correction line l_k1 lies parallel to the horizontal ground in the world coordinate system, and l_k1 is a ray whose starting point lies on the marker target; the correction line l_k2 lies perpendicular to the horizontal ground and perpendicular to l_k1 in the world coordinate system, with its starting point on the marker target;
Step 1-3: extract a feature-point matrix from I_k using the Speeded-Up Robust Features (SURF) algorithm, obtaining the feature-point matrix F_k that characterizes image I_k;
Step 1-4: solve the line equation, sampling-point coordinates and origin of correction line l_k1 in the image pixel coordinate system. Specifically: let the n sampling points form the pixel coordinate matrix [X_n Y_n], where X_n and Y_n denote the abscissa vector and ordinate vector of the n sampling points respectively, each an n × 1 column vector; the origin is the left endpoint of l_k1. Let the line l_k1 have the expression y = α_k1 x + β_k1, with the parameters α_k1 and β_k1 written as the matrix W_k1 = [α_k1 β_k1]; then W_k1 is obtained by the least-squares formula:
W_k1 = ((X^T X)^(-1) X^T Y_n)^T   (1)
where X is the new n × 2 matrix formed by appending a column of ones to the right of the column vector X_n.
Step 1-5: solve the line equation, sampling-point coordinates and origin of correction line l_k2 in the image pixel coordinate system. Specifically: let the n sampling points form the pixel coordinate matrix [X_n Y_n], where X_n and Y_n denote the abscissa vector and ordinate vector of the n sampling points respectively, each an n × 1 column vector. Let the line l_k2 have the expression y = α_k2 x + β_k2, with the parameters α_k2 and β_k2 written as the matrix W_k2 = [α_k2 β_k2]; then W_k2 is obtained in the same way:
W_k2 = ((X^T X)^(-1) X^T Y_n)^T   (2)
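The parameter matrices W_k1 and W_k2 above are ordinary least-squares line fits to the n sampled pixel points. As an illustrative sketch (not part of the patent; the function name fit_correction_line and the sample values are our own), the fit can be computed with NumPy:

```python
import numpy as np

def fit_correction_line(xs, ys):
    """Least-squares fit of y = alpha*x + beta to sampled points.

    Returns W = [alpha, beta], mirroring the patent's W_k = [alpha_k, beta_k].
    """
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    # X is the n x 2 matrix [X_n, 1]: the abscissa column plus a column of ones.
    X = np.column_stack([xs, np.ones_like(xs)])
    # Normal-equation solution W = (X^T X)^-1 X^T Y.
    W = np.linalg.inv(X.T @ X) @ X.T @ ys
    return W  # [alpha, beta]

# Example: 8 collinear sampling points on y = 0.5x + 3.
xs = np.arange(8)
ys = 0.5 * xs + 3.0
alpha, beta = fit_correction_line(xs, ys)
print(round(alpha, 6), round(beta, 6))  # 0.5 3.0
```

With noisy, roughly collinear points (as in the manual 8-point selection of embodiment 2), the same formula returns the best-fitting line rather than an exact one.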
Step 1-6: store the elevation angle φ of the camera sensor in the real-world coordinate system, the relative height Z_0 between the camera sensor and the correction line, the parameter matrices W_k1 and W_k2, and the feature-point matrix F_k in the database as the complete information of image I_k.
Step 1-7: repeat Steps 1-1 to 1-6 until the database is completely built.
The present invention also provides a marker-based multiple-correction-line indoor localization method, comprising the following steps:
Step 2-1: the user obtains an image containing a marker through a camera sensor; denote it as image I_on. Extract a feature-point matrix from image I_on by the SURF algorithm and send the feature-point matrix to the server side; the marker contains two correction lines. The feature-point matrices referred to in the present invention are all extracted by the SURF algorithm.
Step 2-2: at the server side, compute the nearest-neighbor Euclidean distances between the feature-point matrix of image I_on and the feature-point matrices of the images stored in the database; according to the nearest-neighbor Euclidean distances, retrieve from the database the image bearing the same marker that best matches image I_on, and denote it I_opt. Match I_opt with I_on by SURF to obtain m pairs of matched points. The database is the database described in any one of claims 1 to 3, and the nearest-neighbor Euclidean distance is the Euclidean distance computed with the nearest-neighbor algorithm;
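The retrieval in Step 2-2 can be sketched as a nearest-neighbor search over stored feature-point matrices. The sketch below uses small random matrices in place of real SURF descriptors, and the helper names (nearest_neighbor_distance, retrieve_best) are illustrative assumptions, not the patent's API:

```python
import numpy as np

def nearest_neighbor_distance(F_query, F_db):
    """Mean nearest-neighbor Euclidean distance from each query descriptor
    to its closest descriptor in a stored image's feature matrix."""
    # Pairwise Euclidean distances, shape (n_query, n_db).
    d = np.linalg.norm(F_query[:, None, :] - F_db[None, :, :], axis=2)
    return d.min(axis=1).mean()

def retrieve_best(F_query, database):
    """Return the key of the stored image whose feature matrix is closest."""
    return min(database, key=lambda k: nearest_neighbor_distance(F_query, database[k]))

rng = np.random.default_rng(0)
F1 = rng.normal(size=(10, 64))                  # stored image "A"
F2 = rng.normal(size=(10, 64)) + 5.0            # stored image "B", far away
query = F1 + 0.01 * rng.normal(size=F1.shape)   # near-copy of "A"
best = retrieve_best(query, {"A": F1, "B": F2})
print(best)  # A
```

In practice the descriptor matrices would come from a SURF extractor and the search could be accelerated with a k-d tree, but the distance criterion is the same.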
Step 2-3: reject the mismatched points among the m pairs of matched points using the RANSAC algorithm, and obtain the homography matrix H from the feature descriptors of image I_opt to the feature descriptors of image I_on. Define the coordinates P_i of the n sampling points of a correction line in the image pixel coordinate system of image I_opt as (x_i, y_i, 1)^T, mapped to coordinates Q_i = (x_i', y_i', 1)^T in the image pixel coordinate system of image I_on, where i = 1, 2, ..., n; the mapping relation is then expressed as:
[Q_1 Q_2 … Q_n] = H [P_1 P_2 … P_n]   (3)
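The mapping of formula (3) is the standard action of a homography on homogeneous pixel coordinates. A minimal sketch, using an assumed pure-translation H so the result is easy to check:

```python
import numpy as np

def map_points(H, P):
    """Apply homography H to homogeneous points P (3 x n), as in
    [Q_1 ... Q_n] = H [P_1 ... P_n], then renormalize the last row to 1."""
    Q = H @ P
    return Q / Q[2]  # divide each column by its homogeneous scale

# Pure-translation homography: shift by (2, -1) in pixel coordinates.
H = np.array([[1.0, 0.0,  2.0],
              [0.0, 1.0, -1.0],
              [0.0, 0.0,  1.0]])
P = np.array([[0.0, 4.0],
              [0.0, 3.0],
              [1.0, 1.0]])  # two sampling points in homogeneous form
Q = map_points(H, P)
print(Q[:2].tolist())  # [[2.0, 6.0], [-1.0, 2.0]]
```

For a general H the division by the third row is what makes the mapping projective rather than affine.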
Step 2-4: solve the parametric equation corresponding to correction line l_k1 in image I_on. Detailed process: define the coordinate matrix of l_k1 as Q = [Q_1 Q_2 … Q_n]; split Q, forming a new matrix X from rows 1 and 3 of Q and taking the remaining part of Q as the vector Y; the line equation of l_k1 in image I_on is then expressed as Y = W_1 X, where Y and X are known. Defining the parameter matrix W_1 = [α_1 β_1], it is obtained as:
W_1 = Y X^T (X X^T)^(-1)   (4)
W_1 represents the correction line l_k1 in image I_on.
Step 2-5: solve the parametric equation corresponding to correction line l_k2 in image I_on; the detailed process is identical to Step 2-4, and the parameter matrix W_2 obtained is:
W_2 = Y X^T (X X^T)^(-1)   (5)
W_2 represents the correction line l_k2 in image I_on.
Step 2-6: solve the transformation relation by which points on correction line l_k1 map from the reference coordinate system to the image physical coordinate system;
Step 2-7: solve the transformation relation by which points on correction line l_k2 map from the reference coordinate system to the image physical coordinate system;
Step 2-8: solve the position coordinates [X_0, Y_0] and the steering angle θ of the camera in the reference coordinate system, found from formula (13), where the variables b and c are known quantities obtained from the off-line phase and the geometric constraint relation of the homography matrix, and the variables Z_0 and the elevation angle φ are prior information obtained by manual measurement; the focal length f of the camera is solved from formula (18). Substituting the value of θ into formula (13) yields the value of Y_0, and X_0 is then found from formulas (10) and (11).
Step 2-9: solve the user's position coordinates in the real world. Detailed process: define the coordinate value in the world coordinate system of the starting point of correction line l_k1 in the reference coordinate system as T_r = [X_r, Y_r, Z_r]^T, and the rotation relation between the plane of the marker target and the world coordinate system as R_r, where X_r, Y_r and Z_r denote the offsets between the reference coordinate system and the world coordinate system along the X-, Y- and Z-axis directions respectively. Let L_s = [X_0, Y_0, 1]^T denote the coordinates of any point in the reference coordinate system; its corresponding real-world coordinates L_r then satisfy:
L_r = R_r^(-1) · L_s − T_r   (22)
According to formula (22), the user's coordinate information in the reference coordinate system is converted into the user's position coordinates in real-world coordinates, completing the estimation of the user's location.
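Formula (22) is a rigid change of coordinate frame. A small sketch under assumed values of R_r and T_r (the function name reference_to_world and the numbers are our own illustration):

```python
import numpy as np

def reference_to_world(Ls, Rr, Tr):
    """Convert a point from the reference coordinate system to world
    coordinates via L_r = R_r^-1 . L_s - T_r, as in formula (22)."""
    return np.linalg.inv(Rr) @ Ls - Tr

# Assumed 90-degree rotation about Z between the marker plane and the world
# frame, and an assumed marker-origin offset of (10, 5, 0).
theta = np.pi / 2
Rr = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
Tr = np.array([10.0, 5.0, 0.0])
Ls = np.array([2.0, 0.0, 1.0])  # [X0, Y0, 1] from Step 2-8
Lr = reference_to_world(Ls, Rr, Tr)
print(np.round(Lr, 6).tolist())  # [-10.0, -7.0, 1.0]
```

For a pure rotation matrix, R_r^-1 equals R_r^T, which is cheaper and numerically safer than a general inverse.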
The beneficial effects of the invention are: compared with the prior art, the present invention adds one additional correction line, so that the camera parameters of the user terminal need not be acquired during indoor localization. Camera parameters are often difficult to obtain; the method of the invention avoids this parameter, making the indoor positioning result more efficient and more accurate.
Detailed description of the invention
Fig. 1 is a flowchart of the marker-based multiple-correction-line visual database construction method of an embodiment of the present invention;
Fig. 2 is a schematic diagram of the reference coordinate system, camera coordinate system and image physical coordinate system of an embodiment of the present invention;
Fig. 3 is a schematic diagram of correction-line selection in an embodiment of the present invention;
Fig. 4 is a flowchart of the marker-based multiple-correction-line visual localization method of an embodiment of the present invention;
Fig. 5 is a flowchart of Steps 2-3-1 to 2-3-5 in an embodiment of the present invention.
Specific embodiment
The present invention is applicable to indoor positioning, for example accurately locating a user inside a shopping mall. The method of the invention is divided into a database construction method and an indoor localization method; indoor localization needs the image information in the database, so in actual use the database must be built first. Indoor localization is carried out with the user's camera-equipped device: the captured image is matched and computed against the images in the database to finally obtain the user's position.
The present invention involves four coordinate systems: the reference coordinate system, the camera coordinate system, the image pixel coordinate system and the image physical coordinate system. Their schematic diagram is shown in Fig. 2, where RCS is the reference coordinate system, CCS is the camera coordinate system and ICS is the image physical coordinate system; R_0 and C_0 are the coordinate origins. The reference coordinate system is composed of three orthogonal axes (X, Y, Z) on the marker target: correction line l_k1 along the upper boundary of the marker target pointing right is the X-axis, the direction perpendicular to the marker target pointing inward is the Y-axis, and l_k2 perpendicular to the ground pointing downward is the Z-axis; the origin is the upper-left corner of the marker target. The origin of the camera coordinate system is the optical center of the camera, with the camera's optical axis as the W-axis together with the U-axis and V-axis, where the U-V plane is parallel to the imaging plane of the camera. The image physical coordinate system is a two-dimensional coordinate system representing the image of real-world objects projected onto the imaging plane of the camera's CCD (Charge-Coupled Device) sensor; the u-axis and v-axis of this coordinate system correspond to the U-axis and V-axis of the camera coordinate system respectively. The image pixel coordinate system takes the upper-left corner of the image of the marker target (e.g. a poster) as the origin, with the X-axis pointing right and the Y-axis pointing down from the origin; the image pixel coordinate system and the image physical coordinate system are related by a scaling correspondence.
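The scaling correspondence between the image pixel coordinate system and the image physical coordinate system is not spelled out here; a common form, assumed purely for illustration, subtracts a principal point and scales by the physical pixel size:

```python
def pixel_to_physical(x_pix, y_pix, cx, cy, sx, sy):
    """Convert image pixel coordinates to image physical coordinates.

    cx, cy: pixel coordinates of the principal point (illustrative assumption);
    sx, sy: physical size of one pixel along u and v (e.g. mm per pixel).
    """
    u = (x_pix - cx) * sx
    v = (y_pix - cy) * sy
    return u, v

# A 640x480 sensor with 0.01 mm pixels and principal point at the center:
# the image center maps to the physical origin.
u, v = pixel_to_physical(320, 240, 320, 240, 0.01, 0.01)
print(u, v)  # 0.0 0.0
```

Under this assumption the pixel and physical systems differ only by the offset (cx, cy) and the per-axis scales (sx, sy), which is what "established by way of scaling" suggests.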
Specific embodiment 1: the marker-based multiple-correction-line visual database construction method of this embodiment comprises:
Step 1-1: capture an image of the marker target with a camera sensor; denote the obtained image as image I_k (k = 1, 2, 3);
Step 1-2: establish two correction lines l_k1 and l_k2 on the marker target; select n sampling points on each of l_k1 and l_k2, and obtain the line equations of l_k1 and l_k2 under the image pixel coordinate system from the sampling points. The correction line l_k1 lies parallel to the horizontal ground in the world coordinate system, and l_k1 is a ray whose starting point lies on the marker target; the correction line l_k2 lies perpendicular to the horizontal ground and perpendicular to l_k1 in the world coordinate system, with its starting point on the marker target;
Step 1-3: extract a feature-point matrix from I_k using the SURF algorithm, obtaining the feature-point matrix F_k that characterizes image I_k;
Step 1-4: solve the line equation, sampling-point coordinates and origin of correction line l_k1 in the image pixel coordinate system. Specifically: let the n sampling points form the pixel coordinate matrix [X_n Y_n], where X_n and Y_n denote the abscissa and ordinate vectors of the n sampling points, each an n × 1 column vector; the origin is the left endpoint of l_k1. With l_k1 expressed as y = α_k1 x + β_k1 and W_k1 = [α_k1 β_k1], W_k1 is obtained by the least-squares formula W_k1 = ((X^T X)^(-1) X^T Y_n)^T, where X is the n × 2 matrix formed by appending a column of ones to the right of the column vector X_n;
Step 1-5: solve the line equation, sampling-point coordinates and origin of correction line l_k2 in the image pixel coordinate system in the same way: with l_k2 expressed as y = α_k2 x + β_k2 and W_k2 = [α_k2 β_k2], W_k2 is obtained by W_k2 = ((X^T X)^(-1) X^T Y_n)^T;
Step 1-6: store the elevation angle φ of the camera sensor in the real-world coordinate system, the relative height Z_0 between the camera sensor and the correction line, the parameter matrices W_k1 and W_k2, and the feature-point matrix F_k in the database as the complete information of image I_k;
Step 1-7: repeat Steps 1-1 to 1-6 until the database is completely built.
It should be particularly noted that, through the additional correction line, the present invention obtains a parameter not provided in the prior art, namely the elevation angle φ. The present invention uses the elevation angle φ in place of the camera parameters of the prior art. The elevation angle φ is easily obtained by measurement and calculation, whereas the two camera parameters are often difficult to obtain and differ from camera to camera; the calculation and use of the elevation angle φ here therefore embodies the innovation of the invention well: an easily acquired quantity replaces a hard-to-acquire one, greatly increasing the versatility and accuracy of the method.
The database construction method of this embodiment is also referred to as the off-line phase.
Specific embodiment 2: this embodiment differs from specific embodiment 1 in that the marker target is a poster. That is, when building the database we choose a poster as the target, photograph the poster, and manually draw the correction lines on it. The selection of sampling points in Step 1 is the process of manually drawing the correction lines; specifically, 8 points can be selected by human judgment, the 8 points being judged roughly collinear by visual inspection.
Other steps and parameters are the same as in specific embodiment 1.
Specific embodiment 3: this embodiment differs from specific embodiments 1 and 2 in that l_k1 is the upper boundary line of the poster, l_k2 is the left boundary line of the poster, and the origin is the corner point at the upper left of the poster. That is, the poster is assumed to be rectangular, and the two boundary lines are a pair of non-parallel sides; two mutually orthogonal lines are selected to ease subsequent calculation, but other choices can also realize this scheme.
Other steps and parameters are the same as in specific embodiment 1 or 2.
Specific embodiment 4: the marker-based multiple-correction-line indoor localization method of this embodiment comprises:
Step 2-1: the user obtains an image containing a marker through a camera sensor; denote it as image I_on. Extract a feature-point matrix from image I_on by the SURF algorithm and send the feature-point matrix to the server side. The marker contains two correction lines, i.e., an image containing a marker is an image containing the two correction lines.
The process by which the user obtains the image through the camera sensor can be performed by shooting with a camera-equipped device such as the user's mobile phone.
Step 2-2: at the server side, compute the nearest-neighbor Euclidean distances between the feature-point matrix of image I_on and the feature-point matrices of the images stored in the database; according to the nearest-neighbor Euclidean distances, retrieve from the database the image bearing the same marker that best matches image I_on and denote it I_opt; match I_opt with I_on by SURF to obtain m pairs of matched points. The database is the database of specific embodiment 1, and the nearest-neighbor Euclidean distance is the Euclidean distance computed with the nearest-neighbor algorithm.
"Best match" means the shortest Euclidean distance. The process of "obtaining m pairs of matched points" can preset a Euclidean distance threshold: if the Euclidean distance of a pair of SURF-matched points in the two images is less than this threshold, the pair is considered a matched pair.
Step 2-3: reject the mismatched points among the m pairs of matched points using the RANSAC algorithm, and obtain the homography matrix H from the feature descriptors of image I_opt to the feature descriptors of image I_on. Define the coordinates P_i of the n sampling points of a correction line in the image pixel coordinate system of image I_opt as (x_i, y_i, 1)^T, mapped to coordinates Q_i = (x_i', y_i', 1)^T in the image pixel coordinate system of image I_on, where i = 1, 2, ..., n; the mapping relation is then expressed as:
[Q_1 Q_2 … Q_n] = H [P_1 P_2 … P_n]   (3)
Step 2-4: solve the parametric equation corresponding to correction line l_k1 in image I_on. Detailed process: define the coordinate matrix of l_k1 as Q = [Q_1 Q_2 … Q_n]; split Q, forming a new matrix X from rows 1 and 3 of Q and taking the remaining part of Q as the vector Y; the line equation of l_k1 in image I_on is then expressed as Y = W_1 X, where Y and X are known. Defining the parameter matrix W_1 = [α_1 β_1], it is obtained as:
W_1 = Y X^T (X X^T)^(-1)   (4)
W_1 represents the correction line l_k1 in image I_on.
Step 2-5: solve the parametric equation corresponding to correction line l_k2 in image I_on; the detailed process is identical to Step 2-4, and the parameter matrix W_2 obtained is:
W_2 = Y X^T (X X^T)^(-1)   (5)
W_2 represents the correction line l_k2 in image I_on.
Step 2-6: solve the transformation relation by which points on correction line l_k1 map from the reference coordinate system to the image physical coordinate system;
Step 2-7: solve the transformation relation by which points on correction line l_k2 map from the reference coordinate system to the image physical coordinate system;
Step 2-8: solve the position coordinates [X_0, Y_0] and the steering angle θ of the camera in the reference coordinate system, found from formula (13), where the variables b and c are known quantities obtained from the off-line phase and the geometric constraint relation of the homography matrix, and the variables Z_0 and the elevation angle φ are prior information obtained by manual measurement; the focal length f of the camera is solved from formula (18). Substituting the value of θ into formula (13) yields the value of Y_0, and X_0 is then found from formulas (10) and (11).
Step 2-9: solve the user's position coordinates in the real world. Detailed process: define the coordinate value in the world coordinate system of the starting point of correction line l_k1 in the reference coordinate system as T_r = [X_r, Y_r, Z_r]^T, and the rotation relation between the plane of the marker target and the world coordinate system as R_r, where X_r, Y_r and Z_r denote the offsets between the reference coordinate system and the world coordinate system along the X-, Y- and Z-axis directions respectively. Let L_s = [X_0, Y_0, 1]^T denote the coordinates of any point in the reference coordinate system; its corresponding real-world coordinates L_r then satisfy:
L_r = R_r^(-1) · L_s − T_r   (22)
According to formula (22), the user's coordinate information in the reference coordinate system is converted into the user's position coordinates in real-world coordinates, completing the estimation of the user's location.
The indoor localization method of this embodiment is also referred to as the on-line stage.
Specific embodiment 5: this embodiment differs from specific embodiment 4 in that, in Step 2-3, rejecting the mismatched points among the m pairs of matched points using the RANSAC algorithm specifically comprises:
Step 2-3-1: set the initial values of the iteration count n and the match-point index i to 0;
Step 2-3-2: select 4 pairs from the m pairs of matched points obtained in Step 2-2 and compute the homography matrix H between image I_on and image I_opt; use H to compute, for the remaining matched points among the m pairs, the theoretical coordinates under the image coordinate system of the corresponding marker target;
Step 2-3-3: compute the Euclidean distance between the theoretical coordinates of the i-th matched point in image I_opt and the coordinates of the i-th feature point in image I_on, where 1 ≤ i ≤ m;
Step 2-3-4: judge whether the Euclidean distance in Step 2-3-3 is less than the Euclidean distance threshold; if so, execute Step 2-3-5; if not, execute Step 2-3-3. The Euclidean distance threshold here can be 0.01.
Step 2-3-5: add 1 to the value of n and judge whether the value of n exceeds the preset iteration count n_0 = 20; if so, keep the matched point pair and save the homography matrix H; if not, reject the matched point pair and execute Step 2-3-2.
Other steps and parameters are the same as in one of specific embodiments 1 to 4.
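The RANSAC loop of Steps 2-3-1 to 2-3-5 can be sketched as follows. This is a simplified stand-in, not the patent's exact iteration or thresholds: it fits a homography from 4 sampled pairs by the direct linear transform (DLT), counts inliers by reprojection error, and keeps the best model. All names and values are our own illustration:

```python
import numpy as np

def dlt_homography(P, Q):
    """Direct linear transform: homography from >= 4 point pairs (n x 2 arrays)."""
    A = []
    for (x, y), (u, v) in zip(P, Q):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)       # null-space vector of the constraint matrix
    return H / H[2, 2]             # normalize so H[2,2] = 1

def ransac_homography(P, Q, n_iter=50, thresh=1.0, seed=0):
    """Toy RANSAC: sample 4 pairs, fit H, keep the H with the most inliers."""
    rng = np.random.default_rng(seed)
    best_H, best_inliers = None, np.zeros(len(P), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(P), 4, replace=False)
        H = dlt_homography(P[idx], Q[idx])
        Ph = np.column_stack([P, np.ones(len(P))]) @ H.T
        proj = Ph[:, :2] / Ph[:, 2:3]          # reprojected points
        inliers = np.linalg.norm(proj - Q, axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_H, best_inliers = H, inliers
    return best_H, best_inliers

# Synthetic matches under a known translation, plus one gross mismatch.
H_true = np.array([[1.0, 0.0, 5.0], [0.0, 1.0, -3.0], [0.0, 0.0, 1.0]])
P = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 8.0], [0.0, 8.0],
              [5.0, 4.0], [2.0, 7.0]])
Q = P + np.array([5.0, -3.0])
Q[-1] = [100.0, 100.0]  # mismatched pair that should be flagged as an outlier
H, inliers = ransac_homography(P, Q)
print(inliers.tolist())  # the mismatched last pair comes out False
```

Production code would typically use a library routine such as OpenCV's findHomography with the RANSAC flag rather than hand-rolled DLT.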
Specific embodiment 6: this embodiment differs from specific embodiment 4 or 5 in that, in Step 2-6, the pixel-coordinate equation of the correction line l_k1 already found in the pixel coordinate system of image I_on is expressed, after transformation, in the image physical coordinate system as the equation l_1: u + bv + c = 0. It is stipulated that when the user captures the image, the user's position has coordinates (X_0, Y_0, Z_0) in the reference coordinate system of the current marker target, where Z_0 is a prior condition obtained by manual measurement and (X_0, Y_0) is the user location to be solved. The correction line l_k1 has another expression l_1' in the image physical coordinate system, and the expression of l_1' is equivalent to the correction-line equation l_1: u + bv + c = 0. Solving the transformation by which points on correction line l_k1 map from the reference coordinate system to the image physical coordinate system is therefore equivalent to solving the parameters of the equation l_1: u + bv + c = 0. Detailed process:
Step 2-6-1: convert the reference coordinate system into the camera coordinate system. Detailed process: translate the origin R_0 of the reference coordinate system by the translation vector (−X_0, −Y_0, −Z_0) to the origin of the camera coordinate system; rotate the X-Y plane counterclockwise about the Z-axis so that the Y-Z plane is parallel to the V-W plane of the camera coordinate system, the rotation angle being θ; rotate the Y-Z plane counterclockwise about the X-axis so that the X-Y plane is parallel to the U-V plane, the rotation angle being the elevation angle φ.
Step 262: solve the transformation that maps points on the correction line lk1 from the reference frame to the camera coordinate system. It is stipulated that the counterclockwise direction of rotation is positive and that the reference frame obeys the left-handed coordinate-system convention. Any point [x, y, z]T in the reference frame maps to the coordinate [u, v, w]T in the camera coordinate system, expressed as:
where [u, v, w]T and [x, y, z]T are an arbitrary point in the camera coordinate system and in the reference frame respectively, and the variables x0, y0, z0 on the right-hand side of the equal sign are obtained from the rotation matrix R and the coordinate vector [X0,Y0,Z0] in the reference frame, and can be expressed as the following formula:
The rotation matrix R is obtained from formula (8):
where the matrix RZ denotes the rotation matrix obtained by first rotating θ degrees counterclockwise about the Z axis, and the matrix RX denotes the rotation matrix obtained by rotating the camera elevation angle counterclockwise about the X axis;
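Formula (8) itself is not reproduced in this text. Under the usual convention, the composite rotation of step 261 (θ about Z, then the elevation angle about X) can be sketched in numpy as follows; the composition order R = RX·RZ and the symbol phi for the elevation angle are assumptions, not taken from the missing formula:

```python
import numpy as np

def Rz(theta):
    # counterclockwise rotation by theta about the Z axis
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def Rx(phi):
    # counterclockwise rotation by phi about the X axis
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

theta = np.deg2rad(30.0)   # steering angle about Z (illustrative value)
phi = np.deg2rad(15.0)     # elevation angle about X (symbol phi is assumed)

# Rotate about Z first, then about X, matching the order of step 261.
R = Rx(phi) @ Rz(theta)

# Any proper rotation matrix is orthonormal with determinant +1.
assert np.allclose(R.T @ R, np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)
```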
Define the coordinates of a point P on the X axis of the reference frame as [x, 0, 0]T; then the coordinates [ux, vx, wx]T to which it maps in the camera coordinate system can be obtained by the following formula:
so that
Step 263: solve the transformation that maps points on the correction line lk1 from the camera coordinate system to the image physical coordinate system;
Define the coordinates in the image physical coordinate system of the point corresponding to the point [ux, vx] in the camera coordinate system as [up, vp]; according to the geometric mapping relationship of the camera, one obtains
where f denotes the focal length of the camera. Substituting formula (11) into formula (10) and eliminating the variable x yields an equation in up and vp:
Formula (12) and u+bv+c=0 represent the same straight line in the image physical coordinate system (ICS); by the method of undetermined coefficients, it can be deduced that
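The chain of mappings in steps 261 to 263 (translate, rotate, pinhole-project) can be checked numerically: points sampled on the reference-frame X axis must all land on a single image line u+bv+c=0. The sketch below assumes a composite rotation R = RX(phi)·RZ(theta) and the projection up = f·u/w, vp = f·v/w; the pose numbers, the symbol phi, and these conventions are illustrative assumptions, since formulas (8) to (12) are not reproduced in this text:

```python
import numpy as np

# Hypothetical pose and focal length -- illustrative numbers only.
X0, Y0, Z0 = 1.0, 3.0, 0.8
theta, phi = np.deg2rad(20.0), np.deg2rad(10.0)
f = 1.2

def Rz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def Rx(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

R = Rx(phi) @ Rz(theta)

def project(p_ref):
    # reference frame -> camera frame (translate, then rotate) -> pinhole image
    u, v, w = R @ (np.asarray(p_ref, dtype=float) - np.array([X0, Y0, Z0]))
    return np.array([f * u / w, f * v / w])

# Sample points [x, 0, 0] on the correction line l_k1 (the reference-frame X axis).
pts = np.array([project([x, 0.0, 0.0]) for x in np.linspace(-2.0, 2.0, 9)])

# Fit u + b*v + c = 0 by least squares: [v 1][b c]^T = -u.
A = np.column_stack([pts[:, 1], np.ones(len(pts))])
b, c = np.linalg.lstsq(A, -pts[:, 0], rcond=None)[0]

residual = np.abs(pts[:, 0] + b * pts[:, 1] + c).max()
assert residual < 1e-9   # the projections of l_k1 do lie on one straight line
```

The collinearity holds for any projective camera, which is what lets steps 263 and 273 recover the pose parameters from the fitted coefficients b and c.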
Other steps and parameters are identical to those of specific embodiment 4 or 5.
Specific embodiment 7: this embodiment differs from one of specific embodiments 4 to 6 as follows:
In step 27, define the image pixel coordinate equation of the correction line lk2 found in the image pixel coordinate system of Ion, expressed after transformation in the ICS as the equation l2: u+b1v+c1=0; the parameters of equation l2 are known and l2 is equivalent to l'2. Solving the transformation that maps points on the correction line lk2 from the reference frame to the image physical coordinate system is equivalent to solving the equation l'2. The detailed process is as follows:
Step 271: solve the transformation that maps points on the correction line lk2 from the reference frame to the camera coordinate system. Define the coordinates in the reference frame of a point on the correction line lk2 as [0, 0, z]; after it is transformed from the reference frame into the camera coordinate system, its coordinates [uy, vy, wy]T in the camera coordinate system are
where the matrix R denotes the rotation matrix from the RCS to the ICS and is identical to the R obtained from formula (8); after the values of x0, y0, z0 from formula (7) are substituted into the formula, the following formula is obtained:
Step 272: solve the transformation that maps points on the correction line lk2 from the camera coordinate system to the image physical coordinate system; solve for the point [up, vp] in the image physical coordinate system to which the point [uy, vy, wy]T in the camera coordinate system maps. This mapping is a process of equal-proportion scaling and projection; through the mapping relationship shown in the formula, eliminating the variable z yields
where formula (16) and the equation l2: u+b1v+c1=0 represent the same straight line in the image physical coordinate system; by the method of undetermined coefficients, it can be deduced that
where the variables b1, c1, Z0 and the camera elevation angle are known quantities; combining the two equations in formula (17) yields
Step 273: solve the position coordinates [X0,Y0] of the camera in the reference frame and the steering angle θ. From formula (13) it is found that
where the variables b and c are results obtained from the database construction process and the geometrical constraint relationship of the homography matrix, and are known quantities; the variables Z0 and the camera elevation angle are obtained as prior information by manual measurement; the focal length f of the camera is solved according to formula (18);
Substituting the value of θ into formula (13) gives the value of Y0 as
X0 can then be found from the formulas as
Other steps and parameters are identical to those of one of specific embodiments 4 to 6.
The present invention may also have various other embodiments. Without departing from the spirit and substance of the present invention, those skilled in the art can make various corresponding changes and modifications in accordance with the present invention, but all such corresponding changes and modifications shall fall within the protection scope of the appended claims of the present invention.

Claims (5)

1. A method for constructing an indoor visual database based on identified multiple correction lines, characterized by comprising:
Step 11: performing image acquisition of an identification target using a camera sensor, the obtained image being denoted as image Ik;
Step 12: establishing two correction lines lk1 and lk2 on the identification target, selecting n sampling points on each of lk1 and lk2, and obtaining from the sampling points the linear equations of lk1 and lk2 in the image pixel coordinate system; the correction line lk1 is positioned parallel to the horizontal ground in the world coordinate system, and lk1 is a ray whose starting point lies on the identification target; the correction line lk2 is positioned perpendicular to the horizontal ground and perpendicular to lk1 in the world coordinate system, its starting point lying on the identification target;
Step 13: extracting a feature point matrix from Ik using the speeded-up robust features algorithm, obtaining a feature point matrix Fk that characterizes the features of image Ik;
Step 14: solving the linear equation, sampling-point coordinates and origin of the correction line lk1 in the image pixel coordinate system, specifically: the n sampling points constitute a pixel coordinate matrix, whose abscissa vector and ordinate vector of the n sampling points are each n × 1 column vectors; the origin is the left endpoint of lk1; if the expression of the straight line lk1 is y = αk1x + βk1, the parameters αk1 and βk1 are represented by the matrix Wk1 = [αk1 βk1], and Wk1 is obtained by the following formula:
where X is the new n × 2 matrix formed by adding a column of elements 1 to the right of the column vector Xn;
Step 15: solving the linear equation, sampling-point coordinates and origin of the correction line lk2 in the image pixel coordinate system, specifically: the n sampling points constitute a pixel coordinate matrix, whose abscissa vector and ordinate vector of the n sampling points are each n × 1 column vectors; if the expression of the straight line lk2 is y = αk2x + βk2, the parameters αk2 and βk2 are represented by the matrix Wk2 = [αk2 βk2], and Wk2 is obtained by the following formula:
Step 16: storing the elevation angle of the camera sensor in the real-world coordinate system, the relative height Z0 between the camera sensor and the correction line, the parameter matrix Wk1, the parameter matrix Wk2 and the feature point matrix Fk in the database as all the information of image Ik;
Step 17: repeating steps 11 to 16 until the database construction is finished.
2. The method according to claim 1, characterized in that the identification target is a poster.
3. The method according to claim 2, characterized in that lk1 is the upper boundary line of the poster, lk2 is the left boundary line of the poster, and the origin is the edge point at the upper-left corner of the poster.
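The line-fitting step of claim 1 (steps 14 and 15), written out as formulas (4) and (5) in claim 4, is ordinary least squares. A minimal numpy sketch with made-up sample points follows; the row layout of X and Y (abscissas over a row of ones, ordinates as a 1 × n row) is inferred from the matrix dimensions in claim 4 and is an assumption:

```python
import numpy as np

# Pixel samples along a correction line: noise-free points on y = 0.5 x + 3
# (illustrative values, not from the patent).
x = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
X = np.vstack([x, np.ones_like(x)])      # 2 x n: abscissas over a row of ones
Y = (0.5 * x + 3.0).reshape(1, -1)       # 1 x n: ordinates

# W = Y X^T (X X^T)^{-1}  -- ordinary least squares, as in formulas (4)/(5).
W = Y @ X.T @ np.linalg.inv(X @ X.T)
alpha, beta = W.ravel()
print(alpha, beta)   # -> approximately 0.5 3.0
```

For noise-free collinear samples the normal equations recover the slope and intercept exactly, up to floating-point round-off; with noisy samples they give the best-fitting line in the least-squares sense.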
4. A multi-correction-line indoor positioning method based on identification, characterized by comprising the following steps:
Step 21: a user obtains an image bearing an identification, denoted as image Ion; a feature point matrix is extracted from the image Ion by the speeded-up robust features algorithm and sent to the server end; the identification includes two correction lines;
Step 22: at the server end, calculating the nearest-neighbor Euclidean distances between the feature point matrix of image Ion and the feature point matrices of the images stored in the database, and retrieving in the database, according to the nearest-neighbor Euclidean distances, the image bearing the same identification that best matches image Ion; the image with the same identification is denoted as Iopt; matching Iopt with Ion by speeded-up robust features to obtain m matched point pairs; the database is the database according to any one of claims 1 to 3, and the nearest-neighbor Euclidean distance is the Euclidean distance calculated using the nearest-neighbor algorithm;
Step 23: rejecting the mismatched points among the m matched point pairs using the RANSAC algorithm, and obtaining the homography matrix H from the feature descriptors of image Iopt to the feature descriptors of image Ion; defining the coordinates Pi, in the image pixel coordinate system, of the n sampling points of the correction line of image Iopt as (xi,yi,1)T, mapped to the coordinates Qi = (xi,yi,1)T in the image pixel coordinate system of image Ion, where i = 1, 2, …, n; the mapping relationship is then expressed as:
[Q1 Q2 … Qn]=H [P1 P2 … Pn] (3)
Step 24: solving the parametric equation corresponding to the correction line lk1 in image Ion. Detailed process: define the coordinate matrix of lk1 as Q = [Q1 Q2 … Qn]; separate Q, forming a new matrix X from the separated 1st and 3rd rows of matrix Q and taking the remaining part of Q as the vector Y; the linear equation of the correction line lk1 in image Ion is then expressed as YT = W1XT, where Y and X are known; defining the parameter matrix W1 = [α1 β1], it is obtained that:
W1=YXT(XXT)-1 (4)
W1 represents the correction line lk1 in image Ion;
Step 25: solving the parametric equation corresponding to the correction line lk2 in image Ion; the detailed process is identical to step 24, and the obtained parameter matrix W2 is:
W2=YXT(XXT)-1 (5)
W2 represents the correction line lk2 in image Ion;
Step 26: solving the transformation that maps points on the correction line lk1 from the reference frame to the image physical coordinate system;
In step 26, let the pixel-coordinate equation of the correction line lk1 found in the pixel coordinate system of image Ion be expressed, after transformation into the image physical coordinate system, as the equation l1: u+bv+c=0; it is stipulated that the position of the user when acquiring the image has coordinates (X0,Y0,Z0) in the reference frame of the current identification target, where Z0, as a priori condition, is obtained by measurement, and (X0,Y0) denotes the user position to be solved; the correction line lk1 has another expression l′1 in the image physical coordinate system, and the expression l′1 is equivalent to the correction-line equation l1: u+bv+c=0; the detailed process of step 26 is as follows:
Step 261: converting the reference frame into the camera coordinate system. Detailed process: translating the origin R0 of the reference frame by the vector (−X0,−Y0,−Z0) to the origin of the camera coordinate system; rotating the X-Y plane counterclockwise about the Z axis so that the Y-Z plane is parallel to the V-W plane of the camera coordinate system, the rotation angle being θ; rotating the Y-Z plane counterclockwise about the X axis so that the X-Y plane is parallel to the U-V plane, the rotation angle being the camera elevation angle;
Step 262: solving the transformation that maps points on the correction line lk1 from the reference frame to the camera coordinate system; it is stipulated that the counterclockwise direction of rotation is positive and that the reference frame obeys the left-handed coordinate-system convention; any point [x, y, z]T in the reference frame maps to the coordinate [u, v, w]T in the camera coordinate system, expressed as:
where [u, v, w]T and [x, y, z]T are an arbitrary point in the camera coordinate system and in the reference frame respectively, and the variables x0, y0, z0 on the right-hand side of the equal sign are obtained from the rotation matrix R and the coordinate vector [X0,Y0,Z0] in the reference frame, expressed as the following formula:
The rotation matrix R is obtained from formula (8):
where the matrix RZ denotes the rotation matrix obtained by first rotating θ degrees counterclockwise about the Z axis, and the matrix RX denotes the rotation matrix obtained by rotating the camera elevation angle counterclockwise about the X axis;
Defining the coordinates of a point P on the X axis of the reference frame as [x, 0, 0]T, the coordinates [ux, vx, wx]T to which it maps in the camera coordinate system are obtained by the following formula:
so that
Step 263: solving the transformation that maps points on the correction line lk1 from the camera coordinate system to the image physical coordinate system;
defining the coordinates in the image physical coordinate system of the point corresponding to the point [ux, vx] in the camera coordinate system as [up, vp], one obtains, according to the geometric mapping relationship of the camera,
where f denotes the focal length of the camera; substituting formula (11) into formula (10) and eliminating the variable x yields an equation in up and vp:
formula (12) and u+bv+c=0 represent the same straight line in the ICS; according to the method of undetermined coefficients, one obtains
Step 27: solving the transformation that maps points on the correction line lk2 from the reference frame to the image physical coordinate system;
In step 27, define the image pixel coordinate equation of the correction line lk2 found in the image pixel coordinate system of Ion, expressed after transformation in the ICS as the equation l2: u+b1v+c1=0; the parameters of equation l2 are known and l2 is equivalent to l'2; solving the transformation that maps points on the correction line lk2 from the reference frame to the image physical coordinate system is equivalent to solving the equation l'2; the detailed process of step 27 is as follows:
Step 271: solving the transformation that maps points on the correction line lk2 from the reference frame to the camera coordinate system; defining the coordinates in the reference frame of a point on the correction line lk2 as [0, 0, z], its coordinates [uy, vy, wy]T in the camera coordinate system, after it is transformed from the reference frame into the camera coordinate system, are
where the matrix R denotes the rotation matrix from the RCS to the ICS and is identical to the R obtained from formula (8); after the values of x0, y0, z0 from formula (7) are substituted into the formula, the following formula is obtained:
Step 272: solving the transformation that maps points on the correction line lk2 from the camera coordinate system to the image physical coordinate system; solving for the point [up, vp] in the image physical coordinate system to which the point [uy, vy, wy]T in the camera coordinate system maps; this mapping is a process of equal-proportion scaling and projection, and through the mapping relationship shown in the formula, eliminating the variable z yields
where formula (16) and the equation l2: u+b1v+c1=0 represent the same straight line in the image physical coordinate system; according to the method of undetermined coefficients, one obtains
where the variables b1, c1, Z0 and the camera elevation angle are known quantities; combining the two equations in formula (17) yields
Step 28: solving the position coordinates [X0,Y0] of the camera in the reference frame and the steering angle θ; from formula (13) it is found that:
where the variables b and c are results obtained in the offline phase from the geometrical constraint relationship of the homography matrix and are known quantities; the variables Z0 and the camera elevation angle are obtained as prior information by measurement; the focal length f of the camera is solved according to formula (18);
substituting the value of θ into formula (13) gives the value of Y0 as
X0 is found from formulas (10) and (11) as
Step 29: solving the position coordinates of the user in the real world. Detailed process: defining the coordinate value in the world coordinate system of the starting point of the correction line lk1 in the reference frame as Tr = [Xr,Yr,Zr]T, and the rotation-angle relationship between the plane of the identification target and the world coordinate system as Rr, where Xr denotes the offset between the reference frame and the world coordinate system in the X-axis direction, Yr denotes the offset between the reference frame and the world coordinate system in the Y-axis direction, and Zr denotes the offset between the reference frame and the world coordinate system in the Z-axis direction; letting Ls = [X0,Y0,1]T denote the coordinates of any point in the reference frame, its corresponding coordinate Lr in the real world satisfies the following relation:
Lr=Rr-1·Ls-Tr (22)
According to formula (22), the coordinate information of the user in the reference frame is converted into the position coordinates of the user in real-world coordinates, thereby completing the estimation of the user's location.
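Formula (22) of step 29 converts reference-frame coordinates to world coordinates by undoing the rotation Rr and subtracting the offset Tr. A minimal numpy sketch with made-up values for Rr, Tr and Ls (all numbers illustrative, not from the patent):

```python
import numpy as np

# Illustrative rotation between the identification plane and the world frame
# (90 degrees about the vertical axis), and the world coordinates of the
# starting point of l_k1.
theta_r = np.deg2rad(90.0)
Rr = np.array([[np.cos(theta_r), -np.sin(theta_r), 0.0],
               [np.sin(theta_r),  np.cos(theta_r), 0.0],
               [0.0, 0.0, 1.0]])
Tr = np.array([2.0, 5.0, 0.0])     # offsets X_r, Y_r, Z_r

Ls = np.array([1.5, 0.5, 1.0])     # [X0, Y0, 1]^T from step 28

# Formula (22): L_r = R_r^{-1} . L_s - T_r
Lr = np.linalg.inv(Rr) @ Ls - Tr
print(Lr)   # -> approximately [-1.5, -6.5, 1.0]
```

Since Rr is a rotation, its inverse equals its transpose, so `Rr.T @ Ls - Tr` gives the same result with less numerical work.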
5. The method according to claim 4, characterized in that rejecting the mismatched points among the m matched point pairs using the RANSAC algorithm in step 23 specifically comprises:
Step 231: setting the initial values of the iteration count n and the match-point index i to 0;
Step 232: selecting 4 matched point pairs from the m matched point pairs obtained in step 22, calculating the homography matrix H between image Ion and image Iopt, and calculating, via the homography matrix H, the theoretical coordinates of the remaining matched points among the m pairs in the image coordinate system of the corresponding identification target;
Step 233: calculating the Euclidean distance between the theoretical coordinates of the i-th matched point in image Iopt and the coordinates of the i-th feature point in image Ion, where 1 ≤ i ≤ m;
Step 234: judging whether the Euclidean distance in step 233 is less than the Euclidean distance threshold; if so, executing step 235; if not, executing step 233;
Step 235: adding 1 to the value of n and judging whether the value of n is greater than the preset iteration count n0=20; if so, saving the matched point pair and saving the homography matrix H; if not, rejecting the matched point pair and executing step 232.
CN201610599708.2A 2016-07-27 2016-07-27 Vision data base construction method and indoor orientation method in more correction lines room based on mark Active CN106295512B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610599708.2A CN106295512B (en) 2016-07-27 2016-07-27 Vision data base construction method and indoor orientation method in more correction lines room based on mark

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610599708.2A CN106295512B (en) 2016-07-27 2016-07-27 Vision data base construction method and indoor orientation method in more correction lines room based on mark

Publications (2)

Publication Number Publication Date
CN106295512A CN106295512A (en) 2017-01-04
CN106295512B true CN106295512B (en) 2019-08-23

Family

ID=57662395

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610599708.2A Active CN106295512B (en) 2016-07-27 2016-07-27 Vision data base construction method and indoor orientation method in more correction lines room based on mark

Country Status (1)

Country Link
CN (1) CN106295512B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110688500A (en) * 2018-06-20 2020-01-14 华为技术有限公司 Database construction method, positioning method and related equipment thereof

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106990836B (en) * 2017-02-24 2020-01-07 长安大学 Method for measuring spatial position and attitude of head-mounted human input device
CN107481287A (en) * 2017-07-13 2017-12-15 中国科学院空间应用工程与技术中心 It is a kind of based on the object positioning and orientation method and system identified more
CN109115221A (en) * 2018-08-02 2019-01-01 北京三快在线科技有限公司 Indoor positioning, air navigation aid and device, computer-readable medium and electronic equipment
CN109945853B (en) * 2019-03-26 2023-08-15 西安因诺航空科技有限公司 Geographic coordinate positioning system and method based on 3D point cloud aerial image
CN112149659B (en) * 2019-06-27 2021-11-09 浙江商汤科技开发有限公司 Positioning method and device, electronic equipment and storage medium
CN110705605B (en) * 2019-09-11 2022-05-10 北京奇艺世纪科技有限公司 Method, device, system and storage medium for establishing feature database and identifying actions
CN112667832B (en) * 2020-12-31 2022-05-13 哈尔滨工业大学 Vision-based mutual positioning method in unknown indoor environment
CN113094371B (en) * 2021-04-14 2023-05-12 嘉兴毕格智能科技有限公司 Implementation method of user-defined coordinate system
CN113781559B (en) * 2021-08-31 2023-10-13 南京邮电大学 Robust abnormal matching point eliminating method and image indoor positioning method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102773862A (en) * 2012-07-31 2012-11-14 山东大学 Quick and accurate locating system used for indoor mobile robot and working method thereof
CN104457758A (en) * 2014-12-19 2015-03-25 哈尔滨工业大学 Video-acquisition-based Visual Map database establishing method and indoor visual positioning method using database
CN104484881A (en) * 2014-12-23 2015-04-01 哈尔滨工业大学 Image capture-based Visual Map database construction method and indoor positioning method using database

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102773862A (en) * 2012-07-31 2012-11-14 山东大学 Quick and accurate locating system used for indoor mobile robot and working method thereof
CN104457758A (en) * 2014-12-19 2015-03-25 哈尔滨工业大学 Video-acquisition-based Visual Map database establishing method and indoor visual positioning method using database
CN104484881A (en) * 2014-12-23 2015-04-01 哈尔滨工业大学 Image capture-based Visual Map database construction method and indoor positioning method using database

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Descriptor Free Visual Indoor Localization with Line Segments;Branislav Micusik 等;《2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)》;20151015;3165-3173
Indoor positioning technology based on power lines and location fingerprints; He Jian et al.; Journal of Electronics &amp; Information Technology; Dec. 2014; vol. 36, no. 12; 2902-2908
Research on indoor ray-tracing positioning methods; Yuan Zhengwu et al.; Application Research of Computers; Apr. 2009; vol. 26, no. 4; 1299-1301

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110688500A (en) * 2018-06-20 2020-01-14 华为技术有限公司 Database construction method, positioning method and related equipment thereof
CN110688500B (en) * 2018-06-20 2021-09-14 华为技术有限公司 Database construction method, positioning method and related equipment thereof

Also Published As

Publication number Publication date
CN106295512A (en) 2017-01-04

Similar Documents

Publication Publication Date Title
CN106295512B (en) Vision data base construction method and indoor orientation method in more correction lines room based on mark
CN109631855B (en) ORB-SLAM-based high-precision vehicle positioning method
CN106651942B (en) Three-dimensional rotating detection and rotary shaft localization method based on characteristic point
CN104200086B (en) Wide-baseline visible light camera pose estimation method
CN106228538B (en) Binocular vision indoor orientation method based on logo
CN106204574B (en) Camera pose self-calibrating method based on objective plane motion feature
CN103530881B (en) Be applicable to the Outdoor Augmented Reality no marks point Tracing Registration method of mobile terminal
CN110332887A (en) A kind of monocular vision pose measurement system and method based on characteristic light punctuate
CN106767810B (en) Indoor positioning method and system based on WIFI and visual information of mobile terminal
CN107677274B (en) Unmanned plane independent landing navigation information real-time resolving method based on binocular vision
CN103093459B (en) Utilize the method that airborne LiDAR point cloud data assisted image mates
CN101216296A (en) Binocular vision rotating axis calibration method
CN110009682A (en) A kind of object recognition and detection method based on monocular vision
CN104766309A (en) Plane feature point navigation and positioning method and device
CN108470356A (en) A kind of target object fast ranging method based on binocular vision
CN108921889A (en) A kind of indoor 3-D positioning method based on Augmented Reality application
CN108171715A (en) A kind of image partition method and device
CN109376208A (en) A kind of localization method based on intelligent terminal, system, storage medium and equipment
CN106370160A (en) Robot indoor positioning system and method
CN108519102A (en) A kind of binocular vision speedometer calculation method based on reprojection
CN107103056A (en) A kind of binocular vision indoor positioning database building method and localization method based on local identities
CN109870106A (en) A kind of building volume measurement method based on unmanned plane picture
CN104318566B (en) Can return to the new multi-view images plumb line path matching method of multiple height values
Muffert et al. The estimation of spatial positions by using an omnidirectional camera system
CN104567879A (en) Method for extracting geocentric direction of combined view field navigation sensor

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20210120

Address after: Building 9, accelerator, 14955 Zhongyuan Avenue, Songbei District, Harbin City, Heilongjiang Province

Patentee after: INDUSTRIAL TECHNOLOGY Research Institute OF HEILONGJIANG PROVINCE

Address before: 150001 No. 92 West straight street, Nangang District, Heilongjiang, Harbin

Patentee before: HARBIN INSTITUTE OF TECHNOLOGY

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20221208

Address after: 150027 Room 412, Unit 1, No. 14955, Zhongyuan Avenue, Building 9, Innovation and Entrepreneurship Plaza, Science and Technology Innovation City, Harbin Hi tech Industrial Development Zone, Heilongjiang Province

Patentee after: Heilongjiang Industrial Technology Research Institute Asset Management Co.,Ltd.

Address before: Building 9, accelerator, 14955 Zhongyuan Avenue, Songbei District, Harbin City, Heilongjiang Province

Patentee before: INDUSTRIAL TECHNOLOGY Research Institute OF HEILONGJIANG PROVINCE

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20221228

Address after: Areas 3-57, Building 21, Zhongxing Homeland Property, No. 66, Aijian Road, Daoli District, Harbin, Heilongjiang, 150000

Patentee after: Virtual Reality Digital Technology Research Institute (Harbin) Co.,Ltd.

Address before: 150027 Room 412, Unit 1, No. 14955, Zhongyuan Avenue, Building 9, Innovation and Entrepreneurship Plaza, Science and Technology Innovation City, Harbin Hi tech Industrial Development Zone, Heilongjiang Province

Patentee before: Heilongjiang Industrial Technology Research Institute Asset Management Co.,Ltd.