CN107977977A - Indoor positioning method and device for VR games, and storage medium - Google Patents

Indoor positioning method and device for VR games, and storage medium Download PDF

Info

Publication number
CN107977977A
CN107977977A CN201710984634.9A CN201710984634A
Authority
CN
China
Prior art keywords
image
identification code
coordinate
coordinate system
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710984634.9A
Other languages
Chinese (zh)
Other versions
CN107977977B (en)
Inventor
李坚
文红光
卢念华
周煜翔
陈进兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Overseas Chinese City Kale Technology Co Ltd
Original Assignee
Shenzhen Overseas Chinese City Kale Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Overseas Chinese City Kale Technology Co Ltd filed Critical Shenzhen Overseas Chinese City Kale Technology Co Ltd
Priority to CN201710984634.9A priority Critical patent/CN107977977B/en
Publication of CN107977977A publication Critical patent/CN107977977A/en
Application granted granted Critical
Publication of CN107977977B publication Critical patent/CN107977977B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/40 Scaling the whole image or part thereof
    • G06T 3/4007 Interpolation-based scaling, e.g. bilinear interpolation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/02 Services making use of location information
    • H04W 4/023 Services making use of location information using mutual or relative location information between multiple location based services [LBS] targets or of distance thresholds
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 64/00 Locating users or terminals or network equipment for network management purposes, e.g. mobility management

Abstract

The present invention provides an indoor positioning method, device and storage medium for VR games. The method includes: laying out several cameras on the ceiling according to preset rules, and synchronously photographing, with these cameras, a moving object on the floor to which an identification code is affixed; acquiring the image corresponding to the identification code at every preset time interval, and determining from that image the second coordinate of the identification code in the image and the first coordinate, in the world coordinate system, of the camera that captured the image; calculating the motion information of the moving object in the world coordinate system from the first coordinate and the second coordinate, and sending the motion information through a server to all VR devices connected to it, so that each VR device displays the indoor scene from its own viewing angle. By assigning a fixed world coordinate to each camera and inverting the world coordinate and angle of the identification code from the image captured by that camera, the present invention achieves accurate indoor positioning of the moving object.

Description

Indoor positioning method and device for VR games, and storage medium
Technical field
The present invention relates to the field of indoor positioning technology, and in particular to an indoor positioning method, device and storage medium for VR games.
Background technology
In VR games, a vital technology is accurate yet economical indoor positioning. The positioning accuracy that VR games require indoors is high (better than 10 cm), and the accuracy of a traditional GPS positioning system falls far short of this requirement. Many companies have therefore developed other positioning systems, such as schemes that combine GPS with wireless network (WiFi) assisted calibration, short-range indoor positioning based on Bluetooth, and precise positioning based on ultra-wideband technology. Although the first two techniques improve on plain GPS positioning, they still cannot meet the accuracy requirement of indoor positioning and cannot be used to position players in VR games; and although ultra-wideband positioning reaches the required indoor accuracy, its deployment cost is very high, which prevents commercial use.
The prior art therefore leaves room for improvement.
Summary of the invention
The technical problem to be solved by the present invention is, in view of the deficiencies of the prior art, to provide an indoor positioning method, device and storage medium that overcome the inability of existing indoor positioning to meet the required accuracy at an acceptable cost.
To solve the above technical problem, the present invention adopts the following technical solution:
An indoor positioning method for VR games, comprising:
laying out several cameras on an indoor ceiling according to preset rules, and synchronously photographing, with the several cameras, a moving object to which an identification code is affixed;
acquiring the image corresponding to the identification code at every preset time interval, and determining a first coordinate of the camera corresponding to the image in a preset world coordinate system, and a second coordinate of the identification code in the image;
calculating a third coordinate of the identification code in the preset world coordinate system from the first coordinate and the second coordinate, so as to obtain motion information of the moving object;
sending the motion information of the moving object to a server in turn, so that the server updates display data of the indoor scene according to the received motion information and sends the updated display data to all VR devices connected to it;
the VR devices receiving the display data and each displaying the indoor scene corresponding to the display data from its own preset viewing angle.
In the indoor positioning method for VR games, acquiring the image corresponding to the identification code at every preset time interval, and determining the first coordinate of the camera corresponding to the image in the preset world coordinate system and the second coordinate of the identification code in the image, specifically comprises:
acquiring, at every preset time interval, the first images captured by the several cameras, and screening out second images from the acquired first images, wherein each second image contains the identification code;
selecting the image corresponding to the identification code from the screened second images according to a preset rule, and taking the camera corresponding to that image as a reference object;
determining the first coordinate of the reference object in the preset world coordinate system, and the second coordinate of the identification code in the image.
In the indoor positioning method for VR games, acquiring, at every preset time interval, the first images captured by the several cameras, and screening out second images that contain the identification code from the acquired first images, specifically comprises:
acquiring, at every preset time interval, the first images captured by the several cameras, and identifying each acquired first image to judge whether it contains the identification code;
retaining the first images that contain the identification code and recording them as second images.
In the indoor positioning method for VR games, acquiring, at every preset time interval, the first images captured by the several cameras, and identifying each acquired first image to judge whether it contains the identification code, specifically comprises:
acquiring, at every preset time interval, the first images captured by the several cameras, and identifying each acquired first image to extract candidate code contours;
identifying the pattern inside each extracted candidate code contour to obtain the identification code ID corresponding to that candidate code contour;
judging, from all the recognized identification code IDs, whether each acquired first image contains an identification code.
In the indoor positioning method for VR games, selecting the image corresponding to the identification code from the screened second images according to a preset rule specifically comprises:
establishing a preset image coordinate system in each second image, and calculating, in each preset image coordinate system, the Euclidean distance between the identification code and the center point of that second image;
selecting the second image with the smallest Euclidean distance as the image corresponding to the identification code, and taking the camera corresponding to that image as the reference object.
In the indoor positioning method for VR games, calculating the third coordinate of the identification code in the preset world coordinate system from the first coordinate and the second coordinate, so as to obtain the motion information of the moving object, specifically comprises:
calculating, using a rotation matrix, the rotation angle between the preset image coordinate system corresponding to the image and the preset world coordinate system, and obtaining the width and height of the image;
calculating the third coordinate of the identification code in the preset world coordinate system from the width and height of the image, the rotation angle, the first coordinate and the second coordinate.
In the indoor positioning method for VR games, the third coordinate of the identification code in the preset world coordinate system is calculated from the width and height of the image, the rotation angle, the first coordinate and the second coordinate as follows:
where xf and yf are the offsets of the identification code relative to the image center in the image coordinate system, which can be calculated from the coordinates (X, Y) of the identification code in the image coordinate system and the image width Image_width and image height Image_height; a is the scale factor between the two coordinate systems; d1 is the side length of the identification code in the image coordinate system and d2 is its side length in the world coordinate system; (x0, y0) is the coordinate of the camera center point in the world coordinate system; and (xt, yt) is the coordinate of the identification code in the world coordinate system.
In the indoor positioning method for VR games, sending the motion information of the moving object to the server in turn, so that the server updates the display data of the indoor scene according to the received motion information and sends the updated display data to all VR devices connected to it, specifically comprises:
sending the motion information corresponding to each frame to the server in turn;
the server interpolating and filtering the received motion information, and updating the display data of the indoor scene according to the processed motion information;
sending the updated display data to all VR devices connected to the server.
A storage medium storing a plurality of instructions adapted to be loaded by a processor to execute any of the indoor positioning methods for VR games described above.
An indoor positioning device for VR games, comprising:
a processor adapted to execute instructions; and
a storage device adapted to store a plurality of instructions, the instructions being adapted to be loaded by the processor to execute any of the indoor positioning methods for VR games described above.
Beneficial effects: compared with the prior art, the present invention provides an indoor positioning method, device and storage medium for VR games. The method includes: laying out several cameras on the ceiling according to preset rules, and synchronously photographing, with the cameras, a moving object on the floor to which an identification code is affixed; acquiring the image corresponding to the identification code at every preset time interval, and determining from the image the second coordinate of the identification code in the image and the first coordinate, in the world coordinate system, of the camera corresponding to the image; calculating the motion information of the moving object in the world coordinate system from the first and second coordinates, and sending the motion information through a server to all connected VR devices, so that each VR device displays the indoor scene from its own viewing angle. By assigning a fixed world coordinate to each camera and inverting the world coordinate and angle of the identification code from the image that camera captures, the present invention achieves accurate indoor positioning of the moving object.
Brief description of the drawings
Fig. 1 is a flow chart of a preferred embodiment of the indoor positioning method for VR games provided by the present invention.
Fig. 2 is a layout diagram of the cameras in the indoor positioning method for VR games provided by the present invention.
Fig. 3 is a usage scenario diagram of the indoor positioning method for VR games provided by the present invention.
Fig. 4 is a structural schematic diagram of the indoor positioning device for VR games provided by the present invention.
Detailed description of the embodiments
The present invention provides an indoor positioning method, device and storage medium for VR games. To make the purpose, technical solution and effect of the present invention clearer and more definite, the present invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only serve to explain the present invention and are not intended to limit it.
Those skilled in the art will appreciate that, unless expressly stated otherwise, the singular forms "a", "an", "said" and "the" used herein may also include the plural forms. It should be further understood that the word "comprising" used in the specification of the present invention means that the stated features, integers, steps, operations, elements and/or components are present, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. It should be understood that when an element is said to be "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intermediate elements may be present. In addition, "connected" or "coupled" as used herein may include wireless connection or wireless coupling. The term "and/or" as used herein includes all of, or any unit of, and all combinations of, one or more of the associated listed items.
Those skilled in the art will appreciate that, unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meaning as commonly understood by one of ordinary skill in the art to which the present invention belongs. It should also be understood that terms such as those defined in common dictionaries should be understood to have a meaning consistent with their meaning in the context of the prior art, and, unless specifically defined as here, will not be interpreted in an idealized or overly formal sense.
The content of the invention is further described below through the description of the embodiments, with reference to the drawings.
Referring to Fig. 1, Fig. 1 is a flow chart of a preferred embodiment of the indoor positioning method provided by the present invention. The method is used for positioning a moving object that can move on the floor of an indoor space, and it comprises:
S100: laying out several cameras on the indoor ceiling according to preset rules, and synchronously photographing, with the several cameras, a moving object to which an identification code is affixed;
S200: acquiring the image corresponding to the identification code at every preset time interval, and determining a first coordinate of the camera corresponding to the image in a preset world coordinate system, and a second coordinate of the identification code in the image;
S300: calculating a third coordinate of the identification code in the preset world coordinate system from the first coordinate and the second coordinate, so as to obtain motion information of the moving object;
S400: sending the motion information of the moving object to a server in turn, so that the server updates display data of the indoor scene according to the received motion information and sends the updated display data to all VR devices connected to it;
S500: the VR devices receiving the display data and each displaying the indoor scene corresponding to the display data from its own preset viewing angle.
The present invention lays out several cameras on the ceiling, assigns a world coordinate to each camera, and affixes an identification code to each moving object. After a camera captures an image containing an identification code, the identification code is recognized in the image, and the world coordinate and deflection angle of the identification code are inverted from the image coordinate of the identification code and the known world coordinate of the camera; these are the world coordinate and deflection angle of the moving object, which realizes its indoor positioning. In this way the identification codes and moving objects correspond one to one, and mounting the cameras on the ceiling keeps them firmly fixed with stable shooting angles, so accurate positioning coordinates and angle data can be obtained.
Specifically, in step S100, the identification code is a coded pattern carrying characteristic information, for example an ArUco code. In this embodiment the characteristic information of the identification code is an identification code ID, and each identification code corresponds to one identification code ID. The identification code measures 0.5 m by 0.5 m and serves as the reference mark by which a bumper car is identified. Of course, the size of the identification code can also be set according to the size of the scene and of the bumper car.
The preset rules are the rules by which the several cameras are arranged on the ceiling, for example mounting the cameras on the ceiling facing straight down, with the lenses level and pointing downward, and with identical spacing between adjacent cameras in the X and Y directions (for instance 0.5 m). The pitch (unit distance) of the cameras depends on the camera field of view and on the distance from the camera to the plane of the ArUco codes: the larger the field of view and the greater the distance to the ArUco code plane, the fewer high-speed cameras need to be arranged. The preset world coordinate system is a world coordinate system designated in advance; the goal is to obtain the coordinate and moving direction of the moving object in this preset world coordinate system. In this embodiment, the world coordinate system is established with the lower-left corner of the indoor ceiling plane as its origin, and the cameras are laid out one by one starting from that corner; the layout is shown in Fig. 2. Each camera thus has a first coordinate in the preset world coordinate system. In practice, a label is assigned to each camera and a correspondence between camera labels and first coordinates is established, so that once an identification code ID is recognized in an image, the label of the camera that took the image determines its first coordinate.
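For illustration only, the following Python sketch shows one way the label-to-first-coordinate correspondence just described could be represented, assuming a row-major labelling from the lower-left corner of the ceiling and a uniform 0.5 m grid; the function name, the row-major convention and the spacing value are assumptions, not details fixed by the patent.

```python
def camera_world_coordinate(label: int, cols: int, spacing: float = 0.5):
    """Map a camera label to its fixed first coordinate (x, y) in the preset
    world coordinate system, assuming cameras are laid out row by row from the
    lower-left corner of the ceiling with equal spacing in X and Y."""
    row, col = divmod(label, cols)
    return (col * spacing, row * spacing)

# Example: with 8 cameras per row, camera 10 sits at (1.0 m, 0.5 m).
print(camera_world_coordinate(10, cols=8))
```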
The moving object is an object whose motion is of low complexity but requires high-accuracy tracking, for example amusement-park rides such as bumper cars or remote-control cars. One identification code is placed on each moving object and laid flat so that its plane is parallel to the horizontal plane, and therefore parallel to the imaging plane of the cameras (which is parallel to the ceiling). In the subsequent coordinate transformation, only the rotation of the captured image about the z axis of the world coordinate system then needs to be considered, which simplifies the computation and improves positioning efficiency.
In step S200, the second coordinate is the coordinate of the center point of the identification code in a preset image coordinate system established in the image. Because each camera has a limited field of view, when the moving object moves through the scene not every camera can photograph the identification code on the moving object at a given moment. In this embodiment it is therefore necessary to first screen out, from all the first images collected by the several cameras at the same moment, the second images that contain the identification code.
Exemplarily, acquiring the image corresponding to the identification code at every preset time interval, and determining the first coordinate of the corresponding camera in the preset world coordinate system and the second coordinate of the identification code in the image, may specifically comprise:
S201: acquiring, at every preset time interval, the first images captured by the several cameras, and screening out second images, which contain the identification code, from the acquired first images;
S202: selecting the image corresponding to the identification code from the screened second images according to a preset rule, and taking the camera corresponding to that image as a reference object;
S203: determining the first coordinate of the reference object in the preset world coordinate system, and the second coordinate of the identification code in the image.
Specifically, in step S201, the preset time is the interval, set in advance, at which the cameras capture images of the moving object. It can be determined from the required indoor positioning accuracy, the size of the indoor scene, and the mounting height and shooting parameters of the cameras. The first images are the images photographed by the several cameras at the same moment. The second images are the first images that are found to contain the identification code.
Exemplarily, acquiring, at every preset time interval, the first images captured by the several cameras and screening out the second images that contain the identification code may specifically comprise:
S2011: acquiring, at every preset time interval, the first images captured by the several cameras, and identifying each acquired first image to judge whether it contains the identification code;
S2012: retaining the first images that contain the identification code and recording them as second images.
Specifically, in step S2011, since each identification code carries characteristic information, namely its identification code ID, candidate code contours can be extracted after edge detection and denoising are applied to each first image, and the pattern inside each candidate code contour is then recognized to obtain the identification code ID.
Exemplarily, acquiring, at every preset time interval, the first images captured by the several cameras, and identifying each acquired first image to judge whether it contains the identification code, may proceed as follows:
N1: acquiring, at every preset time interval, the first images captured by the several cameras, and identifying each acquired first image to extract candidate code contours;
N2: identifying the pattern inside each extracted candidate code contour to obtain the identification code ID corresponding to that candidate code contour;
N3: judging, from all the recognized identification code IDs, whether each acquired first image contains an identification code.
Specifically, in step N1, the candidate code contours are the quadrilateral frames, formed by identification-code outlines, extracted from each frame. In this embodiment the candidate code contours are extracted as follows:
M1: performing edge detection and filtering on the image to remove noise;
M2: extracting contour lines from the detected edge image and searching the extracted contour lines for quadrilaterals; the quadrilaterals found are the candidate code contours.
Specifically, in step M1, a real-time picture is first obtained from the high-speed camera, Canny edge detection is performed, and the image is then filtered with an adaptive filter to remove noise and make the edges clearer. In step M2, contour lines are extracted from the detected edge image. Further, to ensure that the extracted contours are clear and complete, a count threshold is set and the number of pixels contained in each extracted contour line is counted. Each pixel count is compared with the count threshold: if the count is below the threshold, the corresponding contour line is judged unclear and rejected; if it is at or above the threshold, the contour line is judged clear and retained. Quadrilaterals are then searched for among the retained contour lines, and the quadrilaterals found are taken as candidate code contours.
Further, to avoid misrecognition of the pattern inside a candidate code contour when two detected quadrilaterals lie too close together, a distance threshold is set. For each pair of adjacent quadrilaterals, the distances between their four corresponding vertices are computed in a preset direction (clockwise or counterclockwise) and averaged to obtain the average distance between the two quadrilaterals. This average distance is compared with the distance threshold: if it is below the threshold, the two quadrilaterals are too close and both are deleted; if it is above the threshold, the two quadrilaterals are far enough apart for recognition and both are retained as candidate code contours.
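A minimal Python/OpenCV sketch of steps M1 and M2 follows, written only to illustrate the pipeline just described. It assumes the OpenCV 4 API; the Gaussian blur stands in for the adaptive filter mentioned above, the point-count check stands in for the pixel-count threshold, the two threshold values are illustrative, and corner correspondence between nearby quadrilaterals is taken in detection order rather than re-sorted clockwise.

```python
import cv2
import numpy as np

def extract_candidate_contours(image, min_points=50, min_corner_gap=10.0):
    """Sketch of steps M1-M2: denoise, detect edges, and keep convex
    quadrilateral contours as candidate code contours."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)           # M1: suppress noise
    edges = cv2.Canny(blurred, 50, 150)                    # M1: Canny edge detection
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

    quads = []
    for cnt in contours:
        if len(cnt) < min_points:                          # reject short, unclear contours
            continue
        approx = cv2.approxPolyDP(cnt, 0.03 * cv2.arcLength(cnt, True), True)
        if len(approx) == 4 and cv2.isContourConvex(approx):
            quads.append(approx.reshape(4, 2).astype(np.float32))

    # Reject quadrilaterals whose vertices lie too close to an already kept one.
    kept = []
    for quad in quads:
        too_close = any(
            float(np.mean(np.linalg.norm(quad - other, axis=1))) < min_corner_gap
            for other in kept)
        if not too_close:
            kept.append(quad)
    return kept
```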
In steps N2 and N3, within the same time frame, some of the video images collected by the several cameras may fail to capture or recognize the identification code. Therefore, after the candidate code contours of each frame have been obtained, the recognition result inside each candidate code contour is used to judge whether an identification code has been recognized in that frame.
Exemplarily, judging from all the recognized identification code IDs whether each acquired first image contains an identification code specifically comprises:
H1: judging, for each first image, whether the identification code ID recognized in it is 0;
H2: when the recognized identification code IDs are not all 0, judging that the first image contains an identification code.
Specifically, the patterns inside all extracted candidate code contours are decoded to obtain a set of identification code IDs, and each decoded ID is then checked in turn. An ID of 0 means the corresponding candidate code contour is not an identification code; a non-zero ID means it is. Further, when a frame contains a non-zero identification code ID, an identification code has been recognized in that frame, and step S2012 is executed: the first image in which the identification code was recognized is recorded as a second image.
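For illustration, the decode-and-check logic of steps N2 and N3 can also be sketched with OpenCV's ArUco detector, which combines contour extraction and pattern decoding. The classic cv2.aruco.detectMarkers interface of opencv-contrib-python 4.x is assumed here (newer releases expose the same functionality through cv2.aruco.ArucoDetector), and the DICT_4X4_50 dictionary is an arbitrary choice; the patent does not specify a dictionary.

```python
import cv2

ARUCO_DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def find_marker(first_image):
    """Return (marker_id, corners) for the first identification code found in a
    first image, or None when the frame contains no valid code (the case the
    text above describes as an ID of 0)."""
    gray = cv2.cvtColor(first_image, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = cv2.aruco.detectMarkers(gray, ARUCO_DICT)
    if ids is None or len(ids) == 0:
        return None                      # discard: no identification code in this frame
    return int(ids[0][0]), corners[0].reshape(4, 2)
```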
In one embodiment of the invention, when the identification code ID corresponding to the candidate code contour extracted from a first image is 0, no identification code has been recognized in that first image; the corresponding camera did not photograph the identification code (the moving object) at the current moment, so that first image and its camera are discarded.
In one embodiment of the invention, each image collected at the preset time interval corresponds to one item of motion information of the moving object. When none of the first images contains the identification code, no identification code was photographed or recognized at the current moment, so no motion information can be obtained for that moment: the coordinate data of the moving object has a break (no coordinate) at the time of that frame. To keep the coordinate and direction data of the moving object continuous, an interpolation algorithm is used to remove the discontinuity. This may proceed as follows:
H10: when none of the first images contains the identification code, judging that the several cameras did not photograph the moving object;
H20: using a linear interpolation algorithm at the frames in which no identification code was recognized, inserting the coordinate and moving direction of the moving object corresponding to those frames.
Specifically, in this embodiment the positioning algorithm is implemented with the Unity game development engine and uses the OpencvForUnity toolkit. After the preset world coordinate and direction of the identification code center point have been obtained, linear interpolation is applied to the world coordinate and deflection-angle values to remove discontinuities. The interpolation function is built into the Unity engine: the data to be interpolated are passed to its interface and the function parameters are tuned until the interpolation meets the requirement; since linear interpolation is a mature algorithm, it is not described further here. In practical applications, the smoothing of the moving object's coordinates can be performed in the positioning device, or the positioning device can send the motion information to the server and leave the subsequent data processing to the server.
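As a small, non-authoritative illustration of the gap-filling step, the sketch below uses numpy.interp in place of Unity's built-in interpolation. The NaN-gap convention, the function name and the decision to treat heading like a plain scalar (ignoring angle wrap-around) are simplifying assumptions for the example only.

```python
import numpy as np

def fill_missing_poses(timestamps, xs, ys, headings):
    """Linearly interpolate NaN gaps left by frames in which no identification
    code was recognized. Returns [x_filled, y_filled, heading_filled]."""
    t = np.asarray(timestamps, dtype=float)
    filled = []
    for series in (xs, ys, headings):
        s = np.asarray(series, dtype=float)
        known = ~np.isnan(s)                       # frames with a recognized code
        filled.append(np.interp(t, t[known], s[known]))
    return filled
```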
In step S202, multiple cameras are laid out on the ceiling to avoid blind spots, so while the moving object moves it may happen that several cameras capture the identification code at the same moment (within the same frame period), i.e. there are multiple second images. One of these second images must therefore be selected as the image corresponding to the identification code.
Exemplarily, selecting the image corresponding to the identification code from the screened second images according to a preset rule specifically comprises:
S2021: establishing a preset image coordinate system in each second image, and calculating in each preset image coordinate system the Euclidean distance between the identification code and the center point of that second image;
S2022: selecting the second image with the smallest Euclidean distance as the image corresponding to the identification code, and taking the camera corresponding to that image as the reference object.
Specifically, in step S2021, the preset image coordinate system is an image coordinate system whose origin is the lower-left corner of the second image. The Euclidean distance between the identification code and the image center point is the distance, in the image, between the projection of the identification code and the camera. This Euclidean distance is used to find, among the cameras corresponding to the second images, the camera that is closest to the identification code in the world coordinate system. Selecting the closest camera among the cameras that photographed a second image as the reference object for computing the identification code's coordinate reduces the calculation, avoids redundant computation and improves the efficiency of the positioning device.
In a specific implementation, an image coordinate system is established in each second image with the lower-left corner of the image as the origin, and the center point coordinate of each recognized identification code is then determined. The center point of an identification code is determined by recognizing the coordinates of its four vertices, denoted (x1, y1), (x2, y2), (x3, y3), (x4, y4), and computing the center point from these four vertex coordinates as follows:
where X is the abscissa of the center point and Y is its ordinate.
Further, the center point coordinate of each image (the coordinate of the camera in the image coordinate system) and the center point coordinate of the identification code in that image are first obtained, the Euclidean distance between the identification code center and the image center is then computed for each image, and finally, among all the Euclidean distances obtained, the smallest one and its corresponding second image are chosen. That second image is taken as the image corresponding to the identification code, and its camera as the reference object. The first coordinate assigned to the reference object is then retrieved, and the center point of the identification code in the image is taken as the second coordinate.
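The selection just described can be sketched directly in Python. The code below computes the marker center as the mean of its four recognized corners and picks the second image whose center lies closest to it; the (camera_id, image_size, corners) tuple layout is an assumption introduced only for this example.

```python
import math

def marker_center(corners):
    """Center of the identification code: the mean of its four corners
    (x1, y1) ... (x4, y4) in the image coordinate system."""
    xs = [p[0] for p in corners]
    ys = [p[1] for p in corners]
    return sum(xs) / 4.0, sum(ys) / 4.0

def pick_reference_camera(second_images):
    """Steps S2021-S2022: among all second images, given as tuples
    (camera_id, (width, height), corners), choose the one whose image center
    is nearest to the marker center and return its camera as the reference
    object together with the second coordinate."""
    def distance(entry):
        _cam, (width, height), corners = entry
        cx, cy = marker_center(corners)
        return math.hypot(cx - width / 2.0, cy - height / 2.0)

    cam_id, _size, corners = min(second_images, key=distance)
    return cam_id, marker_center(corners)
```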
In step S300, the third coordinate is the coordinate, in the preset world coordinate system, corresponding to the second coordinate, obtained using the principle of projective transformation.
Exemplarily, calculating the third coordinate of the identification code in the preset world coordinate system from the first coordinate and the second coordinate, so as to obtain the motion information of the moving object, may specifically comprise:
S301: calculating, using a rotation matrix, the rotation angle between the preset image coordinate system corresponding to the image and the preset world coordinate system, and obtaining the width and height of the image;
S302: calculating the third coordinate of the identification code in the preset world coordinate system from the width and height of the image, the rotation angle, the first coordinate and the second coordinate.
Specifically, in step S301, the rotation angle between the image coordinate system and the preset world coordinate system is calculated using the principle of projective transformation, from the rotation matrix and translation matrix of the plane in which the identification code lies. The rotation matrix has the following form:
where ψ is the rotation angle of the picture about the x axis of the world coordinate system, θ is its rotation angle about the y axis, and φ is its rotation angle about the z axis. These angles are computed in the world coordinate system, whose x and y axes lie in the ceiling plane and whose z axis is perpendicular to the ceiling plane.
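The rotation matrix itself appears only as a drawing in the original filing and is not reproduced in this text. Assuming the conventional Z-Y-X Euler decomposition implied by the angle definitions above, it would take the standard form below; since only the z-axis angle varies in this setup (see the next paragraph), the factor of interest is R_z(φ). This form is a hedged reconstruction, not the patent's own drawing.

```latex
R \;=\; R_z(\varphi)\, R_y(\theta)\, R_x(\psi),
\qquad
R_z(\varphi) \;=\;
\begin{pmatrix}
\cos\varphi & -\sin\varphi & 0\\
\sin\varphi & \cos\varphi  & 0\\
0           & 0            & 1
\end{pmatrix}
```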
In this embodiment, because the cameras are mounted on the ceiling with their imaging planes horizontal and facing down, and the identification code is affixed flat on the bumper car, the camera imaging plane is parallel to the identification code plane. The only angle that changes is therefore the rotation about the z axis; the rotation angle ψ about the x axis and the rotation angle θ about the y axis in the rotation matrix are constants. The rotation angle about the z axis is the deflection in the XOY plane, that is, the real-time direction of the bumper car that is to be calculated. The rotation angle φ is calculated with the following formula:
This yields the rotation angle between the image coordinate system and the world coordinate system. The width and height of each frame are then obtained, and finally the world coordinate of the center point of the identification code is calculated from the first coordinate, the second coordinate, the width and height of the image and the rotation angle, using the following formula:
where xf and yf are the offsets of the ArUco code relative to the image center in the image coordinate system, which can be calculated from the coordinates (X, Y) of the ArUco code in the image coordinate system and the image width Image_width and image height Image_height; a is the scale factor between the two coordinate systems; d1 is the side length of the identification code in the image coordinate system and d2 is its side length in the world coordinate system; x0 and y0 are the coordinates of the camera center point in the world coordinate system; and xt and yt are the coordinates of the identification code in the world coordinate system.
In this embodiment, the offset of the identification code relative to the image center in the image coordinate system is scaled up proportionally into the world coordinate system, giving the offset of the identification code relative to the camera center point in world coordinates; the known world coordinate of each high-speed camera is then used to compute the world coordinate of the center point of the identification code on the bumper car, i.e. the third coordinate.
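The closed-form expressions referenced above are given as drawings in the original filing and are not reproduced here. The sketch below is one plausible reading of the variable definitions, not the patent's exact formula: it assumes the image axes are aligned with the world axes (as the ceiling-parallel mounting suggests), scales the pixel offset by a = d2/d1, adds the reference camera's known world coordinate (x0, y0), and estimates the z-axis rotation angle from one marker edge. The function name and argument layout are introduced only for this example.

```python
import math

def image_to_world(marker_corners, camera_world_xy, image_width, image_height,
                   d1_pixels, d2_meters):
    """Estimate the third coordinate (xt, yt) and the heading of the
    identification code under the assumptions stated above."""
    # Second coordinate: marker center (X, Y) in the image coordinate system
    # (origin at the lower-left corner of the image).
    X = sum(p[0] for p in marker_corners) / 4.0
    Y = sum(p[1] for p in marker_corners) / 4.0

    # Offset of the marker relative to the image center.
    xf = X - image_width / 2.0
    yf = Y - image_height / 2.0

    # Scale factor a between the image and the world coordinate system.
    a = d2_meters / d1_pixels

    # First coordinate: the reference camera's center point in world coordinates.
    x0, y0 = camera_world_xy
    xt = x0 + a * xf
    yt = y0 + a * yf

    # Rotation angle about the z axis (the bumper car's heading), estimated here
    # from one marker edge; the patent derives it via its rotation matrix.
    (x1, y1), (x2, y2) = marker_corners[0], marker_corners[1]
    phi = math.atan2(y2 - y1, x2 - x1)
    return (xt, yt), phi
```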
In step S302, the motion information is the motion information of the moving object in the world coordinate system; it comprises a coordinate and a direction, where the coordinate is the third coordinate and the direction is expressed by the rotation angle. Because the several cameras shoot synchronously, each acquisition corresponds to one item of motion information. Furthermore, since in practice the cameras capture images much faster than the moving object moves, the third coordinate and rotation angle may be computed only every preset number of frames, which reduces the computational load of the positioning device while still meeting the positioning requirements of the moving object.
In step S400, the display data is the data that the server generates from the motion information in order to display the indoor scene. As shown in Fig. 3, the indoor scene also includes VR devices (for example VR helmets or VR glasses) in one-to-one correspondence with the moving objects. In this scene, the world coordinate of the identification code center point obtained above is the world coordinate of the moving object, and the rotation angle is the deflection angle (i.e. the direction) of the moving object. The positioning device therefore sends the third coordinate and the rotation angle it obtains to the server, and the server distributes the coordinate and direction information of the moving object. The detailed process is:
S401: sending the motion information corresponding to each frame to the server in turn;
S402: the server interpolating and filtering the received motion information, and updating the display data of the indoor scene according to the processed motion information;
S403: sending the updated display data to all VR devices connected to the server.
Specifically, in step S402, the ArUco detection algorithm sometimes fails to recognize the ArUco code within the high-speed camera's field of view; that is, for some captured frames no coordinate and direction information can be obtained, which makes the coordinate and direction data discontinuous. In addition, a small error is produced every time the coordinates (X, Y) of the ArUco code in the image coordinate system and the rotation angle φ about the z axis are computed, which makes the coordinate and direction data jitter. For the first problem, after the world coordinate and direction of the ArUco code center point have been obtained, linear interpolation is applied to the world coordinate and deflection-angle values to remove the discontinuity; the interpolation function is built into the Unity game development engine, so the data to be interpolated only need to be passed to its interface and the function parameters tuned until the interpolation meets the requirement. For the second problem, a Kalman filter is applied separately to the world coordinate and to the direction-angle data to remove the jitter caused by the error; the Kalman filter function comes from the OpencvForUnity toolkit, so the data to be filtered only need to be passed to its interface and the process-noise and measurement-noise parameters adjusted until the filtering effect meets the requirement.
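For illustration, the smoothing step can be sketched with OpenCV's Kalman filter in Python rather than the OpencvForUnity (C#) binding the patent uses. The 3-state constant-position model over (x, y, heading) and the noise values are illustrative choices exposing the same two tunable parameters named above; they are not taken from the patent.

```python
import cv2
import numpy as np

def make_pose_filter(process_noise=1e-3, measurement_noise=1e-2):
    """Constant-position Kalman filter over (x, y, heading) with the two
    tunable noise parameters mentioned in the text."""
    kf = cv2.KalmanFilter(3, 3)
    kf.transitionMatrix = np.eye(3, dtype=np.float32)
    kf.measurementMatrix = np.eye(3, dtype=np.float32)
    kf.processNoiseCov = np.eye(3, dtype=np.float32) * process_noise
    kf.measurementNoiseCov = np.eye(3, dtype=np.float32) * measurement_noise
    return kf

def smooth_pose(kf, x, y, heading):
    """Feed one raw measurement through the filter and return the smoothed pose."""
    kf.predict()
    state = kf.correct(np.array([[x], [y], [heading]], dtype=np.float32))
    return float(state[0, 0]), float(state[1, 0]), float(state[2, 0])
```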
Further, a correspondence between moving objects and VR devices can be set in advance so that the two correspond one to one: each bumper car is equipped with one positioning device and one VR device, and each bumper car is given an IP that associates the positioning device on the car with its VR device. When the server receives from a positioning device the coordinate and moving direction of the corresponding bumper car, it reads the IP carried in that information, finds the corresponding first VR device according to the IP, and assigns the coordinate and moving direction to that first VR device as its position information. The server then transmits the calibrated coordinate and moving direction of the first VR device to all VR devices in the local area network by UDP broadcast. In this way the server can simultaneously receive and forward the data of all positioning ends, and, by addressing through the IPs, the VR helmet on each bumper car can see its own car and the other players' cars in the game scene.
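A minimal sketch of this distribution step is given below using a plain Python UDP socket instead of the Unity networking the patent relies on. The port number, the JSON message format and the shape of the pose table keyed by IP are assumptions made only for this example.

```python
import json
import socket

def broadcast_poses(pose_by_ip, port=9000):
    """Send the table of poses, keyed by the IP that links each bumper car's
    positioning device to its VR headset, to the whole LAN by UDP broadcast so
    every headset can render its own car and the other cars."""
    message = json.dumps(pose_by_ip).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(message, ("<broadcast>", port))

if __name__ == "__main__":
    # Example payload: the car at 192.168.1.21 is at (2.4 m, 1.1 m), heading 0.35 rad.
    broadcast_poses({"192.168.1.21": {"x": 2.4, "y": 1.1, "heading": 0.35}})
```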
Correspondingly, in step S500, each VR device in the local area network receives the broadcast sent by the server carrying the coordinate and movement information, and updates the position of the bumper car corresponding to the first VR device in the preset game scene according to that coordinate and movement information. In this embodiment each VR device corresponds to one viewing angle, so a step of assigning a viewing angle to each VR device precedes this: when a VR device receives the display data sent by the server, it displays the indoor scene contained in the display data from its own preset viewing angle. The player on each bumper car can thus see, through the VR helmet, the real-time movement of their own car and of the other players' cars; the VR game scene is thereby tied to the preset world coordinate system, which strengthens the players' sense of immersion and adds to the fun of the game.
The present invention also provides a storage medium storing a plurality of instructions adapted to be loaded by a processor to execute any of the indoor positioning methods described above.
The present invention also provides a positioning device which, as shown in Fig. 4, includes at least one processor 20, a display screen 21 and a memory 22, and may further include a communication interface 23 and a bus 24. The processor 20, display screen 21, memory 22 and communication interface 23 communicate with one another through the bus 24. The display screen 21 is configured to display a preset user guidance interface in the initial setup mode. The communication interface 23 transmits information. The processor 20 can call logic instructions in the memory 22 to execute the method in the above embodiment.
In addition, the logic instructions in the memory 22 may be implemented in the form of software functional units and, when sold or used as an independent product, may be stored in a computer-readable storage medium.
As a computer-readable storage medium, the memory 22 may be configured to store software programs and computer-executable programs, such as the program instructions or modules corresponding to the methods in the embodiments of the present disclosure. The processor 20 executes functional applications and data processing, i.e. implements the method in the above embodiment, by running the software programs, instructions or modules stored in the memory 22.
The memory 22 may include a program storage area and a data storage area, where the program storage area may store an operating system and the application programs required for at least one function, and the data storage area may store data created according to the use of the positioning device. In addition, the memory 22 may include high-speed random access memory and may also include non-volatile memory, for example a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc, or other media or transitory storage media capable of storing program code.
In addition, the detailed process by which the instructions in the above storage medium and positioning device are loaded and executed by the processor has been described in detail in the above method and is not repeated here.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions described in the foregoing embodiments, or substitute equivalents for some of the technical features; such modifications or substitutions do not cause the essence of the corresponding technical solution to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. An indoor positioning method for VR games, characterised in that it comprises:
laying out several cameras on an indoor ceiling according to preset rules, and synchronously photographing, with the several cameras, a moving object to which an identification code is affixed;
acquiring the image corresponding to the identification code at every preset time interval, and determining a first coordinate of the camera corresponding to the image in a preset world coordinate system, and a second coordinate of the identification code in the image;
calculating a third coordinate of the identification code in the preset world coordinate system from the first coordinate and the second coordinate, so as to obtain motion information of the moving object;
sending the motion information of the moving object to a server in turn, so that the server updates display data of the indoor scene according to the received motion information and sends the updated display data to all VR devices connected to it;
the VR devices receiving the display data and each displaying the indoor scene corresponding to the display data from its own preset viewing angle.
2. The indoor positioning method for VR games according to claim 1, characterised in that acquiring the image corresponding to the identification code at every preset time interval, and determining the first coordinate of the camera corresponding to the image in the preset world coordinate system and the second coordinate of the identification code in the image, specifically comprises:
acquiring, at every preset time interval, the first images captured by the several cameras, and screening out second images from the acquired first images, wherein each second image contains the identification code;
selecting the image corresponding to the identification code from the screened second images according to a preset rule, and taking the camera corresponding to that image as a reference object;
determining the first coordinate of the reference object in the preset world coordinate system, and the second coordinate of the identification code in the image.
3. The indoor positioning method for VR games according to claim 2, wherein acquiring, at every preset time interval, the first images captured by the several cameras, and screening out second images that contain the identification code from the acquired first images, specifically comprises:
acquiring, at every preset time interval, the first images captured by the several cameras, and identifying each acquired first image to judge whether it contains the identification code;
retaining the first images that contain the identification code and recording them as second images.
4. The indoor positioning method for VR games according to claim 3, characterised in that acquiring, at every preset time interval, the first images captured by the several cameras, and identifying each acquired first image to judge whether it contains the identification code, specifically comprises:
acquiring, at every preset time interval, the first images captured by the several cameras, and identifying each acquired first image to extract candidate code contours;
identifying the pattern inside each extracted candidate code contour to obtain the identification code ID corresponding to that candidate code contour;
judging, from all the recognized identification code IDs, whether each acquired first image contains an identification code.
5. The indoor positioning method for VR games according to claim 2, characterised in that selecting the image corresponding to the identification code from the screened second images according to a preset rule specifically comprises:
establishing a preset image coordinate system in each second image, and calculating in each preset image coordinate system the Euclidean distance between the identification code and the center point of that second image;
selecting the second image with the smallest Euclidean distance as the image corresponding to the identification code, and taking the camera corresponding to that image as the reference object.
6. The indoor positioning method for VR games according to claim 5, characterised in that calculating the third coordinate of the identification code in the preset world coordinate system from the first coordinate and the second coordinate, so as to obtain the motion information of the moving object, specifically comprises:
calculating, using a rotation matrix, the rotation angle between the preset image coordinate system corresponding to the image and the preset world coordinate system, and obtaining the width and height of the image;
calculating the third coordinate of the identification code in the preset world coordinate system from the width and height of the image, the rotation angle, the first coordinate and the second coordinate.
7. The indoor positioning method for VR games according to claim 6, characterised in that the third coordinate of the identification code in the preset world coordinate system is calculated from the width and height of the image, the rotation angle, the first coordinate and the second coordinate as follows:
where xf and yf are the offsets of the identification code relative to the image center in the image coordinate system, which can be calculated from the coordinates (X, Y) of the identification code in the image coordinate system and the image width Image_width and image height Image_height; a is the scale factor between the two coordinate systems; d1 is the side length of the identification code in the image coordinate system and d2 is its side length in the world coordinate system; (x0, y0) is the coordinate of the camera center point in the world coordinate system; and (xt, yt) is the coordinate of the identification code in the world coordinate system.
8. The indoor positioning method for VR games according to claim 1, characterised in that sending the motion information of the moving object to the server in turn, so that the server updates the display data of the indoor scene according to the received motion information and sends the updated display data to all VR devices connected to it, specifically comprises:
sending the motion information corresponding to each frame to the server in turn;
the server interpolating and filtering the received motion information, and updating the display data of the indoor scene according to the processed motion information;
sending the updated display data to all VR devices connected to the server.
9. A storage medium, characterised in that it stores a plurality of instructions adapted to be loaded by a processor to execute the indoor positioning method for VR games according to any one of claims 1-8.
10. An indoor positioning device for VR games, characterised in that it comprises:
a processor adapted to execute instructions; and
a storage device adapted to store a plurality of instructions, the instructions being adapted to be loaded by the processor to execute the indoor positioning method for VR games according to any one of claims 1-8.
CN201710984634.9A 2017-10-20 2017-10-20 Indoor positioning method and device for VR game and storage medium Active CN107977977B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710984634.9A CN107977977B (en) 2017-10-20 2017-10-20 Indoor positioning method and device for VR game and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710984634.9A CN107977977B (en) 2017-10-20 2017-10-20 Indoor positioning method and device for VR game and storage medium

Publications (2)

Publication Number Publication Date
CN107977977A (en) 2018-05-01
CN107977977B CN107977977B (en) 2020-08-11

Family

ID=62012538

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710984634.9A Active CN107977977B (en) 2017-10-20 2017-10-20 Indoor positioning method and device for VR game and storage medium

Country Status (1)

Country Link
CN (1) CN107977977B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1696606A (en) * 2004-05-14 2005-11-16 佳能株式会社 Information processing method and apparatus for finding position and orientation of targeted object
CN101026778A (en) * 2007-03-14 2007-08-29 北京理工大学 Distortion measurement and correction method for CCD shooting system and comprehensive test target
CN101809993A (en) * 2007-07-29 2010-08-18 奈米光子有限公司 Methods of obtaining panoramic images using rotationally symmetric wide-angle lenses and devices thereof
CN102592124A (en) * 2011-01-13 2012-07-18 汉王科技股份有限公司 Geometrical correction method, device and binocular stereoscopic vision system of text image
CN105737820A (en) * 2016-04-05 2016-07-06 芜湖哈特机器人产业技术研究院有限公司 Positioning and navigation method for indoor robot
CN106352871A (en) * 2016-08-31 2017-01-25 杭州国辰牵星科技有限公司 Indoor visual positioning system and method based on artificial ceiling beacon

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108771866A (en) * 2018-05-29 2018-11-09 网易(杭州)网络有限公司 Virtual object control method and device in virtual reality
CN110826375A (en) * 2018-08-10 2020-02-21 广东虚拟现实科技有限公司 Display method, display device, terminal equipment and storage medium
CN109063799A (en) * 2018-08-10 2018-12-21 珠海格力电器股份有限公司 Equipment positioning method and device
CN109063799B (en) * 2018-08-10 2020-06-16 珠海格力电器股份有限公司 Positioning method and device of equipment
CN110887469A (en) * 2018-09-10 2020-03-17 和硕联合科技股份有限公司 Positioning method and positioning system of mobile electronic device
CN109374003A (en) * 2018-11-06 2019-02-22 山东科技大学 Mobile robot visual positioning and navigation method based on ArUco code
CN109544472A (en) * 2018-11-08 2019-03-29 苏州佳世达光电有限公司 Object driving device and object driving method
CN109540144A (en) * 2018-11-29 2019-03-29 北京久其软件股份有限公司 Indoor positioning method and device
CN109743675A (en) * 2018-12-30 2019-05-10 广州小狗机器人技术有限公司 Indoor positioning method and device, storage medium and electronic device
CN109886278A (en) * 2019-01-17 2019-06-14 柳州康云互联科技有限公司 Image feature acquisition method based on ARMarker
CN113465600A (en) * 2020-03-30 2021-10-01 浙江宇视科技有限公司 Navigation method, navigation device, electronic equipment and storage medium
CN112631431A (en) * 2021-01-04 2021-04-09 杭州光粒科技有限公司 AR (augmented reality) glasses pose determination method, device and equipment and storage medium
CN113108793A (en) * 2021-03-25 2021-07-13 深圳宏芯宇电子股份有限公司 Indoor co-location method, apparatus and computer-readable storage medium
CN113436178A (en) * 2021-07-02 2021-09-24 鹏城实验室 Robot state detection method, device, equipment, program product and storage medium

Also Published As

Publication number Publication date
CN107977977B (en) 2020-08-11

Similar Documents

Publication Publication Date Title
CN107977977A (en) A kind of indoor orientation method, device and the storage medium of VR game
CN107992793A (en) A kind of indoor orientation method, device and storage medium
KR101121034B1 (en) System and method for obtaining camera parameters from multiple images and computer program products thereof
CN103852066B Device positioning method, control method, electronic device and control system
US11321867B2 (en) Method and system for calculating spatial coordinates of region of interest, and non-transitory computer-readable recording medium
CN106101522A Method and apparatus for obtaining light field data using a non-light-field imaging device
CN103155001A (en) Online reference generation and tracking for multi-user augmented reality
WO2020007483A1 (en) Method, apparatus and computer program for performing three dimensional radio model construction
CN108668108B (en) Video monitoring method and device and electronic equipment
CN112449152A (en) Method, system and equipment for synchronizing multiple paths of videos
WO2021005977A1 (en) Three-dimensional model generation method and three-dimensional model generation device
CN110544278A Rigid body motion capture method and device, and AGV pose capture system
CN111105351A (en) Video sequence image splicing method and device
CN112529006A (en) Panoramic picture detection method and device, terminal and storage medium
CN108289327A Image-based positioning method and system
CN110191284B (en) Method and device for collecting data of house, electronic equipment and storage medium
CN105157681B Indoor positioning method and device, camera and server
CN114782496A (en) Object tracking method and device, storage medium and electronic device
CN115546034A (en) Image processing method and device
CN114612875A (en) Target detection method, target detection device, storage medium and electronic equipment
CN113469130A (en) Shielded target detection method and device, storage medium and electronic device
WO2024083010A1 (en) Visual localization method and related apparatus
CN115115708B (en) Image pose calculation method and system
Cao et al. Self-calibration using constant camera motion
WO2023063088A1 (en) Method, apparatus, system and non-transitory computer readable medium for adaptively adjusting detection area

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant