CN113680059A - Outdoor scene AR game positioning device and method - Google Patents

Outdoor scene AR game positioning device and method

Info

Publication number
CN113680059A
CN113680059A (application CN202111010213.9A)
Authority
CN
China
Prior art keywords
game
camera
luminous
target object
light
Prior art date
Legal status
Pending
Application number
CN202111010213.9A
Other languages
Chinese (zh)
Inventor
温雷华
解选本
刘新
冯永强
王爱爱
杨凯
Current Assignee
Zhongke Ruixin Beijing Technology Co ltd
Original Assignee
Zhongke Ruixin Beijing Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhongke Ruixin Beijing Technology Co ltd filed Critical Zhongke Ruixin Beijing Technology Co ltd
Priority to CN202111010213.9A
Publication of CN113680059A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/02 Non-photorealistic rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/10 Image enhancement or restoration by non-spatial domain filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration by the use of local operators
    • G06T5/30 Erosion or dilatation, e.g. thinning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8082 Virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20048 Transform domain processing
    • G06T2207/20061 Hough transform
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose

Abstract

The invention belongs to the technical field of AR and particularly relates to an outdoor scene AR game positioning device and method. The device comprises a game instruction server, luminous target objects, and a remote controller that controls the luminous target objects. The positioning system calculates, simply and efficiently, the corresponding positions of the AR game machine's camera and of the player in the game scene; its overall cost is low and its algorithmic precision high, making it suitable for widespread adoption.

Description

Outdoor scene AR game positioning device and method
Technical field:
the invention belongs to the technical field of AR, and particularly relates to an outdoor scene AR game positioning device and method.
Background art:
Augmented Reality (AR) is a technology that calculates the position and angle of the camera image in real time and adds corresponding images, videos, and 3D models; its goal is to overlay a virtual world onto the real world on a screen and allow interaction. The term was proposed in 1990. Current AR technology relies mainly on AR devices, and mainstream AR devices fall into two types. Handheld AR devices: represented by Apple's ARKit development platform and Android's ARCore, these record the real world through a camera, mix in virtual objects algorithmically, and display the mixed result on a screen. Head-mounted AR devices: represented by Microsoft HoloLens, these generally take the form of glasses; the player sees the real world through the glasses, and the system projects virtual objects directly onto the glasses, mixing them into the view.
On existing AR game machine devices, three methods are typically used for positioning:
1. Marker-Based AR
2. Marker-Less AR
3. LBS-Based AR
The disadvantages of these three positioning methods are analyzed as follows:
1. Marker-Based AR
This implementation requires a pre-made Marker (for example, a template card or two-dimensional code of specified size and shape), which is placed at a position in the real world, in effect fixing a plane in the real scene. The Marker is then identified through the camera and its pose estimated (Pose Estimation) to determine its position. The coordinate system with the Marker's center as origin is called Marker Coordinates, i.e., the template coordinate system. The goal is to obtain a transformation that maps the template coordinate system to the screen coordinate system, so that graphics drawn on the screen according to this transformation appear attached to the Marker. Understanding the principle requires some knowledge of 3D projective geometry: the transformation from the template coordinate system to the actual screen coordinate system first rotates and translates into the camera coordinate system (Camera Coordinates) and is then mapped from the camera coordinate system to the screen coordinate system.
The disadvantages are as follows: in a game scene such a Marker is visually obtrusive and cannot blend well with the surrounding environment.
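For illustration only (this is not the patent's method), a minimal sketch of that transform chain using OpenCV's solvePnP; the marker size, corner order, and camera intrinsics are assumed:

```python
import cv2
import numpy as np

# Assumed 10 cm square marker; corners in Marker Coordinates (origin at the
# marker center, z = 0 on the marker plane), in a fixed order.
MARKER_3D = np.array([[-0.05,  0.05, 0], [0.05,  0.05, 0],
                      [0.05, -0.05, 0], [-0.05, -0.05, 0]], dtype=np.float32)

def project_marker_origin(corners_2d, camera_matrix, dist_coeffs):
    """Estimate the template-to-camera pose from the 4 detected corner pixels
    (float32, shape (4, 2)), then map the marker origin to screen coordinates."""
    ok, rvec, tvec = cv2.solvePnP(MARKER_3D, corners_2d,
                                  camera_matrix, dist_coeffs)
    pts, _ = cv2.projectPoints(np.zeros((1, 3), np.float32),
                               rvec, tvec, camera_matrix, dist_coeffs)
    return pts.reshape(2)
```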
2. Marker-Less AR
The basic principle is the same as Marker-Based AR, but any object with enough feature points (such as a book cover) can serve as the planar reference, without pre-making a special template, freeing AR applications from the template constraint. The method extracts feature points of a template object through a series of algorithms (such as SURF, ORB, FERN, and the like) and records or learns them. When the camera scans the surrounding scene, it extracts the scene's feature points and compares them with the recorded feature points of the template object; if the number of matches between scanned and template feature points exceeds a threshold, the template is considered scanned, a Tm matrix is estimated from the corresponding feature-point coordinates, and graphics are drawn according to Tm (similar to Marker-Based AR).
The disadvantages are as follows: the environment must be analyzed to extract feature points of the surrounding scene, and those feature points are easily affected by factors such as weather, illumination, reconstruction, and redecoration.
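A minimal sketch of that matching step, assuming ORB features and brute-force Hamming matching; the distance cutoff and the match threshold are illustrative, not values from the patent:

```python
import cv2

def template_scanned(template_img, scene_img, min_matches=30):
    """Return True when enough scene keypoints match the recorded template
    features (grayscale images assumed)."""
    orb = cv2.ORB_create()
    _, des_t = orb.detectAndCompute(template_img, None)
    _, des_s = orb.detectAndCompute(scene_img, None)
    if des_t is None or des_s is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    good = [m for m in matcher.match(des_t, des_s) if m.distance < 50]
    return len(good) >= min_matches
```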
3. LBS-Based AR
The basic principle: the player's geographic position is obtained through GPS; POI information for nearby objects (such as surrounding restaurants, banks, and schools) is obtained from some data source (such as wiki or google); the direction and tilt angle of the player's handheld device are obtained through the mobile device's electronic compass and acceleration sensor; and a planar reference for target objects in the real scene (corresponding to a Marker) is established from the POI information. The subsequent coordinate transformation and display are similar to Marker-Based AR.
This realization of AR uses the device's GPS function and sensors, removing the application's dependence on Markers, and the player experience is better than Marker-Based AR. Because the Marker pose and feature points need not be recognized in real time, its performance is also better than Marker-Based and Marker-Less AR, so LBS-Based AR is better suited to mobile devices.
The disadvantages are as follows: positioning accuracy is inferior to the two previous methods, and a huge external database (wiki, google, etc.) is required; the complexity of nearby objects makes the operational burden heavy (the problem of keeping the database synchronized with real-world object information).
Moreover, existing positioning methods are computationally complex and place high demands on the environment and illumination conditions. At present there is no device or method that achieves self-positioning in the data processor at the AR game machine end through a simple algorithm.
Summary of the invention:
By arranging luminous target objects in the outdoor scene, the invention greatly reduces the amount of computation required for camera positioning in the AR game, so that positioning can be achieved with a simple algorithm.
An outdoor scene AR game positioning device comprises a game instruction server, luminous target objects, and a remote controller capable of controlling the luminous target objects.
The game instruction server controls the remote controller through a wireless network, and the remote controller controls the emitted color and brightness of the luminous target objects through the wireless network. The game instruction server wirelessly transmits the luminous characteristics of the luminous target objects and their position signals in the game scene to the AR game machine, and the AR game machine calculates the player's position from the features in the image captured by its physical camera.
The luminous target object is arranged in the outdoor scene and consists of a base, a supporting part, and a light part; the base is connected to the light part through the supporting part. An energy supply assembly is arranged in the base, and a wireless transceiver module is arranged in the luminous target object. Preferably, the base is made of a corrosion-resistant material or of a low-cost material such as reinforced cement, and may be placed on the ground and/or buried underground so that the whole structure is firmer and more stable.
The light part consists of lamps capable of emitting two or more colors, the colors being non-similar. Non-similar colors are colors that contrast strongly with one another, such as red, yellow, blue, and green. Preferably, the light part is shaped as a vertical column, a sphere, or an ellipsoid, so that the computer can conveniently extract valid lines from the image through the Hough transform. The lighted luminous target object thus presents distinct image features in the scene image captured by the physical camera, which improves the accuracy of recognizing the luminous target object.
A solar and/or wind power generation module is arranged at the top of the light part; the energy supply assembly is a storage battery and/or a charge-discharge battery pack, and the solar and/or wind power generation module is connected to the charge-discharge battery pack. The purpose is to store surplus electric energy for use when solar and/or wind generation is insufficient.
The player carries AR equipment consisting of a physical camera, a display screen, an acceleration sensor, and a data processor; the AR equipment establishes wireless signal communication with the game instruction server;
S1, the player enters a game area of the outdoor scene; the game instruction server lights the luminous target objects of the game area by controlling the remote controller, and the luminous target objects display a specified color;
S2, scene images containing the lighted luminous target objects in the game are collected through the physical camera; the images are filtered with the color filter specified by the game instruction server, and the luminous target objects are identified; lines of a specified type are extracted from the scene image with the Hough transform; the valid lines extracted by the Hough transform are processed into continuous lines. Lines of a specific color extracted by the Hough transform may be discontinuous, for example where the target object is occluded by leaves or other objects, and the discontinuous lines are connected into continuous lines by a dilation-erosion algorithm (a code sketch of this step follows step S4);
S3, length and position data of the valid lines are collected, and the position coordinates of the virtual camera in the AR game are obtained through calculation in the data processor;
and S4, after the virtual camera position is obtained, a rendered picture of the 3D world is produced by the 3D game engine with the background kept transparent; the rendered picture of the 3D world is fused with the picture shot by the AR camera, and the AR picture is synthesized and displayed on the player's display screen.
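A minimal sketch of the step-S2 pipeline with OpenCV; the HSV bounds (standing in for the server-specified color filter) and the Canny and Hough thresholds are assumed values, not parameters from the patent:

```python
import cv2
import numpy as np

def extract_target_lines(frame_bgr, lo_hsv, hi_hsv):
    """Color-filter the frame, close gaps with dilation/erosion, then extract
    line segments with the probabilistic Hough transform."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo_hsv), np.array(hi_hsv))  # color filter
    # Dilate then erode (closing) to reconnect lines broken by occlusion.
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    edges = cv2.Canny(mask, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=10)
    return lines  # each entry: [[x1, y1, x2, y2]] in image coordinates
```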
The number of lighted luminous target objects in the same game scene should be no fewer than 3, and the luminous target objects should not lie in the same plane, to avoid data errors caused by luminous target objects occluding one another at certain viewing angles.
If the number of identified luminous target objects is less than 3 or more than 5, the luminous target objects are identified again;
if the number of luminous target objects is 3, 4, or 5, the camera position is calculated from each pair of luminous target objects, and a consistent value is taken as the calculation result; if the numerical deviation is large, outliers are removed and the average position is taken as the calculation result (see the sketch below).
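A sketch of this consensus step, assuming each pair of luminous target objects has already produced a candidate camera position; the outlier tolerance (in meters) is an assumed parameter:

```python
import numpy as np

def consensus_position(pairwise_estimates, tol=0.5):
    """Average the mutually consistent pairwise estimates; candidates farther
    than tol from the median are treated as outliers and dropped."""
    pts = np.asarray(pairwise_estimates, dtype=float)
    median = np.median(pts, axis=0)
    dist = np.linalg.norm(pts - median, axis=1)
    inliers = pts[dist <= tol]
    return inliers.mean(axis=0) if len(inliers) else median
```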
The physical camera simultaneously identifies 3 luminous target objects: luminous target object a, luminous target object b, and luminous target object c. Let the position coordinate of the virtual camera be (C_x, C_z), the distance between luminous target objects a and b be 2n, the distance from the physical camera to luminous target object a be d_a, and the distance from the physical camera to luminous target object b be d_b. Then:
C_x = (d_a^2 - d_b^2) / (4n)

C_z = ±sqrt(d_a^2 - (C_x + n)^2)

or

C_z = ±sqrt(d_b^2 - (C_x - n)^2)

Solving yields two values of C_z, one positive and one negative; the positive value is discarded and the negative value taken, giving the position coordinate (C_x, C_z) of the virtual camera;
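A direct transcription of the reconstructed equations above (the negative C_z root is kept, as the text specifies):

```python
import math

def solve_cx_cz(n, d_a, d_b):
    """Camera position in the plane, given luminous target objects a at (-n, 0)
    and b at (n, 0) and the measured camera-to-target distances d_a and d_b."""
    c_x = (d_a**2 - d_b**2) / (4.0 * n)
    c_z = -math.sqrt(d_a**2 - (c_x + n)**2)   # discard the positive root
    return c_x, c_z
```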
Consider two luminous target objects a and c at different distances from the camera. Let the heights from the tops (or centers) of luminous target objects a and c to the horizontal geodetic reference plane be H_a and H_c respectively, and their distances to the physical camera be d_a and d_c respectively. The height difference between the tops of luminous target objects a and c in the captured image is dh_prj; d_scn is the distance from the virtual camera to the virtual screen, a virtual distance that is a known constant determined by the screen resolution and the design of the 3D scene;
dh_prj = d_scn · ((H_a - C_y)/d_a - (H_c - C_y)/d_c), which is solved for C_y as C_y = (H_a/d_a - H_c/d_c - dh_prj/d_scn) / (1/d_a - 1/d_c)
In summary, the position coordinate of the virtual camera is obtained as (C_x, C_y, C_z).
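A sketch solving the reconstructed height-difference relation above for C_y:

```python
def solve_cy(H_a, H_c, d_a, d_c, dh_prj, d_scn):
    """Solve dh_prj = d_scn * ((H_a - C_y)/d_a - (H_c - C_y)/d_c) for C_y."""
    num = H_a / d_a - H_c / d_c - dh_prj / d_scn
    den = 1.0 / d_a - 1.0 / d_c
    return num / den
```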
The acceleration sensor collects the player's acceleration values while the player moves; through inertial navigation, these are fused with the computer-vision position estimate, correcting the position coordinate information of the virtual camera in the AR game.
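A heavily simplified sketch of this correction, assuming a complementary-filter blend between the vision fix and accelerometer dead reckoning; the weight alpha and both helper functions are illustrative assumptions, not the patent's fusion algorithm:

```python
import numpy as np

def propagate_inertial(p_prev, v_prev, accel, dt):
    """Dead-reckon position and velocity from one gravity-compensated
    accelerometer sample taken between two vision fixes."""
    p_prev, v_prev, accel = map(np.asarray, (p_prev, v_prev, accel))
    v = v_prev + accel * dt
    p = p_prev + v_prev * dt + 0.5 * accel * dt ** 2
    return p, v

def fuse_position(p_vision, p_inertial, alpha=0.9):
    """Blend the camera-vision position with the inertially propagated one;
    alpha = 0.9 is an assumed weight favoring the vision estimate."""
    return alpha * np.asarray(p_vision) + (1.0 - alpha) * np.asarray(p_inertial)
```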
The Hough transform is a very important method for detecting shapes from discontinuous boundary points. It fits straight lines and curves by transforming the image coordinate space into a parameter space. In step S2, the desired lines are extracted from the target image, and the boundary lines are then made continuous by erosion and dilation. Erosion and dilation are terms from morphological image processing: erosion performs a 'shrinking' or 'thinning' operation on a binary image, while dilation performs a 'lengthening' or 'thickening' operation.
Erosion eliminates boundary points and shrinks boundaries inward; it can be used to eliminate small, meaningless objects. Each pixel of the image is scanned with a 3×3 structuring element, and the structuring element is ANDed with the binary image it covers: the resulting pixel is 1 only if all covered pixels are 1, and 0 otherwise. The result is a binary image shrunk by one ring of pixels.
Dilation merges all background points in contact with an object into that object, expanding the boundary outward; it can be used to fill holes in objects. Each pixel of the image is scanned with a 3×3 structuring element, and the structuring element is combined with the binary image it covers: the resulting pixel is 0 only if all covered pixels are 0, and 1 otherwise. The result is a binary image enlarged by one ring of pixels.
The process of erosion followed by dilation is called the opening operation: it eliminates small objects, separates objects at thin connections, and smooths the boundaries of larger objects without significantly changing their area. The process of dilation followed by erosion is called the closing operation: it fills tiny holes in objects, connects adjacent objects, and smooths object boundaries, likewise without significantly changing their area (see the sketch below).
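The four operations above, expressed with OpenCV's standard morphology calls; the 3x3 structuring element follows the description, and the input stands in for the binary mask from step S2:

```python
import cv2
import numpy as np

# 'mask' stands for the binary image produced in step S2 (placeholder here).
mask = np.zeros((100, 100), np.uint8)
mask[40:60, 20:80] = 255

kernel = np.ones((3, 3), np.uint8)            # the 3x3 structuring element
eroded  = cv2.erode(mask, kernel)             # shrink: removes small objects
dilated = cv2.dilate(mask, kernel)            # grow: fills small holes
opened  = cv2.morphologyEx(mask, cv2.MORPH_OPEN,  kernel)   # erode, then dilate
closed  = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # dilate, then erode
```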
Here d_scn is the distance from the virtual camera to the virtual screen; it is a virtual distance, a known constant determined by the screen resolution and the design of the 3D scene. The constant can be calculated from the camera's technical data, or by photographing a standard object: for example, the physical camera photographs, from a distance d_obj, a rod of length h_obj standing at the origin of coordinates, and the length of the rod in the picture image, h_prj, is recorded; d_scn is then obtained by the following formula:
d_scn = h_prj · d_obj / h_obj
The distance from the virtual camera to a target object is thereby obtained:
d = d_scn · h_obj / h_prj (for each luminous target object: e.g., d_a = d_scn · h_a / h_aprj)
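The calibration formula and the distance formula above, written as two small helpers (the function names are illustrative):

```python
def calibrate_d_scn(h_obj, h_prj, d_obj):
    """One-time calibration: a rod of known height h_obj is photographed at a
    known distance d_obj; h_prj is its height in the image."""
    return h_prj * d_obj / h_obj

def distance_from_height(d_scn, h_obj, h_prj):
    """Distance to a luminous target object whose light part has known height
    h_obj and appears with height h_prj in the image."""
    return d_scn * h_obj / h_prj

# Consistent with the embodiment below: a 1 m rod at 10 m imaged at 0.025 m
# gives d_scn = 0.25; a 0.8 m light part imaged at 0.016 m gives 12.5 m.
d_scn = calibrate_d_scn(1.0, 0.025, 10.0)
d_a = distance_from_height(d_scn, 0.8, 0.016)
```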
wherein: let the coordinate of target object a in the virtual game space be (-n, 0), the coordinate of target object b be (n, 0), the physical distance from the camera to target object a be d_a, the physical distance from the camera to target object b be d_b, and the position coordinate of the camera be (C_x, C_z). Then:
C_x = (d_a^2 - d_b^2) / (4n)

C_z = ±sqrt(d_a^2 - (C_x + n)^2)

or

C_z = ±sqrt(d_b^2 - (C_x - n)^2)
Solving yields two values of C_z, one positive and one negative; the positive value is discarded and the negative value taken.
Finally, after the virtual camera position is obtained, a rendered picture of the 3D world is produced by the 3D game engine; except for the AR luminous target objects, the background is kept transparent. The rendered picture of the 3D world is fused with the picture shot by the AR camera, and the AR picture is synthesized and displayed on the player's display screen.
The beneficial effects of the invention: the positioning system calculates, simply and efficiently, the corresponding positions of the AR game machine's camera and of the player in the game scene; its overall cost is low, its algorithmic precision high, and it is suitable for widespread adoption.
Description of the drawings:
FIG. 1 is a simulated view of a game scenario of the present invention;
FIG. 2 is a picture of a target object captured in a current game scene area according to the present invention;
FIG. 3 is a diagram illustrating a method for obtaining a desired line by Hough transform according to the present invention;
FIG. 4 is a diagram of a continuous line formed after Hough transform processing according to the present invention;
FIG. 5 is a graphical representation of the present invention determining the position of a luminous target object relative to the physical camera;
FIG. 6 is a graphical representation of the present invention determining the distance from a virtual camera to a virtual screen;
FIG. 7 illustrates determining the position coordinate (C_x, C_z) of the virtual camera;
FIG. 8 illustrates determining the position coordinate C_y of the virtual camera;
Specific embodiments:
the invention is further described with reference to the accompanying drawings and specific embodiments. It should be understood that these examples are for illustrative purposes only and are not intended to limit the scope of the present invention. Further, it should be understood that various changes or modifications of the present invention may be made by those skilled in the art after reading the teaching of the present invention, and these equivalents also fall within the scope of the present application.
Example 1:
An outdoor scene AR game positioning device comprises a game instruction server, a luminous target object, and a remote controller capable of controlling the luminous target object.
The game instruction server controls the remote controller through a wireless network, and the remote controller controls the emitted color and brightness of the luminous target object through the wireless network. The game instruction server wirelessly transmits the luminous characteristics of the luminous target object and its position signal in the game scene to the AR game machine, and the AR game machine calculates the player's position from the features in the image captured by the physical camera.
The luminous target object is arranged in the outdoor scene and consists of a base, a supporting part, and a light part; the base is connected to the light part through the supporting part. An energy supply assembly is arranged in the base, and a wireless transceiver module is arranged in the luminous target object. The base is made of reinforced cement and buried underground, making the whole structure firmer and more stable.
The light part consists of red and green lamps and is shaped as a vertical column, so that the computer can conveniently extract valid lines from the image through the Hough transform. The lighted luminous target object thus presents distinct image features in the scene image captured by the physical camera, improving the accuracy of recognizing the luminous target object.
A solar power generation module is arranged at the top of the light part; the energy supply assembly is a charge-discharge battery pack connected to the solar power generation module. The purpose is to store surplus electric energy for use when solar generation is insufficient.
Example 2:
An outdoor scene AR game positioning device comprises a game instruction server, a luminous target object, and a remote controller capable of controlling the luminous target object.
The game instruction server controls the remote controller through a wireless network, and the remote controller controls the emitted color and brightness of the luminous target object through the wireless network. The lighted luminous target object presents distinct image features in the scene image captured by the camera, improving the accuracy with which the computer algorithm identifies the luminous target object. Meanwhile, the game instruction server wirelessly transmits the luminous characteristics of the luminous target object and its position signal in the game scene to the AR game machine, and the AR game machine calculates the player's position from the features in the image captured by the camera.
The luminous target object is arranged in the outdoor scene and consists of a base, a supporting part, and a light part; the base is connected to the light part through the supporting part. An energy supply assembly is arranged in the base, and a wireless transceiver module is arranged in the luminous target object. The base is made of a corrosion-resistant material and placed on the ground.
The light part consists of lamps capable of emitting red, yellow, and blue. Because buildings in urban scenes contain a large number of straight lines, the light part is made spherical, so that the computer can conveniently extract valid lines from the image through the Hough transform.
A wind power generation module is arranged at the top of the light part; the energy supply assembly is a storage battery and a charge-discharge battery pack, and the wind power generation module is connected to the charge-discharge battery pack. The purpose is to store surplus electric energy for use when wind generation is insufficient.
Example 3:
Using the apparatus of embodiment 1 or 2: the player carries AR equipment consisting of a physical camera, a display screen, an acceleration sensor, and a data processor; the AR equipment establishes wireless signal communication with the game instruction server;
S1, the player enters a game area of the outdoor scene; the game instruction server lights the luminous target objects of the game area by controlling the remote controller, and the luminous target objects display the specified red and green colors;
S2, scene images containing the lighted luminous target objects in the game are collected through the physical camera; the images are filtered with the color filter specified by the game instruction server, and the luminous target objects are identified; lines of a specified type are extracted from the scene image with the Hough transform; the valid lines extracted by the Hough transform are processed into continuous lines. Lines of a specific color extracted by the Hough transform may be discontinuous, for example where the target object is occluded by leaves or other objects, and the discontinuous lines are connected into continuous lines by a dilation-erosion algorithm.
S3, length and position data of the valid lines are collected, and the position coordinates of the virtual camera in the AR game are obtained through calculation in the data processor;
and S4, after the virtual camera position is obtained, a rendered picture of the 3D world is produced by the 3D game engine; except for the AR luminous target objects, the background is kept transparent; the rendered picture of the 3D world is fused with the picture shot by the AR camera, and the AR picture is synthesized and displayed on the player's display screen.
When there are 4 lighted luminous target objects in the same game scene, the camera position is calculated from each pair of luminous target objects, and a consistent value is taken as the calculation result; if the numerical deviation is large, outliers are removed and the average position is taken as the calculation result.
The physical camera simultaneously identifies 3 luminous target objects: luminous target object a, luminous target object b, and luminous target object c. Let the position coordinate of the virtual camera be (C_x, C_z), the distance between luminous target objects a and b be 2n, the distance from the physical camera to luminous target object a be d_a, and the distance from the physical camera to luminous target object b be d_b. Then:
C_x = (d_a^2 - d_b^2) / (4n)

C_z = ±sqrt(d_a^2 - (C_x + n)^2)

or

C_z = ±sqrt(d_b^2 - (C_x - n)^2)

Solving yields two values of C_z, one positive and one negative; the positive value is discarded and the negative value taken, giving the position coordinate (C_x, C_z) of the virtual camera;
Let the heights from the tops or centers of two luminous target objects a and c to the horizontal ground reference plane be H_a and H_c respectively, their distances to the physical camera be d_a and d_c respectively, and the height difference between the tops of luminous target objects a and c in the captured image be dh_prj; d_scn is the distance from the virtual camera to the virtual screen, a virtual distance that is a known constant determined by the screen resolution and the design of the 3D scene;
dh_prj = d_scn · ((H_a - C_y)/d_a - (H_c - C_y)/d_c), which is solved for C_y as C_y = (H_a/d_a - H_c/d_c - dh_prj/d_scn) / (1/d_a - 1/d_c)
In summary, the position coordinate of the virtual camera is obtained as (C_x, C_y, C_z).
The acceleration sensor collects the player's acceleration values while the player moves; through inertial navigation, these are fused with the computer-vision position estimate, correcting the position coordinate information of the virtual camera in the AR game.
Example 4:
Using the method of embodiment 3, the position of target object a is (-n, 0) and the position of target object b is (n, 0); measured in the game scene, n = 10 (m).
A standard rod with a height of 1 meter is placed at the origin of the physical scene's coordinate system (in the game virtual world this position is also set as the coordinate origin, and the coordinate axes of the game virtual world point in the same directions as those of the physical scene): h_obj = 1 m. The distance between the physical camera and the standard rod is d_obj = 10 meters. The optical axis of the physical camera is aimed at the standard rod for the shot, and the height of the standard rod in the image is h_prj = 0.025 m. Substituting into the formula
d_scn = h_prj · d_obj / h_obj
Obtaining:
d_scn = 0.025 × 10 / 1 = 0.25
i.e., d_scn = 0.25 (m). In the present system d_scn is used as a constant; it does not change after being measured once.
It is known that the light parts of luminous target objects a, b, and c are all 0.8 m: h_a = h_b = h_c = 0.8 m. Their heights in the image captured by the physical camera are h_aprj = 0.016 m and h_bprj = 0.014 m. Substituting into the formula
d = d_scn · h / h_prj, i.e., d_a = d_scn · h_a / h_aprj and d_b = d_scn · h_b / h_bprj
Obtaining:
d_a = 0.25 × 0.8 / 0.016

d_b = 0.25 × 0.8 / 0.014
giving d_a = 12.5 (m) and d_b ≈ 14.3 (m).
Substituting into the formula
C_x = (d_a^2 - d_b^2) / (4n)
Obtaining:
C_x = (12.5^2 - 14.3^2) / (4 × 10) = (156.25 - 204.49) / 40 ≈ -1.2
giving C_x = -1.2 (meters).
Substituting into the formula
C_z = ±sqrt(d_a^2 - (C_x + n)^2)
Obtaining:
C_z = ±sqrt(12.5^2 - (-1.2 + 10)^2) = ±sqrt(156.25 - 77.44) ≈ ±8.88
or:
C_z = ±sqrt(14.3^2 - (-1.2 - 10)^2) = ±sqrt(204.49 - 125.44) ≈ ±8.88
The two equations give the same result, C_z = ±8.88 (m); taking the negative value, C_z = -8.88 (m), and the position coordinate (C_x, C_z) of the virtual camera is (-1.2, -8.88).
Let the position of the other luminous target object c be (-p, q). Luminous target objects a and c are known to have the same specification, and the heights of the tops of their light parts above the ideal horizontal ground are also the same: H_c = H_b = H_a = 3 meters, a known quantity. The height of luminous target object c in the image captured by the physical camera is h_cprj = 0.0096 m. Substituting into the formula:
d_c = d_scn · h_c / h_cprj
obtaining:
d_c = 0.25 × 0.8 / 0.0096 ≈ 20.83
giving d_c ≈ 20.83 (meters).
The physical camera is assumed to be held level, that is, the lens neither pitches nor tilts; otherwise the image can be corrected by image rotation, and the direction of the gravitational acceleration g can also be measured by the acceleration sensor so that the camera's horizontal plane stays perpendicular to g, with inertial navigation fused with the computer-vision position estimate. The height difference dh_prj between the top of luminous target object a and the top of luminous target object c, measured from the image, is 0.012 m. Substituting into the formula:
C_y = (H_a/d_a - H_c/d_c - dh_prj/d_scn) / (1/d_a - 1/d_c)
obtaining:
C_y = (3/12.5 - 3/20.83 - 0.012/0.25) / (1/12.5 - 1/20.83) = (0.24 - 0.144 - 0.048) / (0.08 - 0.048) = 0.048 / 0.032 = 1.5
giving C_y = 1.5 (meters); the position coordinate (C_x, C_y, C_z) of the virtual camera is finally obtained as (-1.2, 1.5, -8.88).
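The worked example can be checked end to end with a few lines of arithmetic (a sketch using the values rounded as in the text above):

```python
import math

# Rounded values from this example.
n, d_scn = 10.0, 0.25
d_a, d_b, d_c = 12.5, 14.3, 20.83
H_a = H_c = 3.0
dh_prj = 0.012

c_x = (d_a**2 - d_b**2) / (4 * n)           # (156.25 - 204.49)/40 ~ -1.2 m
c_z = -math.sqrt(d_a**2 - (c_x + n)**2)     # negative root kept: ~ -8.88 m
c_y = (H_a/d_a - H_c/d_c - dh_prj/d_scn) / (1/d_a - 1/d_c)   # 1.5 m
print(round(c_x, 2), round(c_y, 2), round(c_z, 2))           # -1.21 1.5 -8.88
```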
The acceleration sensor collects the player's acceleration values while the player moves and is fused, through inertial navigation, with the computer-vision position estimate, correcting the position coordinate information of the virtual camera in the AR game.
In the 3D virtual world there is also a camera; the position of this camera relative to the 3D world determines the "viewing angle" of the 3D world. The key point of an AR game is that the real world and the virtual world coincide: as the player moves the AR game machine in the real world, the data processor correspondingly moves the virtual camera in the 3D virtual world, keeping the viewing angle of the virtual world consistent with that of the real world.
The camera position obtained in the physical world equals the virtual camera position in the virtual world; the two worlds are in an equivalent relationship, so the images can be superimposed. After the virtual camera position is obtained, a rendered picture of the 3D world is produced by the 3D game engine with the background kept transparent; the rendered picture of the 3D world is fused with the picture shot by the AR camera, and the AR picture is synthesized and displayed on the player's display screen (a compositing sketch follows).
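A minimal compositing sketch, assuming the 3D engine delivers a BGRA render with a transparent background at the same resolution as the camera frame:

```python
import numpy as np

def composite_ar(camera_bgr, render_bgra):
    """Alpha-blend the engine's render (BGRA, transparent background) over the
    physical camera frame; both images are assumed to have the same size."""
    alpha = render_bgra[:, :, 3:4].astype(np.float32) / 255.0
    fg = render_bgra[:, :, :3].astype(np.float32)
    bg = camera_bgr.astype(np.float32)
    return (alpha * fg + (1.0 - alpha) * bg).astype(np.uint8)
```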
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention, and thus, it is intended that the present invention encompass such changes and modifications as fall within the scope of the appended claims, along with the full scope of equivalents to which such changes and modifications are entitled.

Claims (9)

1. An AR game positioning device for outdoor scenes is characterized by comprising a game instruction server, a luminous target object and a remote controller capable of controlling the luminous target object.
2. The apparatus of claim 1, wherein the game command server controls the remote controller through a wireless network, and the remote controller controls the light color and brightness of the light-emitting object through the wireless network.
3. The device of claim 2, wherein the light-emitting object is arranged in an outdoor scene and is composed of a base, a support part and a light part; the base is connected with the lamp light part through the supporting part; an energy supply assembly is arranged in the base, and a wireless receiving and transmitting module is arranged in the luminous target.
4. The device according to claim 3, wherein the lighting part is composed of lamps capable of emitting two or more colors, and the two colors are non-similar colors.
5. The device as claimed in claim 4, wherein the top of the light part is provided with a solar and/or wind power generation module, the energy supply component is a storage battery and/or a charge-discharge battery pack, and the solar and/or wind power generation module is connected with the charge-discharge battery pack.
6. An outdoor scene AR game positioning method, characterized in that: the player carries AR equipment consisting of a physical camera, a display screen, an acceleration sensor, and a data processor; the AR equipment establishes wireless signal communication with the game instruction server;
s1, a player enters a game area of an outdoor scene, a game instruction server lights a luminous target object of the game area through controlling a remote controller, and the luminous target object displays a specified color;
S2, collecting, through the physical camera, scene images containing the lighted luminous target objects in the game; filtering the images according to the color filter specified by the game instruction server and identifying the luminous target objects; extracting lines of a specified type from the scene image with the Hough transform; and processing the valid lines extracted by the Hough transform into continuous lines;
s3, collecting length and position data of the effective lines, and obtaining position coordinates of the virtual camera in the AR game through calculation processing of the data processor;
and S4, after the position of the virtual camera is obtained, a rendering picture of the 3D world is obtained through the 3D game engine, the background is kept transparent, the rendering picture of the 3D world is fused with a picture shot by the AR camera, and the AR picture can be synthesized and displayed on a display screen of a player.
7. The method of claim 6, wherein the number of lighted luminous objects in the same game scene should not be less than 3 and not be on the same plane;
if the number of the luminous target objects is less than 3 or more than 5, the luminous target objects are identified again;
if the number of luminous target objects is 3, 4, or 5, the camera position is calculated from each pair of luminous target objects, and a consistent value is taken as the calculation result; if the numerical deviation is large, outliers are removed and the average position is taken as the calculation result.
8. The method of claim 6, wherein the physical camera simultaneously identifies 3 luminous target objects: luminous target object a, luminous target object b, and luminous target object c; the position coordinate of the virtual camera is set as (C_x, C_z),
wherein the distance between luminous target object a and luminous target object b is 2n, the distance from the physical camera to luminous target object a is d_a, and the distance from the physical camera to luminous target object b is d_b; then:
C_x = (d_a^2 - d_b^2) / (4n)

C_z = ±sqrt(d_a^2 - (C_x + n)^2)

or

C_z = ±sqrt(d_b^2 - (C_x - n)^2)

wherein two values of C_z are obtained, one positive and one negative; the positive value is discarded and the negative value taken, giving the position coordinate (C_x, C_z) of the virtual camera;
letting the heights from the tops or centers of two luminous target objects a and c to the horizontal ground reference plane be H_a and H_c respectively, their distances to the physical camera be d_a and d_c respectively, and the height difference between the tops of luminous target objects a and c in the captured image be dh_prj, where d_scn is the distance from the virtual camera to the virtual screen, a virtual distance that is a known constant determined by the screen resolution and the design of the 3D scene;
dh_prj = d_scn · ((H_a - C_y)/d_a - (H_c - C_y)/d_c), solved for C_y as C_y = (H_a/d_a - H_c/d_c - dh_prj/d_scn) / (1/d_a - 1/d_c)
In summary, the position coordinate of the virtual camera is obtained as (C_x, C_y, C_z).
9. The method of claim 6, wherein the acceleration sensor collects the player's acceleration values while the player moves and is fused, through inertial navigation, with the computer-vision position estimate, thereby correcting the position coordinate information of the virtual camera in the AR game.
CN202111010213.9A 2021-08-31 2021-08-31 Outdoor scene AR game positioning device and method Pending CN113680059A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111010213.9A CN113680059A (en) 2021-08-31 2021-08-31 Outdoor scene AR game positioning device and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111010213.9A CN113680059A (en) 2021-08-31 2021-08-31 Outdoor scene AR game positioning device and method

Publications (1)

Publication Number Publication Date
CN113680059A (en) 2021-11-23

Family

ID=78584322

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111010213.9A Pending CN113680059A (en) 2021-08-31 2021-08-31 Outdoor scene AR game positioning device and method

Country Status (1)

Country Link
CN (1) CN113680059A (en)

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130281207A1 (en) * 2010-11-15 2013-10-24 Bally Gaming, Inc. System and Method for Enhanced Augmented Reality Tracking
US20120244939A1 (en) * 2011-03-27 2012-09-27 Edwin Braun System and method for defining an augmented reality character in computer generated virtual reality using coded stickers
CN104919507A (en) * 2012-06-14 2015-09-16 百利游戏技术有限公司 System and method for augmented reality gaming
US20140287806A1 (en) * 2012-10-31 2014-09-25 Dhanushan Balachandreswaran Dynamic environment and location based augmented reality (ar) systems
US20140306996A1 (en) * 2013-04-15 2014-10-16 Tencent Technology (Shenzhen) Company Limited Method, device and storage medium for implementing augmented reality
CN103400409A (en) * 2013-08-27 2013-11-20 华中师范大学 3D (three-dimensional) visualization method for coverage range based on quick estimation of attitude of camera
WO2015096806A1 (en) * 2013-12-29 2015-07-02 刘进 Attitude determination, panoramic image generation and target recognition methods for intelligent machine
CN104436634A (en) * 2014-11-19 2015-03-25 重庆邮电大学 Real person shooting game system adopting immersion type virtual reality technology and implementation method of real person shooting game system
WO2017029279A2 (en) * 2015-08-17 2017-02-23 Lego A/S Method of creating a virtual game environment and interactive game system employing the method
CN106390454A (en) * 2016-08-31 2017-02-15 广州麦驰网络科技有限公司 Reality scene virtual game system
JP6410874B1 (en) * 2017-05-30 2018-10-24 株式会社タカラトミー AR video generator
KR20190001348A (en) * 2017-06-27 2019-01-04 (주)셀빅 Virtual reality·argumented reality complex arcade game system
CN107833280A (en) * 2017-11-09 2018-03-23 交通运输部天津水运工程科学研究所 A kind of outdoor moving augmented reality method being combined based on geographic grid with image recognition
CN107979418A (en) * 2017-11-22 2018-05-01 吴东辉 Determine that its client corresponds to the AR method and systems of id based on mobile phone flashlight
US20200074743A1 (en) * 2017-11-28 2020-03-05 Tencent Technology (Shenzhen) Company Ltd Method, apparatus, device and storage medium for implementing augmented reality scene
CN109840949A (en) * 2017-11-29 2019-06-04 深圳市掌网科技股份有限公司 Augmented reality image processing method and device based on optical alignment
CN108325208A (en) * 2018-03-20 2018-07-27 昆山时记信息科技有限公司 Augmented reality implementation method applied to field of play
CN110187774A (en) * 2019-06-06 2019-08-30 北京悉见科技有限公司 The AR equipment and its entity mask method of optical perspective formula
WO2021102566A1 (en) * 2019-11-25 2021-06-03 Eidos Interactive Corp. Systems and methods for improved player interaction using augmented reality
CN111192365A (en) * 2019-12-26 2020-05-22 江苏艾佳家居用品有限公司 Virtual scene positioning method based on ARkit and two-dimensional code

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
姚远; 朱淼良; 卢广: "Real-time detection of light sources in augmented reality scenes and a realistic rendering framework", Journal of Computer-Aided Design & Computer Graphics (计算机辅助设计与图形学学报), no. 08, 20 August 2006 (2006-08-20) *
章国锋: "SenseTime: innovative breakthroughs and applications of visual positioning technology for augmented reality", Hangzhou Science & Technology (杭州科技), no. 06, 15 December 2019 (2019-12-15) *

Similar Documents

Publication Publication Date Title
CN110568447B (en) Visual positioning method, device and computer readable medium
CN104330074B (en) Intelligent surveying and mapping platform and realizing method thereof
CN103530881B (en) Be applicable to the Outdoor Augmented Reality no marks point Tracing Registration method of mobile terminal
CN109520500B (en) Accurate positioning and street view library acquisition method based on terminal shooting image matching
US9898857B2 (en) Blending between street view and earth view
CN109801374B (en) Method, medium, and system for reconstructing three-dimensional model through multi-angle image set
JP5075182B2 (en) Image processing apparatus, image processing method, and image processing program
CN108288292A (en) A kind of three-dimensional rebuilding method, device and equipment
CN104322052A (en) A system for mixing or compositing in real-time, computer generated 3D objects and a video feed from a film camera
CN104180814A (en) Navigation method in live-action function on mobile terminal, and electronic map client
CN109242966B (en) 3D panoramic model modeling method based on laser point cloud data
JP7273927B2 (en) Image-based positioning method and system
CN115641401A (en) Construction method and related device of three-dimensional live-action model
WO2021027676A1 (en) Visual positioning method, terminal, and server
CN106908043A (en) The three-dimensional amount measuring method of geographic position of target object and height based on Streetscape picture
KR20210095913A (en) Map creation method, apparatus, and system, and storage medium
CN112750203A (en) Model reconstruction method, device, equipment and storage medium
CN114969221A (en) Method for updating map and related equipment
CN110322541A (en) A method of selecting optimal metope texture from five inclined cameras
CN112348887A (en) Terminal pose determining method and related device
CN116858215B (en) AR navigation map generation method and device
JP3791186B2 (en) Landscape modeling device
CN111194015A (en) Outdoor positioning method and device based on building and mobile equipment
WO2022078438A1 (en) Indoor 3d information acquisition device
CN113680059A (en) Outdoor scene AR game positioning device and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination