CN104138661B - Object positioning method adopting giant screen for multi-user shoot interaction - Google Patents

Publication number: CN104138661B
Authority: CN (China)
Prior art keywords: infrared lamp, geometric center, coordinate, matrix, dense region
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number: CN201410120796.4A
Other languages: Chinese (zh)
Other versions: CN104138661A
Inventors: 刘宣付, 彭健, 黄江涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sun Light Technology Co Ltd
Original Assignee
Beijing Sun Light Technology Co Ltd
Application filed by Beijing Sun Light Technology Co Ltd
Priority to CN201410120796.4A
Publication of CN104138661A; granted and published as CN104138661B


Abstract

The invention discloses an object positioning method that uses a giant screen for multi-user shooting interaction. The method comprises the following steps: an optical wall is arranged and an optical coordinate system is established; the world coordinates of the geometric center of each infrared-lamp dense region are measured, and a mode number is defined for every four adjacent dense regions that form a rectangle; cameras and imitation guns are mounted; captured images are preprocessed; four infrared-lamp dense regions are selected, the numbers of infrared lamps they contain yield the mode number, and the pixel-coordinate matrix of their geometric centers is obtained; a projective transformation matrix H is computed from the world-coordinate matrix and the pixel-coordinate matrix; and, from the pixel position corresponding to each imitation gun in its image, the world coordinates of the point on the optical wall at which the gun is aimed are obtained through the projective transformation matrix H.

Description

Object positioning method for multi-user shooting interaction on a giant screen
Technical field
The present invention provides an object positioning method for multi-user shooting interaction on a giant screen, and relates to the technical fields of computer vision and photogrammetry.
Technical background
Simulated shooting systems have long been used in live-fire marksmanship training for the army and police. They reduce the consumption of ammunition and the area required for training grounds, and constitute a high-technology application with many advantages. With the material and cultural progress of society, simulated shooting technology has moved from military use toward the recreational consumer market. As the material and psychological expectations of consumers rise, ever higher demands are placed on the user experience of shooting-interaction entertainment products.
The point-of-impact localization method traditionally used for shooting training typically installs a laser emitter at the muzzle of an imitation gun and an infrared sensing device (usually an infrared-sensitive camera that filters out visible light) on the same or opposite side of the screen. A data computation unit processes the images acquired by the infrared sensor and computes the pixel coordinates of the laser spot within a viewfinder frame defined by the screen border (an image pixel coordinate system with its origin at the upper-left corner of the screen, the horizontal edge as the x-axis and the vertical edge as the y-axis), thereby obtaining the position at which the imitation gun hits the screen. This method can accurately locate the hit position of a single imitation gun, but when several people shoot simultaneously it cannot distinguish which shooter produced each laser spot in the image acquired by the infrared sensor, and so cannot satisfy the demands of simultaneous multi-user shooting.
Utility model [CN2692622Y] provides a virtual shooting system in which a light source is installed behind the screen and an infrared sensing device is installed on the imitation gun. With this system, several shooters can simultaneously use the infrared sensors in their guns to capture images of the same light source, and the hit position of each imitation gun is computed from the relative motion of the source image point as the muzzle moves. Invention [CN103418132A] provides a method that determines the hit position on a small-area imaging screen, such as a domestic television, from the relative motion of an aiming point within the imaging field of a muzzle-mounted camera. Such methods crucially assume that the aiming line of the imitation gun and the optical axis of the muzzle camera are parallel or approximately coincident: when the shooter moves the muzzle, the intersection of the extended optical axis with the screen changes correspondingly (this intersection is the required hit position), while the image point of the screen-center light spot shifts relatively at the same time; the hit coordinates are determined from the relative motion between this image point and the axis-screen intersection. Although this approach achieves hit-point localization with the light source on the screen and the photosensor on the imitation gun, when the screen is sufficiently large the camera field of view cannot take in the whole screen: the image point of the screen-center light spot leaves the camera field of view before the optical-axis intersection reaches the screen edge, and the hit-point determination fails. The method therefore cannot be applied to shooting-interaction venues with very large giant-screen imaging.
Arc-shaped or spherical giant screens offer shooters higher-quality imagery and 3D special effects, and give them a stronger sense of immersion in the scene. The currently popular interactive movie perfectly combines high-quality imagery, scene special effects and shooting-interaction technology, opening up a new entertainment-experience model with a vigorous market demand. The present invention provides an object positioning method for multi-user shooting interaction on a giant screen, meeting the new technical demands of the virtual shooting game and interactive movie market and the high expectations of players for entertainment quality. Under current technological conditions, more and more manufacturers of shooting-interaction entertainment products wish to introduce arc-shaped or spherical giant screens to bring players a high-quality, highly participatory experience.
Summary of the invention
In view of the high demands of today's entertainment-venue market on virtual shooting-interaction technology, the present invention provides an object positioning method for multi-user shooting interaction on a giant screen. The method supports simultaneous multi-user shooting interaction and, for a giant-screen scene, locates the hit position by first locating the screen block and then locating the projection within it. The method arranges an optical wall at the screen position, encodes the block features of the whole screen with a number of infrared lamps arranged according to a specific rule, acquires images of the infrared-lamp image points with a camera that filters out visible light, and processes the transmitted images with a kernel program loaded on an embedded circuit board, thereby realizing the above functions.
The present invention adopts the following technical solutions:
The object positioning method for multi-user shooting interaction on a giant screen comprises an optical-wall arrangement method, a camera configuration method for the imitation gun, special settings for light-source image acquisition, and a data-processing core algorithm on an embedded circuit board.
The optical-wall arrangement method comprises the following steps:
Step 1: Establish the optical-wall coordinate system with the upper-left corner of the screen as the origin, the horizontal screen border as the X-axis and the vertical border as the Y-axis (the orientation is determined by the pixel coordinate system of the image-acquisition camera).
Step 2: Arrange a number of infrared lamps on the optical wall. (The lamps are selected according to the screen material: for a projection screen, infrared lamps with a sufficiently long wavelength and a sufficiently large scattering angle are placed behind the screen; for an LED electronic screen, some LED tubes in the light-emitting array are replaced by infrared emitting tubes of the same area without visible light; for other materials, other infrared lamp specifications may be chosen as appropriate.) The lamp positions follow these rules:
a. According to the spacing between lamps, two positional relations are distinguished: a distance smaller than a threshold R is a dense relation; a distance greater than R is a sparse relation.
b. Lamps that are in a dense relation with one another form a dense region.
c. The lamps in any one dense region are in a sparse relation with the lamps in every other dense region.
d. The number of lamps in each dense region may be an arbitrary natural number (1 or more), and the geometric center points of all infrared-lamp dense regions are arranged in a rectangular array.
Step 3: Accurately measure the coordinates of the geometric center of each infrared-lamp dense region, i.e. the world coordinates of the dense-region geometric centers.
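Rules a-c above can be checked mechanically for a candidate layout. The sketch below is illustrative only: the threshold `R`, the lamp positions, and the helper `valid_layout` are all assumed for the example and are not prescribed by the patent.

```python
import numpy as np

# Hypothetical layout check for lamp-placement rules a-c; the threshold R
# (in metres) and the lamp positions are assumed values for illustration.
R = 0.10

regions = [
    np.array([[0.00, 0.00], [0.03, 0.00], [0.00, 0.03]]),                # 3 lamps
    np.array([[0.50, 0.00], [0.53, 0.00]]),                              # 2 lamps
    np.array([[0.00, 0.50], [0.03, 0.50], [0.00, 0.53], [0.03, 0.53]]),  # 4 lamps
]

def valid_layout(regions, R):
    """Rules a-c: intra-region lamp spacing < R, inter-region spacing > R."""
    for lamps in regions:
        for i in range(len(lamps)):
            for j in range(i + 1, len(lamps)):
                if np.linalg.norm(lamps[i] - lamps[j]) >= R:
                    return False
    for a in range(len(regions)):
        for b in range(a + 1, len(regions)):
            for p in regions[a]:
                for q in regions[b]:
                    if np.linalg.norm(p - q) <= R:
                        return False
    return True

# Step 3: the world coordinates to be measured are the region geometric centers
centers = [lamps.mean(axis=0) for lamps in regions]
```

The differing lamp counts per region (3, 2, 4, ...) are what later yield the feature values and mode numbers.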
The camera configuration method for the imitation gun and the special settings for light-source image acquisition comprise the following steps:
Step 1: Mount the camera on the muzzle of the imitation gun so that the aiming line of the gun and the optical axis of the camera are parallel or coincident.
Step 2: Cover the camera lens closely with a filtering ink sheet to filter out visible light, so that only images of the infrared-lamp image points are collected.
Step 3: The lens specification must satisfy the following conditions: the lens parameters must be suitable, ensuring that at point-blank range (the minimum shooter-to-screen distance set in the application scenario to guarantee user experience) at least 4 infrared-lamp dense regions lie within the camera imaging field; and a lens with low fisheye distortion should be used.
The data-processing core program on the embedded circuit board is described step by step as follows:
Step 1: Image preprocessing.
Step 2: Search for all connected domains in the image (the image regions enclosed by the pixels covered by the infrared-lamp image points), and solve the pixel coordinates of the centroid of each connected domain one by one, obtaining the pixel coordinates of all infrared-lamp image points in the camera image.
Step 3: From the connected-domain centroids obtained (the infrared-lamp image-point pixel coordinates), choose the point with the smallest abscissa as the reference point (if several points share the smallest abscissa, take any one of them). Search for all connected-domain centroids that appear within a neighborhood of radius R of the reference point; when all centroids have been examined, the delineation of the first dense-region centroid set is complete. Exclude the centroids already delineated as belonging to the same dense region, then choose the next reference point and repeat the above operations until all dense connected-domain centroids have been delineated (here, centroids whose mutual distance is less than R are classified together).
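The delineation loop of step 3 can be sketched as follows. This is a minimal reading of the step: the function name and the tie-breaking on the ordinate are assumptions, and real centroid coordinates would come from the connected-domain search of step 2.

```python
import numpy as np

def delineate_dense_regions(centroids, R):
    """Group connected-domain centroids into dense regions: repeatedly take
    the unassigned point with the smallest abscissa (ties broken by ordinate,
    an assumed convention) as the reference point and collect every
    unassigned centroid within radius R of it."""
    pts = np.asarray(centroids, dtype=float)
    unassigned = list(range(len(pts)))
    regions = []
    while unassigned:
        ref = min(unassigned, key=lambda i: (pts[i][0], pts[i][1]))
        member = [i for i in unassigned
                  if np.linalg.norm(pts[i] - pts[ref]) < R]
        regions.append(member)
        unassigned = [i for i in unassigned if i not in member]
    return regions

# Two well-separated clumps of image-point centroids, neighborhood radius 20
regions = delineate_dense_regions(
    [[10, 10], [12, 11], [11, 13], [200, 50], [203, 52]], R=20)
# regions -> [[0, 1, 2], [3, 4]]: dense regions of 3 and 2 image points
```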
Step 4: Count the connected-domain centroids contained in each dense region, obtaining the feature value of each dense region.
Step 5: Compute the geometric-center pixel coordinates of each infrared dense region in the camera imaging plane.
Step 6: Screen the pixel coordinates of the dense-region geometric center points output in step 5: output the dense-region geometric center point with the smallest horizontal and vertical coordinates together with the three other dense-region geometric center points that form a quadrilateral with it (this is the simplest screening method; other methods exist but have higher algorithmic complexity). Arrange the 4 screened dense-region feature values clockwise, starting from the one whose corresponding geometric center has the smallest horizontal and vertical coordinates, according to the positional relations of the centers. This yields a 4-digit number, which constitutes the region-identification feature code, i.e. the mode number.
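The mode-number construction of step 6 can be sketched as below. The ordering convention is paraphrased: "smallest horizontal and vertical coordinates" is taken here as smallest x + y, and "clockwise" is realized by sorting on the angle around the common centroid (y grows downward in image coordinates, so increasing angle sweeps clockwise on screen); the patent's exact conventions may differ.

```python
import numpy as np

def mode_number(centers, counts):
    """Combine the per-region lamp counts (feature values) of 4 dense regions
    into the 4-digit mode number: order the regions clockwise around their
    common centroid, starting from the region whose center has the smallest
    coordinates (assumed to mean smallest x + y)."""
    c = np.asarray(centers, dtype=float)
    mid = c.mean(axis=0)
    start = int(np.argmin(c[:, 0] + c[:, 1]))          # top-left region
    ang = np.arctan2(c[:, 1] - mid[1], c[:, 0] - mid[0])
    order = sorted(range(4),
                   key=lambda i: (ang[i] - ang[start]) % (2 * np.pi))
    return int("".join(str(counts[i]) for i in order))

# Four regions at the corners of a rectangle, containing 3, 2, 5, 4 lamps
code = mode_number([(0, 0), (10, 0), (10, 10), (0, 10)], [3, 2, 5, 4])
# code -> 3254 (top-left, top-right, bottom-right, bottom-left)
```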
Step 7: According to the mode number, output the recognition result: the pixel-coordinate matrix and the world-coordinate matrix of the dense-region geometric centers corresponding to the 4 feature values in the mode number. In both matrices the elements are arranged clockwise, starting from the center with the smallest horizontal and vertical coordinates, according to the positional relations of the corresponding dense-region geometric centers.
Step 8: According to the linear camera imaging model, the four coordinate pairs output by step 7 constitute correspondences between world coordinates and pixel coordinates in the model. Solve for each element of the projective transformation matrix by solving a system of linear equations, and output the projective transformation matrix H.
Step 9: Refine the projective transformation matrix H with a refinement algorithm.
Step 10: Compute the hit coordinates of the imitation gun by camera-imaging inverse projection (inverse projective transformation) and send the coordinates to the host; the virtual-image playback and control engine on the host locates, according to these coordinates, the object corresponding to them in the virtual-image coordinate system.
The image preprocessing in step 1 above includes image graying, binarization, median filtering, removal of isolated pixels, opening, and closing.
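Part of this preprocessing chain can be sketched with plain numpy. This is a minimal illustration, not the patent's implementation: the threshold value is assumed, and median filtering and morphological opening/closing are omitted (they would normally come from an image-processing library).

```python
import numpy as np

def preprocess(rgb, thresh=128):
    """Sketch of graying, binarization, and isolated-pixel removal: a set
    pixel with no set 8-neighbor is cleared."""
    gray = rgb.mean(axis=2)                      # graying
    binary = (gray >= thresh).astype(np.uint8)   # binarization
    padded = np.pad(binary, 1)
    # count set 8-neighbors of every pixel via shifted views
    neigh = sum(padded[1 + dy:padded.shape[0] - 1 + dy,
                       1 + dx:padded.shape[1] - 1 + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0))
    return binary & (neigh > 0)

# A 2x2 bright blob survives; a lone bright pixel is removed as noise
img = np.zeros((6, 6, 3))
img[1:3, 1:3] = 255
img[5, 5] = 255
out = preprocess(img)
```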
The linear camera imaging model in step 8 above refers to the following mathematical relations:
In the linear camera imaging model (also called the pinhole model), the projection of an arbitrary spatial point P onto the image is the point p at which the line OP through the optical center O and the point P intersects the image plane. Let the coordinates of P in the world coordinate system be (X_W, Y_W, Z_W), its coordinates in the camera coordinate system be (X_C, Y_C, Z_C), the coordinates of p in the image physical coordinate system be (x, y), and its pixel coordinates in the camera imaging plane be (u, v). From the optical geometry of the camera, the relation between the point P represented in world coordinates as (X_W, Y_W, Z_W, 1) and its projection p with coordinates (u, v, 1) in the image pixel coordinate system is:

$$ Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} k_x & 0 & u_0 & 0 \\ 0 & k_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & t \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} $$

where the rotation matrix R and the translation vector t describe the relation between the world coordinate system and the camera coordinate system, and k_x, k_y, u_0, v_0 are the intrinsic camera parameters, which generally must be obtained by calibration.
In the present invention, the dense-region geometric center points on the optical wall that form the quadrilateral relation all lie approximately in one plane, so this model can be further simplified to a planar (2D) projective transformation model. A planar (2D) projective transformation is a linear transformation of homogeneous 3-vectors, represented by a nonsingular 3 × 3 matrix:

$$ \begin{bmatrix} a_1' \\ a_2' \\ a_3' \end{bmatrix} = \begin{bmatrix} h_0 & h_1 & h_2 \\ h_3 & h_4 & h_5 \\ h_6 & h_7 & 1 \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} $$

where (a_1', a_2', a_3') is the homogeneous representation of the dense-region geometric-center pixel coordinates and (a_1, a_2, a_3) is the homogeneous representation of the world coordinates of the dense-region geometric centers. More compactly, the above is written:

X' = HX

H is the camera projection matrix, i.e.

$$ H = \begin{bmatrix} h_0 & h_1 & h_2 \\ h_3 & h_4 & h_5 \\ h_6 & h_7 & 1 \end{bmatrix} $$

X' denotes the pixel-coordinate matrix in the camera image plane and X the world-coordinate matrix of the projected points in the world coordinate system. The matrix H has 8 degrees of freedom and requires N point pairs with 2N ≥ 8 to solve; the 4 point pairs output by step 7 meet this requirement.
The method of solving the projective transformation matrix H in step 8 above is as follows:
The above formula is converted to nonhomogeneous form:

$$ x_i' = \frac{a_1'}{a_3'} = \frac{h_0 x_i + h_1 y_i + h_2}{h_6 x_i + h_7 y_i + 1}, \qquad y_i' = \frac{a_2'}{a_3'} = \frac{h_3 x_i + h_4 y_i + h_5}{h_6 x_i + h_7 y_i + 1} $$

where (x_i', y_i') is the nonhomogeneous representation of the dense-region geometric-center pixel coordinates and (x_i, y_i) the nonhomogeneous representation of the world coordinates of the dense-region geometric centers.
Let (x_i, y_i) ↔ (x_i', y_i') be a matched pair, i = 1, 2, ..., N. Each matched pair yields two linear equations:

$$ (x_i,\; y_i,\; 1,\; 0,\; 0,\; 0,\; -x_i x_i',\; -y_i x_i')\,h = x_i' $$
$$ (0,\; 0,\; 0,\; x_i,\; y_i,\; 1,\; -x_i y_i',\; -y_i y_i')\,h = y_i' $$

where h = (h_0, h_1, h_2, h_3, h_4, h_5, h_6, h_7)^T.
We thus obtain 2N equations in the parameters h_0, h_1, ..., h_7. Written in matrix form:

Ah = b

where

$$ A = \begin{bmatrix} x_1 & y_1 & 1 & 0 & 0 & 0 & -x_1 x_1' & -y_1 x_1' \\ 0 & 0 & 0 & x_1 & y_1 & 1 & -x_1 y_1' & -y_1 y_1' \\ \vdots & & & & & & & \vdots \\ x_N & y_N & 1 & 0 & 0 & 0 & -x_N x_N' & -y_N x_N' \\ 0 & 0 & 0 & x_N & y_N & 1 & -x_N y_N' & -y_N y_N' \end{bmatrix}, \qquad b = (x_1', y_1', \ldots, x_N', y_N')^T $$

When N ≥ 4, h can be found by least squares as h = A⁺b, where A⁺ is the pseudo-inverse of A.
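The linear system Ah = b above maps directly to code. A minimal sketch with numpy (the function names are ours; with exactly N = 4 correspondences the system is 8 × 8 and the least-squares solution is exact):

```python
import numpy as np

def solve_homography(world_pts, pixel_pts):
    """Build the rows of A and b exactly as in the equations above and solve
    Ah = b by least squares for the 8 homography parameters; h8 is fixed to 1."""
    rows, rhs = [], []
    for (x, y), (xp, yp) in zip(world_pts, pixel_pts):
        rows.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp]); rhs.append(xp)
        rows.append([0, 0, 0, x, y, 1, -x * yp, -y * yp]); rhs.append(yp)
    h, *_ = np.linalg.lstsq(np.array(rows, float), np.array(rhs, float),
                            rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def project(H, pt):
    """Apply H to a world point and dehomogenize, giving pixel coordinates."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]
```

Feeding in 4 world/pixel pairs generated by a known homography recovers that homography, which is a convenient self-check when wiring up the pipeline.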
The method of refining the projective transformation matrix H described in step 9 above is as follows:
Since the h obtained by least squares is the solution in the sense of min ‖Ah − b‖, a certain error still remains. The refinement algorithm below minimizes this error. Let

$$ G(h) = \sum_{i=1}^{N} \left\{ (x_i' - \bar{x}_i')^2 + (y_i' - \bar{y}_i')^2 \right\} \tag{1} $$

where

$$ \bar{x}_i' = \frac{h_0 x_i + h_1 y_i + h_2}{h_6 x_i + h_7 y_i + 1}, \qquad \bar{y}_i' = \frac{h_3 x_i + h_4 y_i + h_5}{h_6 x_i + h_7 y_i + 1} \tag{2} $$

Clearly, refining h means finding a particular h* such that

G(h*) = min G(h)

This is a nonlinear optimization problem, which can be solved as follows. Here (x_i', y_i') and (x_i, y_i) are the pixel and world coordinates of any matched pair among the geometric center points of all dense regions in the camera imaging field, and x̄_i', ȳ_i' are computed from the (x_i, y_i) and the h of the preceding iteration. At each iteration, the matched points (x_i', y_i') and (x_i, y_i) of the preceding iteration replace the corresponding entries of A (see step 8; the iteration count corresponds strictly to the subscripts of the replaced entries), and the parameter h of the next iteration is solved; the refined h* yields the refined camera projection matrix H.
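One common way to realize such an iteration, offered here as a plausible reading of the loop rather than the patent's exact procedure, is reweighted least squares: each equation pair is scaled by 1/(h₆xᵢ + h₇yᵢ + 1) from the previous iterate, so that the algebraic residual of Ah = b approximates the geometric error G(h) of Eq. (1). All names below are ours.

```python
import numpy as np

def refine_homography(world_pts, pixel_pts, h0, iters=10):
    """Reweighted least-squares refinement sketch; h0 is an initial 3x3
    estimate of H. Each iteration rebuilds A and b (step 8) with rows scaled
    by the previous iterate's denominator and re-solves for h."""
    h = h0.copy()
    for _ in range(iters):
        rows, rhs = [], []
        for (x, y), (xp, yp) in zip(world_pts, pixel_pts):
            w = 1.0 / (h[2, 0] * x + h[2, 1] * y + 1.0)
            rows.append([w * v for v in (x, y, 1, 0, 0, 0, -x * xp, -y * xp)])
            rhs.append(w * xp)
            rows.append([w * v for v in (0, 0, 0, x, y, 1, -x * yp, -y * yp)])
            rhs.append(w * yp)
        sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
        h = np.append(sol, 1.0).reshape(3, 3)
    return h

def reprojection_error(H, world_pts, pixel_pts):
    """G(h) of Eq. (1): sum of squared pixel-space residuals."""
    g = 0.0
    for (x, y), (xp, yp) in zip(world_pts, pixel_pts):
        d = H @ np.array([x, y, 1.0])
        g += (xp - d[0] / d[2]) ** 2 + (yp - d[1] / d[2]) ** 2
    return g
```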
The method described in step 10 above for computing the imitation-gun hit coordinates by inverse projective transformation is as follows:
Given the way the camera is mounted on the imitation gun, the intersection of the camera optical axis with the screen is the required hit point. According to the linear camera imaging model, the central image point of the camera is inversely projected into the world coordinate system, and the resulting world coordinates are the hit coordinates. If the central pixel is (x_0', y_0'), then the hit point satisfies (x_0 t, y_0 t, t)^T = H^{-1}(x_0' s, y_0' s, s)^T, where t and s are homogeneous scale factors; alternatively it may be solved from the nonhomogeneous formulas above. The central pixel (x_0', y_0') is read directly from the camera image, and H^{-1} is the inverse of the camera projection matrix.
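The inverse projection above reduces to a few lines once H is known. A minimal sketch (the function name is ours):

```python
import numpy as np

def hit_point(H, center_pixel):
    """Inversely project the camera's central pixel (the optical-axis /
    screen intersection) through H^{-1}; dividing by the homogeneous
    coordinate yields the hit point in optical-wall world coordinates."""
    u, v = center_pixel
    w = np.linalg.inv(H) @ np.array([u, v, 1.0])
    return w[:2] / w[2]
```

As a sanity check: projecting any world point through H and feeding the resulting pixel back into `hit_point` must return the original world point.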
Relative to the prior art, the technical solution of the present invention has the following advantages:
1. The method can be used in multiple parallel embedded systems simultaneously, each system corresponding to one prop gun in a shooter's hands, so that multiple players can genuinely shoot at the same time without mutual interference or system delay.
2. The method truly overcomes the limitation of existing virtual-shooting aiming-point localization (shooting object localization), which cannot be applied to larger screens, and integrates high-quality imagery and multidimensional special effects into virtual shooting entertainment and training platforms, giving the shooting player or trainee a lifelike, immersive experience of the virtual scene.
3. By locating the aiming point through the geometric principle of camera projection imaging, the method achieves a far smaller aiming error than some traditional methods.
Description of the drawings
Fig. 1: Example arrangement of light-source points on the optical wall, in which open circles represent the positions of infrared light sources on the optical wall and filled circles the calibrated geometric centers of the infrared-source dense regions;
Fig. 2: Example of camera image sampling;
Fig. 3: Flow chart of the data-processing core program on the embedded circuit board;
Fig. 4: Matlab simulation of the image preprocessing;
Fig. 5: Matlab simulation of the centroid positions and coordinate solution;
Fig. 6: Matlab simulation of the dense-region geometric-center positions and coordinate solution;
Fig. 7: Matlab solution of the geometric-center pixel coordinates and feature values of all dense regions in the image;
Fig. 8: Output of the pattern-recognition feature number and the associated mode coordinates;
Fig. 9: Solution of the camera projection matrix H and data comparison with the refined H*;
Fig. 10: Hit-accuracy analysis (incomplete statistics).
Detailed description of the invention
The present invention adopts the following technical solutions:
Embodiment 1:
An object positioning method for multi-user shooting interaction on a giant screen comprises the following steps:
Step 1: Arrange the optical wall and establish the optical coordinate system with the upper-left corner of the optical wall as the origin. Arrange a number of infrared-lamp dense regions on the optical wall and measure the coordinates of the geometric center of each dense region in the optical-wall coordinate system, i.e. the world coordinates. The geometric centers of the dense regions are arranged in a rectangular array, and a mode number is set according to the number of infrared lamps in each of the 4 adjacent dense regions that form a rectangle.
Step 2: Mount the camera on the muzzle of the imitation gun so that the aiming line of the gun and the optical axis of the camera are parallel or coincident, and cover the front of the camera lens with a filtering ink sheet to filter out visible light.
Step 3: Preprocess the images captured by the camera.
Step 4: Search the preprocessed images for all connected domains and solve the pixel coordinates of each connected-domain centroid, obtaining the pixel coordinates of all infrared-lamp image points.
Step 5: Delineate the infrared-lamp image points in each infrared-lamp dense region.
Step 6: Count the infrared-lamp image points in each dense region and compute the pixel coordinates of the geometric center of each dense region.
Step 7: Choose the infrared-lamp dense region whose geometric-center pixel coordinates are smallest in the horizontal and vertical directions, and choose the 3 dense regions that form a rectangular relation with it. From the numbers of infrared lamps contained in the 4 chosen dense regions, obtain the mode number, and obtain the pixel-coordinate matrix of the geometric centers of the 4 dense regions.
Step 8: According to the mode number, obtain the world coordinates of the geometric centers of the 4 dense regions of step 7 and the world-coordinate matrix.
Step 9: Obtain the projective transformation matrix H from the world-coordinate matrix and the pixel-coordinate matrix.
Step 10: From the pixel position corresponding to the imitation gun in the image, obtain through the projective transformation matrix H the world coordinates of the point on the optical wall at which the imitation gun is aimed.
The projective transformation matrix H of step 9 is obtained from the following formulas:

$$ x_i' = \frac{h_0 x_i + h_1 y_i + h_2}{h_6 x_i + h_7 y_i + 1}, \qquad y_i' = \frac{h_3 x_i + h_4 y_i + h_5}{h_6 x_i + h_7 y_i + 1} $$

where (x_i', y_i') are the pixel coordinates and (x_i, y_i) the world coordinates of the dense-region geometric centers. The 4 dense-region geometric-center coordinate pairs constitute a system of eight linear equations, from which the projective transformation matrix is found as

$$ H = \begin{bmatrix} h_0 & h_1 & h_2 \\ h_3 & h_4 & h_5 \\ h_6 & h_7 & 1 \end{bmatrix} $$
Obtaining the projective transformation matrix H in step 9 further comprises the following steps:
Step 9.1: The infrared-lamp dense regions in the camera imaging field other than the 4 dense regions chosen in step 7 are the refinement dense regions.
Step 9.2: Choose one refinement dense region to replace one of the 4 dense regions chosen in step 7, obtaining the geometric-center pixel coordinates and world coordinates of the 4 dense regions after replacement.
Step 9.3: From the geometric-center world coordinates of the 4 replaced dense regions and the projective transformation matrix H, compute new dense-region geometric-center pixel coordinates (x̄_i', ȳ_i'), and compute the error value G(h) against the pixel coordinates of step 9.2 by the formula

$$ G(h) = \sum_{i=1}^{4} \left\{ (x_i' - \bar{x}_i')^2 + (y_i' - \bar{y}_i')^2 \right\} $$

Step 9.4: If G(h) is greater than or equal to a predetermined value, compute the projective transformation matrix H from the 4 currently chosen dense regions, update the current projective transformation matrix H, and return to step 9.2; if G(h) is less than the predetermined value, stop the iteration and take the current projective transformation matrix H as the refined projective transformation matrix H.
Embodiment 2:
The object positioning method for multi-user shooting interaction on a giant screen comprises a behind-the-screen optical-wall arrangement method, a camera configuration method for the imitation gun, special settings for light-source image acquisition, and a data-processing core algorithm on an embedded circuit board.
The behind-the-screen optical-wall arrangement method comprises the following steps (an example is shown schematically in Fig. 1):
Step 1: Establish the optical-wall coordinate system with the upper-left corner of the screen as the origin, the horizontal screen border as the X-axis and the vertical border as the Y-axis (the orientation is determined by the pixel coordinate system of the image-acquisition camera).
Step 2: Arrange a number of infrared lamps on the optical wall. (The lamps are selected according to the screen material: for a projection screen, infrared lamps with a sufficiently long wavelength and a sufficiently large scattering angle are placed behind the screen; for an LED electronic screen, some LED tubes in the light-emitting array are replaced by infrared emitting tubes of the same area without visible light; for other materials, other infrared lamp specifications may be chosen as appropriate.) The lamp positions follow these rules:
a. According to the spacing between lamps, two positional relations are distinguished: a distance smaller than a threshold R is a dense relation; a distance greater than R is a sparse relation.
b. Lamps that are in a dense relation with one another form a dense region.
c. The lamps in any one dense region are in a sparse relation with the lamps in every other dense region.
d. The number of lamps in each dense region may be an arbitrary natural number (1 or more), and the geometric center points of all dense regions are arranged in a rectangular array.
Step 3: Accurately measure the coordinates of the geometric center of each infrared-lamp dense region.
The camera configuration method for the imitation gun and the special settings for light-source image acquisition comprise the following steps:
Step 1: Mount the camera on the muzzle of the imitation gun so that the aiming line of the gun and the optical axis of the camera are parallel or coincident.
Step 2: Cover the camera lens closely with a filtering ink sheet to filter out visible light, so that only images of the infrared-lamp image points are collected.
Step 3: The lens specification must satisfy the following conditions: the lens parameters must be suitable, ensuring that at point-blank range (the minimum shooter-to-screen distance set in the application scenario to guarantee user experience) at least 4 infrared-lamp dense regions lie within the camera imaging field; and a lens with low fisheye distortion should be used.
A sample camera image is shown in Figure 2.
The flow chart of the data-processing core algorithm on the embedded board is shown in Figure 3; the algorithm steps are described as follows:
Step 1: Image preprocessing; the Matlab simulation effect is shown in Figure 4;
Step 2: Search for all connected domains in the image, solve the pixel coordinates of each connected domain's centroid one by one, and store these coordinates in a variable-length array W;
Step 3: From the array W, choose the centroid with the smallest abscissa as the reference point, then compute its distance to each of the other points in W one by one and collect the points whose distance is less than R (an optimized alternative: choose the centroid with the smallest coordinates as the reference point and count the connected-domain centroids appearing within a neighborhood of radius R). Once all centroids belonging to the first dense region have been delineated, exclude the centroids already assigned (those whose mutual distance is less than R and which therefore belong to the same dense region), choose the next reference point, and repeat until all dense connected-domain centroids have been delineated (i.e., centroids are classified by whether their neighbor distance is less than R). The Matlab simulation result is shown in Figure 5.
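As an illustration only (the patent's own Matlab implementation is not reproduced here), the delineation of step 3 can be sketched as a greedy distance-threshold clustering; the function name `group_dense` and the cluster-expansion strategy are assumptions:

```python
import numpy as np

def group_dense(centroids, R):
    """Greedily partition connected-domain centroids into dense clusters:
    repeatedly take the point with the smallest coordinates as reference
    and pull in every remaining point within radius R of the cluster
    (hypothetical single-link variant of the patent's delineation step)."""
    pts = list(map(tuple, centroids))
    clusters = []
    while pts:
        ref = min(pts)                      # smallest (x, y) as reference
        cluster = [ref]
        pts.remove(ref)
        grew = True
        while grew:                         # expand until no point joins
            grew = False
            for p in pts[:]:
                if any(np.hypot(p[0]-q[0], p[1]-q[1]) <= R for q in cluster):
                    cluster.append(p); pts.remove(p); grew = True
        clusters.append(cluster)
    return clusters
```

Because each cluster is removed before the next reference point is chosen, every centroid is assigned exactly once, matching the exclusion rule described above.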
Step 4: Put the coordinates of the centroids delineated as belonging to the same dense region into an N×2 array (N is determined by the number of centroids per dense region; its value is the largest number of infrared lamps contained in any dense region on the optical wall), padding with zeros where a region has fewer than N centroids. Combine the per-region arrays into an N×2×M three-dimensional array (the number of pages M is the number of infrared lamp dense regions arranged on the optical wall; the upper limit of the number of dense regions is determined by the screen size), padding entirely with zeros where there are fewer than M dense regions. The Matlab simulation result is shown in Figure 6.
Step 5: Count the number of non-zero entries on each page of the resulting three-dimensional array and store the counts in an array K = [x1, x2, x3, ..., xM]; x1 through xM are the identification eigenvalues of the respective infrared lamp dense regions, and at least 4 of these M values are non-zero (because the camera's field of view always contains at least 4 dense regions).
Step 6: Compute the pixel coordinates of the geometric centers of all infrared lamp dense regions in the field of view, and output both the pixel coordinates of each dense region's geometric center in the camera's imaging plane and the (previously measured) world coordinates of the geometric center of the corresponding infrared lamp dense region on the optical wall. Store them in an M×2×2 three-dimensional array, split by page into two M×2 two-dimensional arrays T1 and T2: pixel coordinates in T1, world coordinates in T2, with the elements of the two arrays arranged in corresponding order. The Matlab simulation result is shown in Figure 7.
Step 7: Take the connected-domain counts of the leftmost 4 infrared lamp dense regions in the camera's field of view as the eigenvalue for region identification (the world coordinates of the geometric centers of these 4 dense regions form a rectangle). The 4-digit number obtained by arranging the dense-region counts clockwise starting from the geometric center with the smallest coordinates is the identification eigenvalue for locating the screen region (this eigenvalue extraction is an optimized simplification and not the only possibility; any combination of 4 or more of the M values in array K could serve as the identification eigenvalue). This is the mode count that identifies the screen region delineated by the camera's field of view. Filter the pixel and world coordinates of the dense-region geometric centers output by step 6, and output the geometric center with the smallest coordinates together with the three other dense-region geometric centers that form a quadrilateral with it. Store the 4 selected dense-region geometric center pixel coordinates and the corresponding world coordinates in two 4×2 two-dimensional arrays S1 and S2: pixel coordinates in S1, world coordinates in S2, the four 2-dimensional coordinates of each array arranged clockwise starting from the one with the smallest coordinates. The Matlab simulation result is shown in Figure 8.
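The clockwise ordering from the smallest-coordinate center used in step 7 can be sketched as follows. This is a hypothetical helper (`order_clockwise`), not code from the patent; it assumes image coordinates with y growing downward, so increasing atan2 angle corresponds to a visually clockwise sweep:

```python
import numpy as np

def order_clockwise(pts):
    """Order 4 region centers clockwise starting from the point with the
    smallest (x, y), the convention used to pair pixel and world
    coordinates consistently (illustrative helper, not from the source)."""
    pts = np.asarray(pts, float)
    start = min(range(4), key=lambda i: (pts[i, 0], pts[i, 1]))
    c = pts.mean(axis=0)                                  # centroid of quad
    ang = np.arctan2(pts[:, 1] - c[1], pts[:, 0] - c[0])  # angle about centroid
    order = sorted(range(4), key=lambda i: ang[i])        # clockwise (y down)
    k = order.index(start)
    return pts[order[k:] + order[:k]]                     # rotate to start point
```

Applying the same ordering to both S1 and S2 keeps each pixel coordinate paired with the world coordinate of the same dense region.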
Step 8: Based on the camera's linear imaging model, the coordinates in the two 4×2 arrays output by step 7 form corresponding world-coordinate/pixel-coordinate pairs in the model. Solve for each element of the projective transformation matrix by solving a system of linear equations, and output the projective transformation matrix H;
Step 9: Refine the projective transformation matrix H with the refinement algorithm; the Matlab simulation result is shown in Figure 9;
Step 10: Compute the hit coordinates of the imitation gun by inverse projection of the camera image (the inverse projective transformation) and send them to the host; the virtual-image playback and control engine on the host then locates, in the virtual-image coordinate system, the scene object corresponding to these coordinates.
The image preprocessing in step 1 above includes image graying, binarization, median filtering, removal of isolated pixels, opening, and closing.
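A minimal sketch of part of this preprocessing chain (graying, binarization, and a 3×3 median filter, which as a side effect also removes isolated pixels) might look like the following; the opening and closing operations are omitted, and the function name and threshold are assumptions, not the patent's implementation:

```python
import numpy as np

def preprocess(rgb, thresh=128):
    """Grayscale -> binarize -> 3x3 median filter, a reduced version of the
    preprocessing step (isolated single pixels fall out of the median)."""
    gray = rgb @ np.array([0.299, 0.587, 0.114])   # luminance graying
    binary = (gray >= thresh).astype(np.uint8)     # binarization
    padded = np.pad(binary, 1)                     # zero border for 3x3 window
    stack = np.stack([padded[r:r + binary.shape[0], c:c + binary.shape[1]]
                      for r in range(3) for c in range(3)])
    return np.median(stack, axis=0).astype(np.uint8)
```

A lone bright pixel is voted out by its 3×3 neighborhood, while a solid lamp blob survives, which is exactly why the median step doubles as isolated-pixel removal.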
The camera linear imaging model in step 8 above refers to the following mathematical relationship:
In the camera linear imaging model (also known as the pinhole model), the projected position p on the image of any spatial point P is the intersection of the line OP, through the optical center O and the point P, with the image plane. Let the coordinates of P in the world coordinate system be (Xw, Yw, Zw), its coordinates in the camera coordinate system be (Xc, Yc, Zc), the coordinates of p in the image physical coordinate system be (x, y), and its pixel coordinates in the camera imaging plane be (u, v). From camera optical geometry, the relation between the point P expressed in world coordinates as (Xw, Yw, Zw, 1) and its projection p expressed in image pixel coordinates as (u, v, 1) is:
$$Z_c\begin{bmatrix}u\\v\\1\end{bmatrix}=\begin{bmatrix}k_x&0&u_0&0\\0&k_y&v_0&0\\0&0&1&0\end{bmatrix}\begin{bmatrix}R&t\\0^T&1\end{bmatrix}\begin{bmatrix}X_W\\Y_W\\Z_W\\1\end{bmatrix}$$
where the rotation matrix R and the translation vector t describe the relation between the world coordinate system and the camera coordinate system, and $k_x, k_y, u_0, v_0$ are camera intrinsic parameters, which generally require calibration to solve.
In the present invention, the geometric center points of the infrared lamp dense regions on the optical wall that form the quadrilateral relation all lie approximately in one plane, so the model can be further simplified into a planar (2D) projective transformation model. A planar (2D) projective transformation is a linear transformation on homogeneous 3-vectors, represented by a nonsingular 3×3 matrix:
$$\begin{bmatrix}a'_1\\a'_2\\a'_3\end{bmatrix}=\begin{bmatrix}h_0&h_1&h_2\\h_3&h_4&h_5\\h_6&h_7&1\end{bmatrix}\begin{bmatrix}a_1\\a_2\\a_3\end{bmatrix}$$
Wherein (a '1,a′2,a′3) it is the homogeneous coordinate system table of compact district geometric center pixel coordinate Show, (a1,a2,a3) it is that every coordinate system of the world coordinates of compact district geometric center represents, above formula or more succinct Be expressed as:
X’=HX
where H is the camera projection matrix, i.e. $H=\begin{bmatrix}h_0&h_1&h_2\\h_3&h_4&h_5\\h_6&h_7&1\end{bmatrix}$.
X′ denotes the pixel coordinate matrix in the camera image plane and X denotes the world coordinate matrix of the projected points in the world coordinate system. The matrix H has 8 degrees of freedom, so solving for the projection matrix H requires N point pairs with 2N ≥ 8, i.e. N ≥ 4; the 4 point pairs output by step 7 satisfy this solving requirement.
The method of solving the projective transformation matrix H in step 8 above is as follows:
Writing the formula above in inhomogeneous form:
$$x'_i=\frac{a'_1}{a'_3}=\frac{h_0x_i+h_1y_i+h_2}{h_6x_i+h_7y_i+1},\qquad y'_i=\frac{a'_2}{a'_3}=\frac{h_3x_i+h_4y_i+h_5}{h_6x_i+h_7y_i+1}$$
where $(x'_i, y'_i)$ is the inhomogeneous representation of a dense-region geometric center's pixel coordinates and $(x_i, y_i)$ is the inhomogeneous representation of its world coordinates.
Let $(x_i, y_i) \leftrightarrow (x'_i, y'_i)$, $i = 1, 2, \ldots, N$, be a pair of matched points. From each matched pair, two linear equations are obtained from the formula above:
$$(x_i,\ y_i,\ 1,\ 0,\ 0,\ 0,\ -x_ix'_i,\ -y_ix'_i)\,h=x'_i$$
$$(0,\ 0,\ 0,\ x_i,\ y_i,\ 1,\ -x_iy'_i,\ -y_iy'_i)\,h=y'_i$$
where $h=(h_0,h_1,h_2,h_3,h_4,h_5,h_6,h_7)^T$.
This yields 2N equations in the parameters $h_0, h_1, \ldots, h_7$, written in matrix form as:
Ah=b
Wherein
$$A=\begin{bmatrix}x_1&y_1&1&0&0&0&-x_1x'_1&-y_1x'_1\\0&0&0&x_1&y_1&1&-x_1y'_1&-y_1y'_1\\\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\x_N&y_N&1&0&0&0&-x_Nx'_N&-y_Nx'_N\\0&0&0&x_N&y_N&1&-x_Ny'_N&-y_Ny'_N\end{bmatrix}$$
$$b=(x'_1,y'_1,\ldots,x'_N,y'_N)^T$$
When N ≥ 4, h can be solved by least squares as $h=A^{+}b$, where $A^{+}$ is the pseudo-inverse of A.
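A compact numerical sketch of this least-squares solve (step 8), assuming NumPy and the hypothetical function name `solve_homography`:

```python
import numpy as np

def solve_homography(world, pixel):
    """Build the 2N x 8 system A h = b from N >= 4 world/pixel point
    pairs and solve h by least squares, fixing h8 = 1 as in the text."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(world, pixel):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp]); b.append(xp)
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp]); b.append(yp)
    h, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float),
                            rcond=None)            # pseudo-inverse solve
    return np.append(h, 1.0).reshape(3, 3)         # assemble H with h8 = 1
```

With exactly 4 point pairs in general position (no 3 collinear) the system is square and the least-squares solution is the exact homography.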
The method of refining the projective transformation matrix H in step 9 above is as follows:
Since the h obtained by least squares is the solution in the sense of min ‖Ah − b‖, some error remains. The refinement algorithm below minimizes this error. Let
$$G(h)=\sum_{i=1}^{N}\left[(x'_i-\bar{x}'_i)^2+(y'_i-\bar{y}'_i)^2\right]\qquad(1)$$
Wherein
$$\bar{x}'_i=\frac{h_0x_i+h_1y_i+h_2}{h_6x_i+h_7y_i+1},\qquad \bar{y}'_i=\frac{h_3x_i+h_4y_i+h_5}{h_6x_i+h_7y_i+1}\qquad(2)$$
Obviously, refining h means finding a specific h* such that
$$G(h^*)=\min G(h)$$
This is a nonlinear optimization problem, which can be solved as follows. Here $(x'_i, y'_i)$ and $(x_i, y_i)$ are any matched pixel/world coordinate pair among the geometric center points of all dense regions in the camera's field of view, and $(\bar{x}'_i, \bar{y}'_i)$ is computed from $(x_i, y_i)$ and the h of the previous iteration. At each iteration, the matched points $(x'_i, y'_i)$ and $(x_i, y_i)$ of the previous iteration replace the corresponding entries of A (see step 8; the iteration count corresponds strictly to the subscript of the replaced entries), and the next iterate h is solved. The h* obtained after refinement is the refined camera projection matrix H.
The iterative process is as follows:
Iteration step 1: Solve the initial value h from the initial 4 groups of corresponding points, find the first pair of iteration points $(x'_i, y'_i)$ and $(x_i, y_i)$, substitute into formula (2) to obtain $(\bar{x}'_i, \bar{y}'_i)$, then substitute into formula (1) and test whether G(h) is below the set minimum (|G(h)| ≤ e, where e is the set stopping threshold); this completes one iteration.
Iteration step 2: Solve the next iterate h: replace the corresponding elements of the matrix A with the first pair of iteration points $(x'_i, y'_i)$ and $(x_i, y_i)$ (note that the subscript i corresponds strictly to the iteration count, so every 4 iterations fully updates h once), compute the pseudo-inverse $A^{+}$, obtain the new h from $h = A^{+}b$, find the second pair of iteration points $(x'_{i+1}, y'_{i+1})$ and $(x_{i+1}, y_{i+1})$, and repeat iteration step 1; this completes the second iteration.
Iteration step 3: The termination criterion is |G(h)| ≤ e, where e is the set stopping threshold.
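The error measure G(h) of formula (1), which drives the iteration's stopping test |G(h)| ≤ e, can be sketched as follows (illustrative only; `reprojection_error` is an assumed name, not the patent's code):

```python
import numpy as np

def reprojection_error(H, world, pixel):
    """G(h): summed squared distance between the measured pixel points
    and the world points mapped through H via formula (2)."""
    err = 0.0
    for (x, y), (xp, yp) in zip(world, pixel):
        d = H[2, 0] * x + H[2, 1] * y + 1.0          # denominator h6*x + h7*y + 1
        xb = (H[0, 0] * x + H[0, 1] * y + H[0, 2]) / d
        yb = (H[1, 0] * x + H[1, 1] * y + H[1, 2]) / d
        err += (xp - xb) ** 2 + (yp - yb) ** 2       # formula (1) summand
    return err
```

An exact H drives the error to zero, and any perturbation of H raises it, which is the quantity the refinement loop thresholds against e.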
The method of computing the hit coordinates of the imitation gun by inverse projective transformation in step 10 above is as follows:
Given the camera's mounting characteristics on the imitation gun, the intersection of the camera's optical axis with the screen is the required shooting hit point. According to the camera linear imaging model, the camera's center image point is inverse-projected into the world coordinate system, and the resulting world coordinates are the shooting hit coordinates. If the center pixel is $(x'_0, y'_0)$, then the hit-point coordinates satisfy $(x_0 t, y_0 t, t) = H^{-1}(x'_0 s, y'_0 s, s)$, where t and s are homogeneous scale factors of the coordinates; alternatively the hit point can be solved from the inhomogeneous formulas above. The center pixel $(x'_0, y'_0)$ is read directly from the camera image, and $H^{-1}$ is the inverse of the camera projection matrix.
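The inverse projection of the center pixel can be sketched as follows (assuming an invertible H; `hit_point` is a hypothetical name):

```python
import numpy as np

def hit_point(H, center_pixel):
    """Map the camera's center pixel back to optical-wall (world)
    coordinates via the inverse homography, then dehomogenize."""
    u, v = center_pixel
    w = np.linalg.inv(H) @ np.array([u, v, 1.0])   # H^-1 applied to (u, v, 1)
    return w[0] / w[2], w[1] / w[2]                # divide out the scale factor
```

The division by the third homogeneous component removes the scale factors t and s that appear in the relation above.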
While implementing this method, the engineering team output the execution results in the Matlab simulation system and verified the computed shooting hit-point coordinates, with the following verification method:
A laser pen is fixed in front of the image-acquisition camera lens so that the laser beam is parallel to the camera's optical axis. For each test picture collected, the coordinates (x0, y0) (relative to the optical-wall coordinate system) of the light spot projected by the laser pen onto the screen are measured and compared with the shooting hit coordinates computed by the software from the collected test image. The resulting (non-exhaustive) hit-point accuracy analysis chart is shown in Figure 10; it verifies that the method is reliable and practical, with an error of less than 0.5 coordinate units.
The specific embodiments described herein merely exemplify the spirit of the present invention. Those skilled in the art to which the present invention belongs may make various modifications or supplements to the described specific embodiments or substitute them in similar ways, without departing from the spirit of the present invention or exceeding the scope defined in the appended claims.

Claims (2)

1. A scene object positioning method for giant-screen multi-user shooting interaction, characterised by comprising the following steps:
Step 1: set up an optical wall and establish an optical coordinate system with the upper-left corner of the optical wall as the origin; arrange a number of infrared lamp dense regions on the optical wall; measure the coordinates of each infrared lamp dense region's geometric center in the optical-wall coordinate system, i.e. its world coordinates, the geometric centers of the infrared lamp dense regions being arranged in a rectangular array; set the mode count according to the number of infrared lamps in each of 4 adjacent infrared lamp dense regions that form a rectangle;
Step 2: mount a camera on the muzzle of an imitation gun so that the aiming line of the imitation gun and the optical axis of the camera are parallel or coincident, and fit a filtering ink sheet in front of the camera lens to filter out visible light;
Step 3: preprocess the image captured by the camera;
Step 4: search all connected domains in the preprocessed picture and solve the pixel coordinates of each connected domain's centroid, thereby obtaining the pixel coordinates of all infrared lamp pixels;
Step 5: delineate the infrared lamp pixels within each infrared lamp dense region;
Step 6: count the number of infrared lamp pixels in each infrared lamp dense region, and compute the pixel coordinates of the geometric center of each infrared lamp dense region;
Step 7: choose the infrared lamp dense region whose geometric center has the smallest pixel coordinates, and choose the 3 infrared lamp dense regions forming a rectangular relation with it; obtain the mode count from the number of infrared lamps contained in the 4 infrared lamp dense regions chosen in this step, and obtain the pixel coordinate matrix corresponding to the geometric centers of the 4 infrared lamp dense regions;
Step 8: obtain, according to the mode count, the world coordinates of the geometric centers of the 4 infrared lamp dense regions of step 7 and the world coordinate matrix;
Step 9: obtain the projective transformation matrix H from the world coordinate matrix and the pixel coordinate matrix;
Step 10: according to the pixel position corresponding to the imitation gun in the image, obtain through the projective transformation matrix H the world coordinates of the point on the optical wall at which the imitation gun is aimed,
wherein in step 9 the projective transformation matrix H is obtained from the following equations:
$$x'_i=\frac{h_0x_i+h_1y_i+h_2}{h_6x_i+h_7y_i+1},\qquad y'_i=\frac{h_3x_i+h_4y_i+h_5}{h_6x_i+h_7y_i+1}$$
Wherein x 'i, y 'iFor compact district geometric center pixel coordinate, xi, yiFor compact district geometric center world coordinates, 4 compact district geometric center pixel coordinates constitute eight yuan of linear function groups, and trying to achieve projective transformation matrix H is
$$H=\begin{bmatrix}h_0&h_1&h_2\\h_3&h_4&h_5\\h_6&h_7&1\end{bmatrix}.$$
2. The scene object positioning method for giant-screen multi-user shooting interaction according to claim 1, characterised in that obtaining the projective transformation matrix H in step 9 further comprises the following steps:
Step 9.1: the infrared lamp dense regions in the camera's field of view other than the 4 infrared lamp dense regions chosen in step 7 are refinement infrared lamp dense regions;
Step 9.2: choose one refinement infrared lamp dense region to replace one of the 4 infrared lamp dense regions chosen in step 7, obtaining the geometric center pixel coordinates and world coordinates of the 4 infrared lamp dense regions after replacement;
Step 9.3: from the geometric center world coordinates of the 4 infrared lamp dense regions after replacement and the projective transformation matrix H, compute the new infrared lamp dense-region geometric center pixel coordinates $(\bar{x}'_i, \bar{y}'_i)$, and compute the error value G(h) between these and the pixel coordinates of step 9.2;
Step 9.4: if G(h) is greater than or equal to a predetermined value, solve the projective transformation matrix H from the 4 currently chosen infrared lamp dense regions, update the current projective transformation matrix H, and return to step 9.2; if G(h) is less than the predetermined value, stop iterating and take the current projective transformation matrix H as the refined projective transformation matrix H.
CN201410120796.4A 2014-03-27 2014-03-27 Object positioning method adopting giant screen for multi-user shoot interaction Expired - Fee Related CN104138661B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410120796.4A CN104138661B (en) 2014-03-27 2014-03-27 Object positioning method adopting giant screen for multi-user shoot interaction


Publications (2)

Publication Number Publication Date
CN104138661A CN104138661A (en) 2014-11-12
CN104138661B true CN104138661B (en) 2017-01-11

Family

ID=51848105



Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001224856A (en) * 1999-12-06 2001-08-21 Namco Ltd Game device, position detecting device and information storage medium
CN202120235U (en) * 2011-05-27 2012-01-18 西安肯琰优动漫科技有限公司 Multipoint light gun control device based on liquid crystal display (LCD) and projectors
KR20120029277A (en) * 2010-09-16 2012-03-26 (주)에스엠스포츠 Mobile lazor shooting game system
CN103055524A (en) * 2013-01-21 2013-04-24 上海恒润数码影像科技有限公司 Positioning device, four-dimensional interactive cinema and interacting method utilizing same
KR101307254B1 (en) * 2013-01-04 2013-09-11 (주)브이아이앰 Gun shooting game system using infrared reflecting marker and infrared camera




Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C53 Correction of patent of invention or patent application
CB02 Change of applicant information

Address after: 100090, No. 18 Zhongguancun Avenue, Beijing, Haidian District, digital logistics port, 6 floor, B District

Applicant after: BEIJING SUN LIGHT TECHNOLOGY Co.,Ltd.

Address before: 100090, No. 18 Zhongguancun Avenue, Beijing, Haidian District, digital logistics port, 6 floor, B District

Applicant before: BEIJING TAIYANG GUANGYING FILM AND TELEVISION SCIENCE & TECHNOLOGY Co.,Ltd.

COR Change of bibliographic data

Free format text: CORRECT: APPLICANT; FROM: BEIJING SUN LIGHT AND SHADOW TELEVISION TECHNOLOGY CO., LTD. TO: BEIJING SHENLIN QIJING CULTURE CO., LTD.

GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170111