CN102306088A - Solid projection virtual-real registration device and method - Google Patents

Solid projection virtual-real registration device and method

Info

Publication number
CN102306088A
CN102306088A (application CN201110170857A)
Authority
CN
China
Prior art keywords
solid model
photoelectric sensor
image
model
projective transformation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201110170857A
Other languages
Chinese (zh)
Inventor
王辉柏 (Wang Huibai)
韦欢 (Wei Huan)
郭林 (Guo Lin)
蔡兴泉 (Cai Xingquan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NCUT ZHUOLI TECHNOLOGY Co Ltd
Original Assignee
NCUT ZHUOLI TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NCUT ZHUOLI TECHNOLOGY Co Ltd filed Critical NCUT ZHUOLI TECHNOLOGY Co Ltd
Priority to CN201110170857A priority Critical patent/CN102306088A/en
Publication of CN102306088A publication Critical patent/CN102306088A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a solid projection virtual-real registration device and method. The device comprises a solid model, several photoelectric sensors, a computer and a projector. The photoelectric sensors are embedded in the solid model and input their sensed electrical signals to the computer. The computer stores a three-dimensional texture image of the solid model; generates a group of black-and-white Gray-code images to be projected onto the solid model; collects and analyzes the photoelectric sensors' responses to the Gray-code images; decodes, for each photoelectric sensor, the binary sequence that yields the two-dimensional coordinate of the corresponding feature point of the solid model; computes a projective transformation matrix by combining these with the three-dimensional coordinates of the corresponding feature points; and applies the projective transformation to the three-dimensional virtual image of the solid model. The projector then projects the transformed three-dimensional virtual image onto the solid model. Because the photoelectric sensors are embedded in the solid model and the computer dynamically generates the projection matrix from their signals, the generated three-dimensional virtual image, after projective transformation, is projected accurately onto the solid model by the projector, producing an exquisite three-dimensional solid in which the virtual and the real are unified.

Description

Solid projection virtual-real registration device and method
Technical field
The present invention relates to a solid projection virtual-real registration device and method.
Background technology
Solid projection means projecting evenly and in real time onto an irregular surface: for example, using a projector to cast a car texture onto a car model, giving the car a magnificent appearance whose color, gloss and decoration can be changed at any time; or casting a three-dimensional advertisement onto a plain advertising model so that the advertising content can be changed at any time. Solid projection therefore has a powerful visual impact and can save exhibition costs.
Solid projection can also be combined with the actions of people at the exhibition site, organically blending the real scene with a virtual scene constructed by the computer: by recognizing the actions of people in the real scene, the virtual scene is changed, presenting scenes that combine the virtual and the real. It is mainly used at product launches and exhibitions of all kinds, to display comprehensively the appearance, internal structure and dynamic operation of existing or future products.
To realize solid projection, the key technology is making the image emitted by the projector fit the on-site solid model perfectly; that is, solving the virtual-real registration problem of solid projection. The virtual-real registration problem has been studied extensively in academia and is a key problem in the field of augmented reality.
Abroad, in 2006, Andel Miroslav and others developed interactive face-to-face augmented reality software based on camera phones. Holding the phone, a user can walk through an unfurnished room and experience the virtual decoration effect rendered on the phone screen. The system recognizes key points of the room through the phone camera and renders a virtual decoration effect, letting the user experience what-you-see-is-what-you-get. At present, however, this technology remains a research application and has not been brought to market.
In 2007, Reifinger and others proposed a method, based on an infrared tracking system, for automatically distinguishing and recognizing static and dynamic gestures. Infrared targets are mounted on the user's thumb and forefinger; after the system acquires and extracts the information, static gestures are classified by a distance classifier and dynamic gestures are extracted by a statistical model. Completing tasks with this gesture recognition method is faster than with keyboard and mouse, but hand comfort is slightly worse.
In 2008, Xu and others computed camera motion from a three-view constraint between reference images taken in advance and the current frame, proposing a method suitable for real-time tracking of six-degree-of-freedom camera pose in natural scenes.
In 2009, Abbate and others proposed a framework for a robust real-time human motion capture system. Intelligent inertial measurement sensor units are distributed over key positions of the human body, a single-chip microcomputer collects the sensor data, and a Kalman filtering method is used for localization.
In the same year, Zhang and others achieved localization with a method similar to Abbate's, the difference being that the data collected from the sensors are transferred to the computer through a USB interface. Also that year, Lu and others adopted real-time computation on the graphics processing unit (GPU), using information obtained from captured stereo images to determine the depth of objects in the real scene and to establish the relative position between virtual and real objects. The French augmented reality technology supplier Total Immersion has developed the augmented reality software DFusion, used in high-end film and television post-production for independent special-effects image compositing; but this software platform is very expensive, with a packaging fee of more than 100,000 per work, so its production cost and price do not suit current market demand.
As for augmented reality research, domestic work started later than work abroad. In 2006, Professor Wang Yongtian's group at the Beijing Institute of Technology used a wearable augmented reality system with a position tracker to realize an augmented reality exhibition of Yuanmingyuan Park. Subsequently, Chen Jing, Wang Yongtian and others proposed a real-time tracking registration algorithm based on natural feature points: on the basis of a known 3D model of the scene and a small number of calibrated keyframe images, the keyframe that best matches the current image is selected, and the camera's motion parameters are estimated in real time using keyframe-based image matching.
In 2007, Deng Shuguo and others at Jilin University studied human-body recognition against complex backgrounds in infrared photography. First, preprocessing: an active infrared camera acquires infrared images; median filtering removes image noise; a boundary-contrast adaptive histogram equalization method enhances the image; an opening operation removes burrs adhering to the human silhouette; Prewitt edge detection extracts the image contour; and a global method determines the gray threshold for binarizing the image. Then a chain-code tracking method performs human-model recognition: following the characteristics of the human contour, the head is searched first, then the shoulders, and finally the arms and legs. In 2008, researchers at Zhejiang University used a projector as the display device and an infrared camera as the optical sensor to design a multi-touch system based on projector and infrared camera, interacting through the information of hands, fingers and other objects projected onto the desktop.
First, the geometric mapping between the projection area in the frame buffer and the camera image space is established, and the captured image is transformed into the geometric space of the frame buffer. Then a background-subtraction method segments the transformed image, connected components are labeled, and the contour and oriented bounding box of each connected component are extracted and tracked in real time. Next, according to a skin-color model, the geometric position of the palm center is extracted from the connected components; using the distance relation between points and the palm center, together with mathematical knowledge and finger structure, feature information such as the length and width of the fingers, the spread angle of the fingertips and the angles between fingers is extracted from the contour map. Finally, a neural network identifies the fingers.
In 2009, Chen Yimin and others at Shanghai University proposed and realized a multi-user real-time interaction system based on augmented reality and special-shaped screens. They were the first to propose fusing AR technology with multiple special-shaped screen technologies in the exhibition and display field, studied and realized the techniques for multi-user real-time interaction in an AR system, and designed the AR interactive system's rendering platform, interaction platform and network communication platform. In the same year, Du Fengyi and others at the University of Electronic Science and Technology adopted a marker-based visual tracking method and an interaction mode conforming to manual operation, using various graphics drawing and rendering techniques to accomplish interaction between cultural-relic models and virtual information.
In 2010, Liu Yiran, Yang Xubo and others at Shanghai Jiao Tong University studied augmented reality technology based on multiple and mobile projectors, using a camera to sample environmental data and projector parameters and using multiple projectors for a common display.
Summary of the invention
The object of the present invention is to provide a solid projection virtual-real registration device and method that can simply and ideally achieve even, real-time projection onto an irregular solid surface.
The present invention is achieved as follows. A solid projection virtual-real registration device comprises a solid model, several photoelectric sensors, a computer and a projector. The photoelectric sensors are embedded in the solid model and input their sensed electrical signals to the computer. The computer sets the three-dimensional texture image of the solid model; produces a group of black-and-white Gray-code images to be projected onto the solid model; collects and analyzes the photoelectric sensors' responses to the Gray-code images; obtains the two-dimensional coordinate of the feature point of the solid model corresponding to each photoelectric sensor; computes the projective transformation matrix by combining these with the three-dimensional coordinates of the corresponding feature points; and applies the projective transformation to the three-dimensional virtual image of the solid model. The projector projects the transformed three-dimensional virtual image onto the solid model. Because the photoelectric sensors are embedded in the solid model and the computer dynamically generates the projection matrix from their signals, the generated virtual three-dimensional image, after projective transformation, is projected accurately onto the solid model by the projector, producing an exquisite three-dimensional solid in which the virtual and the real are unified.
Wherein, the computer includes:
a solid-model three-dimensional texture image generation unit, used to set the three-dimensional texture of the solid model and generate the three-dimensional texture image of the solid model;
a Gray-code image generation unit, used to produce, according to the Gray-code scanning algorithm, a group of black-and-white Gray-code images to be projected onto the solid model;
a projective transformation matrix generation unit, used to obtain, from the electrical signals produced when the photoelectric sensors on the solid model sense the projected light, the two-dimensional coordinate of the feature point corresponding to each photoelectric sensor, and to compute the projective transformation matrix by combining these with the three-dimensional coordinates of the corresponding feature points; and
a projective transformation unit, used to apply the projective transformation to the three-dimensional texture image of the solid model according to said projective transformation matrix.
The photoelectric sensors are connected to the computer by cable.
Each feature point of the solid model corresponds to one photoelectric sensor, whose optical fiber head is placed at the position of that feature point on the solid model.
There are 4 or 8 photoelectric sensors.
The fiber diameter of each photoelectric sensor is 1 mm.
A solid projection virtual-real registration method comprises the following steps:
setting the three-dimensional texture of the solid model and generating the three-dimensional texture image of the solid model;
producing a group of black-and-white binary Gray-code images according to the Gray-code scanning algorithm;
projecting the Gray-code binary images onto the surface of the solid model with the projector;
obtaining, from the electrical signals produced by the photoelectric sensors, the two-dimensional coordinate of the feature point corresponding to each photoelectric sensor, and computing the projective transformation matrix by combining these with the three-dimensional coordinates of the corresponding feature points; and
applying the projective transformation to the three-dimensional texture image of the solid model according to the projective transformation matrix and projecting the result onto the surface of the solid model.
The present invention combines photoelectric technology with computer technology to accomplish the virtual-real registration of solid projection quickly and accurately. Through the Gray-code scanning algorithm and the photoelectric sensors, it obtains the two-dimensional coordinate of the feature point corresponding to each photoelectric sensor; combining these with the three-dimensional coordinates of the corresponding feature points, it computes the projective transformation matrix; and, according to that matrix, it transforms the three-dimensional texture image of the solid model and projects it onto the surface of the model, thereby achieving even, real-time projection on an irregular solid surface and the effect of an exquisite three-dimensional solid in which the virtual and the real are unified.
Brief description of the drawings
Fig. 1 is a structural schematic of the solid projection virtual-real registration device provided by the embodiment of the invention;
Fig. 2 is a schematic of the computer of the device provided by the embodiment of the invention;
Fig. 3 is a schematic of the photoelectric sensor line card provided by the embodiment of the invention;
Fig. 4 is a sequence chart of the Gray-code images provided by the embodiment of the invention;
Fig. 5 is a flow chart of the solid projection virtual-real registration method provided by the embodiment of the invention;
Fig. 6 shows the effect of the solid projection virtual-real registration provided by the embodiment of the invention.
Embodiments
To make the object, technical scheme and advantages of the present invention clearer, the present invention is further elaborated below in conjunction with the accompanying drawings. It should be appreciated that the specific embodiments described here serve only to explain the present invention and are not intended to limit it.
Referring to Fig. 1, a solid projection virtual-real registration device comprises a solid model, several photoelectric sensors, a computer and a projector. The photoelectric sensors are embedded in the solid model and input their sensed electrical signals to the computer. The computer sets the three-dimensional texture image of the solid model; produces a group of black-and-white Gray-code images to be projected onto the solid model; collects and analyzes the photoelectric sensors' responses to the Gray-code images; obtains the two-dimensional coordinate of the feature point of the solid model corresponding to each photoelectric sensor; computes the projective transformation matrix by combining these with the three-dimensional coordinates of the corresponding feature points; and applies the projective transformation to the three-dimensional virtual image of the solid model. The projector projects the transformed three-dimensional virtual image onto the solid model.
The photoelectric sensors are connected to the computer by cable.
Each feature point of the solid model corresponds to one photoelectric sensor, whose optical fiber head is placed at the position of that feature point on the solid model.
There are 4 or 8 photoelectric sensors.
The fiber diameter of each photoelectric sensor is 1 mm.
Referring to Fig. 2, the computer includes a solid-model three-dimensional texture image generation unit, a Gray-code image generation unit, a projective transformation matrix generation unit and a projective transformation unit. For ease of explanation, only the parts relevant to the present invention are shown here.
The solid-model three-dimensional texture image generation unit is used to set the three-dimensional texture of the solid model and generate the three-dimensional texture image of the solid model. Before solid projection is carried out, the three-dimensional texture of the solid model is first set through this unit built into the computer, and the configured three-dimensional texture image is stored for later use.
The Gray-code image generation unit produces, according to the Gray-code scanning algorithm, a group of black-and-white Gray-code images to be projected onto the solid model. The Gray-code images are a series of grayscale binary images; each image consists of black and white stripes, and these stripes alternately divide the projection area.
The projective transformation matrix generation unit obtains, from the electrical signals produced when the photoelectric sensors on the solid model sense the projected light, the two-dimensional coordinate of the feature point corresponding to each photoelectric sensor, and computes the projective transformation matrix by combining these with the three-dimensional coordinates of the feature points.
After the Gray-code images are projected onto the surface of the solid model, light is guided in through the optical fiber of each photoelectric sensor; each photoelectric sensor produces a group of electrical signals which, after amplification, are input to the computer; the computer analyzes the input electrical signals, obtains the two-dimensional coordinate of the feature point corresponding to each photoelectric sensor, and, combining these with the three-dimensional coordinates of the feature points, computes the projective transformation matrix accordingly.
To obtain the position of each feature point of the solid model, 4 or 8 photoelectric sensors are embedded in the solid model, one per feature point. Fig. 3 shows a schematic of the photoelectric sensor line card: the light signal of photoelectric sensor 1 is guided in through optical fiber 3; the fiber head 2, packaged with a small electronic device, is placed at a feature point of the solid model. The area exposed on the solid model is so small that it is hard to notice without careful inspection and does not affect the viewing effect. The photoelectric sensor line card is fixed inside the solid model and connected to the computer through cable port 4.
Optical fiber 3 guides the light signal received at each feature point on the surface of the solid model to photoelectric sensor 1, which converts it into an electrical signal and amplifies it before passing it to the computer through cable port 4. Because each optical fiber is only 1 mm in diameter, a small electronic device can package it and place it conveniently, keeping the physical presence of the fiber heads on the surface of the solid model to a minimum.
When the Gray-code images are projected as a sequence, they uniquely identify the position of each pixel. A photoelectric sensor detects the presence or absence of projected light at each step, producing a bit sequence; decoding this sequence yields the x and y coordinates of the pixel matching the sensor's position.
A Gray-code image contains stripes of only two colors, black and white, corresponding respectively to 0 and 1 of a binary number. To distinguish every single-pixel-wide stripe, log2(n) binary Gray-coded images are needed, where n is the resolution of the Gray-code image.
For example, to distinguish each stripe at a resolution of 1024, log2(1024) = 10 Gray-coded images must be drawn.
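The construction of these log2(n) stripe images can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function names, the use of NumPy, and the convention "white pixel = bit 1" are assumptions made for the example.

```python
import numpy as np

def gray_code(i: int) -> int:
    """Binary-reflected Gray code of integer i."""
    return i ^ (i >> 1)

def gray_stripe_images(n: int, height: int = 8) -> list:
    """Build the log2(n) black/white stripe images that together assign a
    unique Gray-code bit sequence to each of n pixel columns
    (white pixel = bit 1, black pixel = bit 0)."""
    k = (n - 1).bit_length()               # log2(n) images for n columns
    codes = np.array([gray_code(c) for c in range(n)])
    images = []
    for bit in range(k - 1, -1, -1):       # most significant stripe pattern first
        row = ((codes >> bit) & 1).astype(np.uint8) * 255
        images.append(np.tile(row, (height, 1)))
    return images

# A resolution of 1024 columns needs log2(1024) = 10 images, as stated above.
imgs = gray_stripe_images(1024)
print(len(imgs))  # 10
```

Because consecutive Gray codes differ in exactly one bit, adjacent stripes differ in only one of the images, which makes the per-column code robust to a sensor sitting near a stripe boundary.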
To obtain the spatial coordinate information of the feature points of the solid model, the Gray-code stripes must be decoded. Decoding the Gray-code stripes means uniquely determining an n-bit binary decoded sequence from the time sequence of the n images.
Referring to Fig. 4, the stripe Gray codes can be decoded according to the correspondence of Gray-code values shown in Table 1. From the code values in Table 1, the decoding sequence of Line 1 in Fig. 4 is 11111..., that of Line 2 is 10011..., and that of Line 3 is 01001....
Table 1 (correspondence of Gray-code values; reproduced as an image in the original publication)
From the above decoding sequences, the pixel coordinate of a feature point in the Y direction can be obtained. The method of obtaining a coordinate value from a decoding sequence is simply the conversion of Gray code into binary, called the decoding method. It is implemented as follows: starting from the second bit from the left, each bit is XORed with the decoded value to its immediate left to give its decoded value (the leftmost bit remains unchanged).
In mathematical (computer) terms:
coordinate value (binary): p[0..n]; decoding sequence (Gray code): c[0..n] (n ∈ N); written from left to right, the index decreases.
Decoding:
p[n] = c[n],
p[i] = c[i] XOR p[i+1]   (i ∈ N, n-1 ≥ i ≥ 0).
Similarly, by changing the projection direction of the Gray-coded images, the pixel coordinate in the X direction can be obtained.
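The decoding rule above (leftmost bit kept; every later bit XORed with the decoded bit to its left) can be written as a short routine. This is a sketch with hypothetical function names, assuming bit lists are given most-significant-bit first:

```python
def gray_to_binary(c):
    """Decode a Gray-code bit sequence c (most significant bit first):
    the leftmost bit is kept; each later bit is XORed with the decoded
    bit immediately to its left."""
    b = [c[0]]
    for bit in c[1:]:
        b.append(bit ^ b[-1])
    return b

def bits_to_int(bits):
    """Interpret an MSB-first bit list as an integer pixel coordinate."""
    value = 0
    for bit in bits:
        value = (value << 1) | bit
    return value

# Gray code 111 decodes to binary 101, i.e. pixel coordinate 5.
print(bits_to_int(gray_to_binary([1, 1, 1])))  # 5
```

Running the same decoder on the bit sequence collected from the horizontally striped image set gives the X coordinate; on the vertical set, the Y coordinate.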
Once the vertex positions on the surface of the solid model have been determined, a projective transformation matrix can be computed to transform the source image so that it fits the projection surface. Even if the projector is inverted or rotated, or its light path is reflected by a mirror, the transformation matrix automatically adjusts the image to keep it correctly matched to the surface of the solid model.
Let (X_w, Y_w, Z_w) be the position of a feature point of the solid model in the three-dimensional world coordinate system and (x_c, y_c) the position of that feature point in the two-dimensional screen coordinate system. The transformation relation between the two is:
$$
\begin{bmatrix} x_c \\ y_c \\ 1 \end{bmatrix}
= \lambda\, C\, T_{cm}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
= \lambda\, C \begin{bmatrix} R_1 & R_2 & R_3 & T \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
= \lambda
\begin{bmatrix} f_u & 0 & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
$$
where λ is a scale factor, C is the unknown camera intrinsic parameter matrix, T_cm is the three-dimensional registration matrix, R_1, R_2, R_3 are the rotation components, and T is the translation component.
This can further be expressed as:
$$
\begin{bmatrix} x_c \\ y_c \\ 1 \end{bmatrix}
= H_w
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
$$
H_w is the projective transformation matrix. Once the world coordinates of 4 or more feature points and the corresponding two-dimensional screen coordinates are known, H_w can be obtained.
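How such a matrix can be recovered from point correspondences is illustrated below for the common planar case (all feature points with Z_w = 0), in which H_w reduces to a 3x3 homography determined by 4 correspondences. The direct linear transform (DLT) shown here is a standard textbook technique, offered as a sketch rather than as the patent's own algorithm; the function names and the use of NumPy are assumptions.

```python
import numpy as np

def estimate_homography(world_pts, screen_pts):
    """DLT: find the 3x3 matrix H with [x_c, y_c, 1]^T ~ H [X_w, Y_w, 1]^T
    from four or more point correspondences (planar case, Z_w = 0)."""
    rows = []
    for (X, Y), (x, y) in zip(world_pts, screen_pts):
        rows.append([X, Y, 1, 0, 0, 0, -x * X, -x * Y, -x])
        rows.append([0, 0, 0, X, Y, 1, -y * X, -y * Y, -y])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)      # null-space vector holds the entries of H
    return H / H[2, 2]            # fix the free scale factor lambda

def project(H, pt):
    """Apply H to a 2D point and perform the perspective division."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]
```

With the screen coordinates decoded from the Gray-code scan as `screen_pts` and the known feature points of the solid model as `world_pts`, `estimate_homography` plays the role of the projective transformation matrix generation unit in this simplified planar setting.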
The projective transformation unit applies the projective transformation to the three-dimensional texture image of the solid model according to the projective transformation matrix. With a 3D programming environment such as OpenGL or DirectX, real-time image deformation effects can be obtained on inexpensive hardware.
Fig. 5 illustrates a solid projection virtual-real registration method provided by the embodiment of the invention, comprising the following steps:
501: set the three-dimensional texture of the solid model and generate the three-dimensional texture image of the solid model.
Before solid projection is carried out, the three-dimensional texture of the solid model is first set through the computer, and the configured three-dimensional texture image is stored for later use.
502: produce a group of black-and-white Gray-code images according to the Gray-code scanning algorithm.
The Gray-code images are a series of grayscale binary images; each image consists of black and white stripes, and these stripes alternately divide the projection area.
503: project the Gray-code images onto the surface of the solid model with the projector.
504: obtain, from the electrical signals produced by the photoelectric sensors, the two-dimensional coordinate of the feature point corresponding to each photoelectric sensor, and compute the projective transformation matrix by combining these with the three-dimensional coordinates of the corresponding feature points.
After the Gray-code images are projected onto the surface of the solid model, light is guided in through the optical fiber of each photoelectric sensor; each photoelectric sensor produces a group of electrical signals which, after amplification, are input to the computer; the computer analyzes the input electrical signals, obtains the two-dimensional coordinate of the feature point corresponding to each photoelectric sensor, and, combining these with the three-dimensional coordinates of the corresponding feature points, computes the projective transformation matrix accordingly. The specific implementation is as described above and is not repeated here.
505: apply the projective transformation to the three-dimensional texture image of the solid model according to the projective transformation matrix and project the result onto the surface of the solid model.
With a 3D programming environment such as OpenGL or DirectX, real-time image deformation effects can be obtained on inexpensive hardware. Fig. 6 shows the effect of the solid projection virtual-real registration provided by the embodiment of the invention.
The present invention can also change the projective transformation matrix dynamically through real-time interaction, so that the projected content tracks the motion of the projected body in real time.
At a product launch or exhibition site, the solid model (its surface inlaid with fiber heads), the projector and the computer are installed and connected according to the spatial layout of the site. The computer produces a group of black-and-white binary Gray-code images according to the Gray-code scanning algorithm and projects them onto the solid model through the projector; the fiber heads on the solid model guide the light to the photoelectric sensors, which produce a group of electrical signals; the computer analyzes this group of signals, obtains the precise two-dimensional coordinate of the fiber head corresponding to each photoelectric sensor, computes the projective transformation matrix by combining these with the three-dimensional coordinates of the corresponding feature points, applies the projective transformation to the three-dimensional virtual image matched to the solid model, and projects the transformed image onto the solid model through the projector; the position of the image and the position of the solid model are then completely registered.
By using embedded photoelectric sensors, several projectors can project onto the same solid model from different angles. Because each projected image is registered with the solid model, the projected images are seamlessly stitched, clothing the solid model in a magnificent appearance with a strong stereoscopic effect and a strong sense of reality; color, gloss and decoration can also be changed dynamically, giving a rich, dynamic display.
If solid projection virtual-real calibration is carried out by manually adjusting the position and orientation of the projector, a skilled person needs 15-20 minutes of careful adjustment, and sometimes the optimum effect is not obtained even after countless attempts. Moreover, since the projector cannot be fixed reliably, the adjustment must be redone as soon as the projector or the solid model shifts.
With the registration method provided by the present invention, the projector can be fixed in any suitable position; after the registration command is selected through the computer interface, the whole virtual-real registration process (projecting the Gray-code images, computing the projective transformation matrix, and accurately projecting the virtual three-dimensional image onto the solid model) is completed in 2-3 seconds.
If no abnormal movement of the projector or the solid model occurs, the same projective transformation matrix is used throughout; if an accident does occur, recalibration takes only 2-3 seconds and does not interfere with the real-time needs of an exhibition or product launch. With manual adjustment, by contrast, a 15-20 minute pause would be unacceptable to the audience, and such an accident would directly cause the launch to fail.
What is disclosed above is only a specific embodiment of the present invention, but the present invention is not limited thereto; any variation that a person of ordinary skill in the art makes without departing from the principle of the invention shall be considered to fall within the protection scope of the present invention.

Claims (7)

1. A solid projection virtual-real registration device, comprising a solid model, several photoelectric sensors, a computer and a projector, wherein: the solid model is an object without surface pattern; the photoelectric sensors are embedded in the solid model and are used for inputting induced electric signals to the computer; the computer is used for setting a three-dimensional texture image of the solid model, generating a group of black-and-white Gray code images to be projected onto the solid model, analyzing and processing the image-induced electric signals input by the photoelectric sensors, obtaining the two-dimensional coordinate of the solid model feature point corresponding to each photoelectric sensor, calculating a projective transformation matrix in combination with the three-dimensional coordinates of the corresponding feature points of the solid model, and performing projective transformation on the three-dimensional virtual image of the solid model; and the projector is used for projecting the Gray code images and the projectively transformed three-dimensional virtual image onto the solid model.
2. The solid projection virtual-real registration device according to claim 1, wherein the computer comprises:
a solid model three-dimensional texture image generation unit, used for generating the three-dimensional texture image of the solid model;
a Gray code image generation unit, used for generating, according to the Gray code scanning algorithm, a group of black-and-white Gray code images to be projected onto the solid model;
a projective transformation matrix generation unit, used for obtaining, from the electric signals that the photoelectric sensors on the solid model produce in response to the projected light, the two-dimensional coordinate of the solid model feature point corresponding to each photoelectric sensor, and calculating the projective transformation matrix in combination with the three-dimensional coordinates of the corresponding feature points; and
a projective transformation unit, used for performing projective transformation on the three-dimensional texture image of the solid model according to the projective transformation matrix.
3. The solid projection virtual-real registration device according to claim 1, wherein the photoelectric sensors are connected to the computer through cables.
4. The solid projection virtual-real registration device according to claim 1, wherein each feature point of the solid model corresponds to one photoelectric sensor, and the optical fiber head of that photoelectric sensor is located at the feature point position on the solid model.
5. The solid projection virtual-real registration device according to claim 1 or 4, wherein the number of photoelectric sensors is 4 or 8.
6. The solid projection virtual-real registration device according to claim 4, wherein the optical fiber diameter of the photoelectric sensor is 1 mm.
7. A solid projection virtual-real registration method, comprising the following steps:
setting the three-dimensional texture of the solid model and generating the three-dimensional texture image of the solid model;
generating a group of black-and-white Gray code binary images according to the Gray code scanning algorithm;
projecting the Gray code binary images onto the surface of the solid model by means of the projector;
obtaining, from the electric signals produced by the photoelectric sensors, the two-dimensional coordinate of the solid model feature point corresponding to each photoelectric sensor, and calculating a projective transformation matrix in combination with the three-dimensional coordinates of the corresponding feature points; and
performing projective transformation on the three-dimensional texture image of the solid model according to the projective transformation matrix, and projecting the transformed image onto the surface of the solid model.
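For orientation only (a minimal sketch under assumed conventions, not the algorithm specified by the claims): given the decoded two-dimensional projector coordinates of at least six feature points and their known three-dimensional model coordinates, a 3x4 projective transformation matrix can be estimated with the direct linear transform (DLT):

```python
import numpy as np

def dlt_projection_matrix(pts3d, pts2d):
    """Estimate P (3x4, up to scale) such that [u, v, 1]^T ~ P [X, Y, Z, 1]^T,
    from n >= 6 non-degenerate 3D-2D point correspondences (DLT)."""
    rows = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # Solution = right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 4)

def project(P, X):
    """Apply the projective transformation to a 3D point, returning (u, v)."""
    x = P @ np.append(np.asarray(X, dtype=float), 1.0)
    return x[:2] / x[2]
```

Once `P` is estimated, rendering the three-dimensional texture image through it makes each textured feature land on its physical counterpart; presumably, with four coplanar feature points a 3x3 planar homography would be estimated instead, while eight points over-determine the full 3x4 matrix.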
CN201110170857A 2011-06-23 2011-06-23 Solid projection false or true registration device and method Pending CN102306088A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110170857A CN102306088A (en) 2011-06-23 2011-06-23 Solid projection false or true registration device and method


Publications (1)

Publication Number Publication Date
CN102306088A true CN102306088A (en) 2012-01-04

Family

ID=45379954

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110170857A Pending CN102306088A (en) 2011-06-23 2011-06-23 Solid projection false or true registration device and method

Country Status (1)

Country Link
CN (1) CN102306088A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1701603A * 2003-08-06 2005-11-23 Mitsubishi Electric Corporation Method and system for determining correspondence between locations on display surface having arbitrary shape and pixels in output image of projector
CN1934459A * 2004-07-01 2007-03-21 Mitsubishi Electric Corporation Wireless location and identification system and method
CN101363716A * 2008-09-26 2009-02-11 Huazhong University of Science and Technology Combination space precision measurement system


Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104135657A (en) * 2014-07-10 2014-11-05 上海大学 Information-driven entity three-dimensional display device
CN107251098A (en) * 2015-03-23 2017-10-13 英特尔公司 The true three-dimensional virtual for promoting real object using dynamic 3 D shape is represented
CN106816077A (en) * 2015-12-08 2017-06-09 张涛 Interactive sandbox methods of exhibiting based on Quick Response Code and augmented reality
CN106816077B (en) * 2015-12-08 2019-03-22 张涛 Interactive sandbox methods of exhibiting based on two dimensional code and augmented reality
CN105869160A (en) * 2016-03-28 2016-08-17 武汉理工大学 Method and system for implementing 3D modeling and holographic display by using Kinect
CN105869160B (en) * 2016-03-28 2019-11-26 武汉理工大学 The method and system of three-dimensional modeling and holographic display are realized using Kinect
CN106056663A (en) * 2016-05-19 2016-10-26 京东方科技集团股份有限公司 Rendering method for enhancing reality scene, processing module and reality enhancement glasses
CN106056663B (en) * 2016-05-19 2019-05-24 京东方科技集团股份有限公司 Rendering method, processing module and augmented reality glasses in augmented reality scene
US10573075B2 (en) 2016-05-19 2020-02-25 Boe Technology Group Co., Ltd. Rendering method in AR scene, processor and AR glasses
CN109923500B (en) * 2016-08-22 2022-01-04 奇跃公司 Augmented reality display device with deep learning sensor
US11797078B2 (en) 2016-08-22 2023-10-24 Magic Leap, Inc. Augmented reality display device with deep learning sensors
CN109923500A (en) * 2016-08-22 2019-06-21 奇跃公司 Augmented reality display device with deep learning sensor
CN106570897A (en) * 2016-11-04 2017-04-19 西安中科晶像光电科技有限公司 Multi-display module image automatic registration method
CN109001676A (en) * 2018-05-31 2018-12-14 北京科技大学 A kind of robot localization navigation system
CN109001676B (en) * 2018-05-31 2020-08-21 北京科技大学 Robot positioning navigation system
CN111083453A (en) * 2018-10-18 2020-04-28 中兴通讯股份有限公司 Projection device, method and computer readable storage medium
CN111083453B (en) * 2018-10-18 2023-01-31 中兴通讯股份有限公司 Projection device, method and computer readable storage medium
CN111182288B (en) * 2018-11-09 2021-07-23 上海云绅智能科技有限公司 Space object imaging method and system
CN111182288A (en) * 2018-11-09 2020-05-19 上海云绅智能科技有限公司 Space object imaging method and system
CN111414873B (en) * 2020-03-26 2021-04-30 广州粤建三和软件股份有限公司 Alarm prompting method, device and alarm system based on wearing state of safety helmet
CN111414873A (en) * 2020-03-26 2020-07-14 广州粤建三和软件股份有限公司 Alarm prompting method, device and alarm system based on wearing state of safety helmet
CN111899347A (en) * 2020-07-14 2020-11-06 四川深瑞视科技有限公司 Augmented reality space display system and method based on projection
CN112508071A (en) * 2020-11-30 2021-03-16 中国公路工程咨询集团有限公司 BIM-based bridge disease marking method and device
CN112508071B (en) * 2020-11-30 2023-04-18 中国公路工程咨询集团有限公司 BIM-based bridge disease marking method and device
CN113380088A (en) * 2021-04-07 2021-09-10 上海中船船舶设计技术国家工程研究中心有限公司 Interactive simulation training support system

Similar Documents

Publication Publication Date Title
CN102306088A (en) Solid projection false or true registration device and method
CN105843386B (en) A kind of market virtual fitting system
CN100594519C (en) Method for real-time generating reinforced reality surroundings by spherical surface panoramic camera
Moons et al. 3D reconstruction from multiple images part 1: Principles
CN101310289B (en) Capturing and processing facial motion data
Bostanci et al. Augmented reality applications for cultural heritage using Kinect
CN103116857B (en) A kind of virtual show house roaming system controlled based on body sense
Asayama et al. Fabricating diminishable visual markers for geometric registration in projection mapping
Portalés et al. Augmented reality and photogrammetry: A synergy to visualize physical and virtual city environments
KR20200012043A (en) Head-mounted display for virtual and mixed reality with inside-out positional, user body and environment tracking
CN102568026A (en) Three-dimensional enhancing realizing method for multi-viewpoint free stereo display
KR20180108709A (en) How to virtually dress a user's realistic body model
CN104952111A (en) Method and apparatus for obtaining 3D face model using portable camera
CN106504073A (en) House for sale based on AR virtual reality technologies is investigated and decorating scheme Ask-Bid System
CN106504337A (en) House for sale based on AR virtual reality technologies is investigated and collaboration decorations system
CN108564662A (en) The method and device that augmented reality digital culture content is shown is carried out under a kind of remote scene
Schall et al. A survey on augmented maps and environments: approaches, interactions and applications
CN107330980A (en) A kind of virtual furnishings arrangement system based on no marks thing
CN102509224A (en) Range-image-acquisition-technology-based human body fitting method
Park et al. " DreamHouse" NUI-based Photo-realistic AR Authoring System for Interior Design
Lin et al. Extracting 3D facial animation parameters from multiview video clips
CN106875461A (en) One kind is tinted plane picture 3D model transformation systems and method
Wang et al. Digital Longmen project: A free walking VR system with image-based restoration
Hsu et al. HoloTabletop: an anamorphic illusion interactive holographic-like tabletop system
Zhang et al. Virtual lighting environment and real human fusion based on multiview videos

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20120104