CN104050859A - Interactive digital stereoscopic sand table system - Google Patents

Interactive digital stereoscopic sand table system

Info

Publication number
CN104050859A
CN104050859A
Authority
CN
China
Prior art keywords
gesture
dimensional
stereo
interactive
sand table
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410193904.0A
Other languages
Chinese (zh)
Inventor
王元庆
董辰辰
李异同
陆大伟
马换
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Original Assignee
Nanjing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University filed Critical Nanjing University
Priority to CN201410193904.0A priority Critical patent/CN104050859A/en
Publication of CN104050859A publication Critical patent/CN104050859A/en
Pending legal-status Critical Current

Abstract

An interactive digital stereoscopic sand table system is composed of a stereoscopic image generation system and a human-computer interaction system. The stereoscopic image generation system uses OpenGL or another 3D engine to render three-dimensional terrain, applies eye tracking and directional-light technology to achieve unaided (glasses-free) stereoscopic display, and uses a series of image acceleration techniques to speed up rendering. Three-dimensional building models are placed in the target area of the three-dimensional terrain, with accurate spatial distribution and good visual effect. The stereoscopic image generation system forms a naked-eye stereoscopic display. The human-computer interaction system controls the three-dimensional terrain scene or three-dimensional model data in the stereoscopic image generation system through gestures, implemented via a preset gesture recognition interface. Corresponding gestures zoom in, zoom out, translate, rotate or enter the three-dimensional terrain scene or model data, and gestures with distinguishable control meanings can be configured as required.

Description

Interactive digital stereoscopic sand table system
Technical field
The present invention relates to technology for generating autostereoscopic (naked-eye) 3D scenes and for human-computer interaction, and belongs to the fields of information display technology and human-computer interaction.
Background technology
A sand table is a tool for expressing geographic information such as the three-dimensional distribution of terrain and the state of ground targets; its main content is terrain data, allowing people to grasp macroscopic things from a miniature viewpoint. A sand table is usually built as a scale model according to a topographic map, aerial photograph or the actual terrain, and has the advantages of being intuitive, simple to produce, and economical.
Traditional sand tables made of sand, soil, war-gaming pieces and other materials are slow to produce and inconvenient to transport, and can no longer meet the needs of present and future informationized battlefields. Modern warfare in particular has developed toward large regions, three dimensions and multiple services; war processes change rapidly and operational opportunities are fleeting. Developing a battlefield-situation expression technology adapted to the needs of modern warfare is therefore an urgent demand.
With the development of digitization and information technology, interactive digital sand tables have emerged. In recent years they have been increasingly applied in planning and construction, education, and especially the military field. By integrating virtual environments with natural interaction between people and the operational environment, they largely solve problems of real operations and training, such as excessive cost and environmental constraints, and have received growing attention from militaries around the world. However, current digital sand tables are all planar and cannot accurately express a three-dimensional scene.
The following is a comparative analysis of similar domestic patents, conducted through the Nanjing University novelty-search website.
Patent [1] (CN200920301807.3[P], 2010-7-28) proposed a "real-time interactive imaging technology" comprising an imaging module, an interaction module and a base module. The imaging module is a projection unit; the interaction module is divided into contactless and contact interaction, the contactless mode using an infrared camera to capture images and the contact mode using a touch screen as the input device, with results sent to the imaging module through an image output module. The base module is made of a material suitable for imaging by the imaging module.
Patent [2] (CN201110233828.8[P], 2013-2-20) proposed "an interactive electronic sand table gesture recognition method", in which gestures are used to interact with a production-level electronic sand table: different gesture motions draw light-spot trajectories on the sand table, pattern recognition is applied to these trajectories to realize gesture recognition, and the gestures then control the electronic sand table. Trajectory tracking uses a minimum-distance algorithm, and the pattern recognition applied to the light-spot trajectories uses a decision-tree algorithm.
Patent [3] (CN201110233542.X[P], 2012-9-5) proposed "an interactive sand table system based on gesture recognition", in which a sand table and a ring screen display content sent by a control terminal. A camera on the sand table captures the user's gesture information and sends it to the control terminal; based on the time the instruction is received, the control terminal parses the gesture information into the corresponding control command and sends it to the relevant controlled terminal, and the sand table and ring screen receive and execute the command. The interactive large projection screen uses a large ring-screen design with multiple projectors, combined with synchronized projection control, curved-surface correction and edge blending, to achieve seamless, synchronized projection on the very large ring screen.
Patent [1] contains no gesture recognition; its interaction is realized through a touch screen, and its 3D effect is produced by a projector array. Patent [2] is mainly about gesture recognition; its algorithms differ greatly from those of the present invention and it does not involve the display of 3D images. Patent [3] uses a ring-screen design and projection technology for synchronized projection; its key techniques, such as acquiring gesture information with a distant camera, differ substantially from those adopted in the present invention, and the displayed image is not stereoscopic.
In summary, the present invention has the following features:
The system generates stereoscopic images in real time and displays the stereoscopic scene in naked-eye 3D mode. Under the user's gesture operations it can perform not only basic operations such as zooming in, zooming out and rotation, but also functions such as marking, annotation and measurement.
The present invention involves three technologies: 1. human-computer interaction; 2. naked-eye stereoscopic display; 3. real-time stereoscopic image generation.
Human-computer interaction: as an independent and important research field it has received wide attention worldwide, and many interaction modalities have been developed. People can use devices such as keyboards, mice, joysticks, position trackers and data gloves to control related equipment and to issue various commands and requests through human-computer interaction devices. Since the late 1990s, with the rapid development and popularization of high-speed processors, multimedia technology and Internet/Web technology, the research emphasis of human-computer interaction has shifted to intelligent assistance, multimodal (multichannel) multimedia interaction, virtual interaction and human-machine cooperation, that is, to human-centered interaction technology. This field is still at an early stage and developing rapidly worldwide, and has begun to be applied gradually (for example in film and entertainment). Domestic design and research in human-computer interaction still lags behind comparable work abroad and lacks new interaction techniques, and applications of unaided naked-eye stereoscopic display based on directional-light and eye-detection technology are not yet mature.
Naked-eye 3D display: current 3D technology is relatively mature and 3D products are increasingly available on the market, most widely in film and television. Although stereoscopic imaging technology can provide a sense of depth, it essentially presents two or more planar images separated in space and forms a 3D impression through parallax. Such technology not only requires aids such as polarizing glasses, which reduce viewing comfort and impose significant operational limitations, but also creates a conflict between vergence and the focal adjustment of the crystalline lens, so that prolonged viewing causes visual fatigue.
Summary of the invention
The object of the present invention is to propose an interactive digital stereoscopic sand table system that uses naked-eye 3D technology and touch-style interaction technology to achieve highly realistic stereoscopic images and high-precision human-computer interaction. A large horizontal display station (for example 55 inches) produces the 3D effect, and the sand table model seen by the eye appears suspended above the screen. A gesture recognition device is built into the frame of the display station; by recognizing gestures, the user can directly "touch" the 3D image he or she sees and interact with it, for example by rotation, translation, zooming or scene walkthrough, and by zooming in on a building can directly observe its interior details, as if the operator were placed inside a real scene.
The technical solution of the present invention is an interactive digital stereoscopic sand table system composed mainly of two parts, a stereoscopic image generation system and a human-computer interaction system;
1) Stereoscopic image generation system
The stereoscopic image generation system uses OpenGL, OpenCV, DirectX or another 3D engine to render three-dimensional terrain, applies eye tracking and directional-light technology to achieve unaided stereoscopic display, and uses a series of image acceleration techniques to speed up rendering. Three-dimensional building models are placed in the target area of the terrain, with accurate spatial distribution and good visual effect. The stereoscopic image generation system forms a naked-eye stereoscopic display.
The stereoscopic image generation system is independently programmed and the sand table model is highly detailed. It can display not only the complete terrain but also military models used in warfare, such as tanks and aircraft, with rich battle visual effects. A preset gesture recognition interface controls the three-dimensional terrain scene or three-dimensional model data in the stereoscopic image generation system through gestures. Because the terrain and military models are very complex, the data volume of the program is very large. For such data the system adopts an innovative data management scheme: the three-dimensional terrain scene is unified while the three-dimensional military model data are partitioned into blocks and loaded hierarchically, so that the system is an organic whole yet still runs fast, solving the conflict between massive data and operating efficiency.
Stereoscopic images are obtained by exploiting the slight difference between the images seen by the left and right eyes, which the brain fuses into a single image with depth. Virtual cameras photograph the simulated scene; coordinate transformations yield the left and right images, and parallax control is applied to them; based on this parallax mechanism, OpenGL, OpenCV, DirectX or another 3D engine renders the terrain (with rendering acceleration). Because a 3D image cannot be fused in the brain when the parallax is too large, the left and right images are parallax-controlled (the separation of corresponding points in the two images is limited). Because movement of the viewer's eyes shifts the convergence point of the images and distorts the stereoscopic image, eye tracking is used to keep the convergence point of the left and right images fixed, thereby realizing interactive stereoscopic image generation.
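As an illustration only (not part of the original disclosure), the following minimal Python sketch shows the left/right image generation and parallax-control idea described above with a simple pinhole model; the zero-parallax plane distance, the 65 mm eye separation and the 10 mm disparity limit are assumptions chosen for the example.

```python
import numpy as np

def project(points, eye_x, screen_z):
    """Project 3D points (N,3), z > 0, through a pinhole eye at (eye_x, 0, 0)
    onto the zero-parallax plane z = screen_z. Returns screen (x, y), shape (N,2)."""
    p = np.asarray(points, dtype=float)
    t = screen_z / p[:, 2]                  # ray parameter where the ray meets the screen plane
    x = eye_x + (p[:, 0] - eye_x) * t
    y = p[:, 1] * t
    return np.stack([x, y], axis=1)

def stereo_pair(points, eye_sep, screen_z, max_disparity):
    """Left/right projections with parallax control: if the largest screen
    disparity exceeds the comfort limit, the virtual eye separation is scaled
    down (disparity is linear in the separation, so one rescale suffices)."""
    half = eye_sep / 2.0
    left, right = project(points, -half, screen_z), project(points, +half, screen_z)
    worst = np.max(np.abs(right[:, 0] - left[:, 0]))
    if worst > max_disparity:
        half *= max_disparity / worst
        left, right = project(points, -half, screen_z), project(points, +half, screen_z)
    return left, right

# Example: terrain-like points straddling the zero-parallax plane at z = 500 mm
pts = np.array([[0.0, 0.0, 400.0], [50.0, 20.0, 500.0], [-30.0, 10.0, 800.0]])
left_img, right_img = stereo_pair(pts, eye_sep=65.0, screen_z=500.0, max_disparity=10.0)
print(right_img[:, 0] - left_img[:, 0])   # per-point horizontal disparity, largest magnitude clamped to 10
```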
2) Human-computer interaction system
The human-computer interaction system controls the three-dimensional terrain scene or three-dimensional model data in the stereoscopic image generation system through gestures, implemented via a preset gesture recognition interface. The interactive digital stereoscopic sand table system is flexible to operate and highly interoperable. Corresponding gestures zoom in, zoom out, translate, rotate or enter the terrain scene or model data, and gestures with distinguishable control meanings can be configured as required. A stereoscopic camera captures three-dimensional gesture information; a midpoint compensation algorithm processes the raw sample points; a hidden Markov model is built; and the spatial distribution features of the gesture are compared for similarity against a sample library to identify the specific meaning of the gesture.
The system also includes control through natural gestures: the user can move forward or backward in the three-dimensional scene, raise or lower the viewpoint, and enter the terrain scene or model data to observe interior details (for example, observing the interior of a building and, from inside a room, seeing the surrounding buildings and landscape, giving the user an immersive interactive experience). Figure 11 shows the gesture operations that can be realized. The non-intrusive gesture recognition device and the real-time generation of stereoscopic content together form real-time interactive human-machine stereoscopic images.
The naked-eye stereoscopic display is fitted with a non-intrusive gesture recognition device; by recognizing gestures, real-time multi-user interaction with the 3D image is achieved. Naked-eye 3D technology and non-intrusive gesture recognition together realize highly realistic stereoscopic images and high-precision human-computer interaction, and the sand table model seen by the eye appears suspended above the naked-eye stereoscopic display.
In the described human-computer interaction, the user activity region is recorded in real time with a large viewing angle and large dynamic range, and an inter-frame difference recognition method is used to achieve real-time gesture control of the three-dimensional scene.
Eye tracking and directional-light technology are adopted; the detection of eye position is contactless, and the user does not need to wear any auxiliary device.
The system has a basic camera module for gesture detection, a description of gesture motion trajectories based on visual perception, and gesture understanding and recognition algorithms that automatically recognize common gestures. Near-field gesture interaction is used: multiple gesture recognition cameras capture gestures, non-intrusive gesture recognition is achieved, and gesture localization accuracy reaches the centimeter level.
The system adopts near-field gesture interaction technology: combined with computer vision detection it detects the positions of the fingers and guides the stereoscopic image generation module to generate the corresponding stereo pair, meeting the technical requirement of controllable interaction. It has a basic camera module for gesture detection that records the user activity region in real time with a large viewing angle and large dynamic range; a description of gesture motion trajectories based on visual perception, which forms a basic point-cloud description of the gesture; and gesture understanding and recognition algorithms that automatically recognize common gestures.
According to the user's real-time dynamics and gesture changes, the naked-eye stereoscopic display updates the displayed scene in real time.
The beneficial effects of the invention are: because near-field gesture recognition is used, recognition accuracy is very high, and fingers and hand-held objects can be distinguished and processed separately. Gesture localization accuracy reaches the centimeter level. Because the recognition range of a single gesture recognition device is limited, multiple gesture recognition cameras capture gestures, and gestures in different regions are segmented and processed separately; the inter-frame difference method extracts the motion vector of the gesture, and a hidden Markov model (HMM) is used for processing. Real-time algorithms make the image change in real time according to the gestures.
Brief description of the drawings
Fig. 1 Flowchart of the transformations from the world coordinate system to the camera coordinate system, from the camera coordinate system to the image coordinate system, and from the image coordinate system to the display coordinate system.
Fig. 2 3D height map generated by OpenGL from elevation data.
Fig. 3 Interaction relationships of the stereoscopic display and human-computer interaction of the present invention.
Fig. 4 Schematic diagram of the convergent (toed-in) stereoscopic camera model.
Fig. 5 Schematic diagram of the parallel stereoscopic camera model.
Fig. 6 Schematic diagram of the viewing model.
Fig. 7 Schematic diagram of image distortion.
Fig. 8 A-D: schematic diagrams of four kinds of stereoscopic image distortion.
Fig. 9 A and B: the image distortion model and the distortion correction processing, respectively.
Fig. 10 A-D: four examples of gesture detection.
Fig. 11 Explanation of the gesture motions.
In the figures: PC terminal 1, video interface 2, parallax illumination 3, naked-eye 3D display screen 4, virtual object 5, eye tracking device 6, gesture detector 7.
Detailed description of the embodiments
1) Gesture localization and understanding
A skin-color detection method extracts the approximate region of the user's hand, and the inter-frame difference method extracts the motion vector of the gesture: the difference of the gray values of two adjacent frames of the video stream yields the moving region of the hand, and image matching between the views of the stereoscopic camera yields the three-dimensional coordinates of the fingers. Through these means the three-dimensional point cloud of the gesture is finally determined.
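A minimal sketch (for illustration, not from the original disclosure) of the inter-frame difference step on grayscale frames; the fixed threshold and the centroid-based motion vector are simplifying assumptions, and the skin-color and stereo-matching stages are not shown.

```python
import numpy as np

def moving_region(prev_gray, curr_gray, threshold=25):
    """Binary mask of moving pixels from the absolute gray-value
    difference of two consecutive frames."""
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    return diff > threshold

def region_centroid(mask):
    """Centroid (x, y) of a binary mask, or None if the mask is empty."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

def gesture_motion(frames, threshold=25):
    """Per-frame motion vectors of the moving (hand) region: the change in
    centroid of the frame-difference mask between successive frame pairs."""
    vectors, last = [], None
    for prev, curr in zip(frames, frames[1:]):
        c = region_centroid(moving_region(prev, curr, threshold))
        if c is not None and last is not None:
            vectors.append((c[0] - last[0], c[1] - last[1]))
        last = c if c is not None else last
    return vectors

# Example: a bright 10x10 block moving 3 px to the right per frame on a dark background
def frame(offset, size=64):
    img = np.zeros((size, size), dtype=np.uint8)
    img[20:30, offset:offset + 10] = 200
    return img

print(gesture_motion([frame(5), frame(8), frame(11)]))   # roughly (3.0, 0.0) per step
```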
Gesture understanding mainly consists of resampling the gesture, extracting features, and building a hidden Markov model (HMM, a statistical analysis model) to analyze the sampled points. The overall goal of the gesture recognition module is to build a robust classifier that classifies and recognizes free-hand gestures. In the resampling of the gesture, a midpoint compensation algorithm processes the raw sample points so that the sampled points better reflect changes in curvature while keeping the size of the point set under control; an efficient HMM based on direction coding is then built, ensuring that the observation sequence is sufficiently regular to meet the needs of HMM modeling.
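A sketch of the resampling and direction-coding idea, under the assumption that "midpoint compensation" means inserting midpoints on overly long trajectory segments and that an 8-direction chain code forms the HMM observation sequence; both are assumptions for illustration, and the HMM itself is omitted.

```python
import math

def midpoint_resample(points, max_gap):
    """Insert midpoints wherever consecutive sample points are farther apart than
    max_gap, so curvature changes are represented more evenly while the point set
    stays small (a simple stand-in for the midpoint compensation step)."""
    out = [points[0]]
    for p in points[1:]:
        q = out[-1]
        while math.dist(q, p) > max_gap:
            q = ((q[0] + p[0]) / 2.0, (q[1] + p[1]) / 2.0)
            out.append(q)
        out.append(p)
    return out

def direction_codes(points, bins=8):
    """Quantize the direction of each segment of the resampled trajectory into one
    of `bins` codes; the code sequence serves as the HMM observation sequence."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.atan2(y1 - y0, x1 - x0) % (2 * math.pi)
        codes.append(int(angle / (2 * math.pi / bins)) % bins)
    return codes

# Example: an L-shaped stroke, sparsely sampled
stroke = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0)]
obs = direction_codes(midpoint_resample(stroke, max_gap=3.0))
print(obs)   # a run of code 0 (rightward) followed by a run of code 2 (upward)
```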
2) Implementation of gesture interaction technology
The implementation of gesture interaction is divided into two main parts: gesture tracking and gesture recognition.
Gesture tracking consists of a static gesture recognition part, a part matching gesture images to a model, and a tracking part. The static gesture recognition part understands the hand pose in the current frame. Matching the gesture image to the model first uses the recognition result of 1) above; the image containing the gesture is matched to a 2D model to obtain the feature vectors and initial parameters needed for tracking. The tracking part first coarsely locates the hand region and then determines the change of fingertip positions, using the palm position and the structural features of the hand.
Non-intrusive gesture recognition is a recognition algorithm based on the spatial distribution features of the hand; the hand distribution feature (HDF) is an abstract description of the spatial characteristics of the human hand. The most important steps are extracting the skin color and the feature vector of the gesture space, and comparing the extracted spatial distribution features against a sample library for similarity to identify the specific meaning of the gesture. Spatial distribution features are generally extracted from two aspects, overall pose and local pose. Combining the overall appearance features of the gesture with the variation features of the joints to extract the spatial distribution features makes it possible not only to recognize gestures with low inter-class distinctiveness but also to distinguish certain bent gestures. Figure 11 gives examples from the gesture motion library built on the hidden Markov model, covering the recognition of different gesture motions as well as rigid objects held in the hand. The gesture detector extracts the gesture features and delivers the meaning of the gesture command through the PC interface, and the PC applies the corresponding operation to the 3D model: zoom in, zoom out, translate, rotate, or enter.
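A toy sketch of the "compare spatial distribution features against a sample library" step; reducing the hand distribution feature to a direction histogram around the centroid and using cosine similarity are illustrative assumptions, not the algorithm of the patent.

```python
import numpy as np

def spatial_distribution_feature(points, bins=8):
    """Toy hand distribution feature (HDF): histogram of point directions
    around the hand centroid, normalized to unit length."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    angles = np.arctan2(centered[:, 1], centered[:, 0]) % (2 * np.pi)
    hist, _ = np.histogram(angles, bins=bins, range=(0.0, 2 * np.pi))
    hist = hist.astype(float)
    return hist / (np.linalg.norm(hist) + 1e-9)

def classify(feature, sample_library):
    """Compare a feature against a labelled sample library by cosine
    similarity and return the best-matching gesture label."""
    best_label, best_score = None, -1.0
    for label, sample in sample_library.items():
        score = float(np.dot(feature, sample))   # both vectors are unit length
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score

# Example with two hypothetical library entries built from synthetic hand point sets
lib = {
    "open_palm": spatial_distribution_feature(np.random.RandomState(0).randn(50, 2)),
    "fist": spatial_distribution_feature(np.random.RandomState(1).randn(10, 2) * 0.2),
}
probe = spatial_distribution_feature(np.random.RandomState(0).randn(50, 2) + 0.01)
print(classify(probe, lib))   # matches "open_palm" with similarity close to 1
```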
3) Stereoscopic image generation
Stereoscopic image generation is based on the principle of two-view stereoscopic imaging: an interactive stereo pair is generated rapidly in real time and output in the stereoscopic image format required by the stereoscopic display terminal. The OpenGL application programming interface provides functions dedicated to multi-view rendering, from which horizontal-parallax images are obtained correctly. The images are displayed on the naked-eye 3D display screen 4 via the PC terminal 1, the video interface 2 and the parallax illumination 3; the virtual object 5 appears above the screen, and the eye tracking device 6 tracks the viewer's eyes.
First the virtual scene is built, and left and right virtual cameras are set to simulate the viewer's two eyes and capture the left and right images, which are supplied to the display terminal through the video interface 2 and the parallax illumination 3. Two stereoscopic camera models are available for simulating the eyes: the convergent model (Fig. 4) and the parallel model (Fig. 5). In the convergent model the optical axes of the two cameras intersect at a point, which suits near scenes; in the parallel model the optical axes are parallel, equivalent to intersecting at infinity, which suits distant scenes. A mathematical model is established: the world coordinate system has its origin at the midpoint between the two cameras, each camera coordinate system has its origin at the corresponding camera, the image coordinate system has its origin at the center of the camera's CCD projection plane, and the display coordinate system has its origin at the center of the display. The stereoscopic shooting process is then refined into the transformations from the world coordinate system to the camera coordinate system, from the camera coordinate system to the image coordinate system, and from the image coordinate system to the display coordinate system; the detailed flow is shown in Fig. 1.
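A minimal sketch of this world to camera to image to display coordinate chain for the parallel camera model; the focal length, pixel density and screen center used below are illustrative assumptions.

```python
import numpy as np

def world_to_camera(p_world, cam_x):
    """World frame (origin midway between the cameras) to a camera frame
    translated by cam_x along the baseline (parallel model: no rotation)."""
    return p_world - np.array([cam_x, 0.0, 0.0])

def camera_to_image(p_cam, focal):
    """Pinhole projection onto the image plane (image frame origin at the
    principal point); returns metric image coordinates (x, y)."""
    return focal * p_cam[:2] / p_cam[2]

def image_to_display(p_img, pixels_per_mm, screen_center_px):
    """Image coordinates to display pixel coordinates."""
    return p_img * pixels_per_mm + np.asarray(screen_center_px, dtype=float)

# One scene point through the left/right chain (baseline 65 mm, focal length 35 mm)
P = np.array([10.0, 5.0, 1000.0])
for cam_x in (-32.5, +32.5):
    px = image_to_display(camera_to_image(world_to_camera(P, cam_x), focal=35.0),
                          pixels_per_mm=10.0, screen_center_px=(960.0, 540.0))
    print(cam_x, px)   # the difference of the two x values is the screen disparity
```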
To solve the ghosting and visual fatigue problems of stereoscopic images, it is common to generate the stereoscopic image with a virtual interocular distance smaller than the actual pupillary distance. However, this introduces stereoscopic image distortion and reduces the perceived depth of the stereoscopic image, and in the eye-tracked interactive stereoscopic image generation process this distortion is even more noticeable. As shown in Fig. 7, the black horizontal line represents the zero-parallax plane, the black dots below it represent actual object positions, and the red dots represent the object positions perceived by the user; the dark blue dots above the zero-parallax plane represent the actual interocular distance and the light blue dots the interocular distance set in the system, which is smaller than the actual value. When the user moves from the left position to the right position, the perceived object positions shift relative to the actual positions, and the positions perceived from the front and back viewpoints also disagree. The effect of this distortion when the user observes a stereoscopic image is shown in Fig. 8, where the black grid represents the actual object and the red grid the stereoscopic image perceived by the user; the figure shows the distortion in four different situations. The distortion becomes more severe as the difference between the set interocular distance and the actual interocular distance increases.
To eliminate the distortion, a distortion model can be established (Fig. 9, left): the origin is at the center of the zero-parallax plane, the Z axis is perpendicular to this plane, and the X axis is horizontal. In the figure the actual interocular vector is 2D, the interocular vector set by the computer is rD, the midpoint between the eyes is I, the actual object position is point E, and the object position perceived by the user is point F. A transformation matrix can be defined that maps E to F in the world coordinate system, i.e. F = Δ(E). By transforming coordinates the distortion can be effectively reduced: as shown in Fig. 9 (right), the red grid is the distorted image and the blue grid is the image after applying the inverse distortion Δ⁻¹. It can be seen that applying the inverse distortion effectively restores the scene.
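A depth-only sketch of the distortion F = Δ(E) caused by rendering with a reduced eye separation, and of the inverse distortion Δ⁻¹ used to pre-correct it; restricting the model to on-axis depth (ignoring lateral position and head movement) is a simplifying assumption made only for this example.

```python
def perceived_depth(rendered_depth, screen_dist, eye_sep_render, eye_sep_actual):
    """Depth (distance from the viewer, on-axis) at which a point rendered for
    eye separation eye_sep_render is perceived by a viewer whose actual eye
    separation is eye_sep_actual. Depth-only simplification of F = Δ(E)."""
    d = eye_sep_render * (rendered_depth - screen_dist) / rendered_depth   # screen disparity
    return eye_sep_actual * screen_dist / (eye_sep_actual - d)

def predistorted_depth(target_depth, screen_dist, eye_sep_render, eye_sep_actual):
    """Inverse distortion Δ⁻¹: the depth at which to place a point before
    rendering so that it is perceived at target_depth."""
    k = (eye_sep_actual / eye_sep_render) * (target_depth - screen_dist) / target_depth
    return screen_dist / (1.0 - k)

# Rendering with a reduced eye separation (50 mm) for a 65 mm viewer, screen at 600 mm
D, er, ea = 600.0, 50.0, 65.0
Z = 900.0                                  # intended depth behind the screen
Zp = perceived_depth(Z, D, er, ea)         # distorted: flattened toward the screen (< 900)
Zc = perceived_depth(predistorted_depth(Z, D, er, ea), D, er, ea)
print(round(Zp, 1), round(Zc, 1))          # about 806.9 vs. 900.0 after inverse distortion
```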
4) Image acceleration method based on the parallax mechanism
In the virtual scene, existing OpenGL routines draw triangles from the three-dimensional point data and render the whole scene, and texture mapping completes the rendering of the map. All point data can therefore be loaded into the GPU during initialization, so that the rendering process does not need to read data from main memory repeatedly. In addition, before rendering, each block of terrain is tested against the view frustum: distant parts of the map are drawn with fewer triangles, and parts outside the field of view are not rendered at all, which accelerates the image rendering process.
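A sketch of the per-frame block selection idea (view-frustum culling plus a distance-based level of detail); the simple view-cone test and the single LOD switch distance are simplifications of the frustum and terrain-block tests described above, chosen only for illustration.

```python
import numpy as np

def select_terrain_blocks(block_centers, eye, view_dir, fov_cos, lod_dist):
    """Per-frame block selection: cull blocks outside a simple view cone and
    assign a coarser level of detail to blocks farther than lod_dist."""
    jobs = []
    view_dir = np.asarray(view_dir, float)
    view_dir = view_dir / np.linalg.norm(view_dir)
    for i, c in enumerate(np.asarray(block_centers, float)):
        to_block = c - np.asarray(eye, float)
        dist = np.linalg.norm(to_block)
        if dist > 0 and np.dot(to_block / dist, view_dir) < fov_cos:
            continue                            # outside the view cone: not rendered at all
        level = 0 if dist < lod_dist else 1     # fewer triangles for distant blocks
        jobs.append((i, level))
    return jobs

# Example: four terrain blocks, a camera at the origin looking along +Y,
# a roughly 90-degree cone (cos 45 ~ 0.707) and an LOD switch at 500 units
blocks = [(0, 200, 0), (0, 800, 0), (600, 100, 0), (0, -300, 0)]
print(select_terrain_blocks(blocks, eye=(0, 0, 0), view_dir=(0, 1, 0),
                            fov_cos=0.707, lod_dist=500))   # [(0, 0), (1, 1)]
```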
Embodiment 1:
In the actual device, the gesture recognition device we adopt is the Leap Motion. The Leap Motion sensor reconstructs the real-world three-dimensional motion information of the palm from the pictures captured at different angles by its two built-in cameras. The detection range is roughly 25 mm to 600 mm above the sensor, and the detection volume is approximately an inverted rectangular pyramid.
First, the Leap Motion sensor establishes a rectangular coordinate system with its origin at the center of the sensor: the X axis is parallel to the sensor and points toward the right of the screen, the Y axis points upward, and the Z axis points away from the screen. The unit is real-world millimeters. In use, the Leap Motion sensor regularly transmits motion information about the hands; each such packet of information is called a "frame". Each frame contains:
a list of all detected palms and their information;
a list of all detected fingers and their information;
a list of hand-held tools (thin, straight objects longer than a finger, for example a pen) and their information;
a list of all pointable objects (Pointable Objects), i.e. all fingers and tools, and their information.
The Leap sensor assigns a unique identifier (ID) to each of these, which does not change as long as the palm, finger or tool remains within the field of view. Using these IDs, the information of each tracked object can be queried through functions such as Frame::hand() and Frame::finger().
Then, from the data of the current frame and the preceding frames, motion information is generated. For example, if two hands are detected and both move beyond a threshold in one direction, the motion is interpreted as a translation; if they rotate as though turning a ball, it is recorded as a rotation; if the two hands move toward or away from each other, it is recorded as a scaling. The generated packet contains (a minimal classification sketch follows the list below):
the axial vector of the rotation;
the rotation angle (positive for clockwise);
a matrix describing the rotation;
the scale factor;
the translation vector.
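A minimal sketch of this two-hand motion classification from the palm positions of two successive frames; the thresholds and the decision order are illustrative assumptions, and this is not the Leap Motion API.

```python
import numpy as np

def classify_two_hand_motion(prev, curr, move_thresh=10.0, scale_thresh=10.0):
    """Classify the motion between two frames of two palm positions (mm):
    'translate' if both hands move together in a similar direction,
    'scale' if the distance between the hands changes, otherwise 'rotate'
    based on how far the inter-hand direction turns. Returns (label, parameter)."""
    prev = [np.asarray(h, float) for h in prev]
    curr = [np.asarray(h, float) for h in curr]
    d0, d1 = curr[0] - prev[0], curr[1] - prev[1]
    mean_move = (d0 + d1) / 2.0

    # both hands moving the same way by more than the threshold -> translation
    if np.linalg.norm(mean_move) > move_thresh and np.dot(d0, d1) > 0:
        return "translate", mean_move

    gap_prev = np.linalg.norm(prev[1] - prev[0])
    gap_curr = np.linalg.norm(curr[1] - curr[0])
    if abs(gap_curr - gap_prev) > scale_thresh:      # hands approach or separate -> scaling
        return "scale", gap_curr / gap_prev

    # otherwise: angle swept by the inter-hand direction -> rotation
    a = (prev[1] - prev[0]) / gap_prev
    b = (curr[1] - curr[0]) / gap_curr
    angle = float(np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))))
    return "rotate", angle

print(classify_two_hand_motion(prev=[(-100, 0, 0), (100, 0, 0)],
                               curr=[(-80, 0, 0), (120, 0, 0)]))   # ('translate', ...)
print(classify_two_hand_motion(prev=[(-100, 0, 0), (100, 0, 0)],
                               curr=[(-130, 0, 0), (130, 0, 0)]))  # ('scale', 1.3)
```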
Embodiment 2:
The actual finished product is shown in Fig. 3. The product consists of a 55-inch 3D LCD screen, a precise three-dimensional gesture positioning element, a gesture co-processor and a computer integrated into one unit, with a base of integrated seamless design. Only the display screen is exposed; the three-dimensional gesture positioning element and the gesture co-processing element (gesture detector) are hidden around the periphery of the screen.
As shown in Fig. 3, as the user moves left, right, forward or backward in front of the display, the eye tracking device monitors the user's position in real time; at the same time, the gesture detector detects the operating posture of the user's hands. The user's eye position and gesture motion information are transmitted to the PC in real time, and the PC generates the corresponding stereoscopic image pair in real time according to the user's gaze point and hand motions and presents it on the display screen. Because the detection of eye position and gesture motion is contactless, the head and hands need not wear any auxiliary device, thereby realizing non-intrusive human-computer interaction.
Although the present invention has been disclosed above with preferred embodiments, they are not intended to limit the invention. Those of ordinary skill in the art may make various modifications and variations without departing from the spirit and scope of the invention. The scope of protection of the invention is therefore defined by the appended claims.

Claims (9)

1. An interactive digital stereoscopic sand table system, characterized in that it is composed of two parts, a stereoscopic image generation system and a human-computer interaction system;
1) Stereoscopic image generation system
The stereoscopic image generation system uses OpenGL, OpenCV, DirectX or another 3D engine to render three-dimensional terrain, applies eye tracking and directional-light technology to achieve unaided stereoscopic display, and uses a series of image acceleration techniques to speed up rendering; three-dimensional building models are placed in the target area of the terrain, with accurate spatial distribution and good visual effect; the stereoscopic image generation system forms a naked-eye stereoscopic display;
The stereoscopic image generation system is independently programmed and the sand table model is highly detailed: it can display not only the complete terrain but also military models used in warfare, such as tanks and aircraft, with rich battle visual effects; a preset gesture recognition interface controls the three-dimensional terrain scene or three-dimensional model data in the stereoscopic image generation system through gestures; because the terrain and military models are very complex, the data volume of the program is very large, and the system adopts an innovative data management scheme: the three-dimensional terrain scene is unified while the three-dimensional military model data are partitioned into blocks and loaded hierarchically, so that the system is an organic whole yet still runs fast, solving the conflict between massive data and operating efficiency;
Stereoscopic images are obtained by exploiting the slight difference between the images seen by the left and right eyes, which the brain fuses into a single image with depth: virtual cameras photograph the simulated scene, coordinate transformations yield the left and right images, and parallax control is applied to them; based on this parallax mechanism, OpenGL, OpenCV, DirectX or another 3D engine renders the terrain (with rendering acceleration); because a 3D image cannot be fused in the brain when the parallax is too large, the left and right images are parallax-controlled (the separation of corresponding points in the two images is limited); because movement of the viewer's eyes shifts the convergence point of the images and distorts the stereoscopic image, eye tracking is used to keep the convergence point of the left and right images fixed, thereby realizing interactive stereoscopic image generation;
2) Human-computer interaction system
The human-computer interaction system controls the three-dimensional terrain scene or three-dimensional model data in the stereoscopic image generation system through gestures, implemented via a preset gesture recognition interface; corresponding gestures zoom in, zoom out, translate, rotate or enter the terrain scene or model data, and gestures with distinguishable control meanings can be configured as required; a stereoscopic camera captures three-dimensional gesture information, a midpoint compensation algorithm processes the raw sample points, a hidden Markov model is built, and the spatial distribution features of the gesture are compared for similarity against a sample library to identify the specific meaning of the gesture;
The system also includes control through natural gestures: moving forward or backward in the three-dimensional scene, raising or lowering the viewpoint, and entering the terrain scene or model data to observe interior details;
The gesture recognition interface is connected to a non-intrusive gesture recognition or monitoring device; through recognition of the gestures, real-time stereoscopic content generation forms real-time interactive human-machine stereoscopic images.
2. The interactive digital stereoscopic sand table system according to claim 1, characterized in that a non-intrusive gesture recognition device is arranged on the naked-eye stereoscopic display; naked-eye 3D technology and non-intrusive gesture recognition realize highly realistic stereoscopic images and high-precision human-computer interaction, and the sand table model seen by the eye appears suspended above the naked-eye stereoscopic display.
3. The interactive digital stereoscopic sand table system according to claim 1, characterized in that the human-computer interaction has a basic camera module for gesture detection; multiple cameras or gesture recognition elements extract multiple gestures simultaneously, with coverage of the whole screen; the inter-frame difference recognition method extracts the gestures of different users separately; the feedback of gesture recognition is presented on the image in real time; and real-time gesture control of the three-dimensional scene is achieved.
4. The interactive digital stereoscopic sand table system according to claim 1, characterized in that eye tracking and directional-light technology are adopted and the detection of eye position is contactless.
5. The interactive digital stereoscopic sand table system according to claim 1, characterized in that it has a basic camera module for gesture detection, a description of gesture motion trajectories based on visual perception, and gesture understanding and recognition algorithms, namely a recognition algorithm based on the spatial distribution features of the hand, the hand distribution feature (HDF) being an abstract description of the spatial characteristics of the human hand; the most important steps are extracting the skin color and the feature vector of the gesture space and comparing the extracted spatial distribution features against a sample library for similarity to identify the specific meaning of the gesture; spatial distribution features are generally extracted from two aspects, overall pose and local pose; the overall appearance features of the gesture are then combined with the joint variation features to extract the spatial distribution features, thereby realizing automatic recognition of common gestures.
6. The interactive digital stereoscopic sand table system according to claim 1, characterized in that the system adopts near-field gesture interaction technology: multiple gesture recognition cameras capture gestures; combined with computer vision detection the positions of the fingers are detected; techniques such as skin-color modeling extract the gesture motion; according to the hidden Markov model the stereoscopic image generation module is guided to generate the corresponding stereoscopic image pair; and the gesture motion is compared with preset models to feed back the corresponding image change signal, thereby achieving controllable interaction.
7. The interactive digital stereoscopic sand table system according to claim 1, characterized in that it is provided with a description of gesture motion trajectories based on visual perception, thereby forming a basic point-cloud description of the gesture; gesture understanding and recognition algorithms realize automatic recognition of common gestures; and according to the user's real-time dynamics and gesture changes, the naked-eye stereoscopic display updates the displayed scene in real time.
8. The interactive digital stereoscopic sand table system according to claim 1, characterized in that, to eliminate distortion, a distortion model is established with its origin at the center of the zero-parallax plane, the Z axis perpendicular to this plane and the X axis horizontal; the actual interocular vector is 2D, the interocular vector set by the computer is rD, the midpoint between the eyes is I, the actual object position is point E, and the object position perceived by the user is point F; a transformation matrix is defined that maps E to F in the world coordinate system, i.e. F = Δ(E).
9. The interactive digital stereoscopic sand table system according to claim 1, characterized in that the distortion problem is effectively reduced through the transformation in the coordinate system.
CN201410193904.0A 2014-05-08 2014-05-08 Interactive digital stereoscopic sand table system Pending CN104050859A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410193904.0A CN104050859A (en) 2014-05-08 2014-05-08 Interactive digital stereoscopic sand table system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410193904.0A CN104050859A (en) 2014-05-08 2014-05-08 Interactive digital stereoscopic sand table system

Publications (1)

Publication Number Publication Date
CN104050859A true CN104050859A (en) 2014-09-17

Family

ID=51503610

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410193904.0A Pending CN104050859A (en) 2014-05-08 2014-05-08 Interactive digital stereoscopic sand table system

Country Status (1)

Country Link
CN (1) CN104050859A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1879129A1 (en) * 2006-07-13 2008-01-16 Northrop Grumman Corporation Gesture recognition simultation system and method
CN101034208A (en) * 2007-04-04 2007-09-12 大连东锐软件有限公司 Three-dimensional simulation sand table system
JP2009211563A (en) * 2008-03-05 2009-09-17 Tokyo Metropolitan Univ Image recognition device, image recognition method, image recognition program, gesture operation recognition system, gesture operation recognition method, and gesture operation recognition program
CN101783966A (en) * 2009-01-21 2010-07-21 中国科学院自动化研究所 Real three-dimensional display system and display method
CN202694599U (en) * 2012-07-23 2013-01-23 上海风语筑展览有限公司 Digital three-dimensional sand table display device

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ZACHARY WARTELL et al.: "Balancing Fusion, Image Depth and Distortion in Stereoscopic Head-Tracked Displays", SIGGRAPH 99 Conference Proceedings *
杨宇航 et al.: "Electronic sand table simulation system based on virtual reality technology" (基于虚拟现实技术的电子沙盘仿真系统), Computer Simulation (计算机仿真), vol. 20, no. 1, 31 January 2003 (2003-01-31) *
杨智勋: "Research and implementation of a three-dimensional electronic sand table system" (三维电子沙盘系统的研究与实现), China Master's Theses Full-text Database, Information Science and Technology (中国优秀硕士学位论文全文数据库 信息科技辑), no. 05, 15 May 2012 (2012-05-15) *
杨波 et al.: "Gesture recognition algorithm based on spatial distribution features against complex backgrounds" (复杂背景下基于空间分布特征的手势识别算法), Journal of Computer-Aided Design & Computer Graphics (计算机辅助设计与图形学学报), no. 10, 31 October 2010 (2010-10-31), pages 1-8 *

Cited By (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104571510A (en) * 2014-12-30 2015-04-29 青岛歌尔声学科技有限公司 Gesture input system and method in 3D scene
US10482670B2 (en) 2014-12-30 2019-11-19 Qingdao Goertek Technology Co., Ltd. Method for reproducing object in 3D scene and virtual reality head-mounted device
US10466798B2 (en) 2014-12-30 2019-11-05 Qingdao Goertek Technology Co., Ltd. System and method for inputting gestures in 3D scene
CN104898394A (en) * 2015-04-27 2015-09-09 华北电力大学 Holographic projection technology-based man-machine interaction system and method
CN104898394B (en) * 2015-04-27 2019-01-15 华北电力大学 Man-machine interactive system and method based on line holographic projections technology
CN104820497A (en) * 2015-05-08 2015-08-05 东华大学 A 3D interaction display system based on augmented reality
CN104820497B (en) * 2015-05-08 2017-12-22 东华大学 A kind of 3D interactive display systems based on augmented reality
CN105045389A (en) * 2015-07-07 2015-11-11 深圳水晶石数字科技有限公司 Demonstration method for interactive sand table system
CN105045389B (en) * 2015-07-07 2018-09-04 深圳水晶石数字科技有限公司 A kind of demenstration method of interactive sand table system
CN105511602A (en) * 2015-11-23 2016-04-20 合肥金诺数码科技股份有限公司 3d virtual roaming system
TWI559269B (en) * 2015-12-23 2016-11-21 國立交通大學 System, method, and computer program product for simulated reality learning
CN107945270A (en) * 2016-10-12 2018-04-20 阿里巴巴集团控股有限公司 A kind of 3-dimensional digital sand table system
CN106980366A (en) * 2017-02-27 2017-07-25 合肥安达创展科技股份有限公司 Landform precisely catches system and fine high definition projection imaging system
CN107168516B (en) * 2017-03-31 2019-10-11 浙江工业大学 Global climate vector field data method for visualizing based on VR and gesture interaction technology
CN106803932A (en) * 2017-03-31 2017-06-06 合肥安达创展科技股份有限公司 A kind of utilization dynamic recognition technique and the method for image fusion technology interactive demonstration
CN107168516A (en) * 2017-03-31 2017-09-15 浙江工业大学 Global climate vector field data method for visualizing based on VR and gesture interaction technology
CN107479706A (en) * 2017-08-14 2017-12-15 中国电子科技集团公司第二十八研究所 A kind of battlefield situation information based on HoloLens is built with interacting implementation method
CN107479706B (en) * 2017-08-14 2020-06-16 中国电子科技集团公司第二十八研究所 Battlefield situation information construction and interaction realization method based on HoloLens
CN109426783A (en) * 2017-08-29 2019-03-05 深圳市掌网科技股份有限公司 Gesture identification method and system based on augmented reality
CN108389432A (en) * 2018-01-29 2018-08-10 盎锐(上海)信息科技有限公司 Science displaying device based on shadow casting technique and projecting method
CN108398049A (en) * 2018-04-28 2018-08-14 上海亿湾特训练设备科技有限公司 A kind of mutual war formula projection confrontation fire training system of networking
CN108398049B (en) * 2018-04-28 2023-12-26 上海亿湾特训练设备科技有限公司 Networking mutual-combat type projection antagonism shooting training system
CN108737811A (en) * 2018-05-25 2018-11-02 卓谨信息科技(常州)有限公司 Sandbox scene interaction system based on Kinect and sandbox scene creation method
CN108874133A (en) * 2018-06-12 2018-11-23 南京绿新能源研究院有限公司 Interactive for distributed photoelectricity station monitoring room monitors sand table system
CN111459264A (en) * 2018-09-18 2020-07-28 阿里巴巴集团控股有限公司 3D object interaction system and method and non-transitory computer readable medium
CN111459264B (en) * 2018-09-18 2023-04-11 阿里巴巴集团控股有限公司 3D object interaction system and method and non-transitory computer readable medium
CN109493655A (en) * 2018-12-21 2019-03-19 广州亚普机电设备科技有限公司 A kind of power battery 3D virtual emulation interaction experience system
CN109857260A (en) * 2019-02-27 2019-06-07 百度在线网络技术(北京)有限公司 Control method, the device and system of three-dimensional interactive image
CN110163831A (en) * 2019-04-19 2019-08-23 深圳市思为软件技术有限公司 The object Dynamic Display method, apparatus and terminal device of three-dimensional sand table
CN110163831B (en) * 2019-04-19 2021-04-23 深圳市思为软件技术有限公司 Method and device for dynamically displaying object of three-dimensional virtual sand table and terminal equipment
CN110609616A (en) * 2019-06-21 2019-12-24 哈尔滨拓博科技有限公司 Stereoscopic projection sand table system with intelligent interaction function
CN110767063A (en) * 2019-11-08 2020-02-07 浙江浙能技术研究院有限公司 Non-contact interactive electronic sand table and working method
CN111953956B (en) * 2020-08-04 2022-04-12 山东金东数字创意股份有限公司 Naked eye three-dimensional special-shaped image three-dimensional camera generation system and method thereof
CN111953956A (en) * 2020-08-04 2020-11-17 山东金东数字创意股份有限公司 Naked eye three-dimensional special-shaped image three-dimensional camera generation system and method thereof
CN112040215A (en) * 2020-08-30 2020-12-04 河北军云软件有限公司 Naked eye stereoscopic display system in electromagnetic environment
CN112379777A (en) * 2020-11-23 2021-02-19 南京科盈信息科技有限公司 Digital exhibition room gesture recognition system based on target tracking
CN112885222A (en) * 2021-01-25 2021-06-01 中国石油大学胜利学院 Novel AI interactive simulation sand table and use method thereof
CN112885222B (en) * 2021-01-25 2022-07-19 中国石油大学胜利学院 AI interactive simulation sand table and use method thereof
CN113096515A (en) * 2021-03-31 2021-07-09 江西交通职业技术学院 Adjustable sand table simulation device
CN113296604B (en) * 2021-05-24 2022-07-08 北京航空航天大学 True 3D gesture interaction method based on convolutional neural network
CN113296604A (en) * 2021-05-24 2021-08-24 北京航空航天大学 True 3D gesture interaction method based on convolutional neural network
CN113687716A (en) * 2021-07-29 2021-11-23 华东计算技术研究所(中国电子科技集团公司第三十二研究所) Data three-dimensional visualization platform system and method based on intelligent interaction technology
CN115061606A (en) * 2022-02-14 2022-09-16 邹良伍 Naked eye 3D immersive experience equipment
CN114931746A (en) * 2022-05-12 2022-08-23 南京大学 Interaction method, device and medium for 3D game based on pen type and touch screen interaction
CN114931746B (en) * 2022-05-12 2023-04-07 南京大学 Interaction method, device and medium for 3D game based on pen type and touch screen interaction

Similar Documents

Publication Publication Date Title
CN104050859A (en) Interactive digital stereoscopic sand table system
KR101761751B1 (en) Hmd calibration with direct geometric modeling
Hilliges et al. HoloDesk: direct 3d interactions with a situated see-through display
US10739936B2 (en) Zero parallax drawing within a three dimensional display
CN103793060B (en) A kind of user interactive system and method
CN102945564A (en) True 3D modeling system and method based on video perspective type augmented reality
KR20130028878A (en) Combined stereo camera and stereo display interaction
CN206961066U (en) A kind of virtual reality interactive device
CN104916182A (en) Immersion type virtual reality maintenance and training simulation system
JP2011022984A (en) Stereoscopic video interactive system
CN103440677A (en) Multi-view free stereoscopic interactive system based on Kinect somatosensory device
Jia et al. 3D image reconstruction and human body tracking using stereo vision and Kinect technology
CN105518584A (en) Recognizing interactions with hot zones
CN103257707B (en) Utilize the three-dimensional range method of Visual Trace Technology and conventional mice opertaing device
CN111275731B (en) Projection type physical interaction desktop system and method for middle school experiments
CN104656893A (en) Remote interaction control system and method for physical information space
CN204406327U (en) Based on the limb rehabilitating analog simulation training system of said three-dimensional body sense video camera
Khattak et al. A real-time reconstructed 3D environment augmented with virtual objects rendered with correct occlusion
CN104349157A (en) 3D displaying apparatus and method thereof
US11380063B2 (en) Three-dimensional distortion display method, terminal device, and storage medium
CN106125927B (en) Image processing system and method
CN205193366U (en) Follow people eye position's stereoscopic display device
CN111383343B (en) Home decoration design-oriented augmented reality image rendering coloring method based on generation countermeasure network technology
CN110060349A (en) A method of extension augmented reality head-mounted display apparatus field angle
CN115908755A (en) AR projection method, system and AR projector

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20140917