CN113419634A - Display screen-based tourism interaction method
- Publication number
- CN113419634A (application number CN202110776958.XA)
- Authority
- CN
- China
- Prior art keywords
- user
- module
- virtual
- color
- interaction method
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/14—Travel agencies
Abstract
The invention relates to a tourism interaction method based on a display screen, which comprises the following steps: S1, the user navigates the scene conversion unit on a mobile terminal or the console and selects a scene to experience; S2, the console starts the projector, which projects the corresponding content into the projection area, and the user enters the projection area within the field of view of the sensing mechanism; S3, the user autonomously calls up and switches the virtual exhibit on the interactive screen through gesture instructions; S4, the user autonomously selects a color-changing part of the virtual exhibit on the interactive screen through gesture instructions and applies the chosen color to it; S5, after coloring is finished, the user may return to S1 and select a different scene to experience. With the tourism interaction method provided by the invention, the user controls the transformation of the virtual display content with gestures and can view fine, vivid display content from multiple angles, which enhances the vividness of the display content and the interactivity of the exhibition and increases the user's enthusiasm for active participation.
Description
Technical Field
The invention belongs to the technical field of smart tourism, and particularly relates to a tourism interaction method based on a display screen.
Background
With the continuous development of the mobile internet and the wide adoption of mobile devices, the traditional tourism industry is gradually shifting toward smart tourism. Tourism products and services increasingly emphasize tourists' experience and satisfaction in order to meet their growing demands. At the same time, the forms of scenic-spot services are changing: alongside common group tours and independent travel, deep-experience tours are gradually entering the public view as a novel form of tourism. A deep-experience tour is an optimization and upgrade of independent travel; it must satisfy tourists' basic travel needs as well as their demand for a deep experience of a scenic area's cultural content. The deep-experience demand means that, beyond basic scenic-spot information, tourists want a deeper understanding and experience of the local culture. It follows that, in current scenic-spot services, the appeal of immersive cultural experience has developed into an increasingly popular and distinctive style of tourism.
Immersive experience, also known as flow theory or flow experience, is defined in positive psychology as follows: when a person is fully engaged in an activity, attention becomes focused on the situation and all irrelevant perceptions are filtered out; that is, the person enters an immersive (flow) state. The immersive experience is a positive psychological experience that brings the individual great pleasure while participating in the activity, thereby encouraging the individual to repeat the same activity without boredom. As computer science has developed, flow theory has been extended to discussions of human-computer interaction, where an immersive experience likewise means that active participants enter a shared experience mode: awareness is narrowed, unrelated perceptions and thoughts are filtered out, the participant reacts only to specific goals and explicit feedback, and a sense of control over the environment arises. An immersive experience results when an individual's skill level matches the challenge faced. Current virtual technologies such as VR provide immersive experiences by drawing on people's sensory and cognitive experience, building an atmosphere that carries participants into a given state and gives the user the sensation of being inside a virtual world.
Because of physical-space constraints, traditional scenic-spot content is mainly exhibited through signboards combined with real objects or pictures, while online guide software mainly tracks the tourist's geographic position and explains the cultural content of each sight through voice and text. In both form and content, the cultural elements conveyed by these two exhibition modes are very limited, and they lack explanations of the depth and breadth of a scenic area's cultural connotations.
Disclosure of Invention
The invention aims to solve the problems described in the background and provides a tourism interaction method based on a display screen.
The purpose of the invention is achieved as follows:
A tourism interaction method based on a display screen comprises the following steps:
S1, the user connects the mobile terminal and the console to the same local area network and selects the scene to be experienced through the navigation of the scene conversion unit on the mobile terminal or the console;
S2, a sub-controller of the interactive system in the console starts the projector, which projects the corresponding content onto an interactive screen in the projection area using 3D projection technology, and the user enters the projection area within the field of view of the sensing mechanism;
S3, on the interactive screen the user navigates the calling unit through gesture instructions, autonomously calling up and switching the virtual exhibit; the exhibit is replaced once the user confirms, and if the user does not operate, the exhibit preselected by the system is shown for appreciation;
S4, on the interactive screen the user navigates the color unit through gesture instructions, autonomously selects a color-changing part of the virtual exhibit, and applies the chosen color to it;
S5, after coloring is finished, the user may return to step S1 to select a different scene to experience.
Preferably, the interactive system in S2 comprises a console, a projector connected to the console, a projection area corresponding to the projector, and a sensing mechanism for monitoring the user's state, wherein an interactive screen that receives the projector's projected image is arranged in the projection area.
Preferably, a sub-controller is arranged in the console, the projector and the sensing mechanism are both communicatively connected to the sub-controller, and the sub-controller in S2 comprises:
a scene conversion unit, comprising a conversion module and a music module, wherein the user switches among the virtual scenes stored in the console through the conversion module, and the music module plays background music matched to the virtual scene the user has selected;
a calling unit, comprising a calling module and an audio explanation module, wherein the calling module contains the virtual exhibits of the different scenes, the user autonomously calls up and switches among them, and the audio explanation module plays the audio commentary matched to the current exhibit;
and a color unit, comprising an audio guide module and a color replacement module, wherein the user performs color display, color pickup, and color replacement on the virtual exhibit through the color replacement module under the guidance of the audio guide module.
Preferably, the sensing mechanism in S2 comprises a camera and a laser sensor for capturing the touch actions of the user's fingertips on the interactive screen, and the camera's field of view completely covers the projector's image.
Preferably, the laser sensor emits a continuously scanning beam and receives image signals carrying depth information; when a user's gesture enters the two-dimensional laser plane emitted by the sensor, the raw signal generated by the laser sensor is sent to the sub-controller, which applies an image-denoising algorithm and a target positioning and tracking algorithm, and the processed signal is displayed on the interactive screen to realize human-computer interaction.
Preferably, the sensing mechanism further comprises a posture sensor for sensing the user's posture and a lamp for illuminating the user; both are connected to the sub-controller, and the sub-controller directs the lamp to focus its light around the user according to the limb movements of the user captured by the posture sensor.
Preferably, at least two sub-controllers are provided in S2, all controlled by a main controller arranged in the console, and each sub-controller controls its corresponding projector, interactive screen, and sensing mechanism.
Preferably, the interactive screen is connected to the main controller through a touch-control integrator; the touch-control integrator communicates with the interactive screen and the main controller over USB and connects to the mobile terminal through wireless communication.
Preferably, the sub-controller further comprises the following modules (one possible layout is sketched below):
a display module for fusing and displaying the acquired user image with the virtual scene;
a recognition module for recognizing users' fingertips and gestures and sending the users' gesture instructions to the sub-controller;
a response module for applying the corresponding transformation to the virtual exhibit according to the gesture instruction received by the recognition module;
and a sharing module that lets the user pose for photographs together with the virtual scene and the virtual exhibit, and store and share the photos.
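As an illustration only, the four modules could be organized as in the following minimal Python sketch; the class name, method signatures, and the swipe-based instruction mapping are assumptions for exposition, not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

import numpy as np

@dataclass
class SubController:
    """Hypothetical layout of the four sub-controller modules."""
    photos: List[str] = field(default_factory=list)

    def display(self, user_image: np.ndarray, scene: np.ndarray, alpha: float = 0.5) -> np.ndarray:
        # Display module: alpha-blend the captured user image into the virtual scene.
        return (alpha * user_image + (1.0 - alpha) * scene).astype(np.uint8)

    def recognize(self, fingertip_track: List[Tuple[int, int]]) -> str:
        # Recognition module: reduce a fingertip trajectory to a gesture instruction.
        return "swipe_left" if fingertip_track[-1][0] < fingertip_track[0][0] else "swipe_right"

    def respond(self, exhibit: dict, instruction: str) -> dict:
        # Response module: transform the virtual exhibit according to the instruction.
        step = {"swipe_left": -15, "swipe_right": 15}.get(instruction, 0)
        exhibit["angle"] = exhibit.get("angle", 0) + step
        return exhibit

    def share(self, photo_path: str) -> None:
        # Sharing module: keep the composed photo for storage and sharing.
        self.photos.append(photo_path)
```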
Preferably, moving regions are distinguished by continuous measurement with the laser sensor's lidar, and moving targets are detected with the camera by a continuous-frame-difference method that evaluates pixel differences across successive images; the two detection results are merged, and a gesture instruction is determined jointly from the laser-sensor and multi-frame image detection results.
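The frame-difference step and the merging of camera and lidar detections could look like the following OpenCV-based sketch; the IoU-overlap fusion rule and all thresholds are illustrative assumptions rather than values from the patent.

```python
import cv2
import numpy as np

def frame_difference_mask(prev_gray, curr_gray, thresh=25):
    """Continuous frame difference: mark pixels that changed between frames."""
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    # Morphological opening suppresses isolated noise pixels.
    kernel = np.ones((3, 3), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

def fuse_detections(camera_boxes, lidar_boxes, iou_thresh=0.3):
    """Keep a camera detection only if some lidar region overlaps it enough."""
    def iou(a, b):
        ax1, ay1, ax2, ay2 = a
        bx1, by1, bx2, by2 = b
        ix1, iy1 = max(ax1, bx1), max(ay1, by1)
        ix2, iy2 = min(ax2, bx2), min(ay2, by2)
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
        return inter / union if union else 0.0
    return [c for c in camera_boxes
            if any(iou(c, l) >= iou_thresh for l in lidar_boxes)]
```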
Preferably, the specific steps of detecting a gesture instruction are as follows (a score-fusion sketch follows the list):
A1, take the user's gesture as the moving target; for the tracked-target set Δ = {J_i = (f, b_L, b_C), i = 1, 2, …, N}, the gesture detector formed by the laser sensor and the camera assigns a score s_C,i to the bounding box b_C,i of each image frame, and the detector's multi-frame detection result is recorded as S_C = {s_C,i, i = 1, 2, …, N};
A2, the more stably a moving target is tracked, the higher the probability that it is an object distinct from the background; with Γ_i the cost of associating the target between consecutive frames, the association score used to judge whether it is a gesture is S_L = {s_L,i = 1 - Γ_i, i = 1, 2, …, N}; at initialization (i = 1) there is no association cost, so s_L,1 = 1, and when association fails, s_L,i = -1;
A3, the gesture-tracking score is fused as
S_F = W[σ(S_C), S_L]^T, where W is a weight vector and σ is a function that projects S_C onto [-1, 1], with σ(s) = 2/(1 + exp(-s/4)) - 1;
when S_F > γ, the target is judged to be a gesture;
A4, let N_b be the number of frames the tracked target has survived; when N_b > δ_N and S_F < δ_2, the target cannot be confirmed as a gesture and is removed from the tracking list.
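A minimal sketch of the score fusion in A1 to A4, under the assumption that W is a two-element weight vector applied per frame and that the per-frame fused scores are averaged; the weight and threshold values are illustrative, not from the patent.

```python
import numpy as np

def sigma(s):
    # Projects a raw detector score onto [-1, 1]: sigma(s) = 2/(1+exp(-s/4)) - 1.
    return 2.0 / (1.0 + np.exp(-np.asarray(s, dtype=float) / 4.0)) - 1.0

def gesture_score(s_c, s_l, w=(0.6, 0.4)):
    """Fuse per-frame detector scores S_C with association scores S_L."""
    s_c = sigma(s_c)
    s_l = np.asarray(s_l, dtype=float)
    # One reading of S_F = W [sigma(S_C), S_L]^T, averaged over the N frames.
    return float(np.mean(w[0] * s_c + w[1] * s_l))

GAMMA = 0.5  # assumed decision threshold gamma

def is_gesture(s_c, s_l):
    # The tracked target is accepted as a gesture when S_F exceeds gamma.
    return gesture_score(s_c, s_l) > GAMMA
```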
Preferably, the virtual picture is projected onto the interactive screen by the projector; the user's interactive gesture images on the screen are captured by the camera and the laser sensor and transmitted through a data interface to a sub-controller in the console for processing; the foreground arm region is segmented and extracted, the fingertip position is detected, the fingertip's touch on the interactive screen is detected using adaptive structured-light coding, and the sub-controller executes the corresponding control, realizing the touch-projection interaction mode.
Preferably, when segmenting and extracting the foreground arm region on the interactive screen in the projection area, detection relies on the difference between the reflectivity of the arm's skin and that of the screen surface. Let R be the ambient-light contribution in the projection area, A the surface reflectivity of the interactive screen, B the color conversion function of the camera, and C the brightness value of the visual feedback image; then C = B × A × R;
if there is no foreground object on the interactive screen, the camera captures the pixel value I = C at the corresponding image point;
if an interactive foreground object with surface reflectivity A′ is present on the screen, the pixel value at the corresponding point is I = B × A′ × R, as exploited in the sketch below.
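In practice the reflectivity difference can be exploited by comparing each captured frame against a reference capture of the empty screen; the following sketch assumes such a reference image, and the threshold delta is an illustrative value.

```python
import cv2

def foreground_mask(frame, reference, delta=20):
    # The empty screen satisfies I = C = B*A*R; a foreground arm with
    # reflectivity A' yields I = B*A'*R, so |I - C| exceeds a threshold there.
    diff = cv2.absdiff(frame, reference)
    if diff.ndim == 3:
        diff = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(diff, delta, 255, cv2.THRESH_BINARY)
    return mask
```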
Preferably, when detecting the fingertip position, accurate fingertip localization is the basis for judging the touch position correctly; based on the curvature-extremum algorithm, the fingertip position is detected as follows (a code sketch follows this subsection):
1) detect the edge contour points of the foreground after the arm region is segmented, using the Canny operator;
2) compute the curvature of every edge contour point and obtain fingertip candidate points by searching for curvature maxima, eliminating interference from the gaps between fingers according to each candidate's distance from the center of gravity of the palm-region contour;
3) group candidate points that lie close together, as these belong to the same finger; candidates farther apart fall into different groups, so one palm region yields five groups, and the mean point of each group is returned as the final fingertip point.
Preferably, the curvature K of a contour point P_i is computed as
K(P_i) = (P_iP_{i-x} · P_iP_{i+x}) / (‖P_iP_{i-x}‖ × ‖P_iP_{i+x}‖),
where P_{i-x} denotes the x-th contour point before P_i, P_{i+x} the x-th point after it, x is an offset, and P_iP_{i±x} are the vectors from P_i to those points.
Preferably, since fingertips lie far from the center of gravity of the hand, the candidate point farthest from the center of gravity is taken as the fingertip position.
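A compact sketch of steps 1) to 3) using the curvature measure above; here findContours supplies the ordered contour (standing in for the Canny edge step), the cosine of the angle between the two contour vectors serves as K, and all numeric parameters are assumptions.

```python
import cv2
import numpy as np

def fingertips(mask, x=15, curv_thresh=0.7, group_dist=30):
    # Step 1: ordered closed contour of the segmented arm region.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return []
    pts = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(float)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return []
    center = np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

    # Step 2: curvature maxima; both fingertips and finger gaps are sharp,
    # so candidates too close to the palm centroid (the gaps) are rejected.
    n = len(pts)
    cands = []
    for i in range(n):
        v1 = pts[(i - x) % n] - pts[i]
        v2 = pts[(i + x) % n] - pts[i]
        k = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
        if k > curv_thresh and np.linalg.norm(pts[i] - center) > 2 * x:
            cands.append(pts[i])

    # Step 3: group nearby candidates (same finger) and return group means.
    groups = []
    for p in cands:
        for g in groups:
            if np.linalg.norm(p - np.mean(g, axis=0)) < group_dist:
                g.append(p)
                break
        else:
            groups.append([p])
    return [tuple(np.mean(g, axis=0).astype(int)) for g in groups[:5]]
```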
Preferably, in detecting the fingertip's touch on the interactive screen with adaptive structured-light coding, the encoding proceeds as follows: after the sub-controller detects the fingertip position in a frame, adaptive structured-light coding is applied within a neighborhood window centered on the fingertip, giving P = O + Δ_c, where O denotes the pixel value in the original projection image and Δ_c is the fixed encoding threshold added to the structured-light-coded embedded pixels.
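The encoding step P = O + Δ_c can be sketched as follows; the window size and Δ_c value are illustrative, and the comparison of the captured patch against the expected coded values, which decides touch versus hover, is left to the detector.

```python
import numpy as np

def embed_code(projection, tip_xy, delta_c=8, half=16):
    # Apply P = O + delta_c inside a (2*half+1)^2 window centred on the fingertip.
    coded = projection.astype(np.int16)  # astype copies, originals untouched
    x, y = tip_xy
    y0, y1 = max(0, y - half), y + half + 1
    x0, x1 = max(0, x - half), x + half + 1
    coded[y0:y1, x0:x1] += delta_c
    return np.clip(coded, 0, 255).astype(np.uint8)
```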
Compared with the prior art, the invention has the following beneficial effects:
1. In the display screen-based tourism interaction method of the invention, the sensing mechanism collects user data while the system's sub-controllers analyze and process the human-body data, store the virtual display content, and mediate the interaction between the user and that content; the user controls the transformation of the virtual display content with gestures and can view fine, vivid display content from multiple angles, which enhances the vividness of the display content and the interactivity of the exhibition and increases the user's enthusiasm for active participation.
2. The method combines laser-sensor measurement with camera recognition, exchanging data through a computer, to capture the user's instruction actions accurately and give real-time, precise feedback, thereby realizing human-computer interaction.
3. Through scene conversion, virtual-exhibit calling, and active recoloring of the virtual exhibit, the method draws the user closer to the exhibit, so that the user's understanding of the exhibit deepens as much as possible during the experience and the flexibility and interest of the exhibition are improved.
4. The user's gesture actions are captured by the laser sensor and the camera, and the user's fingertip touch instructions are judged and recognized, realizing natural and convenient human-computer interaction.
5. The same console can control several interactive screens simultaneously, allowing multiple users to participate during the same period and realizing multi-platform, multi-terminal interactive communication.
Drawings
FIG. 1 is a schematic view of the display screen-based tourism interaction method of the present invention.
FIG. 2 is a schematic structural diagram of the interactive system of the display screen-based tourism interaction method of the present invention.
FIG. 3 is a schematic diagram of the sub-controller of the display screen-based tourism interaction method of the present invention.
FIG. 4 is a schematic view of the sensing mechanism of the display screen-based tourism interaction method of the present invention.
FIG. 5 is a schematic diagram of the touch-control integrator of the display screen-based tourism interaction method of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the invention rather than all of them; all other embodiments obtained by those skilled in the art without creative work based on these embodiments fall within the protection scope of the invention.
Example 1
With reference to fig. 1, a display screen-based tourism interaction method includes the following steps:
S1, the user connects the mobile terminal and the console to the same local area network and selects the scene to be experienced through the navigation of the scene conversion unit on the mobile terminal or the console;
S2, a sub-controller of the interactive system in the console starts the projector, which projects the corresponding content onto an interactive screen in the projection area using 3D projection technology, and the user enters the projection area within the field of view of the sensing mechanism;
S3, on the interactive screen the user navigates the calling unit through gesture instructions, autonomously calling up and switching the virtual exhibit; the exhibit is replaced once the user confirms, and if the user does not operate, the exhibit preselected by the system is shown for appreciation;
S4, on the interactive screen the user navigates the color unit through gesture instructions, autonomously selects a color-changing part of the virtual exhibit, and applies the chosen color to it;
S5, after coloring is finished, the user may return to step S1 to select a different scene to experience.
Example 2
With reference to fig. 2, a display screen-based tourism interactive system comprises a console, a projector connected to the console, a projection area corresponding to the projector, and a sensing mechanism for monitoring the user's state. An interactive screen that receives the projector's projected image is arranged in the projection area, a sub-controller is arranged in the console, and the projector and the sensing mechanism are both communicatively connected to the sub-controller. After registering, the user connects the mobile terminal to the console's Bluetooth or Wi-Fi so that both are on the same local area network, starts the console, and selects the scene to experience on the mobile terminal or the console. The console then starts the projector, which projects the corresponding content onto the interactive screen in the projection area. Because the projector uses 3D projection technology, the user standing in the projection area merges with the displayed content and gains an immersive, on-the-scene experience. The user can change the experienced scene or a specific exhibit through the mobile terminal, and can touch the interactive screen with gestures to issue the corresponding instructions to the virtual exhibit, giving the user the sensation of viewing it at close range from multiple angles and enhancing the interest of the experience.
The sensing mechanism comprises a camera and a laser sensor. The camera captures the touch actions of the user's fingertips on the interactive screen, and its field of view completely covers the projector's image. The laser sensor emits a continuously scanning beam and receives image signals carrying depth information; when the user's gesture enters the two-dimensional laser plane emitted by the sensor, the raw signal it generates is sent to the sub-controller, which applies an image-denoising algorithm and a target positioning and tracking algorithm, and the processed signal is displayed on the interactive screen, realizing human-computer interaction.
The procedure is as follows: switch on the camera and the laser sensor, and connect and pre-run the console, the sensors, and the interactive screen; collect raw data with the camera and the laser sensor's lidar; convert the lidar coordinates into pixel-coordinate projections and perform time calibration, fusing the depth information with the visual information; after image denoising, apply motion filtering and optimize the tracking performance; finally, the functional design of the interactive system is realized with the three-dimensional interactive software Ventuz.
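The lidar-to-pixel conversion mentioned here is the standard extrinsic-plus-intrinsic camera projection; the following sketch uses placeholder calibration values (the R, t, and K numbers are illustrative, not calibrated).

```python
import numpy as np

def lidar_to_pixel(points_lidar, R, t, K):
    """Project 3-D lidar points into camera pixel coordinates."""
    pts = (R @ points_lidar.T).T + t   # extrinsic: lidar frame -> camera frame
    uvw = (K @ pts.T).T                # intrinsic projection
    return uvw[:, :2] / uvw[:, 2:3]    # perspective division by depth

# Example with placeholder calibration (identity rotation, toy intrinsics):
R = np.eye(3)
t = np.array([0.0, 0.0, 0.1])
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
pix = lidar_to_pixel(np.array([[0.2, -0.1, 1.5]]), R, t, K)
```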
The sensing mechanism further comprises a posture sensor for sensing the user's posture and a lamp for illuminating the user. The posture sensor is a three-axis acceleration sensor; it and the lamp are both connected to the sub-controller, which directs the lamp to focus its light around the user according to the limb movements the sensor captures. As the user walks through the projection area, the sub-controller judges the user's position from the posture sensor worn on the body and adjusts the lighting accordingly, making the user the focal point of the virtual scene.
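One way the sub-controller could turn three-axis acceleration samples into a lamp target is dead reckoning, sketched below; this is an assumption about the implementation, and real systems would fuse additional cues because double integration drifts. The time step is also illustrative.

```python
import numpy as np

def step_spotlight(pos, vel, accel, dt=0.02):
    # Integrate acceleration twice to dead-reckon the user's position,
    # then aim the lamp at the user's x-y location.
    vel = vel + accel * dt
    pos = pos + vel * dt
    return pos, vel, pos[:2]

pos, vel = np.zeros(3), np.zeros(3)
for _ in range(50):  # toy run: user accelerating gently along x
    pos, vel, aim = step_spotlight(pos, vel, np.array([0.1, 0.0, 0.0]))
```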
Example 3
With reference to fig. 3, the sub-controller comprises:
a scene conversion unit, comprising a conversion module and a music module: the user switches among the virtual scenes stored in the console through the conversion module, and the music module plays background music matched to the virtual scene the user has selected, so that the audience experiences different environments as if visiting them in person;
a calling unit, comprising a calling module and an audio explanation module: the calling module contains the virtual exhibits called in the different scenes, the user autonomously calls up and switches among them, and the audio explanation module plays the matching audio commentary; the exhibits available to the user change with the scene across different spatio-temporal dimensions, and the user is completely free to choose which objects to view in each scene;
a color unit, comprising an audio guide module and a color replacement module: under the guidance of the audio guide module, the user performs color display, color pickup, and color replacement on the virtual exhibit through the color replacement module, turning static display into dynamic interaction and increasing the interest of the experience;
a display module for fusing and displaying the acquired user image with the virtual scene;
a recognition module for recognizing users' fingertips and gestures and sending the users' gesture instructions to the sub-controller;
a response module for applying the corresponding transformation to the virtual exhibit according to the gesture instruction received by the recognition module;
and a sharing module that lets the user pose for photographs together with the virtual scene and the virtual exhibit, and store and share the photos.
Example 4
With reference to fig. 4, the laser sensor emits a continuously scanning beam and receives image signals carrying depth information; when a user's gesture enters the two-dimensional laser plane emitted by the sensor, the raw signal generated by the laser sensor is sent to the sub-controller, which applies an image-denoising algorithm and a target positioning and tracking algorithm, and the processed signal is displayed on the interactive screen, realizing human-computer interaction.
The sub-controller processes the signals collected by the laser and vision sensors and outputs them to the interactive screen after handling by the control and display software. Laser sensor: its lidar senses the interactive environment and transmits the target's position and speed information to the sub-controller. CCD camera: collects image information and provides a continuous image sequence to the sub-controller. Interactive screen: outputs the image signal processed by the computer and displays the operation result accurately.
Moving regions are distinguished by continuous measurement with the laser sensor's lidar, and moving targets are detected with the camera by a continuous-frame-difference method that evaluates pixel differences across successive images; the two detection results are merged, and a gesture instruction is determined jointly from the laser-sensor and multi-frame image detection results.
The specific steps of detecting a gesture instruction are as follows:
A1, take the user's gesture as the moving target; for the tracked-target set Δ = {J_i = (f, b_L, b_C), i = 1, 2, …, N}, the gesture detector formed by the laser sensor and the camera assigns a score s_C,i to the bounding box b_C,i of each image frame, and the detector's multi-frame detection result is recorded as S_C = {s_C,i, i = 1, 2, …, N};
A2, the more stably a moving target is tracked, the higher the probability that it is an object distinct from the background; with Γ_i the cost of associating the target between consecutive frames, the association score used to judge whether it is a gesture is S_L = {s_L,i = 1 - Γ_i, i = 1, 2, …, N}; at initialization (i = 1) there is no association cost, so s_L,1 = 1, and when association fails, s_L,i = -1;
A3, the gesture-tracking score is fused as
S_F = W[σ(S_C), S_L]^T, where W is a weight vector and σ is a function that projects S_C onto [-1, 1], with σ(s) = 2/(1 + exp(-s/4)) - 1;
when S_F > γ, the target is judged to be a gesture;
A4, let N_b be the number of frames the tracked target has survived; when N_b > δ_N and S_F < δ_2, the target cannot be confirmed as a gesture and is removed from the tracking list.
The virtual picture is projected onto the interactive screen by the projector; the user's interactive gesture images on the screen are captured by the camera and the laser sensor and transmitted through a data interface to a sub-controller in the console for processing; the foreground arm region is segmented and extracted, the fingertip position is detected, the fingertip's touch on the interactive screen is detected using adaptive structured-light coding, and the sub-controller executes the corresponding control, realizing the touch-projection interaction mode.
Example 5
On the basis of embodiment 4, the virtual picture is projected onto the interactive screen by the projector; the user's interactive gesture images on the screen are captured by the camera and the laser sensor and transmitted through a data interface to a sub-controller in the console for processing; the foreground arm region is segmented and extracted, the fingertip position is detected, the fingertip's touch on the interactive screen is detected using adaptive structured-light coding, and the sub-controller executes the corresponding control, realizing the touch-projection interaction mode.
When segmenting and extracting the foreground arm region on the interactive screen in the projection area, detection relies on the difference between the reflectivity of the arm's skin and that of the screen surface. Let R be the ambient-light contribution in the projection area, A the surface reflectivity of the interactive screen, B the color conversion function of the camera, and C the brightness value of the visual feedback image; then C = B × A × R;
if there is no foreground object on the interactive screen, the camera captures the pixel value I = C at the corresponding image point;
if an interactive foreground object with surface reflectivity A′ is present on the screen, the pixel value at the corresponding point is I = B × A′ × R.
When detecting the fingertip position, accurate fingertip localization is the basis for judging the touch position correctly; based on the curvature-extremum algorithm, the fingertip position is detected as follows:
1) detect the edge contour points of the foreground after the arm region is segmented, using the Canny operator;
2) compute the curvature of every edge contour point and obtain fingertip candidate points by searching for curvature maxima, eliminating interference from the gaps between fingers according to each candidate's distance from the center of gravity of the palm-region contour;
3) group candidate points that lie close together, as these belong to the same finger; candidates farther apart fall into different groups, so one palm region yields five groups, and the mean point of each group is returned as the final fingertip point.
The curvature K of a contour point P_i is computed as
K(P_i) = (P_iP_{i-x} · P_iP_{i+x}) / (‖P_iP_{i-x}‖ × ‖P_iP_{i+x}‖),
where P_{i-x} denotes the x-th contour point before P_i, P_{i+x} the x-th point after it, x is an offset, and P_iP_{i±x} are the vectors from P_i to those points.
Since fingertips lie far from the center of gravity of the hand, the candidate point farthest from the center of gravity is taken as the fingertip position.
When detecting the fingertip's touch on the interactive screen with adaptive structured-light coding, the encoding proceeds as follows: after the sub-controller detects the fingertip position in a frame, adaptive structured-light coding is applied within a neighborhood window centered on the fingertip, giving P = O + Δ_c, where O denotes the pixel value in the original projection image and Δ_c is the fixed encoding threshold added to the structured-light-coded embedded pixels.
Example 6
With reference to fig. 5, at least two sub-controllers are provided, all controlled by a main controller arranged in the console, and each sub-controller controls its corresponding projector, interactive screen, and sensing mechanism. The interactive screens are connected to the main controller through a touch-control integrator, which communicates with the interactive screens and the main controller over USB and connects to the mobile terminal through Socket communication.
The touch-control integrator and the mobile terminal device join the same local area network through a WIFI module, so that communication between the mobile terminal and the touch-control integrator takes place over the wireless network.
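A minimal sketch of Socket communication over the shared LAN; the port number and the JSON message schema are hypothetical, not specified in the patent.

```python
import json
import socket

def serve(host="0.0.0.0", port=9100):
    """Integrator-side listener: accept one scene-selection message."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        conn, addr = srv.accept()
        with conn:
            raw = conn.recv(4096)                  # one short control message
            msg = json.loads(raw.decode("utf-8"))  # e.g. {"scene": "grotto"}
            print(f"{addr} selected scene: {msg['scene']}")

def send_scene(scene, host, port=9100):
    """Mobile-terminal side: send the chosen scene to the integrator."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((host, port))
        cli.sendall(json.dumps({"scene": scene}).encode("utf-8"))
```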
The above description is only a preferred embodiment of the present invention and should not be taken as limiting it; any modification, equivalent replacement, or substitution made within the scope of the present invention shall be included in its protection scope.
Claims (9)
1. A tourism interaction method based on a display screen, characterized by comprising the following steps:
S1, the user connects the mobile terminal and the console to the same local area network and selects the scene to be experienced through the navigation of the scene conversion unit on the mobile terminal or the console;
S2, a sub-controller of the interactive system in the console starts the projector, which projects the corresponding content onto an interactive screen in the projection area using 3D projection technology, and the user enters the projection area within the field of view of the sensing mechanism;
S3, on the interactive screen the user navigates the calling unit through gesture instructions, autonomously calling up and switching the virtual exhibit; the exhibit is replaced once the user confirms, and if the user does not operate, the exhibit preselected by the system is shown for appreciation;
S4, on the interactive screen the user navigates the color unit through gesture instructions, autonomously selects a color-changing part of the virtual exhibit, and applies the chosen color to it;
S5, after coloring is finished, the user may return to step S1 to select a different scene to experience.
2. The display screen-based tourism interaction method according to claim 1, characterized in that: the interactive system in S2 comprises a console, a projector connected to the console, a projection area corresponding to the projector, and a sensing mechanism for monitoring the user's state, wherein an interactive screen that receives the projector's projected image is arranged in the projection area.
3. The display screen-based tourism interaction method according to claim 1, characterized in that: a sub-controller is arranged in the console, the projector and the sensing mechanism are both communicatively connected to the sub-controller, and the sub-controller in S2 comprises:
a scene conversion unit, comprising a conversion module and a music module, wherein the user switches among the virtual scenes stored in the console through the conversion module, and the music module plays background music matched to the virtual scene the user has selected;
a calling unit, comprising a calling module and an audio explanation module, wherein the calling module contains the virtual exhibits of the different scenes, the user autonomously calls up and switches among them, and the audio explanation module plays the audio commentary matched to the current exhibit;
and a color unit, comprising an audio guide module and a color replacement module, wherein the user performs color display, color pickup, and color replacement on the virtual exhibit through the color replacement module under the guidance of the audio guide module.
4. The display screen-based tourism interaction method according to claim 1, characterized in that: the sensing mechanism in S2 comprises a camera and a laser sensor for capturing the touch actions of the user's fingertips on the interactive screen, and the camera's field of view completely covers the projector's image.
5. The display screen-based tourism interaction method according to claim 4, characterized in that: the laser sensor emits a continuously scanning beam and receives image signals carrying depth information; when a user's gesture enters the two-dimensional laser plane emitted by the sensor, the raw signal generated by the laser sensor is sent to the sub-controller, which applies an image-denoising algorithm and a target positioning and tracking algorithm, and the processed signal is displayed on the interactive screen to realize human-computer interaction.
6. The display screen-based tourism interaction method according to claim 1, characterized in that: the sensing mechanism further comprises a posture sensor for sensing the user's posture and a lamp for illuminating the user; both are connected to the sub-controller, and the sub-controller directs the lamp to cast light around the user according to the limb movements of the user captured by the posture sensor.
7. The display screen-based tourism interaction method according to claim 1, characterized in that: at least two sub-controllers are provided in S2, all controlled by a main controller arranged in the console, and each sub-controller controls its corresponding projector, interactive screen, and sensing mechanism.
8. The display screen-based tourism interaction method according to claim 7, characterized in that: the interactive screen is connected to the main controller through a touch-control integrator; the touch-control integrator communicates with the interactive screen and the main controller over USB and connects to the mobile terminal through wireless communication.
9. The display screen-based tourism interaction method according to claim 1, characterized in that the sub-controller further comprises:
a display module for fusing and displaying the acquired user image with the virtual scene;
a recognition module for recognizing users' fingertips and gestures and sending the users' gesture instructions to the sub-controller;
a response module for applying the corresponding transformation to the virtual exhibit according to the gesture instruction received by the recognition module;
and a sharing module that lets the user pose for photographs together with the virtual scene and the virtual exhibit, and store and share the photos.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110776958.XA (CN113419634A) | 2021-07-09 | 2021-07-09 | Display screen-based tourism interaction method
CN202111495129.0A (CN113946223A) | 2021-07-09 | 2021-12-09 | Tourism interaction method adopting display screen
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110776958.XA (CN113419634A) | 2021-07-09 | 2021-07-09 | Display screen-based tourism interaction method
Publications (1)
Publication Number | Publication Date |
---|---|
CN113419634A | 2021-09-21
Family
ID=77720608
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110776958.XA (CN113419634A, pending) | Display screen-based tourism interaction method | 2021-07-09 | 2021-07-09
CN202111495129.0A (CN113946223A, withdrawn) | Tourism interaction method adopting display screen | 2021-07-09 | 2021-12-09
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111495129.0A (CN113946223A, withdrawn) | Tourism interaction method adopting display screen | 2021-07-09 | 2021-12-09
Country Status (1)
Country | Link |
---|---|
CN (2) | CN113419634A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114359473A (en) * | 2021-11-30 | 2022-04-15 | 长沙宏达威爱信息科技有限公司 | Virtual display system of literary composition product VR |
CN114397959A (en) * | 2021-12-13 | 2022-04-26 | 北京大麦文化传播有限公司 | Interactive prompting method, device and equipment |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115243022B (en) * | 2022-08-22 | 2024-03-05 | 周口师范学院 | Laser projection interactive display system |
Also Published As
Publication number | Publication date |
---|---|
CN113946223A (en) | 2022-01-18 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | |
Application publication date: 20210921 |