CN104166509B - Non-contact screen interaction method and system - Google Patents

Non-contact screen interaction method and system

Info

Publication number
CN104166509B
CN104166509B
Authority
CN
China
Prior art keywords
laser
screen
laser point
laser indicator
control action
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310186372.3A
Other languages
Chinese (zh)
Other versions
CN104166509A (en)
Inventor
刘兆祥
胡伟
廉士国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201310186372.3A
Publication of CN104166509A
Application granted
Publication of CN104166509B
Expired - Fee Related
Anticipated expiration

Landscapes

  • User Interface Of Digital Computer (AREA)
  • Position Input By Displaying (AREA)

Abstract

Embodiments of the present invention relate to the field of computer application technology and disclose a non-contact screen interaction method and system. The system includes a computer, a projector, a camera for capturing the image information projected on the screen, and two laser indicators for projecting laser beams onto the screen. The system further includes: a laser point locating module, configured to map the camera image coordinates of the two laser points projected on the screen by the two laser indicators to computer screen coordinates; a manipulation object identification module, configured to identify the manipulation object pointed at by the two laser indicators; a control action identification module, configured to identify the control action of the two laser indicators; and an interaction driving module, configured to convert the control action into a manipulation instruction, recognizable by the computer, directed at the manipulation object, and to transfer the instruction to the computer, which executes it. The embodiments of the present invention make non-contact screen interaction more natural and user-friendly.

Description

Non-contact screen interaction method and system
Technical field
The present invention relates to the field of computer application technology, and in particular to a non-contact screen interaction method and system.
Background art
Interactive large screens are presentation and interaction systems that have become popular in recent years. A user can manipulate the content displayed on the large screen by certain means, for example browsing by page turning, moving, or zooming. Accurately capturing and understanding the intention of the user's control action is the key to realizing the interaction. At the same time, control actions that are natural and user-friendly, conform to user habits, and meet the user's interaction experience requirements are also a focus of current large-screen interaction research.
In one prior-art solution, the user holds a single laser pen and draws a certain pattern on the large screen; a camera captures and recognizes the trajectory of the laser pen in real time and matches it with predefined patterns. After a successful match, the trajectory is mapped to an instruction recognizable by the computer, such as a mouse or keyboard action, thereby realizing interaction with the computer. However, when interacting with a single laser pen, the user needs to memorize special patterns, misoperation occurs easily, the interactive actions are not natural or user-friendly enough, and the interaction capability is insufficient.
Summary of the invention
Embodiments of the present invention disclose a non-contact screen interaction method and system, which can realize non-contact screen interaction in a more natural and user-friendly manner.
A first aspect of the embodiments of the present invention provides a non-contact screen interaction system. The system includes a computer for sending image information to a projector, a projector for displaying the image information on a screen, a camera for capturing the image information projected on the screen, and a laser emitting device for projecting laser beams onto the screen, where the laser emitting device includes two laser indicators. The system further includes an execution apparatus, and the execution apparatus includes:
a laser point locating module, configured to detect in real time, from the image information captured by the camera, the position information and time information of the two laser points projected on the screen by the two laser indicators, and to map the camera image coordinates of the two laser points in the image information to computer screen coordinates;
a manipulation object identification module, configured to identify, according to the computer screen coordinates and dwell time information of the two laser points, the manipulation object pointed at by the two laser indicators;
a control action identification module, configured to identify the control action of the two laser indicators according to the relative motion trend of the two laser points, or according to the absolute motion trends of the two laser points respectively;
an interaction driving module, configured to convert the control action into a manipulation instruction, recognizable by the computer, directed at the manipulation object, and to transfer the instruction to the computer, which executes the manipulation instruction.
In a first possible implementation of the first aspect of the embodiments of the present invention, the laser point locating module includes:
a calibration unit, configured to determine the mapping relationship between the camera image coordinate system and the computer screen coordinate system;
a positioning unit, configured to detect in real time, from the image information captured by the camera, the position information and time information of the two laser points projected on the screen by the two laser indicators, and to map, according to the mapping relationship, the camera image coordinates of the two laser points in the image information to computer screen coordinates.
In a second possible implementation of the first aspect of the embodiments of the present invention, the manipulation object identification module is specifically configured to:
determine that an object selection event occurs when the computer screen coordinates of at least one of the two laser points remain unchanged for a preset time;
when an object selection event occurs for one of the two laser points, determine that an object is the manipulation object if the computer screen coordinates of that laser point fall within the display area of that object;
when object selection events occur for both laser points, determine that two object selection events occur if the time difference between the two events is less than a threshold; determine that an object is the manipulation object if the computer screen coordinates of both laser points fall within the display area of that same object; and determine that two different objects are two manipulation objects if the computer screen coordinates of the two laser points fall within the display areas of the two different objects;
when object selection events occur for both laser points, determine that the later object selection event is invalid if the time difference between the two events is greater than the threshold, and determine that an object is the manipulation object if the computer screen coordinates of the laser point whose object selection event occurred first fall within the display area of that object.
With reference to the first aspect of the embodiments of the present invention, or to either of the first and second possible implementations of the first aspect, in a third possible implementation of the first aspect of the embodiments of the present invention, the control action identification module is specifically configured to:
take the laser point projected on the screen by one of the two laser indicators as the starting point of a vector and the laser point projected on the screen by the other of the two laser indicators as the end point of the vector, and identify the control action of the two laser indicators according to the variation trend of the modulus and direction angle of the vector; or
identify the control action of the two laser indicators according to the movement trajectories of the laser points projected on the screen by the two laser indicators, respectively.
With reference to the first aspect of the embodiments of the present invention, or to either of the first and second possible implementations of the first aspect, in a fourth possible implementation of the first aspect of the embodiments of the present invention, the control action identification module is specifically configured to:
take the laser point projected on the screen by one of the two laser indicators as the starting point of a vector and the laser point projected on the screen by the other of the two laser indicators as the end point of the vector; when the modulus of the vector keeps decreasing, the semantics of the control action of the two laser indicators is picture zoom-out; when the modulus of the vector keeps increasing, the semantics of the control action of the two laser indicators is picture zoom-in; and when the direction angle of the vector changes, the semantics of the control action of the two laser indicators is a corresponding picture rotation; or
identify the control action of the two laser indicators according to the movement trajectories of the laser points projected on the screen by the two laser indicators, respectively.
A second aspect of the embodiments of the present invention provides a non-contact screen interaction method. The method includes: a computer sends image information to a projector, the projector displays the image information on a screen, a camera captures the image information projected on the screen, and a laser emitting device projects laser beams onto the screen, where the laser emitting device includes two laser indicators. The method further includes:
a laser point locating module of an execution apparatus detects in real time, from the image information captured by the camera, the position information and time information of the two laser points projected on the screen by the two laser indicators, and maps the camera image coordinates of the two laser points in the image information to computer screen coordinates;
a manipulation object identification module of the execution apparatus identifies, according to the computer screen coordinates and dwell time information of the two laser points, the manipulation object pointed at by the two laser indicators;
a control action identification module of the execution apparatus identifies the control action of the two laser indicators according to the relative motion trend of the two laser points, or according to the absolute motion trends of the two laser points respectively;
an interaction driving module of the execution apparatus converts the control action into a manipulation instruction, recognizable by the computer, directed at the manipulation object, and transfers the instruction to the computer, which executes the manipulation instruction.
In a first possible implementation of the second aspect of the embodiments of the present invention, the step in which the laser point locating module of the execution apparatus detects in real time, from the image information captured by the camera, the position information and time information of the two laser points projected on the screen by the two laser indicators, and maps the camera image coordinates of the two laser points in the image information to computer screen coordinates includes:
the laser point locating module of the execution apparatus determines the mapping relationship between the camera image coordinate system and the computer screen coordinate system;
the laser point locating module detects in real time, from the image information captured by the camera, the position information and time information of the two laser points projected on the screen by the two laser indicators, and maps, according to the mapping relationship, the camera image coordinates of the two laser points in the image information to computer screen coordinates.
In a second possible implementation of the second aspect of the embodiments of the present invention, the step in which the manipulation object identification module of the execution apparatus identifies, according to the computer screen coordinates and dwell time information of the two laser points, the manipulation object pointed at by the two laser indicators includes:
the manipulation object identification module of the execution apparatus determines that an object selection event occurs when the computer screen coordinates of at least one of the two laser points remain unchanged for a preset time;
when an object selection event occurs for one of the two laser points, determining that an object is the manipulation object if the computer screen coordinates of that laser point fall within the display area of that object;
when object selection events occur for both laser points, determining that two object selection events occur if the time difference between the two events is less than a threshold; determining that an object is the manipulation object if the computer screen coordinates of both laser points fall within the display area of that same object; and determining that two different objects are two manipulation objects if the computer screen coordinates of the two laser points fall within the display areas of the two different objects;
when object selection events occur for both laser points, determining that the later object selection event is invalid if the time difference between the two events is greater than the threshold, and determining that an object is the manipulation object if the computer screen coordinates of the laser point whose object selection event occurred first fall within the display area of that object.
With reference to the second aspect of the embodiments of the present invention, or to either of the first and second possible implementations of the second aspect, in a third possible implementation of the second aspect of the embodiments of the present invention, the step in which the control action identification module of the execution apparatus identifies the control action of the two laser indicators according to the relative motion trend of the two laser points, or according to the absolute motion trends of the two laser points respectively, includes:
the control action identification module of the execution apparatus takes the laser point projected on the screen by one of the two laser indicators as the starting point of a vector and the laser point projected on the screen by the other of the two laser indicators as the end point of the vector, and identifies the control action of the two laser indicators according to the variation trend of the modulus and direction angle of the vector; or
identifies the control action of the two laser indicators according to the movement trajectories of the laser points projected on the screen by the two laser indicators, respectively.
With reference to the second aspect of the embodiments of the present invention, or to either of the first and second possible implementations of the second aspect, in a fourth possible implementation of the second aspect of the embodiments of the present invention, the step in which the control action identification module of the execution apparatus identifies the control action of the two laser indicators according to the relative motion trend of the two laser points, or according to the absolute motion trends of the two laser points respectively, includes:
the control action identification module of the execution apparatus takes the laser point projected on the screen by one of the two laser indicators as the starting point of a vector and the laser point projected on the screen by the other of the two laser indicators as the end point of the vector; when the modulus of the vector keeps decreasing, the semantics of the control action of the two laser indicators is picture zoom-out; when the modulus of the vector keeps increasing, the semantics of the control action of the two laser indicators is picture zoom-in; and when the direction angle of the vector changes, the semantics of the control action of the two laser indicators is a corresponding picture rotation; or
identifies the control action of the two laser indicators according to the movement trajectories of the laser points projected on the screen by the two laser indicators, respectively.
In the embodiments of the present invention, after the computer sends image information to the projector, the projector displays the image information on the screen, the camera captures the image information projected on the screen, and the two laser indicators project laser beams onto the screen, the laser point locating module can detect in real time, from the image information captured by the camera, the position information and time information of the two laser points projected on the screen by the two laser indicators, and map the camera image coordinates of the two laser points in the image information to computer screen coordinates; the manipulation object identification module then identifies, according to the computer screen coordinates and dwell time information of the two laser points, the manipulation object pointed at by the two laser indicators; the control action identification module identifies the control action of the two laser indicators according to the relative motion trend of the two laser points, or according to the absolute motion trends of the two laser points respectively; and finally the interaction driving module can convert the semantics of the control action, based on the manipulation object, into a manipulation instruction that the computer can recognize and execute. Through the embodiments of the present invention, the user performs interactive operations on the projection screen with natural gestures using two laser indicators, which solves the problem of insufficient interaction capability of a single laser indicator, conforms to user operation habits, and makes the interactive operation more natural and user-friendly.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Apparently, the accompanying drawings in the following description show merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these accompanying drawings without creative effort.
Fig. 1 is a schematic structural diagram of a non-contact screen interaction system according to an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of another non-contact screen interaction system according to an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of still another non-contact screen interaction system according to an embodiment of the present invention;
Fig. 4 is a flowchart of a non-contact screen interaction method according to an embodiment of the present invention;
Fig. 5 is a flowchart of another non-contact screen interaction method according to an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Embodiments of the present invention disclose a non-contact screen interaction method and system, which can realize non-contact screen interaction in a more natural and user-friendly manner. They are described in detail below.
Referring to Fig. 1, Fig. 1 shows a schematic structural diagram of a non-contact screen interaction system according to an embodiment of the present invention. As shown in Fig. 1, the non-contact screen interaction system may include a computer 10 for sending image information to a projector, a projector 30 for displaying the image information on a screen 20, a camera 40 for capturing the image information projected on the screen 20, and a laser emitting device 50 for projecting laser beams onto the screen 20. The laser emitting device 50 may include two laser indicators; specifically, a laser indicator may be a laser pen or a wearable laser indicator worn on a finger, which is not limited in this embodiment of the present invention. The wavelength of the laser emitted by the laser emitting device 50 may be 650 nm, and a high-pass filter with a cutoff wavelength of 650 nm may be added to the lens of the camera 40 to effectively reduce the influence of ambient light. As shown in Fig. 1, the non-contact screen interaction system may further include an execution apparatus 60, and the execution apparatus 60 may include a laser point locating module 601, a manipulation object identification module 602, a control action identification module 603, and an interaction driving module 604, where:
the laser point locating module 601 is configured to detect in real time, from the image information captured by the camera 40, the position information and time information of the two laser points projected on the screen 20 by the two laser indicators, and to map the camera image coordinates of the two laser points in the image information to computer screen coordinates;
the manipulation object identification module 602 is configured to identify, according to the computer screen coordinates and dwell time information of the two laser points, the manipulation object pointed at by the two laser indicators.
Specifically, the method of identifying the manipulation object is as follows:
when the computer screen coordinates of at least one of the two laser points projected on the screen 20 by the two laser indicators remain unchanged for a preset time (for example, 2 seconds), it is determined that an object selection event occurs;
when an object selection event occurs for one of the two laser points, if the computer screen coordinates of that laser point are within the display area of an object, that object is determined to be the manipulation object;
when object selection events occur for both laser points, if the time difference between the two events is less than a threshold, it is determined that two object selection events occur; if the computer screen coordinates of both laser points fall within the display area of the same object, that object is determined to be the manipulation object; if the computer screen coordinates of the two laser points fall within the display areas of two different objects, the two different objects are determined to be two manipulation objects;
when object selection events occur for both laser points, if the time difference between the two events is greater than the threshold, the later object selection event is determined to be invalid; if the computer screen coordinates of the laser point whose object selection event occurred first are within the display area of an object, that object is determined to be the manipulation object.
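For illustration only, the dwell-and-threshold selection rules above can be sketched as a small Python routine. The helper `object_at`, the sample format, and the pairing threshold value are assumptions made for this sketch; the patent does not prescribe any particular data structures or values beyond the 2-second dwell example.

```python
DWELL_TIME = 2.0      # preset dwell time in seconds (the text gives 2 s as an example)
PAIR_THRESHOLD = 0.5  # assumed threshold on the time difference between two selection events


def selection_event(samples, dwell=DWELL_TIME):
    """Return (x, y, t) if the laser point stayed at the same screen coordinate
    for at least `dwell` seconds, otherwise None. `samples` is a time-ordered
    list of (x, y, t) tuples in computer screen coordinates."""
    if not samples:
        return None
    x0, y0, t0 = samples[-1]
    for x, y, t in reversed(samples):
        if (x, y) != (x0, y0):
            return None
        if t0 - t >= dwell:
            return (x0, y0, t0)
    return None


def manipulation_objects(event_a, event_b, object_at):
    """Apply the selection rules. `object_at(x, y)` returns the object whose
    display area contains (x, y), or None. Returns the manipulation object(s)."""
    if event_a and event_b:
        (xa, ya, ta), (xb, yb, tb) = event_a, event_b
        if abs(ta - tb) > PAIR_THRESHOLD:
            # the later selection event is invalid; keep only the earlier one
            first = event_a if ta <= tb else event_b
            obj = object_at(first[0], first[1])
            return [obj] if obj else []
        obj_a, obj_b = object_at(xa, ya), object_at(xb, yb)
        if obj_a and obj_a is obj_b:
            return [obj_a]                            # both points select the same object
        return [o for o in (obj_a, obj_b) if o]       # two different manipulation objects
    event = event_a or event_b
    if event:
        obj = object_at(event[0], event[1])
        return [obj] if obj else []
    return []
```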
The control action identification module 603 is configured to identify the control action of the two laser indicators according to the relative motion trend of the two laser points projected on the screen 20 by the two laser indicators, or according to the absolute motion trends of the two laser points respectively.
The specific method of identifying the control action is as follows:
On the basis that the manipulation object has been determined, the laser point projected on the screen 20 by one of the two laser indicators is taken as the starting point of a vector, and the laser point projected on the screen 20 by the other of the two laser indicators is taken as the end point of the vector; the control action of the two laser indicators is identified according to the variation trend of the modulus and direction angle of the vector. For example, when the modulus of the determined vector keeps decreasing, the semantics of the control action of the two laser indicators is image zoom-out; when the modulus of the determined vector keeps increasing, the semantics of the control action of the two laser indicators is image zoom-in; when the direction angle of the determined vector changes, the semantics of the control action of the two laser indicators is a corresponding image rotation, that is, when the determined vector rotates counterclockwise, the manipulated picture rotates counterclockwise accordingly, and when the determined vector rotates clockwise, the manipulated picture also rotates clockwise. Of course, the control action of the two laser indicators may also be identified according to the movement trajectories of the laser points projected on the screen 20 by the two laser indicators respectively, for example, operations such as moving the picture position or turning pages.
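The vector-based rule just described (zoom out when the modulus shrinks, zoom in when it grows, rotate when the direction angle changes) can be illustrated with the following sketch. The function name and the small tolerances are assumptions for illustration; the patent only specifies the qualitative rule.

```python
import math

def classify_two_point_gesture(p_start_prev, p_end_prev, p_start, p_end,
                               scale_tol=0.02, angle_tol=0.05):
    """Compare the vector between the two laser points in the previous and current
    frames and return the gesture semantics described in the text:
    'zoom_out', 'zoom_in', 'rotate_ccw', 'rotate_cw', or 'none'."""
    vx0, vy0 = p_end_prev[0] - p_start_prev[0], p_end_prev[1] - p_start_prev[1]
    vx1, vy1 = p_end[0] - p_start[0], p_end[1] - p_start[1]

    mod0, mod1 = math.hypot(vx0, vy0), math.hypot(vx1, vy1)
    ang0, ang1 = math.atan2(vy0, vx0), math.atan2(vy1, vx1)
    dang = math.atan2(math.sin(ang1 - ang0), math.cos(ang1 - ang0))  # wrap to (-pi, pi]

    if abs(dang) > angle_tol:
        # direction angle changed -> rotate the picture in the same sense
        return 'rotate_ccw' if dang > 0 else 'rotate_cw'
    if mod0 > 0 and mod1 < mod0 * (1 - scale_tol):
        return 'zoom_out'   # modulus keeps decreasing -> picture zoom-out
    if mod1 > mod0 * (1 + scale_tol):
        return 'zoom_in'    # modulus keeps increasing -> picture zoom-in
    return 'none'
```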
The interaction driving module 604 is configured to convert the control action into a manipulation instruction, recognizable by the computer, directed at the manipulation object, to transfer the instruction to the computer 10, and to have the computer 10 execute the manipulation instruction.
It should be noted that, in this embodiment, the laser point locating module 601, the manipulation object identification module 602, the control action identification module 603, and the interaction driving module 604 in the execution apparatus 60 may be located in the computer 10. Of course, in other embodiments, these modules may also be located in an upper computer, such as an embedded device, that is communicatively connected with the computer 10 and the camera 40. Furthermore, non-contact screen interaction may also be realized with multiple laser indicators, for example multiple wearable laser indicators worn on fingers. A specific implementation may be: the manipulation object is determined from the position information and dwell time information of the multiple laser points projected on the screen 20 by the multiple laser indicators, and the control action of the multiple laser indicators is then identified from the relative motion trend of these multiple laser points, or from the absolute motion trends of these multiple laser points respectively. Such a non-contact screen interaction system realized with multiple laser indicators can be widely applied, for example, in the field of games. In addition, the screen 20 may be not only a large projection screen, but also a large liquid crystal screen or a screen of another form; when the screen 20 is a large liquid crystal screen, the projector 30 may be omitted, that is, the computer 10 outputs the image information directly to the screen 20, while the arrangement of the other devices in Fig. 1 remains unchanged.
With the non-contact screen interaction system provided in Fig. 1, interactive operations can be performed on the screen with natural gestures using two laser indicators, which solves the problem of insufficient interaction capability of a single laser indicator, conforms to user operation habits, and makes the interactive operation more natural and user-friendly.
Referring to Fig. 2, Fig. 2 shows a schematic structural diagram of another non-contact screen interaction system according to an embodiment of the present invention. The non-contact screen interaction system shown in Fig. 2 is obtained by optimizing the non-contact screen interaction system shown in Fig. 1. In the non-contact screen interaction system shown in Fig. 2, the laser point locating module 601 further includes a calibration unit 6011 and a positioning unit 6012, where:
the calibration unit 6011 is configured to determine the mapping relationship between the camera image coordinate system and the computer screen coordinate system. The method of determining this mapping relationship is known to a person of ordinary skill in the art and is therefore only briefly introduced here; the method is as follows:
The camera 40 photographs a calibration chessboard from multiple orientations, and the intrinsic parameters of the camera, including the distortion parameters, are obtained using the classical Zhang Zhengyou calibration method. The computer 10 projects a full-screen chessboard image onto the screen 20 and captures the full screen to obtain an image, and the chessboard corner coordinates (X', Y') in the computer screen are obtained using the classical Harris corner detection algorithm. The camera 40 captures the large projection screen to obtain an image, distortion correction is performed on the image according to the distortion parameters of the camera 40, and the chessboard corner coordinates (x', y') in the camera image are obtained using the classical Harris corner detection algorithm. The homography matrix H is then calculated from the obtained chessboard corner coordinates (X', Y') of the computer screen and the chessboard corner coordinates (x', y') of the camera image, i.e., for each pair of corresponding corners,

s · [X', Y', 1]^T = H · [x', y', 1]^T    (1)

where s is a scale factor and H is the 3×3 homography matrix to be solved.
After the homography matrix H is calculated, the mapping relationship between the camera image coordinate system and the computer screen coordinate system can be determined by the homography matrix H, i.e.,

[X, Y, 1]^T ∝ H · [x, y, 1]^T    (2)

where, in formula (2), (X, Y) are the coordinates in the computer screen coordinate system and (x, y) are the coordinates in the camera image coordinate system.
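As an illustration of this calibration step, the sketch below computes H from corresponding chessboard corners with OpenCV. It assumes the camera intrinsics and distortion coefficients have already been obtained (for example with cv2.calibrateCamera, which implements Zhang's method) and uses cv2.findChessboardCorners for corner extraction rather than the Harris detector named in the text; these library choices are assumptions for the sketch, not part of the patent.

```python
import cv2
import numpy as np

def compute_screen_homography(screen_chessboard_img, camera_img,
                              pattern_size, camera_matrix, dist_coeffs):
    """Return the 3x3 homography H mapping camera image coordinates (x, y)
    to computer screen coordinates (X, Y), as in formulas (1) and (2)."""
    # Corners of the chessboard as rendered on the computer screen -> (X', Y')
    ok_s, screen_corners = cv2.findChessboardCorners(
        cv2.cvtColor(screen_chessboard_img, cv2.COLOR_BGR2GRAY), pattern_size)
    # Undistort the camera view of the projected chessboard, then find (x', y')
    undistorted = cv2.undistort(camera_img, camera_matrix, dist_coeffs)
    ok_c, cam_corners = cv2.findChessboardCorners(
        cv2.cvtColor(undistorted, cv2.COLOR_BGR2GRAY), pattern_size)
    if not (ok_s and ok_c):
        raise RuntimeError("chessboard corners not found")
    H, _ = cv2.findHomography(cam_corners.reshape(-1, 2),
                              screen_corners.reshape(-1, 2), cv2.RANSAC)
    return H

def camera_to_screen(H, x, y):
    """Apply formula (2): map a camera image coordinate to a screen coordinate."""
    X, Y, w = H @ np.array([x, y, 1.0])
    return X / w, Y / w
```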
The positioning unit 6012 is configured to detect in real time, from the image information captured by the camera 40, the position information and time information of the two laser points projected on the screen 20 by the two laser indicators, and to map, according to the mapping relationship determined by the calibration unit 6011, the camera image coordinates of the two laser points in the image information to computer screen coordinates. The specific method of mapping the camera image coordinates of the laser points in the image information to computer screen coordinates is as follows:
distortion correction is performed on the image captured by the camera in real time according to the distortion parameters obtained by the calibration unit 6011;
threshold segmentation is performed on the distortion-corrected image captured by the camera, and dilation and region labeling are then performed; it should be noted that the threshold segmentation, dilation, and region labeling here are existing techniques and are therefore not described in detail herein;
the centroid coordinates of the different regions, which are the camera image coordinates of the laser points, are calculated as follows:

x_c = (1/N) Σ x_i,  y_c = (1/N) Σ y_i    (3)

where, in formula (3), (x_i, y_i) are the coordinates of all the points belonging to a laser point region, and N is the number of points belonging to that region.
According to formula (3) and the mapping formula (2) obtained by the calibration unit 6011, the camera image coordinates of the two laser points projected on the screen 20 by the two laser indicators are mapped to the computer screen coordinate system.
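A minimal sketch of this laser point localization pipeline (distortion correction, threshold segmentation, dilation, region labeling, centroid, homography mapping) using OpenCV follows; the threshold value and the structuring element are illustrative assumptions, since the patent leaves them unspecified.

```python
import cv2
import numpy as np

def locate_laser_points(frame, camera_matrix, dist_coeffs, H,
                        threshold=220, max_points=2):
    """Return up to `max_points` laser point positions in computer screen coordinates."""
    # Distortion correction with the parameters from the calibration unit
    undistorted = cv2.undistort(frame, camera_matrix, dist_coeffs)
    gray = cv2.cvtColor(undistorted, cv2.COLOR_BGR2GRAY)
    # Threshold segmentation: the laser points are the brightest regions
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    # Dilation and region (connected component) labeling
    binary = cv2.dilate(binary, np.ones((3, 3), np.uint8))
    n_labels, labels = cv2.connectedComponents(binary)
    points = []
    for label in range(1, n_labels):             # label 0 is the background
        ys, xs = np.nonzero(labels == label)
        xc, yc = xs.mean(), ys.mean()            # centroid of the region, formula (3)
        Xh, Yh, w = H @ np.array([xc, yc, 1.0])  # map to screen coordinates, formula (2)
        points.append((Xh / w, Yh / w))
    # Keep at most `max_points` regions (two laser indicators in this embodiment)
    return points[:max_points]
```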
With the non-contact screen interaction system provided in Fig. 2, interactive operations can likewise be performed on the screen with natural gestures using two laser indicators, which solves the problem of insufficient interaction capability of a single laser indicator, conforms to user operation habits, and makes the interactive operation more natural and user-friendly.
Referring to Fig. 3, Fig. 3 is a structural diagram of still another non-contact screen interaction system disclosed in an embodiment of the present invention, which is configured to perform the non-contact screen interaction method disclosed in the embodiments of the present invention. As shown in Fig. 3, the system may include: a computer 301 for sending image information to a projector, a projector 303 for displaying the image information on a screen 302, a camera 304 for capturing the image information projected on the screen 302, and a laser emitting device 305 for projecting laser beams onto the screen 302. The laser emitting device 305 includes two laser indicators; specifically, a laser indicator may be a laser pen or a wearable laser indicator worn on a finger. As shown in Fig. 3, the non-contact screen interaction system further includes an execution apparatus 306, and the execution apparatus 306 includes: at least one processor 3061, such as a CPU, at least one network interface 3064 or other user interface 3063, a memory 3065, and at least one communication bus 3062. The communication bus 3062 is configured to implement the connection and communication between these components. The user interface 3063 may optionally include a USB interface, other standard interfaces, and wired interfaces. The network interface 3064 may optionally include a Wi-Fi interface and other wireless interfaces. The memory 3065 may include a high-speed RAM memory, and may also include a non-volatile memory, for example at least one magnetic disk memory. The memory 3065 may optionally include at least one storage device located remotely from the aforementioned processor 3061.
In some implementations, the memory 3065 stores the following elements, executable modules or data structures, or a subset or superset thereof:
an operating system 30651, containing various system programs for implementing various basic services and processing hardware-based tasks;
an application module 30652, containing various application programs such as a device control service program and a device identification service program, for implementing various application services.
Specifically, the processor 3061 is configured to call the programs stored in the memory 3065 and perform the following operations:
detecting in real time, from the image information captured by the camera 304, the position information and time information of the two laser points projected on the screen 302 by the two laser indicators, and mapping the camera image coordinates of the two laser points in the image information to computer screen coordinates;
identifying, according to the computer screen coordinates and dwell time information of the two laser points, the manipulation object pointed at by the two laser indicators;
identifying the control action of the two laser indicators according to the relative motion trend of the two laser points, or according to the absolute motion trends of the two laser points respectively;
converting the control action into a manipulation instruction, recognizable by the computer, directed at the manipulation object, and transferring the instruction to the computer 301, which executes the manipulation instruction.
In one embodiment, the specific manner in which the processor 3061 maps the camera image coordinates of the two laser points in the image information to computer screen coordinates is:
determining the mapping relationship between the camera image coordinate system and the computer screen coordinate system;
detecting in real time, from the image information captured by the camera 304, the position information and time information of the two laser points projected on the screen 302 by the two laser indicators, and mapping, according to the determined mapping relationship, the camera image coordinates of the two laser points in the image information to computer screen coordinates.
In one embodiment, the specific manner in which the processor 3061 identifies the manipulation object pointed at by the two laser indicators is:
when the computer screen coordinates of at least one of the two laser points projected on the screen 302 by the two laser indicators remain unchanged for a preset time, determining that an object selection event occurs;
when an object selection event occurs for one of the two laser points, determining that an object is the manipulation object if the computer screen coordinates of that laser point are within the display area of that object;
when object selection events occur for both laser points, if the time difference between the two events is less than a threshold, determining that two object selection events occur; if the computer screen coordinates of both laser points fall within the display area of the same object, determining that the object is the manipulation object; and if the computer screen coordinates of the two laser points fall within the display areas of two different objects, determining that the two different objects are two manipulation objects;
when object selection events occur for both laser points, if the time difference between the two events is greater than the threshold, determining that the later object selection event is invalid, and if the computer screen coordinates of the laser point whose object selection event occurred first are within the display area of an object, determining that the object is the manipulation object.
In one embodiment, the specific manner in which the processor 3061 identifies the control action is:
taking the laser point projected on the screen 302 by one of the two laser indicators as the starting point of a vector and the laser point projected on the screen 302 by the other of the two laser indicators as the end point of the vector, and identifying the control action of the two laser indicators according to the variation trend of the modulus and direction angle of the vector; or identifying the control action of the two laser indicators according to the movement trajectories of the laser points projected on the screen 302 by the two laser indicators, respectively.
In another embodiment, the specific manner in which the processor 3061 identifies the control action is:
taking the laser point projected on the screen 302 by one of the two laser indicators as the starting point of a vector and the laser point projected on the screen 302 by the other of the two laser indicators as the end point of the vector; when the modulus of the vector keeps decreasing, the semantics of the control action of the two laser indicators is picture zoom-out; when the modulus of the vector keeps increasing, the semantics of the control action of the two laser indicators is picture zoom-in; and when the direction angle of the vector changes, the semantics of the control action of the two laser indicators is a corresponding picture rotation; or identifying the control action of the two laser indicators according to the movement trajectories of the laser points projected on the screen by the two laser indicators, respectively.
With the non-contact screen interaction system provided in Fig. 3, interactive operations can be performed on the screen with natural gestures using two laser indicators, which solves the problem of insufficient interaction capability of a single laser indicator, conforms to user operation habits, and makes the interactive operation more natural and user-friendly.
Referring to Fig. 4, Fig. 4 shows a flowchart of a non-contact screen interaction method according to an embodiment of the present invention. As shown in Fig. 4, the non-contact screen interaction method may include the following steps.
S401: The computer sends image information to the projector, the projector displays the image information on the screen, the camera captures the image information projected on the screen, and the laser emitting device projects laser beams onto the screen.
Specifically, the laser emitting device includes two laser indicators, for example two laser pens or wearable laser indicators worn on fingers.
S402: The laser point locating module in the execution apparatus detects in real time, from the image information captured by the camera, the position information and time information of the two laser points projected on the screen by the two laser indicators, and maps the camera image coordinates of the two laser points in the image information to computer screen coordinates.
Specifically, the mapping relationship between the camera image coordinate system and the computer screen coordinate system may first be established, and the camera image coordinates of the two laser points in the image information are then mapped to computer screen coordinates.
S403: The manipulation object identification module in the execution apparatus identifies, according to the computer screen coordinates and dwell time information of the two laser points, the manipulation object pointed at by the two laser indicators.
Specifically, the method of identifying the manipulation object is as follows:
when the computer screen coordinates of at least one of the two laser points projected on the screen by the two laser indicators remain unchanged for a preset time (for example, 2 seconds), it is determined that an object selection event occurs; when an object selection event occurs for one of the two laser points, if the computer screen coordinates of that laser point are within the display area of an object, that object is determined to be the manipulation object;
when object selection events occur for both laser points, if the time difference between the two events is less than a threshold, it is determined that two object selection events occur; if the computer screen coordinates of both laser points fall within the display area of the same object, that object is determined to be the manipulation object; if the computer screen coordinates of the two laser points fall within the display areas of two different objects, the two different objects are determined to be two manipulation objects;
when object selection events occur for both laser points, if the time difference between the two events is greater than the threshold, the later object selection event is determined to be invalid; if the computer screen coordinates of the laser point whose object selection event occurred first are within the display area of an object, that object is determined to be the manipulation object.
S404: The control action identification module in the execution apparatus identifies the control action of the two laser indicators according to the relative motion trend of the two laser points, or according to the absolute motion trends of the two laser points respectively.
Specifically, the method of identifying the control action is as follows:
On the basis that the manipulation object has been determined, the laser point projected on the screen by one of the two laser indicators is taken as the starting point of a vector, and the laser point projected on the screen by the other of the two laser indicators is taken as the end point of the vector; the control action of the two laser indicators is identified according to the variation trend of the modulus and direction angle of the vector. For example, when the modulus of the determined vector keeps decreasing, the semantics of the control action of the two laser indicators is image zoom-out; when the modulus of the determined vector keeps increasing, the semantics of the control action of the two laser indicators is image zoom-in; when the direction angle of the determined vector changes, the semantics of the control action of the two laser indicators is a corresponding image rotation, that is, when the determined vector rotates counterclockwise, the manipulated picture rotates counterclockwise accordingly, and when the determined vector rotates clockwise, the manipulated picture also rotates clockwise. Of course, the control action of the two laser indicators may also be identified according to the movement trajectories of the laser points projected on the screen by the two laser indicators respectively, for example, operations such as moving the picture position or turning pages.
S405: The interaction driving module in the execution apparatus converts the control action into a manipulation instruction, recognizable by the computer, directed at the manipulation object, and transfers the instruction to the computer, which executes the manipulation instruction.
Specifically, after the interaction driving module converts the semantics of the control action, based on the manipulation object, into a manipulation instruction that the computer can recognize and execute, the instruction may be transferred to the computer, and the computer executes the manipulation instruction to drive the interactive operation.
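How the interaction driving module might translate the gesture semantics into a computer-recognizable instruction can be sketched as follows. The command names and the dispatch table are purely illustrative assumptions; the patent only requires that the result be an instruction the computer can recognize and execute.

```python
def drive_interaction(semantics, manipulation_object, send_to_computer):
    """Convert the recognized control action semantics into a manipulation
    instruction for the given object and hand it to the computer.
    `send_to_computer` is an assumed callback that delivers the instruction."""
    instruction_table = {
        'zoom_in':    {'command': 'scale', 'factor': 1.1},
        'zoom_out':   {'command': 'scale', 'factor': 0.9},
        'rotate_cw':  {'command': 'rotate', 'degrees': -5},
        'rotate_ccw': {'command': 'rotate', 'degrees': 5},
        'move':       {'command': 'translate'},
        'page_turn':  {'command': 'next_page'},
    }
    instruction = instruction_table.get(semantics)
    if instruction is None:
        return  # unrecognized action: nothing is sent to the computer
    instruction = dict(instruction, target=manipulation_object)
    send_to_computer(instruction)
```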
In the non-contact screen interaction method provided in Fig. 4, interactive operations are performed on the screen with natural gestures using two laser indicators, which solves the problem of insufficient interaction capability of a single laser indicator, conforms to user operation habits, and makes the interactive operation more natural and user-friendly.
Referring to Fig. 5, Fig. 5 shows a flowchart of another non-contact screen interaction method according to an embodiment of the present invention. As shown in Fig. 5, the non-contact screen interaction method may include the following steps.
S501: The computer sends image information to the projector, the projector displays the image information on the screen, the camera captures the image information projected on the screen, and the laser emitting device projects laser beams onto the screen.
Specifically, the laser emitting device includes two laser indicators, for example two laser pens or wearable laser indicators worn on fingers.
S502: Determine the mapping relationship between the camera image coordinate system and the computer screen coordinate system.
Specifically, step S502 may be performed by the calibration unit in the laser point locating module in the execution apparatus. The method of determining the mapping relationship between the camera image coordinate system and the computer screen coordinate system is well known to a person of ordinary skill in the art and is therefore only briefly introduced here; the method is as follows:
The camera photographs a calibration chessboard from multiple orientations, and the intrinsic parameters of the camera, including the distortion parameters, are obtained using the classical Zhang Zhengyou calibration method. The computer projects a full-screen chessboard image onto the screen and captures the full screen to obtain an image, and the chessboard corner coordinates (X', Y') in the computer screen are obtained using the classical Harris corner detection algorithm. The camera captures the large projection screen to obtain an image, distortion correction is performed on the image according to the distortion parameters of the camera, and the chessboard corner coordinates (x', y') in the camera image are obtained using the classical Harris corner detection algorithm. The homography matrix H is then calculated from the obtained chessboard corner coordinates (X', Y') of the computer screen and the chessboard corner coordinates (x', y') of the camera image, i.e., for each pair of corresponding corners,

s · [X', Y', 1]^T = H · [x', y', 1]^T    (1)

where s is a scale factor and H is the 3×3 homography matrix to be solved.
After the homography matrix H is calculated, the mapping relationship between the camera image coordinate system and the computer screen coordinate system can be determined by the homography matrix H, i.e.,

[X, Y, 1]^T ∝ H · [x, y, 1]^T    (2)

where, in formula (2), (X, Y) are the coordinates in the computer screen coordinate system and (x, y) are the coordinates in the camera image coordinate system.
S503: Detect in real time, from the image information captured by the camera, the position information and time information of the two laser points projected on the screen by the two laser indicators, and map, according to the mapping relationship determined in the previous step, the camera image coordinates of the two laser points in the image information to computer screen coordinates.
Specifically, step S503 may be performed by the positioning unit in the laser point locating module in the execution apparatus. The specific method of mapping the camera image coordinates of the two laser points projected on the screen by the two laser indicators to computer screen coordinates is as follows:
distortion correction is performed on the image captured by the camera in real time according to the distortion parameters obtained in step S502;
threshold segmentation is performed on the distortion-corrected image captured by the camera, and dilation and region labeling are then performed; it should be noted that the threshold segmentation, dilation, and region labeling here are existing techniques and are therefore not described in detail herein;
the centroid coordinates of the different regions, which are the camera image coordinates of the laser points, are calculated as follows:

x_c = (1/N) Σ x_i,  y_c = (1/N) Σ y_i    (3)

where, in formula (3), (x_i, y_i) are the coordinates of all the points belonging to a laser point region, and N is the number of points belonging to that region.
Then, according to the mapping formula (2) obtained by the calibration unit, the camera image coordinates of the two laser points projected on the screen by the two laser indicators are mapped to the computer screen coordinate system.
S504: The manipulation object identification module in the execution apparatus identifies, according to the computer screen coordinates and dwell time information of the two laser points, the manipulation object pointed at by the two laser indicators.
Specifically, the method of identifying the manipulation object is as follows:
when the computer screen coordinates of at least one of the two laser points projected on the screen by the two laser indicators remain unchanged for a preset time (for example, 2 seconds), it is determined that an object selection event occurs; when an object selection event occurs for one of the two laser points, if the computer screen coordinates of that laser point are within the display area of an object, that object is determined to be the manipulation object;
when object selection events occur for both laser points, if the time difference between the two events is less than a threshold, it is determined that two object selection events occur; if the computer screen coordinates of both laser points fall within the display area of the same object, that object is determined to be the manipulation object; if the computer screen coordinates of the two laser points fall within the display areas of two different objects, the two different objects are determined to be two manipulation objects;
when object selection events occur for both laser points, if the time difference between the two events is greater than the threshold, the later object selection event is determined to be invalid; if the computer screen coordinates of the laser point whose object selection event occurred first are within the display area of an object, that object is determined to be the manipulation object.
S505: The control action identification module in the execution apparatus identifies the control action of the two laser indicators according to the relative motion trend of the two laser points, or according to the absolute motion trends of the two laser points respectively.
Specifically, the method of identifying the control action is as follows:
On the basis that the manipulation object has been determined, the laser point projected on the screen by one of the two laser indicators is taken as the starting point of a vector, and the laser point projected on the screen by the other of the two laser indicators is taken as the end point of the vector; the control action of the two laser indicators is identified according to the variation trend of the modulus and direction angle of the vector. For example, when the modulus of the determined vector keeps decreasing, the semantics of the control action of the two laser indicators is image zoom-out; when the modulus of the determined vector keeps increasing, the semantics of the control action of the two laser indicators is image zoom-in; when the direction angle of the determined vector changes, the semantics of the control action of the two laser indicators is a corresponding image rotation, that is, when the determined vector rotates counterclockwise, the manipulated picture rotates counterclockwise accordingly, and when the determined vector rotates clockwise, the manipulated picture also rotates clockwise. Of course, the control action of the two laser indicators may also be identified according to the movement trajectories of the laser points projected on the screen by the two laser indicators respectively, for example, operations such as moving the picture position or turning pages.
S506: The interaction driving module in the execution apparatus converts the control action into a manipulation instruction, recognizable by the computer, directed at the manipulation object, and transfers the instruction to the computer, which executes the manipulation instruction.
Specifically, after the interaction driving module converts the semantics of the control action, based on the manipulation object, into a manipulation instruction that the computer can recognize and execute, the instruction may be transferred to the computer, and the computer executes the manipulation instruction to drive the interactive operation.
In the non-contact screen interaction method provided in Fig. 5, interactive operations are performed on the screen with natural gestures using two laser indicators, which solves the problem of insufficient interaction capability of a single laser indicator, conforms to user operation habits, and makes the interactive operation more natural and user-friendly.
A person of ordinary skill in the art may understand that all or part of the steps in the various methods of the above embodiments may be completed by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium, and the storage medium may include: a flash disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disc, or the like.
The non-contact screen interaction method and system provided in the embodiments of the present invention are described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present invention, and the description of the above embodiments is merely intended to help understand the method and core idea of the present invention. Meanwhile, a person of ordinary skill in the art may make changes to the specific implementations and application scope according to the idea of the present invention. In conclusion, the content of this specification should not be construed as a limitation on the present invention.

Claims (10)

1. A non-contact screen interaction system, comprising a computer for sending image information to a projector, a projector for displaying the image information on a screen, a camera for capturing the image information projected on the screen, and a laser emitting device for projecting laser beams onto the screen, the laser emitting device comprising two laser indicators, wherein the system further comprises an execution apparatus, and the execution apparatus comprises:
a laser point locating module, configured to detect in real time, from the image information captured by the camera, the position information and time information of the two laser points projected on the screen by the two laser indicators, and to map the camera image coordinates of the two laser points in the image information to computer screen coordinates;
a manipulation object identification module, configured to identify, according to the computer screen coordinates and dwell time information of the two laser points, the manipulation object pointed at by the two laser indicators;
a control action identification module, configured to identify the control action of the two laser indicators according to the relative motion trend of the two laser points, or according to the absolute motion trends of the two laser points respectively;
an interaction driving module, configured to convert the control action into a manipulation instruction, recognizable by the computer, directed at the manipulation object, and to transfer the instruction to the computer, which executes the manipulation instruction;
wherein the manipulation object identification module is specifically configured to:
when object selection events occur for both laser points, if the time difference between the two events is less than a threshold, determine that two object selection events occur; if the computer screen coordinates of both laser points fall within the display area of the same object, determine that the object is the manipulation object; and if the computer screen coordinates of the two laser points fall within the display areas of two different objects, determine that the two different objects are two manipulation objects.
2. The contactless screen interactive system according to claim 1, characterised in that the laser point locating module comprises:
a calibration unit, for determining the mapping relation between the camera image coordinate system and the computer screen coordinate system;
a positioning unit, for detecting in real time, in the image information captured by the video camera, the position information and time information of the two laser points projected on the screen by the two laser pointers, and for mapping, according to the mapping relation, the camera image coordinates of the two laser points in the image information to computer screen coordinates.
3. The contactless screen interactive system according to claim 1, characterised in that the manipulation object identification module is specifically configured to:
determine that an object-selection event has occurred when the computer screen coordinates of at least one of the two laser points remain unchanged for a preset time;
when an object-selection event occurs for one of the two laser points, if the computer screen coordinates of that laser point lie within the display area of an object, determine that object to be the manipulation object;
when object-selection events occur for both of the two laser points, if the time difference between the two events is greater than a threshold, determine that the later object-selection event is invalid, and if the computer screen coordinates of the laser point whose object-selection event occurred first lie within the display area of an object, determine that object to be the manipulation object.
4. The contactless screen interactive system according to any one of claims 1 to 3, characterised in that the control action identification module is specifically configured to:
take the laser point projected on the screen by one of the two laser pointers as the starting point of a vector and the laser point projected on the screen by the other of the two laser pointers as the end point of the vector, and recognize the control action of the two laser pointers according to the variation trend of the magnitude and direction angle of the vector; or,
recognize the control action of the two laser pointers according to the movement trajectories of the laser points projected on the screen by each of the two laser pointers.
5. The contactless screen interactive system according to any one of claims 1 to 3, characterised in that the control action identification module is specifically configured to:
take the laser point projected on the screen by one of the two laser pointers as the starting point of a vector and the laser point projected on the screen by the other of the two laser pointers as the end point of the vector; when the magnitude of the vector keeps decreasing, the semantics of the control action of the two laser pointers is picture reduction; when the magnitude of the vector keeps increasing, the semantics of the control action of the two laser pointers is picture enlargement; and when the direction angle of the vector changes, the semantics of the control action of the two laser pointers is that the picture rotates accordingly; or,
recognize the control action of the two laser pointers according to the movement trajectories of the laser points projected on the screen by each of the two laser pointers.
6. A contactless screen interaction method, characterised in that the method comprises: a computer delivering image information to a projector; the projector displaying the image information on a screen; a video camera capturing the image information projected on the screen; and a laser emitting device producing laser beams projected on the screen, the laser emitting device comprising two laser pointers; the method further comprising:
a laser point locating module of an actuator detecting in real time, in the image information captured by the video camera, the position information and time information of the two laser points projected on the screen by the two laser pointers, and mapping the camera image coordinates of the two laser points in the image information to computer screen coordinates;
a manipulation object identification module of the actuator identifying the manipulation object pointed at by the two laser pointers according to the computer screen coordinates and dwell time information of the two laser points;
a control action identification module of the actuator recognizing the control action of the two laser pointers according to the relative motion trend of the two laser points or according to the absolute motion trend of each of the two laser points;
an interaction drive module of the actuator converting the control action into a manipulation instruction, directed at the manipulation object, that the computer can recognize, and transmitting it to the computer, the manipulation instruction being executed by the computer;
wherein the manipulation object identification module of the actuator identifying, according to the computer screen coordinates and dwell time information of the two laser points, the manipulation object pointed at by the two laser pointers comprises:
when object-selection events occur for the two laser points, if the time difference between the two events is less than a threshold, determining that two object-selection events have occurred; if the computer screen coordinates of the two laser points fall within the display area of the same object, determining that object to be the manipulation object; and if the computer screen coordinates of the two laser points fall within the display areas of two different objects, determining the two different objects to be two manipulation objects.
7. The contactless screen interaction method according to claim 6, characterised in that the laser point locating module of the actuator detecting in real time, in the image information captured by the video camera, the position information and time information of the two laser points projected on the screen by the two laser pointers, and mapping the camera image coordinates of the two laser points in the image information to computer screen coordinates comprises:
the laser point locating module of the actuator determining the mapping relation between the camera image coordinate system and the computer screen coordinate system;
detecting in real time, in the image information captured by the video camera, the position information and time information of the two laser points projected on the screen by the two laser pointers, and mapping, according to the mapping relation, the camera image coordinates of the two laser points in the image information to computer screen coordinates.
8. The contactless screen interaction method according to claim 6, wherein the manipulation object identification module of the actuator identifying, according to the computer screen coordinates and dwell time information of the two laser points, the manipulation object pointed at by the two laser pointers comprises:
the manipulation object identification module of the actuator determining that an object-selection event has occurred when the computer screen coordinates of at least one of the two laser points remain unchanged for a preset time;
when an object-selection event occurs for one of the two laser points, if the computer screen coordinates of that laser point lie within the display area of an object, determining that object to be the manipulation object;
when object-selection events occur for both of the two laser points, if the time difference between the two events is greater than a threshold, determining that the later object-selection event is invalid, and if the computer screen coordinates of the laser point whose object-selection event occurred first lie within the display area of an object, determining that object to be the manipulation object.
9. The contactless screen interaction method according to any one of claims 6 to 8, wherein the control action identification module of the actuator recognizing the control action of the two laser pointers according to the relative motion trend of the two laser points or according to the absolute motion trend of each of the two laser points comprises:
the control action identification module of the actuator taking the laser point projected on the screen by one of the two laser pointers as the starting point of a vector and the laser point projected on the screen by the other of the two laser pointers as the end point of the vector, and recognizing the control action of the two laser pointers according to the variation trend of the magnitude and direction angle of the vector; or,
recognizing the control action of the two laser pointers according to the movement trajectories of the laser points projected on the screen by each of the two laser pointers.
10. The contactless screen interaction method according to any one of claims 6 to 8, wherein the control action identification module of the actuator recognizing the control action of the two laser pointers according to the relative motion trend of the two laser points or according to the absolute motion trend of each of the two laser points comprises:
the control action identification module of the actuator taking the laser point projected on the screen by one of the two laser pointers as the starting point of a vector and the laser point projected on the screen by the other of the two laser pointers as the end point of the vector; when the magnitude of the vector keeps decreasing, the semantics of the control action of the two laser pointers is picture reduction; when the magnitude of the vector keeps increasing, the semantics of the control action of the two laser pointers is picture enlargement; and when the direction angle of the vector changes, the semantics of the control action of the two laser pointers is that the picture rotates accordingly; or,
recognizing the control action of the two laser pointers according to the movement trajectories of the laser points projected on the screen by each of the two laser pointers.
CN201310186372.3A 2013-05-20 2013-05-20 A kind of contactless screen exchange method and system Expired - Fee Related CN104166509B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310186372.3A CN104166509B (en) 2013-05-20 2013-05-20 A kind of contactless screen exchange method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310186372.3A CN104166509B (en) 2013-05-20 2013-05-20 A kind of contactless screen exchange method and system

Publications (2)

Publication Number Publication Date
CN104166509A CN104166509A (en) 2014-11-26
CN104166509B true CN104166509B (en) 2017-08-25

Family

ID=51910352

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310186372.3A Expired - Fee Related CN104166509B (en) 2013-05-20 2013-05-20 A kind of contactless screen exchange method and system

Country Status (1)

Country Link
CN (1) CN104166509B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104615283B (en) * 2015-01-30 2017-06-13 大连理工大学 A kind of laser pen interaction method
CN105551033B (en) * 2015-12-09 2019-11-26 广州视源电子科技股份有限公司 Component labelling methods, systems and devices
CN106993146A (en) * 2016-01-21 2017-07-28 中兴通讯股份有限公司 Control method, control device, projector
CN105758385A (en) * 2016-03-19 2016-07-13 广州能欣计算机技术有限公司 Rapid multi-laser-point positioning method and system based on vision
CN109151298B (en) * 2017-08-15 2021-04-16 南京仁光电子科技有限公司 Pan-tilt camera control method, device and system based on screen
CN108509071B (en) * 2017-10-30 2019-08-09 嘉兴仁光乌镇科技有限公司 The method, apparatus, equipment and computer readable storage medium of coordinate anti-trembling on screen
CN108509072B (en) * 2017-10-30 2019-09-24 嘉兴仁光乌镇科技有限公司 The method, apparatus, equipment and computer readable storage medium of coordinate anti-trembling on screen
CN107823883B (en) * 2017-11-21 2020-12-29 河南黄烨科技有限公司 Aiming point screen coordinate obtaining method based on image recognition and laser positioning
CN110874133B (en) * 2018-08-31 2023-04-21 阿里巴巴集团控股有限公司 Interaction method based on intelligent display device, intelligent display device and storage medium
CN109933212B (en) * 2019-02-22 2021-12-21 江苏理工学院 Low-cost keyboard for disabled person
CN112328158A (en) * 2020-07-23 2021-02-05 深圳Tcl新技术有限公司 Interactive method, display device, transmitting device, interactive system and storage medium
CN112333489B (en) * 2020-08-20 2024-05-03 深圳Tcl新技术有限公司 Laser cursor monitoring method, device and storage medium
CN114442862A (en) * 2020-10-30 2022-05-06 深圳Tcl新技术有限公司 Interaction control method based on laser induction, intelligent terminal and storage medium
CN112346644A (en) * 2020-11-19 2021-02-09 深圳Tcl新技术有限公司 Interaction method based on laser induction, terminal equipment and readable storage medium
CN112631495A (en) * 2020-12-25 2021-04-09 深圳Tcl新技术有限公司 Interaction method based on laser induction, terminal equipment and computer readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1289086A (en) * 1999-09-21 2001-03-28 精工爱普生株式会社 Interactive display system
KR20100113766A (en) * 2009-04-14 2010-10-22 대전보건대학 산학협력단 Device for calculating location of touch input absence and method of operating the device
CN102508548A (en) * 2011-11-08 2012-06-20 北京新岸线网络技术有限公司 Operation method and system for electronic information equipment
CN102622108A (en) * 2012-01-18 2012-08-01 深圳市中科睿成智能科技有限公司 Interactive projecting system and implementation method for same

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8754851B2 (en) * 2008-09-03 2014-06-17 Wuhan Splendid Optronics Technology Co., Ltd. Remote controls for electronic display board

Also Published As

Publication number Publication date
CN104166509A (en) 2014-11-26

Similar Documents

Publication Publication Date Title
CN104166509B (en) A kind of contactless screen exchange method and system
TWI454968B (en) Three-dimensional interactive device and operation method thereof
CN107341827B (en) Video processing method, device and storage medium
CN101907954B (en) Interactive projection system and interactive projection method
CN102822862B (en) Calculation element interface
US10015402B2 (en) Electronic apparatus
TWI550437B (en) Apparatus capable of tangible interaction, article of manufacture, and method for tangible interaction
US20120319949A1 (en) Pointing device of augmented reality
US11244511B2 (en) Augmented reality method, system and terminal device of displaying and controlling virtual content via interaction device
CN102221887A (en) Interactive projection system and method
CN103365617B (en) One kind projection control system, device and method for controlling projection
US9547370B2 (en) Systems and methods for enabling fine-grained user interactions for projector-camera or display-camera systems
CN102196176A (en) Information processing apparatus, information processing method, and program
US11146744B2 (en) Automated interactive system and method for dynamically modifying a live image of a subject
CN110268703A (en) Imaging method and imaging control apparatus
KR102392437B1 (en) Reflection-based control activation
US20180260044A1 (en) Information processing apparatus, information processing method, and program
CN107194968A (en) Recognition and tracking method, device, intelligent terminal and the readable storage medium storing program for executing of image
CN202159302U (en) Augment reality system with user interaction and input functions
JP2012238293A (en) Input device
CN110134117A (en) A kind of mobile robot method for relocating, mobile robot and electronic equipment
Meško et al. Laser spot detection
CN108594995A (en) A kind of electronic device method and electronic equipment based on gesture identification
US20140104431A1 (en) System and Method for Utilizing a Surface for Remote Collaboration
TW200834472A (en) Image-based pointing-direction detection system and method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170825