CN110007830B - Somatosensory interaction device and method - Google Patents

Somatosensory interaction device and method

Info

Publication number
CN110007830B
CN110007830B · CN201910307277.1A
Authority
CN
China
Prior art keywords
equipment
movable rod
somatosensory interaction
interaction
interact
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910307277.1A
Other languages
Chinese (zh)
Other versions
CN110007830A (en)
Inventor
刘嘉乐
赖习章
邓奕明
曾庆彬
陈康富
钟家进
谭丽媛
陈喜珍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongshan Yelang Intelligent Technology Co ltd
Original Assignee
Zhongshan Yelang Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongshan Yelang Intelligent Technology Co ltd
Priority to CN201910307277.1A
Publication of CN110007830A
Application granted
Publication of CN110007830B
Active legal status
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482Interaction with lists of selectable items, e.g. menus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to an intelligent control device, in particular to a somatosensory interaction device. The device includes two staggered vertical edges that clamp the rim of the device to be interacted with; holding the two faces of the rim between the vertical edges acts as a limit stop and prevents the somatosensory interaction device from rotating relative to the device to be interacted with. A rubber pad shaped to match the back of the device to be interacted with is provided on the body and is detachably connected to it, so that the somatosensory interaction device suits a variety of devices and clamps onto the device to be interacted with more firmly. In the interaction method of the somatosensory interaction device, function adjustment within a single menu and switching between multiple menus are performed along two orthogonal straight lines; all actions are completed through sliding interaction, no specific gestures need to be memorized, and operation is convenient.

Description

Somatosensory interaction device and method
Technical Field
The invention relates to an intelligent control device, in particular to a somatosensory interaction device and an interaction method.
Background
In recent years, intelligent control technology has developed rapidly, yet current human-computer interaction modes are highly device-specific. For example, desktop computers typically use a mouse and keyboard; tablets and mobile phones use touch screens; VR/AR/MR uses gesture interaction; games and televisions use handheld controllers; and so on. Each interaction mode has advantages and disadvantages. For example, the touch screen is operated by touching what is seen, so its learning cost is low, but its efficiency is low; the mouse and keyboard are efficient, but bulky and hard to carry. Somatosensory interaction is one of the emerging intelligent control technologies. It can be realized with an ordinary monocular camera (as found on mobile phones, tablets, PCs and the like), quickly detecting and returning the gestures/actions in an image or video and realizing control actions such as confirm, click, left/right swipe, two-hand open/close and point dragging, which makes human-computer interaction more efficient and convenient. However, owing to limitations in the shape, usage scenario and so on of the device to be interacted with, one model of somatosensory interaction device is difficult to make compatible with a variety of hardware and usage environments. A somatosensory interaction device is usually mounted on the display of the device to be interacted with. Taking a computer monitor as an example, the device is mounted on the monitor's outer frame, which may have different designs; as shown in fig. 1, both the front eave 11 and the back 12 come in a variety of designs. Each device therefore requires an independently designed somatosensory interaction device, which raises cost and is unfavorable for mass production.
Disclosure of Invention
The invention aims to provide a somatosensory interaction device suitable for various devices.
To achieve this aim, the invention provides a somatosensory interaction device comprising a body, a processor arranged on the body, and a sensor and a wireless communication device each electrically connected to the processor. The wireless communication device exchanges signals with the device to be interacted with; the body includes two staggered vertical edges for clamping the rim of the device to be interacted with; and a rubber pad shaped to match the back of the device to be interacted with is provided on the body.
Wherein the body includes a vertical movable rod, a movable-rod knob and a horizontal movable rod connected in sequence; tightening the movable-rod knob fixes the relative position of the vertical movable rod and the horizontal movable rod. The end of the vertical movable rod is connected to the rubber pad, and the end of the horizontal movable rod is bent vertically to form a vertical catch for engaging the rim of the device to be interacted with.
The side of the rubber pad that rests against the device to be interacted with is shaped as one or more of a stepped surface, a flat surface, an inclined surface, an arc-shaped inclined surface and an arc-shaped stepped surface, and the opposite side of the pad is detachably connected to the body through a retaining groove.
Wherein the wireless communication device comprises one or more of a Bluetooth chip, WIFI and Zigbee.
Wherein the sensor comprises one or more of a camera, an ultrasonic sensor, a radar and a laser.
Wherein the sensor comprises a laser that flashes at a preset frequency.
The somatosensory interaction device includes two staggered vertical edges; clamping the two faces of the rim of the device to be interacted with between these edges acts as a limit stop and prevents the somatosensory interaction device from rotating relative to the device to be interacted with. The bottom of the rubber pad is detachably connected to the body of the somatosensory interaction device through a retaining groove, and its top is shaped to fit the backs of different screens, so that the somatosensory interaction device suits a variety of devices and clamps onto the device to be interacted with more firmly.
The invention also provides an interaction method realized on the basis of the somatosensory interaction device, in which multi-menu control is achieved through one-handed sliding gesture interaction: the two ends of one straight-line direction are defined as adjustment operations within the current menu, and the two ends of the straight-line direction orthogonal to it are defined as multi-level menu switching operations.
The method comprises the following calibration step: a point sequence P = {p1, p2, p3, ..., pM} is set, where M ≥ 3 and at least one point in P is not collinear with the other points; clicking every point in P once in a preset order completes the calibration, and the two-dimensional plane region enclosed by the points is defined as the operation area.
Wherein a downward movement is defined as the end operation, and the operation direction is judged from the aspect ratio W/H of the rectangle inscribed by the lowest and highest points of the trajectory, with W taken as signed according to the horizontal direction of motion: if W/H < -1, the operation is judged leftward; if W/H > 1, rightward; if W/H lies between -1 and 1, upward.
Wherein, when multiple applications exist in the current control scene, the currently active window is automatically selected as the operation object.
In the interaction method of the somatosensory interaction device, function adjustment within a single menu and switching between multiple menus are performed along two orthogonal straight lines; all actions are completed through sliding interaction, no specific gestures need to be memorized, and operation is convenient.
Drawings
The invention is further illustrated by means of the attached drawings, but the embodiments in the drawings do not constitute any limitation to the invention; a person skilled in the art may derive further drawings from the following figures without inventive effort.
Fig. 1 is a schematic diagram of the external structure of a device to be interacted with in the prior art.
Fig. 2 is a schematic view of an installation structure of the somatosensory interaction device of the invention.
Fig. 3 is a schematic diagram of a rubber mat structure of the somatosensory interaction device of the invention.
Fig. 4 is a schematic structural diagram of another rubber mat of the somatosensory interaction device of the invention.
Fig. 5 is a schematic structural diagram of another rubber mat of the somatosensory interaction device of the invention.
Fig. 6 is a schematic structural diagram of another rubber mat of the somatosensory interaction device of the invention.
Fig. 7 is a schematic structural diagram of the somatosensory interaction device of the invention.
Fig. 8 is a schematic frequency distance diagram of the somatosensory interaction method of the invention.
Fig. 9 is a schematic diagram of a standard custom operation area of the somatosensory interaction method of the invention.
Fig. 10 is a schematic diagram of an optional custom operation area of the somatosensory interaction method of the invention.
Fig. 11 is a schematic diagram of an orthogonal interaction mode of the somatosensory interaction method of the invention.
Fig. 12 is a schematic diagram of a positive four-way manner of the somatosensory interaction method of the invention.
Fig. 13 is a schematic diagram of a diagonal direction mode of the somatosensory interaction method of the invention.
Fig. 14 is a schematic diagram of the mistouch-prevention interaction judgment of the somatosensory interaction method of the invention.
Fig. 15 is a schematic diagram of a method for determining a lowest point inscribed rectangle in a somatosensory interaction method according to the present invention.
Fig. 16 is a schematic diagram of a method for determining a highest point inscribed rectangle in the somatosensory interaction method of the invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings and embodiments, which do not constitute any limitation to the invention.
The somatosensory interaction device includes a processor, and an image-capturing device, a sensing device and a communication device each electrically connected to the processor. The communication device is a conventional wireless communication device (such as a Bluetooth chip, WIFI or Zigbee) and exchanges signals with the device to be interacted with. The somatosensory interaction device 2 is used on a display 1 whose frame is made of a hard material (i.e., one that does not deform easily under pressure) and has a thickness greater than 0.1 mm. The mounting structure is shown in fig. 2: the somatosensory interaction device 2 includes two staggered vertical edges 21, 22 that clamp the two faces of the display 1 as a limit stop, preventing the somatosensory interaction device 2 from rotating relative to the display 1. The somatosensory interaction device can be fixed on any surface of fixed thickness, such as a display, a desktop, a wooden door, a glass door, an advertising showcase, a glass exhibition stand or a decorative wall. After fixing, one surface of the somatosensory interaction device is guaranteed to have a predetermined relationship with the target plane (the surface of the object), such as parallel, perpendicular or at a specified angle; this positional relationship serves as the reference from which the somatosensory interaction device judges the direction and amplitude of captured interaction actions such as gestures.
In addition, the image-capturing device and sensing device of the somatosensory interaction device are arranged on the side faces of its two edges, on the same side as the display of the device to be interacted with. Since every sensor (such as a camera, an ultrasonic sensor, a radar or a laser) has a limited sensing range, the mounting position on the device to be interacted with must be chosen according to that range. Taking a camera as an example, the image-capturing device needs only a 90-degree opening angle to cover the whole display when the somatosensory interaction device is mounted at a top corner of the display, facing along the corner's bisector, or mounted directly above the display. Taking a line laser as an example, the light intensity along the laser line falls off with distance and with viewing angle, so the somatosensory interaction device can be mounted at a top corner of the display with its sensing direction aligned with the screen diagonal; the laser then points toward the farthest part of the field of view, the intensity variation across the field is minimized, and sensing and recognition are easier.
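As an aside on this placement reasoning, the short sketch below (not part of the patent disclosure; the geometry and function names are illustrative) computes the angle a rectangular screen subtends from a candidate mounting point in the screen plane, confirming the 90-degree figure for a corner mount:

```python
import math

def subtended_angle(mount, corners):
    """Angle (in degrees) spanned by the given screen corners as seen
    from a mounting point lying in the screen plane."""
    angles = [math.atan2(cy - mount[1], cx - mount[0]) for cx, cy in corners]
    return math.degrees(max(angles) - min(angles))

# 16:9 screen with the sensor at the top-left corner: the remaining corners
# lie between the two screen edges, so the screen subtends 90 degrees,
# matching the 90-degree opening angle mentioned above.
screen = [(16.0, 0.0), (16.0, -9.0), (0.0, -9.0)]
print(subtended_angle((0.0, 0.0), screen))  # -> ~90.0
```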
Preferably, an adapted rubber pad is provided according to the back shape of each batch of devices to be interacted with, so that the somatosensory interaction device clamps onto the device more stably. Specifically, as shown in figs. 3-6, the bottom of the rubber pad is detachably connected to the body through a retaining groove, and the top is shaped to fit the backs of different screens, such as a stepped surface 3, a flat surface 4, an inclined surface 5, an arc-shaped inclined surface 6 or an arc-shaped stepped surface 7. In addition, balls or other position-adjusting devices can be provided on the rubber pad, so that different pads can match different models of the somatosensory interaction device, further enhancing its applicability.
Preferably, as shown in fig. 7, the somatosensory interaction device comprises a movable-rod knob 8, a vertical movable rod 9 and a horizontal movable rod 10. The end of the vertical movable rod 9 is connected to a rubber pad 11 that presses tightly against the back of the display for stable fixation. The end of the horizontal movable rod 10 is bent vertically to form a vertical catch 12 that engages the front edge of the frame of the display 1, fixing the whole. Specifically, the joint between the vertical movable rod 9 and the rubber pad 11 rotates freely, while the joint between the vertical movable rod 9 and the horizontal movable rod 10 is fixed by the movable-rod knob 8. Before fixing, the angle and position of the sensor 13 are corrected with the aid of a parallel detector 14 arranged at the lower end of the display 1: the parallel detector 14 detects, from its position relative to the sensor 13, whether the target plane tilts upward or downward; the movable-rod knob 8 is then adjusted accordingly, moving the horizontal movable rod 10 in the horizontal direction until the parallel detector 14 indicates that the target plane is parallel to the display plane, at which point the knob 8 is tightened to fix the relative position of the vertical movable rod 9 and the horizontal movable rod 10. The image-capturing device and sensing device of the somatosensory interaction device are arranged on the vertical catch 12, on the same side as the display screen.
Specifically, the parallel detector 14 is usually a photosensitive device: when the somatosensory interaction device emits light such as infrared or visible light, the sensor 13 feeds back the inclination through the color of an LED lamp or through different LED lamps. The parallel detector 14 may instead be a mirror, in which case the sensor 13 itself senses the inclination from the reflected infrared or visible light, guiding the fixing position of the movable-rod knob 8.
Taking a line laser as an example, the interaction method of the somatosensory interaction device first turns any display surface into a touch surface, with a user-definable operation area and boundary. Unlike the traditional interaction method (emitting a laser line of fixed brightness as the light-spot source and computing the spot position by triangulation), the line emitted by the somatosensory interaction device flickers at a fixed frequency, and ambient light is rejected through a specific flicker code. As shown in fig. 8, a pulse of duration t1 and power p1 is triggered after every interval t0, followed after another interval t0 by a pulse of duration t1 and power p2. After the two pulses, the sensor sees the laser spot flickering at a known frequency; spots that do not follow this pattern are ambient light and are filtered out directly by background modeling.
Optionally, t0 is generally the reciprocal of the sensor frequency (frame rate), i.e. t0 = 1/f, and the power p0 between pulses may generally be 0 or a preset value, depending on the actual usage scenario.
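By way of illustration only (this sketch is not part of the patent disclosure; the function name and tolerance are our own assumptions, and frame capture synchronized to the pulse schedule at f = 1/t0 is assumed), the flicker-code rejection can be pictured as a per-pixel test that the brightness alternates in the expected p1/p2 ratio:

```python
import numpy as np

def coded_spot_mask(frames, p1, p2, tol=0.2):
    """frames: array of shape (2k, H, W) captured at f = 1/t0, so that
    even-indexed frames fall inside p1 pulses and odd-indexed frames
    inside p2 pulses. Returns a boolean mask of pixels that follow the
    p1/p2 flicker code; steady ambient light does not alternate and is
    rejected (background modeling can then remove any residue)."""
    even = frames[0::2].mean(axis=0)   # mean brightness under p1 pulses
    odd = frames[1::2].mean(axis=0)    # mean brightness under p2 pulses
    expected = p1 / p2                 # ratio the code predicts (p1 != p2)
    ratio = even / np.maximum(odd, 1e-6)
    return np.abs(ratio - expected) < tol * expected
```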
In actual operation, the interaction method of the somatosensory interaction device adopts a combined calibration strategy. Specifically, two point sequences P and Q are set: a first sequence P = {p1, p2, p3, ..., pM} and a second sequence Q = {q1, q2, ..., qN}, where M ≥ 3 and N ≥ 1; P is the set of calibration points and Q the set of test points, and Q may be empty if no verification is required; at least one point in P is not collinear with the other points. The user clicks the points in the order in which they appear, which completes the whole calibration process, after which the real operation area can be determined. The operation area can be any specified two-dimensional shape; since three points in three-dimensional space determine a two-dimensional plane, three calibration points suffice to determine the operation area as long as the relative relationship between the calibration points and the two-dimensional shape is known.
In this embodiment, the rectangle shown in fig. 9 is used as the operation area, and for simplicity of description the four corner points of the rectangle serve as the calibration points. As shown in fig. 10, any three non-collinear points in the operation region can instead be chosen as calibration points: since the relative positions of the four vertices and the calibration points are known, determining the three points also determines the four corner points of the rectangle. The P sequence defines the operation region; the points of the Q sequence verify that the computed operation region is consistent with the actual one, and if the error is large, recalibration is required.
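For concreteness, a minimal sketch of the three-point determination follows (our own illustration, not the patent's code; it assumes the calibration points' coordinates are known in the rectangle's own reference frame). It solves the affine map from reference coordinates to measured coordinates and applies it to the four corners:

```python
import numpy as np

def rectangle_corners(reference_pts, measured_pts):
    """reference_pts: three non-collinear calibration points expressed in
    the rectangle's own coordinates (here, the unit square); measured_pts:
    the same three points as measured by the sensor. Solves the affine map
    reference -> measured and returns the four measured corner positions."""
    A = np.hstack([np.asarray(reference_pts, float), np.ones((3, 1))])
    T = np.linalg.solve(A, np.asarray(measured_pts, float))  # 3x2 affine params
    corners = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
    return np.hstack([corners, np.ones((4, 1))]) @ T

# Calibrating with three corners of the operation area (fig. 10): the fourth
# corner, and hence the whole region, follows from the known relative positions.
print(rectangle_corners([(0, 0), (1, 0), (0, 1)],
                        [(12, 40), (212, 44), (10, 140)]))
```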
Preferably, in the interaction method of the somatosensory interaction device, the user only needs to remember one hand shape (such as an open palm, a fist or a raised thumb; the open palm is taken as the example below) and can complete all actions through sliding interaction, without memorizing specific gestures.
Specifically, the operations of a parent menu and its submenus are designed to be orthogonal; that is, if the parent menu is selected by moving top to bottom, the corresponding submenu is triggered by moving left to right, and so on. As shown in fig. 11, to control the volume of a video, the hand is first raised, and the interface presents five options such as mailbox and video. After moving down to "video", moving left or right triggers the three submenus "full screen, sound, fast forward/rewind"; after moving to the "sound" menu, moving up or down adjusts the volume, and once the desired level is reached, moving left or right triggers the confirm action. A small sketch of this axis alternation follows.
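The following sketch is purely illustrative (the menu table and names are hypothetical, not from the patent); it captures the convention that a movement along a menu's own axis adjusts within it, while an orthogonal movement descends into the selected submenu:

```python
# Hypothetical menu table: each level alternates its selection axis.
MENUS = {
    "root":  {"axis": "vertical",   "items": ["mailbox", "video", "music", "photos", "settings"]},
    "video": {"axis": "horizontal", "items": ["full screen", "sound", "fast forward/rewind"]},
    "sound": {"axis": "vertical",   "items": ["volume"]},
}

def interpret(menu, direction):
    """Movement along the menu's own axis adjusts the selection inside it;
    movement along the orthogonal axis triggers the selected submenu."""
    moved_vertically = direction in ("up", "down")
    is_vertical_menu = MENUS[menu]["axis"] == "vertical"
    return "adjust" if moved_vertically == is_vertical_menu else "descend"

print(interpret("root", "down"))   # adjust: choose among the five options
print(interpret("video", "down"))  # descend: trigger the selected submenu
```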
In a simple interaction scenario, no menu need be defined; only the four directions "up, down, left, right" shown in fig. 12 are defined, with "down" usually defined as undo. Each menu level then consists of the four directions, and the operation chain is complete only when a specific operation is reached; before that, the hand can be lowered at any time to undo. For the "up, down, left, right" decision, as shown in fig. 13, the 45-degree diagonals through the center point serve as boundaries, and the operation is determined by the sector in which the end point falls relative to the center point.
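A minimal direction test under this 45-degree convention might look as follows (an illustrative sketch, not the patent's code; screen coordinates with y growing downward are assumed):

```python
def direction(start, end):
    """Classify a stroke by the 45-degree sector (fig. 13) in which its
    end point falls relative to its start point; y grows downward."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    if abs(dx) > abs(dy):                  # beyond the 45-degree diagonals
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"      # "down" usually means undo

print(direction((100, 100), (160, 120)))   # -> right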
Since "down" is defined as the end operation, while a natural upward or rightward stroke is rarely perfectly straight and usually has some curvature, such a stroke could wrongly trigger the "down" logic. As shown in fig. 14, trajectories S1 and S2 represent an upward operation and a rightward operation; when a trajectory crosses the horizontal line through its starting point, a judgment is triggered. With the lowest-point inscribed-rectangle method shown in fig. 15, the aspect ratio W/H of the inscribed rectangle decides the direction as described above (rightward if W/H > 1, leftward if W/H < -1, otherwise upward); alternatively, with the highest-point inscribed-rectangle method shown in fig. 16, the rectangle formed from the highest point of the trajectory serves as the basis for judgment.
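Sketched concretely (our illustration under the signed-width reading used in the claims; the point format and names are assumptions), the lowest-point rule can be implemented as:

```python
def classify_stroke(points):
    """Lowest-point inscribed-rectangle rule (figs. 14-15): H is the depth
    of the trajectory below its starting point, W its signed horizontal
    travel; the signed ratio W/H picks the direction as in claim 5."""
    H = max(y for _, y in points) - points[0][1]   # y grows downward
    W = points[-1][0] - points[0][0]               # signed horizontal travel
    ratio = W / max(H, 1e-6)
    if ratio > 1:
        return "right"
    if ratio < -1:
        return "left"
    return "up"

# A rightward stroke that sags slightly below its start line (S2 in fig. 14)
# still reads as "right" instead of falsely triggering the end operation.
print(classify_stroke([(0, 0), (40, 6), (90, 4)]))  # -> right
```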
In a multi-application interaction scenario, different applications trigger different interaction logic. Specifically, different applications may be set to use entirely different operation schemes; the program automatically selects the currently activated (focused) application as the target application, each target application provides a matching default operation scheme, and the user can modify it as needed. For example, the desktop can be understood as an app whose interaction is defined as calling up applications on the desktop; since the number of applications is very large, a long menu can be used for presentation, and after the up/down selection is finished, a left/right movement triggers an "open/switch" command. With several applications open, when a PPT holds the system focus, raising the hand triggers the corresponding operation; once the system focus moves to a PDF document, all subsequent palm movements are treated as operations on the PDF document. The operation scheme of the application currently holding the system focus takes precedence over the generic direction settings: the system automatically switches to the operating mode corresponding to that application, presented through a menu, for example.
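As a final illustration (the scheme table and names are hypothetical, not from the patent), this focus-based dispatch reduces to looking up the focused application's scheme and letting user modifications override its defaults:

```python
# Hypothetical default operation schemes keyed by application.
SCHEMES = {
    "desktop": {"up/down": "browse long app menu", "left/right": "open/switch"},
    "ppt":     {"left/right": "previous/next slide"},
    "pdf":     {"up/down": "scroll document"},
}

def active_scheme(focused_app, user_overrides=None):
    """Return the operation scheme of the currently focused application,
    with user modifications taking precedence over the defaults."""
    scheme = dict(SCHEMES.get(focused_app, {}))
    scheme.update(user_overrides or {})
    return scheme

# Once the system focus moves to the PDF document, palm movements act on it.
print(active_scheme("pdf"))
```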
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit its protection scope. Although the invention has been described in detail with reference to preferred embodiments, those skilled in the art will understand that modifications or equivalent substitutions may be made to these technical solutions without departing from their spirit and scope.

Claims (6)

1. A somatosensory interaction device, characterized by comprising a body, a processor arranged on the body, and a sensor and a wireless communication device each electrically connected to the processor, wherein the wireless communication device exchanges signals with a device to be interacted with; the body comprises two staggered vertical edges for clamping a rim of the device to be interacted with; a rubber pad shaped to match the back of the device to be interacted with is provided on the body; the body comprises a vertical movable rod, a movable-rod knob and a horizontal movable rod connected in sequence, tightening the movable-rod knob fixing the relative position of the vertical movable rod and the horizontal movable rod; an end of the vertical movable rod is connected to the rubber pad, and an end of the horizontal movable rod is bent vertically to form a vertical catch for engaging the rim of the device to be interacted with; the side of the rubber pad that rests against the device to be interacted with is shaped as one or more of a stepped surface, a flat surface, an inclined surface, an arc-shaped inclined surface and an arc-shaped stepped surface, the opposite side of the rubber pad is detachably connected to the body through a retaining groove, and the sensor is arranged on the same side as the front of the device to be interacted with.
2. The somatosensory interaction device according to claim 1, wherein the wireless communication device comprises one or more of a Bluetooth chip, WIFI and Zigbee.
3. The somatosensory interaction device of claim 1, wherein the sensor comprises one or more of a camera, an ultrasonic sensor, a radar and a laser.
4. The somatosensory interaction device of claim 1, wherein the sensor comprises a laser that flashes at a preset frequency.
5. A somatosensory interaction method, realized on the basis of the somatosensory interaction device of any one of claims 1-2, wherein multi-menu control is realized through one-handed sliding gesture interaction: the two ends of one straight-line direction are defined as adjustment operations within the current menu, and the two ends of the straight-line direction orthogonal to it are defined as multi-level menu switching operations; the method further comprises a calibration step: a point sequence P = {p1, p2, p3, ..., pM} is set, where M ≥ 3 and at least one point in P is not collinear with the other points; clicking every point in P once in a preset order completes the calibration, and the two-dimensional plane region enclosed by the points is defined as the operation area; a downward movement is defined as the end operation, and the operation direction is judged from the aspect ratio W/H of the inscribed rectangle of the lowest and highest points: if W/H < -1, the operation is judged leftward; if W/H > 1, rightward; if W/H lies between -1 and 1, upward.
6. The somatosensory interaction method according to claim 5, wherein when multiple applications exist in the current control scene, the current active window is automatically selected as the operation object.
CN201910307277.1A 2019-04-17 2019-04-17 Somatosensory interaction device and method Active CN110007830B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910307277.1A CN110007830B (en) 2019-04-17 2019-04-17 Somatosensory interaction device and method

Publications (2)

Publication Number Publication Date
CN110007830A CN110007830A (en) 2019-07-12
CN110007830B (en) 2022-08-02

Family

ID=67172439

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910307277.1A Active CN110007830B (en) 2019-04-17 2019-04-17 Somatosensory interaction device and method

Country Status (1)

Country Link
CN (1) CN110007830B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103353267A (en) * 2013-07-19 2013-10-16 太仓伟利达铭板科技有限公司 Detection apparatus specially used for computer display screen support
CN204986272U (en) * 2015-09-09 2016-01-20 天津光电高斯通信工程技术股份有限公司 A fixed bolster structure for kinect sensor
CN108066979A (en) * 2016-11-07 2018-05-25 浙江舜宇智能光学技术有限公司 The acquisition methods of virtual three dimensional image, the forming method of interactive environment and somatic sensation television game equipment
CN108759023A (en) * 2018-04-16 2018-11-06 青岛海信日立空调系统有限公司 Human inductor and air-conditioning including human inductor

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8619039B2 (en) * 2007-12-21 2013-12-31 Motorola Mobility Llc Translucent touch screen devices including low resistive mesh

Also Published As

Publication number Publication date
CN110007830A (en) 2019-07-12

Similar Documents

Publication Publication Date Title
US11269481B2 (en) Dynamic user interactions for display control and measuring degree of completeness of user gestures
US11886694B2 (en) Apparatuses for controlling unmanned aerial vehicles and methods for making and using same
US8077147B2 (en) Mouse with optical sensing surface
JP3968477B2 (en) Information input device and information input method
RU2579952C2 (en) Camera-based illumination and multi-sensor interaction method and system
US20170024017A1 (en) Gesture processing
KR100449710B1 (en) Remote pointing method and apparatus therefor
US5483261A (en) Graphical input controller and method with rear screen image detection
CN106030495B (en) Multi-modal gesture-based interaction system and method utilizing a single sensing system
KR100974894B1 (en) 3d space touch apparatus using multi-infrared camera
CN105353904A (en) Interactive display system, touch interactive remote control thereof and interactive touch method therefor
CN110007830B (en) Somatosensory interaction device and method
JP2014123316A (en) Information processing system, information processing device, detection device, information processing method, detection method, and computer program
JP4687820B2 (en) Information input device and information input method
CN203217524U (en) Spherical display based multipoint touch system
KR100511044B1 (en) Pointing apparatus using camera
CN105659193A (en) Multifunctional human interface apparatus
US20230409148A1 (en) Virtual mouse
Huot Touch Interfaces
CN111984116A (en) VR perception touch device
CN203376719U (en) Huge CCD optical touch screen
CN113395553A (en) Television and control method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant