WO2018000519A1 - Interactive control method and system for projection-based user interaction icons - Google Patents

Interactive control method and system for projection-based user interaction icons

Info

Publication number
WO2018000519A1
WO2018000519A1 · PCT/CN2016/093423 · CN2016093423W
Authority
WO
WIPO (PCT)
Prior art keywords
interactive
icon
interaction
projection
image
Prior art date
Application number
PCT/CN2016/093423
Other languages
English (en)
French (fr)
Inventor
杨伟樑
高志强
罗衡荣
林清云
Original Assignee
广景视睿科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广景视睿科技(深圳)有限公司
Publication of WO2018000519A1 publication Critical patent/WO2018000519A1/zh


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 — Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 — Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04817 — Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, using icons

Definitions

  • The present invention relates to the field of projection interaction, and in particular to an interactive control method and system for projection-based user interaction icons.
  • Interactive projection is a multimedia display platform that has become popular in recent years. Using computer vision and projection display technology, users can interact with virtual scenes in the projection area directly with their feet or hands, creating a dynamic interactive experience.
  • An interactive projection system captures the target image with an image acquisition device and processes it with an image data processing system to identify and determine the point at which the target performs the interactive operation. The approach is natural, concise, and direct, and has broad application prospects in fields such as virtual reality, human-machine interaction, and visual monitoring.
  • The technical problem to be solved by the present invention is to provide an interactive control method and system for projection-based user interaction icons that uses computer vision to directly recognize whether an interactive operating body in the projection display space operates a user interaction icon. The method is therefore easy to implement, low in cost, and subject to few limiting factors, and can control user interaction icons relatively accurately in projection environments such as strong light or low light.
  • One technical solution adopted by the present invention is an interactive control method for projection-based user interaction icons, comprising the steps of: (S1) inputting an interactive image containing user interaction icons to a projection module, which projects and displays the image on a projection display interface; (S2) an interactive operating body performing an interactive operation on the projection display interface; (S3) an image acquisition module continuously acquiring the interactive image information on the projection display interface and transmitting it to a central processing unit; (S4) the central processing unit extracting feature information from the interactive image information, inputting the extracted feature information into a pre-stored trained classifier for recognition, determining the user interaction icon under operation control, and outputting the corresponding interactive instruction, the classifier being stored in the central processing unit; and (S5) the projection module changing the projection content according to the interactive instruction output by the central processing unit and returning to the step of inputting the interactive image to the projection module; and/or an electronic device controlled by the user interaction icon performing the corresponding action according to the interactive instruction.
  • In a preferred embodiment, the interactive image sequence is acquired continuously. While the number of images is less than a preset threshold, the position of each user interaction icon in the interactive image is located, the grayscale features of the icon region are extracted, and the position and grayscale features are stored. When the number of images equals the preset threshold, the stored positions and grayscale features of each frame are assigned weights and summed separately to serve as reference background information. When the number of images exceeds the preset threshold, the features derived from grayscale changes in the icon regions, together with color and shape features, are extracted on the basis of the reference background information.
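  • As a rough illustration of this accumulation step, the following minimal sketch (Python with NumPy) builds the reference background from the first N frames. The equal weights, the threshold of 30 frames, and the mean-gray feature are all assumptions; the patent does not fix them.

```python
import numpy as np

N = 30                          # preset frame-count threshold (assumed value)
weights = np.full(N, 1.0 / N)   # per-frame weights summing to 1 (assumed)
stored = []                     # features of the first N frames

def icon_gray_features(frame_gray, icon_rects):
    """Mean gray level of each icon region -- a simple stand-in for the
    patent's unspecified grayscale features."""
    return np.array([frame_gray[y:y + h, x:x + w].mean()
                     for (x, y, w, h) in icon_rects])

def reference_background(frame_gray, icon_rects):
    """Return None while still collecting frames; afterwards return the
    weighted sum of the stored per-frame features as the reference."""
    if len(stored) < N:
        stored.append(icon_gray_features(frame_gray, icon_rects))
        return None
    return (weights[:, None] * np.asarray(stored)).sum(axis=0)
```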
  • Preferably, the central processing unit extracts the feature information from the interactive image information as follows. First, the region of each user interaction icon is located in the interactive image based on its brightness and geometric shape information; the region of each icon can be described by a rectangular box, denoted rect_i(x_i, y_i, w_i, h_i). Second, the features of each icon region are extracted, comprising two types: the first type, computed by background subtraction from changes in pixel gray values, denoted F1_i = (f1_i1, f1_i2, ..., f1_im); the second type, based on the color, texture, and shape-contour features of the interactive operating body, denoted F2_i = (f2_i1, f2_i2, ..., f2_in). Each icon region is then represented by the feature descriptor composed of the first type (F1) and second type (F2) of features, denoted FT_i = (ft_i1, ft_i2, ..., ft_i(m+n)).
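  • A hedged sketch of extracting the two feature types for one icon region (Python with OpenCV). The particular statistics — difference mean/std, changed-pixel fraction, a coarse color histogram, and the largest contour area — are illustrative assumptions standing in for the unspecified components f1 and f2:

```python
import cv2
import numpy as np

def extract_descriptor(region_gray, region_bgr, ref_gray):
    """Build FT_i = (F1_i, F2_i) for one icon region; `ref_gray` is the
    reference-background patch for that region."""
    # F1: background-subtraction features derived from gray-value changes.
    diff = cv2.absdiff(region_gray, ref_gray)
    f1 = np.array([diff.mean(), diff.std(), (diff > 25).mean()])
    # F2: color and shape-contour features of the occluding operator.
    hist = cv2.calcHist([region_bgr], [0, 1, 2], None,
                        [4, 4, 4], [0, 256] * 3).flatten()
    hist /= hist.sum() + 1e-6                    # normalized color histogram
    mask = (diff > 25).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    area = max((cv2.contourArea(c) for c in contours), default=0.0)
    f2 = np.concatenate([hist, [area / mask.size]])
    return np.concatenate([f1, f2])              # descriptor FT_i
```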
  • Alternatively, the central processing unit may extract and recognize the feature information as follows: in a strong-light environment, a moving-target detection or tracking algorithm first detects and recognizes the position of the interactive operating body in the projection display space, which is then compared with the position of each user interaction icon in the interactive image to obtain the number of the icon under operation control; in a low-light environment, the second type of feature (F2) is disregarded, and the classifier trained on the first type of feature (F1) directly recognizes F1 to obtain the number of the icon operated by the interactive operating body.
  • Training the classifier includes: under different illumination environments, the projection module projects interactive images containing user interaction icons onto different projection display interfaces; the image acquisition module acquires the interactive images in real time; feature information is extracted from the images and labeled; and a machine learning algorithm trains on the labeled feature data to find the optimal model parameters and complete construction of the classifier.
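  • A minimal sketch of this training step with a support vector machine, one of the algorithms the patent names (scikit-learn; the labeled descriptor array, the RBF kernel, and the hyperparameter grid are assumptions):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def train_classifier(descriptors: np.ndarray, labels: np.ndarray) -> SVC:
    """`descriptors` is an (n_samples, m + n) array of FT descriptors
    collected under different lighting and backgrounds; `labels` gives the
    operated icon number (or -1 for no operation)."""
    search = GridSearchCV(SVC(kernel="rbf"),
                          {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]},
                          cv=5)              # search for optimal parameters
    search.fit(descriptors, labels)
    return search.best_estimator_            # the trained classifier
```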
  • The different projection display interfaces are backgrounds of different colors, different textures, or different flatness.
  • The machine learning algorithm may be a neural network or a support vector machine.
  • Each type of user interaction icon corresponds to one interactive instruction.
  • The interactive instruction is input by the central processing unit to the projection module and/or directly to other devices connected to the central processing unit.
  • When performing an interactive operation, the interactive operating body either directly touches the user interaction icon region in the projected interactive image or occludes that region. The interactive operating body is the user's hand or foot, or an object controlled by the user. The information of the user interaction icon in the interactive image includes its color brightness and/or geometric shape; the user interaction icon is an application icon of the user interface or an operation icon within an application screen.
  • Another technical solution adopted by the present invention is an interactive control system for projection-based user interaction icons, comprising a central processing unit, a projection module, and an image acquisition module, the central processing unit being connected to the projection module and to the image acquisition module respectively. The central processing unit is configured to: input an interactive image containing user interaction icons to the projection module, causing it to project and display the image on the projection display interface; cause the image acquisition module to continuously acquire the interactive image information on the projection display interface and transmit it to the central processing unit; and extract the feature information from the interactive image information, input the extracted feature information into the pre-stored trained classifier for recognition, determine the user interaction icon under operation control, and output the interactive instruction corresponding to that icon.
  • The interactive control system further includes an audio output device and a storage device, each connected to the central processing unit, the storage device being configured to store a preset audio library. The central processing unit is further configured to extract from the preset audio library of the storage device the audio file corresponding to the user interaction icon and, while adjusting the projection content of the projection module, output the acquired audio file to the audio output device so that the audio output device outputs interactive audio according to the acquired file.
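  • The audio step reduces to a lookup from icon number to audio file plus non-blocking playback. A minimal sketch follows; the mapping, the file names, and the `aplay` command are illustrative assumptions for a Linux host, not details taken from the patent:

```python
import subprocess

AUDIO_LIBRARY = {0: "sounds/click.wav",      # hypothetical preset library
                 1: "sounds/level_start.wav"}

def play_interaction_audio(icon_id: int) -> None:
    """Play the audio file for the operated icon without blocking the
    projection update."""
    path = AUDIO_LIBRARY.get(icon_id)
    if path is not None:
        subprocess.Popen(["aplay", path])    # fire-and-forget playback
```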
  • The image acquisition module is a camera; the feature information of the user interaction icons in the interactive image includes their color brightness and/or geometric shape; the user interaction icon is an application icon of the user interface or an operation icon of an application screen.
  • Compared with the prior art, the present invention has the following beneficial effects: computer vision is used to directly recognize whether the interactive operating body in the projection display space operates a user interaction icon, so the method is easy to implement and low in cost; it avoids complex computations such as the coordinate conversion involved in camera calibration in common interaction methods or systems based on structured light or gesture recognition, and it also avoids skin-color segmentation and positioning. Compared with methods based on structured light or gesture recognition, the present invention can control user interaction icons relatively accurately in projection environments such as strong light, low light, or complete darkness.
  • FIG. 1 is a schematic structural diagram of an interactive control system for a projection-based user interaction icon according to an embodiment of the present invention
  • FIG. 2 is a flowchart of an interactive control method of a projection-based user interaction icon according to an embodiment of the present invention
  • FIG. 3 is a flow chart showing an example of extracting interactive image feature information in an interactive control method of a projection-based user interaction icon according to an embodiment of the present invention
  • FIG. 4 is a flowchart of another example of extracting interactive image feature information in an interaction control method of a projection-based user interaction icon according to an embodiment of the present invention
  • FIG. 5a is a schematic diagram of an example of a user interaction icon in an interaction control method of a projection-based user interaction icon according to an embodiment of the present invention
  • FIG. 5b is a schematic diagram of another example of a user interaction icon in an interaction control method of a projection-based user interaction icon according to an embodiment of the present invention.
  • FIG. 5c is a schematic diagram of still another example of a user interaction icon in an interaction control method of a projection-based user interaction icon according to an embodiment of the present invention.
  • FIG. 5d is a schematic diagram of an interaction image of an interaction process in an interaction control method of a projection-based user interaction icon according to an embodiment of the present invention.
  • Referring to FIG. 1, a schematic structural diagram of an interactive control system for projection-based user interaction icons according to an embodiment of the present invention: the interactive control system 10 includes a central processing unit 11, a projection module 12, and an image acquisition module 13, the central processing unit 11 being connected to the projection module 12 and to the image acquisition module 13 respectively. The central processing unit 11 is configured to input an interactive image containing user interaction icons to the projection module 12, causing the projection module 12 to project and display the image on the projection display interface. When an interactive operating body located between the projection screen 14 of the projection module 12 and the image acquisition module 13 performs an interactive operation, the image acquisition module 13 continuously acquires the interactive image information on the projection display interface and transmits it to the central processing unit 11, which extracts the feature information, inputs it into the pre-stored trained classifier for recognition, determines the user interaction icon under operation control, and outputs the corresponding interactive instruction; the projection module 12 changes the projection content according to the interactive instruction output by the central processing unit 11, and the interactive image is input to the projection module 12 again.
  • Each type of user interaction icon corresponds to one interactive instruction. The interactive instruction may be input by the central processing unit 11 to the projection module 12, causing the projection module 12 to change the current projection content according to the instruction; it may also be input directly to an external intelligent electronic device 20 wirelessly connected to the central processing unit 11, so that the interactive control system 10 of the present invention functions as a remote controller, controlling the external intelligent electronic device 20 to perform the operation associated with the instruction. The wireless connection established between the interactive control system 10 and the intelligent electronic device 20 is not limited in distance and may be a Bluetooth or WiFi connection.
  • The interactive control system 10 may further include an audio output device and a storage device, each connected to the central processing unit 11, the storage device being used to store a preset audio library. The central processing unit 11 is further configured to extract from the preset audio library of the storage device the audio file corresponding to the user interaction icon and, while adjusting the projection content of the projection module 12, output the acquired audio file to the audio output device so that it outputs interactive audio according to the acquired file.
  • The image acquisition module 13 is a camera; the feature information of the user interaction icons in the interactive image includes their color brightness and/or geometric shape; the user interaction icon may be an application icon of the user interface or an operation icon within an application screen.
  • The invention further provides an interactive control method for projection-based user interaction icons. Referring to FIG. 2, the method includes:
  • Step (S1): inputting an interactive image containing user interaction icons to the projection module 12, which projects and displays the image on the projection display interface;
  • Step (S2): an interactive operating body located between the projection screen 14 of the projection module 12 and the image acquisition module 13 performs an interactive operation on the projection display interface;
  • Step (S3): the image acquisition module 13 continuously acquires the interactive image information on the projection display interface and transmits it to the central processing unit 11;
  • Step (S4): the central processing unit 11 extracts the feature information from the interactive image information, inputs it into the pre-stored trained classifier for recognition, determines the user interaction icon under operation control, and outputs the corresponding interactive instruction to the projection module 12; and
  • Step (S5): the projection module 12 changes the projection content according to the interactive instruction output by the central processing unit 11, and the method returns to the step of inputting the interactive image to the projection module 12.
  • In this embodiment the image acquisition module 13 may be a camera whose field of view covers the projection screen 14 of the projection module 12. Because the interactive operating body is located between the image acquisition module 13 and the projection screen 14, in front of the image acquisition module 13, the image captured by the module contains both the interactive operating body and the projection screen 14. The two may overlap partially, completely, or not at all.
  • A user interaction icon corresponds to one interactive instruction. The instruction may be input by the central processing unit 11 to the projection module 12 so that the projection module 12 changes the current projection content according to it, or input directly to the external intelligent electronic device 20 wirelessly connected to the central processing unit 11, causing the external device to perform the operation corresponding to the icon. The central processing unit 11 may also be wired to the external intelligent electronic device 20.
  • FIG. 3 is a flowchart of one example of extracting interactive image feature information in the interactive control method according to an embodiment of the present invention. Referring to FIG. 3, in the interactive control method an interactive image sequence is acquired continuously. While the number of images is less than a preset threshold, the projection region of each user interaction icon in the interactive image is located, the grayscale features of the region are extracted, and the region and features are stored. When the number of images equals the preset threshold, the stored projection regions and grayscale features of each frame are assigned weights and summed separately to serve as reference background information. When the number of images exceeds the preset threshold, the features derived from grayscale changes in the icon regions are obtained from the reference background information, and the color and shape features of the icon regions are extracted. Finally, the extracted feature information is input into the pre-stored trained classifier for recognition, the user interaction icon under operation control is determined, and the corresponding interactive instruction is output. The classifier is stored in the central processing unit 11.
  • Preferably, in step (S4) the central processing unit 11 extracts the feature information as follows: first, based on information of the user interaction icon in the interactive image, such as brightness and geometric shape, its region in the image is located; the region of each icon can be described by a rectangular box, denoted rect_i(x_i, y_i, w_i, h_i). Second, the features of each icon region are extracted, comprising two types: the first type, computed by background subtraction from changes in pixel gray values, denoted F1_i = (f1_i1, f1_i2, ..., f1_im); the second type, based on the texture, shape contour, and skin-color features of the interactive operating body, denoted F2_i = (f2_i1, f2_i2, ..., f2_in). Each icon region is represented by the feature descriptor composed of the first type (F1) and second type (F2) of features, denoted FT_i = (ft_i1, ft_i2, ..., ft_i(m+n)).
  • FIG. 4 is a flowchart of another example of extracting interactive image feature information in the interactive control method according to an embodiment of the present invention. Unlike the example of FIG. 3, in a strong-light projection environment a moving-target detection or tracking algorithm detects and recognizes the position of the interactive operating body in the projection display space and compares it with the region of each user interaction icon to obtain the number of the icon under operation control; in a low-light projection environment, the second type of feature (F2) is disregarded, and the classifier trained on the first type of feature (F1) directly recognizes F1 to obtain the number of the icon the interactive operating body intends to operate.
  • Training the classifier preferably includes: under different illumination environments, the projection module 12 projects interactive images containing user interaction icons onto different projection display interfaces, and different interactive operating bodies perform interactive operations on those interfaces; the image acquisition module 13 acquires the interactive images in real time; feature information is extracted from the images and labeled; and a machine learning algorithm trains on the labeled feature data, finding the optimal model parameters and completing construction of the classifier.
  • The classifier trained by the machine learning algorithm can perform adaptive feature fusion. In a strong-light projection environment, the projected image has low contrast and the first type of feature F1 is weak, but the image acquisition module 13 clearly captures the geometric shape of the interactive operating body, so the second type of feature F2 is pronounced; in this case the classifier's recognition of whether an icon is operated depends more on F2. Conversely, in a low-light projection environment the image acquisition module 13 can hardly capture the geometric shape of the operating body, so F2 is weak, but the acquired image has high contrast and F1 is pronounced; recognition of whether an icon is operated then depends more on F1.
  • The different projection display interfaces may be backgrounds of different colors, different textures, or different flatness; the machine learning algorithm may be a (deep) neural network or a support vector machine.
  • When performing an interactive operation, the interactive operating body may directly touch the user interaction icon region on the projected interactive image, or move between the projection module 12 and the projection screen 14 so as to occlude the icon region in the interactive image; the interactive operating body may be the user's hand or foot, or an object controlled by the user.
  • The feature information of the user interaction icons in the interactive image includes their color brightness and/or geometric shape; the user interaction icon may be an application icon of the user interface or an operation icon within an application screen.
  • It should be noted that the user interaction icons described in the present invention take many forms; any icon whose associated operation can be controlled by touching or occluding it falls within the protection scope of the invention. FIGS. 5a-5c are each schematic diagrams of examples of user interaction icons in the interactive control method according to an embodiment of the present invention. FIG. 5a shows a conventional user interface in which each icon corresponds to one operation; an icon may correspond to an application (APP) or to a device. For example, an icon of the user interface can be operated to open a game interface, or clicked to control a device connected to the system.
  • FIG. 5b shows the interactive icons of a game page; operating an icon enters the corresponding game level.
  • FIG. 5c shows the interactive icons of a video playback page; operating an icon pauses or plays the video, controls the playback progress, and so on.
  • FIG. 5d is a schematic diagram of an interactive image during the interaction process in the interactive control method according to an embodiment of the present invention. As shown in FIG. 5d, in the region controlled by the interactive operating body, the grayscale information of the operated icon region in the image acquired by the image acquisition module 13 changes relative to the non-operated state shown in FIG. 5a, so features derived from grayscale changes can be extracted. The interactive operating body is not limited to the illustrated hand; it may also be a foot or an object controlled by the user.
  • With the above configuration, because the present invention uses computer vision to directly recognize whether a user interaction icon in the projection display space is operated by the interactive operating body, the method is easy to implement and low in cost; it avoids complex computations such as the coordinate conversion involved in camera calibration in common interaction methods or systems based on structured light or gesture recognition, as well as skin-color segmentation and positioning. Compared with methods based on structured light or gesture recognition, the invention can control user interaction icons relatively accurately in projection environments such as strong light, low light, or complete darkness.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An interactive control method and system for projection-based user interaction icons. In the method, a projection module (12) projects and displays an interactive image containing user interaction icons as a projection screen (14); an interactive operating body performs an interactive operation; an image acquisition module (13) continuously acquires the interactive image information; a central processing unit (11) extracts and recognizes the feature information in the interactive image information, determines the user interaction icon under operation control, and outputs the corresponding interactive instruction to the projection module (12); the projection module (12) changes the projection content according to the interactive instruction output by the central processing unit (11). Computer vision is used to directly recognize whether a user interaction icon in the projection display space is touched or occluded by the interactive operating body; compared with methods based on structured light or gesture recognition, the icons can be controlled relatively accurately in projection environments such as strong light, low light, or complete darkness.

Description

Interactive control method and system for projection-based user interaction icons
Technical Field
The present invention relates to the field of projection interaction, and in particular to an interactive control method and system for projection-based user interaction icons.
Background Art
Since the beginning of the 21st century, the hardware performance and penetration of electronic devices such as mobile phones and computers have kept improving, and touch screens have become popular. Touch operation frees people from the keyboard and mouse, allowing control directly on the screen, which is more natural and convenient. However, with the emergence of screens of different kinds and sizes and of various APPs, the inconvenience and limitations of touch operation have gradually become apparent: a small touch screen is merely a mouse and keyboard in another form and does not truly free the user from the hardware; conversely, a large touch screen hung on a wall requires the user to walk up to it, making operation inconvenient and uncomfortable. Moreover, in certain scenarios the user is not allowed, or finds it inconvenient, to touch the device directly, for example a doctor performing surgery or a chef cooking.
Interactive projection is a multimedia display platform that has become popular in recent years. Using computer vision and projection display technology, users can interact with virtual scenes in the projection area directly with their feet or hands, creating a dynamic interactive experience. An interactive projection system captures the target image with an image acquisition device and processes it with an image data processing system to identify and determine the point at which the target performs the interactive operation. The approach is natural, concise, and direct, and has broad application prospects in fields such as virtual reality, human-computer interaction, and visual monitoring.
Summary of the Invention
The main technical problem solved by the present invention is to provide an interactive control method and system for projection-based user interaction icons that uses computer vision to directly recognize whether an interactive operating body in the projection display space operates a user interaction icon. The method is therefore easy to implement, low in cost, and subject to few limiting factors, and can control user interaction icons relatively accurately in projection environments such as strong light or low light.
To solve the above technical problem, one technical solution adopted by the present invention is an interactive control method for projection-based user interaction icons, comprising the following steps: (S1) inputting an interactive image containing user interaction icons to a projection module, which projects and displays the image on a projection display interface; (S2) an interactive operating body performing an interactive operation on the projection display interface; (S3) an image acquisition module continuously acquiring the interactive image information on the projection display interface and transmitting it to a central processing unit; (S4) the central processing unit extracting feature information from the interactive image information, inputting it into a pre-stored trained classifier for recognition, determining the user interaction icon under operation control, and outputting the corresponding interactive instruction, the classifier being stored in the central processing unit; (S5) the projection module changing the projection content according to the interactive instruction output by the central processing unit and returning to the step of inputting the interactive image to the projection module; and/or an electronic device controlled by the user interaction icon performing the corresponding action according to the interactive instruction.
According to a preferred embodiment, in the interactive control method an interactive image sequence is acquired continuously. While the number of images is less than a preset threshold, the position of each user interaction icon in the interactive image is located, the grayscale features of the icon region are extracted, and the position and grayscale features are stored. When the number of images equals the preset threshold, the stored positions and grayscale features of each frame are assigned weights and summed separately to serve as reference background information. When the number of images exceeds the preset threshold, the features derived from grayscale changes in the icon regions, together with color and shape features, are extracted on the basis of the reference background information.
Further, according to a preferred embodiment, the central processing unit extracts the feature information from the interactive image information as follows: the region of each user interaction icon is located in the interactive image based on its brightness and geometric shape information; the region of each icon can be described by a rectangular box, denoted rect_i(x_i, y_i, w_i, h_i). Then the features of each icon region are extracted, comprising two types: the first type, computed by background subtraction from changes in pixel gray values, denoted F1_i = (f1_i1, f1_i2, ..., f1_im); the second type, based on the color, texture, and shape-contour features of the interactive operating body, denoted F2_i = (f2_i1, f2_i2, ..., f2_in). Each icon region is represented by the feature descriptor composed of the first type (F1) and second type (F2) of features, denoted FT_i = (ft_i1, ft_i2, ..., ft_i(m+n)).
Moreover, according to a preferred embodiment, the central processing unit may alternatively extract and recognize the feature information as follows: in a strong-light environment, a moving-target detection or tracking algorithm first detects and recognizes the position of the interactive operating body in the projection display space, which is then compared with the position of each user interaction icon in the interactive image to obtain the number of the icon under operation control; in a low-light environment, the second type of feature (F2) is disregarded, and the classifier trained on the first type of feature (F1) directly recognizes F1 to obtain the number of the icon operated by the interactive operating body.
According to a preferred embodiment, training the classifier comprises: under different illumination environments, the projection module projects interactive images containing user interaction icons onto different projection display interfaces; the image acquisition module acquires the interactive images in real time; feature information is extracted from the images and labeled; and a machine learning algorithm trains on the labeled feature data, finds the optimal model parameters, and completes construction of the classifier.
According to a preferred embodiment, the different projection display interfaces are backgrounds of different colors, different textures, or different flatness; the machine learning algorithm may be a neural network or a support vector machine; each type of user interaction icon corresponds to one interactive instruction; and the interactive instruction is input by the central processing unit to the projection module and/or directly to other devices connected to the central processing unit.
In addition, according to a preferred embodiment, when performing an interactive operation the interactive operating body directly touches the user interaction icon region in the projected interactive image, or occludes the user interaction icon region in the interactive image; the interactive operating body is the user's hand or foot, or an object controlled by the user; the information of the user interaction icon in the interactive image includes its color brightness and/or geometric shape; and the user interaction icon is an application icon of the user interface or an operation icon within an application screen.
To solve the above technical problem, another technical solution adopted by the present invention is an interactive control system for projection-based user interaction icons, comprising a central processing unit, a projection module, and an image acquisition module, the central processing unit being connected to the projection module and to the image acquisition module respectively. The central processing unit is configured to: input an interactive image containing user interaction icons to the projection module, causing it to project and display the image on a projection display interface; when an interactive operating body located between the projection screen of the projection module and the image acquisition module performs an interactive operation on the projection display interface, cause the image acquisition module to continuously acquire the interactive image information and transmit it to the central processing unit, so as to extract the feature information, input it into a pre-stored trained classifier for recognition, determine the user interaction icon under operation control, and output the corresponding interactive instruction to the projection module or to another device connected to the central processing unit; cause the projection module to change the projection content according to the interactive instruction output by the central processing unit and input the interactive image to the projection module again; and/or cause the device connected to the central processing unit to perform the corresponding action according to the interactive instruction.
In addition, according to a preferred embodiment, the interactive control system further comprises an audio output device and a storage device, each connected to the central processing unit, the storage device being used to store a preset audio library. The central processing unit is further configured to extract from the preset audio library of the storage device the audio file corresponding to the user interaction icon and, while adjusting the projection content of the projection module, output the acquired audio file to the audio output device so that it outputs interactive audio according to the acquired file.
According to a preferred embodiment, the image acquisition module is a camera; the feature information of the user interaction icons in the interactive image includes their color brightness and/or geometric shape; and the user interaction icon is an application icon of the user interface or an operation icon of an application screen.
Compared with the prior art, the present invention has the following beneficial effects: computer vision is used to directly recognize whether the interactive operating body in the projection display space operates a user interaction icon, so the method is easy to implement and low in cost; it avoids complex computations such as the coordinate conversion involved in camera calibration in common interaction methods or systems based on structured light or gesture recognition, as well as skin-color segmentation and positioning. Compared with methods based on structured light or gesture recognition, the invention can control user interaction icons relatively accurately in projection environments such as strong light, low light, or complete darkness.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of an interactive control system for projection-based user interaction icons according to an embodiment of the present invention;
FIG. 2 is a flowchart of an interactive control method for projection-based user interaction icons according to an embodiment of the present invention;
FIG. 3 is a flowchart of one example of extracting interactive image feature information in the interactive control method for projection-based user interaction icons according to an embodiment of the present invention;
FIG. 4 is a flowchart of another example of extracting interactive image feature information in the interactive control method for projection-based user interaction icons according to an embodiment of the present invention;
FIG. 5a is a schematic diagram of one example of a user interaction icon in the interactive control method for projection-based user interaction icons according to an embodiment of the present invention;
FIG. 5b is a schematic diagram of another example of a user interaction icon in the interactive control method for projection-based user interaction icons according to an embodiment of the present invention;
FIG. 5c is a schematic diagram of a further example of a user interaction icon in the interactive control method for projection-based user interaction icons according to an embodiment of the present invention;
FIG. 5d is a schematic diagram of an interactive image during the interaction process in the interactive control method for projection-based user interaction icons according to an embodiment of the present invention.
Detailed Description of the Embodiments
Specific embodiments of the present invention are described in detail below with reference to the accompanying drawings, but it should be understood that the protection scope of the present invention is not limited by the specific embodiments.
Referring to FIG. 1, a schematic structural diagram of an interactive control system for projection-based user interaction icons according to an embodiment of the present invention: the interactive control system 10 comprises a central processing unit 11, a projection module 12, and an image acquisition module 13, the central processing unit 11 being connected to the projection module 12 and to the image acquisition module 13 respectively. The central processing unit 11 is configured to input an interactive image containing user interaction icons to the projection module 12, causing the projection module 12 to project and display the image on the projection display interface. When an interactive operating body located between the projection screen 14 of the projection module 12 and the image acquisition module 13 performs an interactive operation, the image acquisition module 13 continuously acquires the interactive image information on the projection display interface and transmits it to the central processing unit 11, which extracts the feature information, inputs the extracted feature information into the pre-stored trained classifier for recognition, determines the user interaction icon under operation control, and outputs the corresponding interactive instruction, causing the projection module 12 to change the projection content according to the instruction; the interactive image is then input to the projection module 12 again.
Here, each type of user interaction icon corresponds to one interactive instruction. The instruction may be input by the central processing unit 11 to the projection module 12, causing it to change the current projection content according to the instruction; it may also be input directly to an external intelligent electronic device 20 wirelessly connected to the central processing unit 11, so that the interactive control system 10 for projection-based user interaction icons of the present invention functions as a remote controller, controlling the external intelligent electronic device 20 to perform the operation associated with the instruction. The wireless connection established between the interactive control system 10 and the intelligent electronic device 20 is not limited in distance and may be a Bluetooth or WiFi connection.
In addition, the interactive control system 10 may further comprise an audio output device and a storage device, each connected to the central processing unit 11, the storage device being used to store a preset audio library. The central processing unit 11 is further configured to extract from the preset audio library of the storage device the audio file corresponding to the user interaction icon and, while adjusting the projection content of the projection module 12, output the acquired audio file to the audio output device so that it outputs interactive audio according to the acquired file.
Furthermore, the image acquisition module 13 is a camera; the feature information of the user interaction icons in the interactive image includes their color brightness and/or geometric shape; the user interaction icon may be an application icon of the user interface or an operation icon within an application screen.
The present invention further provides an interactive control method for projection-based user interaction icons. Referring to FIG. 2, it comprises the following steps (an illustrative sketch of the full loop follows the list):
Step (S1): inputting an interactive image containing user interaction icons to the projection module 12, which projects and displays the image on the projection display interface;
Step (S2): an interactive operating body located between the projection screen 14 of the projection module 12 and the image acquisition module 13 performs an interactive operation on the projection display interface;
Step (S3): the image acquisition module 13 continuously acquires the interactive image information on the projection display interface and transmits it to the central processing unit 11;
Step (S4): the central processing unit 11 extracts the feature information from the interactive image information, inputs the extracted feature information into the pre-stored trained classifier for recognition, determines the user interaction icon under operation control, and outputs the corresponding interactive instruction to the projection module 12; and
Step (S5): the projection module 12 changes the projection content according to the interactive instruction output by the central processing unit 11, and the method returns to the step of inputting the interactive image to the projection module 12.
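Read together, steps (S1)-(S5) form a project/capture/classify/update loop. The following minimal sketch shows only that control flow; every collaborator (projection, capture, feature extraction, classification, and the icon-to-instruction table) is injected as an assumed callable rather than taken from the patent:

```python
from typing import Callable, Dict
import numpy as np

def control_loop(
    project: Callable[[np.ndarray], None],        # (S1)/(S5): show an image
    capture: Callable[[], np.ndarray],            # (S3): grab a camera frame
    extract: Callable[[np.ndarray], np.ndarray],  # (S4): FT descriptors
    classify: Callable[[np.ndarray], int],        # (S4): icon id, -1 = none
    instructions: Dict[int, Callable[[np.ndarray], np.ndarray]],
    image: np.ndarray,                            # initial interactive image
) -> None:
    """Run steps (S1)-(S5) indefinitely; each interactive instruction maps
    the current interactive image to the next one."""
    while True:
        project(image)                        # (S1)/(S5): project the image
        frame = capture()                     # (S3): acquire the scene
        icon_id = classify(extract(frame))    # (S4): recognize operated icon
        if icon_id in instructions:           # (S5): apply the instruction
            image = instructions[icon_id](image)
```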
In this embodiment the image acquisition module 13 may be a camera whose field of view covers the projection screen 14 of the projection module 12. Because the interactive operating body is located between the image acquisition module 13 and the projection screen 14, in front of the image acquisition module 13, the image captured by the module 13 contains both the interactive operating body and the projection screen 14. Of course, the two may overlap partially, completely, or not at all.
In addition, each type of user interaction icon corresponds to one interactive instruction. The instruction may be input by the central processing unit 11 to the projection module 12, causing it to change the current projection content, or input directly to the external intelligent electronic device 20 wirelessly connected to the central processing unit 11, causing the external device 20 to perform the operation corresponding to the user interaction icon. Of course, the central processing unit 11 may also be wired to the external intelligent electronic device 20.
FIG. 3 is a flowchart of one example of extracting interactive image feature information in the interactive control method according to an embodiment of the present invention. Referring to FIG. 3, in the interactive control method an interactive image sequence is acquired continuously. While the number of images is less than a preset threshold, the projection region of each user interaction icon in the interactive image is located, the grayscale features of the region are extracted, and the projection region and grayscale features are stored. When the number of images equals the preset threshold, the stored projection regions and grayscale features of each frame are assigned weights and summed separately to serve as reference background information. When the number of images exceeds the preset threshold, the features derived from grayscale changes in the icon regions are obtained from the reference background information, and the color and shape features of the icon regions in the interactive image are extracted. Finally, the extracted feature information is input into the pre-stored trained classifier for recognition, the user interaction icon under operation control is determined, and the corresponding interactive instruction is output. The classifier is stored in the central processing unit 11.
In step (S4) the central processing unit 11 preferably extracts the feature information as follows: first, based on information of the user interaction icon in the interactive image, such as brightness and geometric shape, its region in the image is located (a minimal localization sketch follows below); the region of each icon can be described by a rectangular box, denoted rect_i(x_i, y_i, w_i, h_i). Second, the features of each icon region are extracted, comprising two types: the first type, computed by background subtraction from changes in pixel gray values, denoted F1_i = (f1_i1, f1_i2, ..., f1_im); the second type, based on the texture, shape contour, and skin-color features of the interactive operating body, denoted F2_i = (f2_i1, f2_i2, ..., f2_in). Each icon region is represented by the feature descriptor composed of the first type (F1) and second type (F2) of features, denoted FT_i = (ft_i1, ft_i2, ..., ft_i(m+n)).
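As a hedged illustration of the localization step referenced above, the sketch below treats icons as bright, compact regions against the projected background and recovers each rect_i via thresholding and contour bounding boxes (OpenCV; the threshold value and the minimum-area filter are assumptions, since the patent leaves the brightness/geometry analysis unspecified):

```python
import cv2

def locate_icon_regions(frame_bgr, min_area=400):
    """Return candidate rect_i = (x_i, y_i, w_i, h_i) boxes for icon
    regions, assuming icons appear as bright blobs in the captured frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```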
FIG. 4 is a flowchart of another example of extracting interactive image feature information in the interactive control method according to an embodiment of the present invention. Referring to FIG. 4, unlike the example of FIG. 3, in a strong-light projection environment a moving-target detection or tracking algorithm detects and recognizes the position of the interactive operating body in the projection display space and compares it with the region of each user interaction icon to obtain the number of the icon under operation control; in a low-light projection environment, the second type of feature (F2) is disregarded and the classifier trained on the first type of feature (F1) directly recognizes F1 to obtain the number of the icon the interactive operating body intends to operate.
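A minimal sketch of the strong-light path: simple frame differencing stands in for the moving-target detector, and the detected body's bounding box is compared with each icon's rectangle by axis-aligned overlap. The differencing threshold and the overlap test are assumptions, not details fixed by the patent:

```python
import cv2

def operated_icon(prev_gray, curr_gray, icon_rects):
    """Return the number of the icon overlapped by the moving operating
    body, or -1 if none is found."""
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, moving = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(moving, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return -1
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    for i, (ix, iy, iw, ih) in enumerate(icon_rects):
        # Axis-aligned overlap between the moving body and icon i.
        if x < ix + iw and ix < x + w and y < iy + ih and iy < y + h:
            return i
    return -1
```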
In addition, training the classifier preferably comprises: under different illumination environments, the projection module 12 projects interactive images containing user interaction icons onto different projection display interfaces, and different interactive operating bodies perform interactive operations on those interfaces; the image acquisition module 13 acquires the interactive images in real time; feature information is extracted from the images and labeled; and a machine learning algorithm trains on the labeled feature data, finds the optimal model parameters, and completes construction of the classifier.
Here, the classifier trained by the machine learning algorithm can perform adaptive feature fusion. In a strong-light projection environment, the projected image has low contrast and the first type of feature F1 is weak, but the image acquisition module 13 clearly captures the geometric shape of the interactive operating body, so the second type of feature F2 is pronounced; in this case the classifier's recognition of whether an icon is operated by the interactive operating body depends more on F2. Conversely, in a low-light projection environment the image acquisition module 13 can hardly capture the geometric shape of the operating body, so F2 is weak, but the acquired interactive image has high contrast and F1 is pronounced; recognition then depends more on F1.
In addition, the different projection display interfaces may be backgrounds of different colors, different textures, or different flatness; the machine learning algorithm may be a (deep) neural network or a support vector machine.
Moreover, when performing an interactive operation the interactive operating body may directly touch the user interaction icon region on the projected interactive image, or move between the projection module 12 and the projection screen 14 so as to occlude the icon region in the interactive image; the interactive operating body may be the user's hand or foot, or an object controlled by the user.
Furthermore, the feature information of the user interaction icons in the interactive image includes their color brightness and/or geometric shape; the user interaction icon may be an application icon of the user interface or an operation icon within an application screen.
It is worth noting that the user interaction icons described in the present invention take many forms; any user interaction icon whose associated operation can be controlled by touching or occluding the icon falls within the protection scope of the invention. Referring to FIGS. 5a-5c, each a schematic diagram of an example of a user interaction icon in the interactive control method according to an embodiment of the present invention: FIG. 5a shows a conventional user interface in which each icon corresponds to one operation; an icon may correspond to an application (APP) or to a device. For example, an icon of the user interface can be operated to open a game interface, or clicked to control a device connected to the system. FIG. 5b shows the interactive icons of a game page; operating an icon enters the corresponding game level. FIG. 5c shows the interactive icons of a video playback page; operating an icon pauses or plays the video, controls the playback progress, and so on.
FIG. 5d is a schematic diagram of an interactive image during the interaction process in the interactive control method according to an embodiment of the present invention. As shown in FIG. 5d, in the region controlled by the interactive operating body, the grayscale information of the operated icon region in the image acquired by the image acquisition module 13 changes relative to the non-operated state shown in FIG. 5a, so features derived from grayscale changes can be extracted. It is worth noting that the interactive operating body is not limited to the illustrated hand; it may also be a foot or an object controlled by the operating body.
With the above configuration, because the present invention uses computer vision to directly recognize whether a user interaction icon in the projection display space is operated by the interactive operating body, the method is easy to implement and low in cost; it avoids complex computations such as the coordinate conversion involved in camera calibration in common interaction methods or systems based on structured light or gesture recognition, as well as skin-color segmentation and positioning. Compared with methods based on structured light or gesture recognition, the invention can control user interaction icons relatively accurately in projection environments such as strong light, low light, or complete darkness.
The present invention has been described in detail above in conjunction with its preferred embodiments, but the invention is not limited thereto. For those of ordinary skill in the art to which the invention belongs, any equivalent structural or process transformation made using the contents of the specification and drawings, or any direct or indirect application in other related technical fields, without departing from the concept of the invention, shall be regarded as falling within the protection scope of the invention.

Claims (10)

  1. An interactive control method for projection-based user interaction icons, characterized by comprising the following steps:
    (S1) inputting an interactive image containing user interaction icons to a projection module, the projection module projecting and displaying the interactive image on a projection display interface;
    (S2) an interactive operating body performing an interactive operation on the projection display interface;
    (S3) an image acquisition module continuously acquiring the interactive image information on the projection display interface and transmitting the interactive image information to a central processing unit;
    (S4) the central processing unit extracting feature information from the interactive image information, inputting the extracted feature information into a pre-stored trained classifier for recognition, determining the user interaction icon under operation control, and outputting the interactive instruction corresponding to the user interaction icon, the classifier being stored in the central processing unit;
    (S5) the projection module changing the projection content according to the interactive instruction output by the central processing unit and returning to the step of inputting the interactive image to the projection module; and/or an electronic device controlled by the user interaction icon performing the corresponding action according to the interactive instruction.
  2. The interactive control method according to claim 1, characterized in that an interactive image sequence is acquired continuously; while the number of images is less than a preset threshold, the position of each user interaction icon in the interactive image is located, the grayscale features of the icon region are extracted, and the position and grayscale features are stored; when the number of images equals the preset threshold, the stored positions and grayscale features of each frame of the interactive image are assigned weights and summed separately to serve as reference background information; and when the number of images exceeds the preset threshold, the features derived from grayscale changes in the icon regions of the interactive image, together with color and shape features, are extracted on the basis of the reference background information.
  3. The interactive control method according to claim 2, characterized in that the central processing unit extracts the feature information from the interactive image information as follows: the region of each user interaction icon is located in the interactive image based on its brightness and geometric shape information; the region of each user interaction icon can be described by a rectangular box, denoted rect_i(x_i, y_i, w_i, h_i); then the features of each icon region are extracted, comprising two types: the first type, computed by background subtraction from changes in pixel gray values, denoted F1_i = (f1_i1, f1_i2, ..., f1_im); the second type, based on the color, texture, and shape-contour features of the interactive operating body, denoted F2_i = (f2_i1, f2_i2, ..., f2_in); each icon region is represented by the feature descriptor composed of the first type (F1) and second type (F2) of features, denoted FT_i = (ft_i1, ft_i2, ..., ft_i(m+n)).
  4. The interactive control method according to claim 3, characterized in that the central processing unit may alternatively extract and recognize the feature information as follows: in a strong-light environment, a moving-target detection or tracking algorithm first detects and recognizes the position of the interactive operating body in the projection display space, which is then compared with the position of each user interaction icon in the interactive image to obtain the number of the icon under operation control; in a low-light environment, the second type of feature (F2) is disregarded, and the classifier trained on the first type of feature (F1) directly recognizes F1 to obtain the number of the icon operated by the interactive operating body.
  5. The interactive control method according to claim 3, characterized in that training the classifier comprises: under different illumination environments, the projection module projects interactive images containing user interaction icons onto different projection display interfaces; the image acquisition module acquires the interactive images in real time, extracts the feature information from the interactive images, and labels the feature information; and a machine learning algorithm is used to train on the labeled feature data, find the optimal model parameters, and complete construction of the classifier.
  6. The interactive control method according to claim 5, characterized in that:
    the different projection display interfaces are backgrounds of different colors, different textures, or different flatness; the machine learning algorithm may be a neural network or a support vector machine; each type of user interaction icon corresponds to one interactive instruction; and the interactive instruction is input by the central processing unit to the projection module and/or directly to other devices connected to the central processing unit.
  7. The interactive control method according to claim 1, characterized in that:
    when performing an interactive operation, the interactive operating body directly touches the user interaction icon region in the projected interactive image, or occludes the user interaction icon region in the interactive image; the interactive operating body is the user's hand or foot, or an object controlled by the user;
    the information of the user interaction icon in the interactive image includes its color brightness and/or geometric shape; and the user interaction icon is an application icon of the user interface or an operation icon within an application screen.
  8. An interactive control system for projection-based user interaction icons, characterized by comprising a central processing unit, a projection module, and an image acquisition module, the central processing unit being connected to the projection module and to the image acquisition module respectively;
    the central processing unit being configured to: input an interactive image containing user interaction icons to the projection module, causing the projection module to project and display the interactive image on a projection display interface; when an interactive operating body located between the projection screen of the projection module and the image acquisition module performs an interactive operation on the projection display interface, cause the image acquisition module to continuously acquire the interactive image information on the projection display interface and transmit it to the central processing unit, so as to extract the feature information from the interactive image information, input the extracted feature information into a pre-stored trained classifier for recognition, determine the user interaction icon under operation control, and output the corresponding interactive instruction to the projection module or to another device connected to the central processing unit; cause the projection module to change the projection content according to the interactive instruction output by the central processing unit and input the interactive image to the projection module again; and/or cause the device connected to the central processing unit to perform the corresponding action according to the interactive instruction.
  9. The interactive control system according to claim 8, characterized in that the interactive control system further comprises an audio output device and a storage device, each connected to the central processing unit, the storage device being configured to store a preset audio library;
    the central processing unit being further configured to extract from the preset audio library of the storage device the audio file corresponding to the user interaction icon and, while adjusting the projection content of the projection module, output the acquired audio file to the audio output device, so that the audio output device outputs interactive audio according to the acquired audio file.
  10. The interactive control system according to claim 8 or 9, characterized in that the image acquisition module is a camera; the feature information of the user interaction icons in the interactive image includes their color brightness and/or geometric shape; and the user interaction icon is an application icon of the user interface or an operation icon within an application screen.
PCT/CN2016/093423 2016-06-28 2016-08-05 Interactive control method and system for projection-based user interaction icons WO2018000519A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610486050.4A CN106201173B (zh) 2016-06-28 2016-06-28 Interactive control method and system for projection-based user interaction icons
CN201610486050.4 2016-06-28

Publications (1)

Publication Number Publication Date
WO2018000519A1 true WO2018000519A1 (zh) 2018-01-04

Family

ID=57460951

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/093423 WO2018000519A1 (zh) 2016-06-28 2016-08-05 Interactive control method and system for projection-based user interaction icons

Country Status (2)

Country Link
CN (1) CN106201173B (zh)
WO (1) WO2018000519A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110781734A * 2019-09-18 2020-02-11 长安大学 Children's cognitive game system based on pen-and-paper interaction
CN113696821A * 2020-05-22 2021-11-26 上海海拉电子有限公司 Vehicle information interaction system and information interaction method
CN114157846A * 2021-11-11 2022-03-08 深圳市普渡科技有限公司 Robot, projection method, and storage medium

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108217353A * 2016-12-14 2018-06-29 三菱电机上海机电电梯有限公司 Landing image analysis button, and elevator and method using the same
CN107360407A * 2017-08-09 2017-11-17 上海青橙实业有限公司 Picture synthesis and projection method, master control device, and auxiliary device
CN107656690A * 2017-09-18 2018-02-02 上海斐讯数据通信技术有限公司 Smart router interaction method and system based on projection technology
CN109561333B * 2017-09-27 2021-09-07 腾讯科技(深圳)有限公司 Video playback method and apparatus, storage medium, and computer device
CN109064795B * 2018-07-16 2020-12-25 广东小天才科技有限公司 Projection interaction method and lighting device
CN112231023A * 2019-07-15 2021-01-15 北京字节跳动网络技术有限公司 Information display method, apparatus, device, and storage medium
CN111176521B * 2019-11-25 2021-10-01 广东小天才科技有限公司 Message display method, smart speaker, and storage medium
CN111860142B * 2020-06-10 2024-08-02 南京翱翔信息物理融合创新研究院有限公司 Machine-vision-based gesture interaction method for projection augmentation
CN118689365A * 2024-08-23 2024-09-24 力方数字科技集团有限公司 Light-and-shadow interactive control method for a sand table

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101943947A * 2010-09-27 2011-01-12 鸿富锦精密工业(深圳)有限公司 Interactive display system
US20110154249A1 * 2009-12-21 2011-06-23 Samsung Electronics Co. Ltd. Mobile device and related control method for external output depending on user interaction based on image sensing module
CN102236408A * 2010-04-23 2011-11-09 上海艾硕软件科技有限公司 Multi-point human-computer interaction system based on image recognition and a multi-projector fused large screen
CN103999025A * 2011-10-07 2014-08-20 高通股份有限公司 Vision-based interactive projection system
CN104808800A * 2015-05-21 2015-07-29 上海斐讯数据通信技术有限公司 Smart glasses device, mobile terminal, and mobile terminal operation method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110154249A1 * 2009-12-21 2011-06-23 Samsung Electronics Co. Ltd. Mobile device and related control method for external output depending on user interaction based on image sensing module
CN102236408A * 2010-04-23 2011-11-09 上海艾硕软件科技有限公司 Multi-point human-computer interaction system based on image recognition and a multi-projector fused large screen
CN101943947A * 2010-09-27 2011-01-12 鸿富锦精密工业(深圳)有限公司 Interactive display system
CN103999025A * 2011-10-07 2014-08-20 高通股份有限公司 Vision-based interactive projection system
CN104808800A * 2015-05-21 2015-07-29 上海斐讯数据通信技术有限公司 Smart glasses device, mobile terminal, and mobile terminal operation method

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110781734A * 2019-09-18 2020-02-11 长安大学 Children's cognitive game system based on pen-and-paper interaction
CN110781734B * 2019-09-18 2023-04-07 长安大学 Children's cognitive game system based on pen-and-paper interaction
CN113696821A * 2020-05-22 2021-11-26 上海海拉电子有限公司 Vehicle information interaction system and information interaction method
CN114157846A * 2021-11-11 2022-03-08 深圳市普渡科技有限公司 Robot, projection method, and storage medium
CN114157846B * 2021-11-11 2024-01-12 深圳市普渡科技有限公司 Robot, projection method, and storage medium

Also Published As

Publication number Publication date
CN106201173A (zh) 2016-12-07
CN106201173B (zh) 2019-04-05

Similar Documents

Publication Publication Date Title
WO2018000519A1 (zh) Interactive control method and system for projection-based user interaction icons
US11269481B2 (en) Dynamic user interactions for display control and measuring degree of completeness of user gestures
US10394334B2 (en) Gesture-based control system
JP5567206B2 (ja) Computing device interface
CN106598227B (zh) Gesture recognition method based on Leap Motion and Kinect
US20190025936A1 (en) Systems and methods for extensions to alternative control of touch-based devices
US20170031455A1 (en) Calibrating Vision Systems
JP4323180B2 (ja) Interface method, apparatus, and program using self-image display
WO2014113507A1 (en) Dynamic user interactions for display control and customized gesture interpretation
US20120280897A1 (en) Attribute State Classification
CN109145802B (zh) Kinect-based multi-person gesture human-computer interaction method and apparatus
WO2013078989A1 (zh) Trigger and control method and system for human-computer interaction operation instructions
JP6498802B1 (ja) Biological information analysis apparatus and face-shape simulation method thereof
US20160147294A1 (en) Apparatus and Method for Recognizing Motion in Spatial Interaction
US20200311398A1 (en) Scene controlling method, device and electronic equipment
KR101525011B1 (ko) NUI-based immersive virtual-space display control apparatus and control method
Xu et al. Bare hand gesture recognition with a single color camera
TWI394063B (zh) Command input system and method applying image recognition
Singh et al. Digitized Interaction: A Gesture-Controlled Whiteboard System with OpenCV, MediaPipe and NumPy
US20230061557A1 (en) Electronic device and program
Forutanpour et al. ProJest: Enabling Higher Levels of Collaboration Using Today’s Mobile Devices

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16906916

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 210519)

122 Ep: pct application non-entry in european phase

Ref document number: 16906916

Country of ref document: EP

Kind code of ref document: A1