WO2017084286A1 - Multi-interactive projection system and method - Google Patents

Multi-interactive projection system and method

Info

Publication number
WO2017084286A1
WO2017084286A1 (PCT/CN2016/083257)
Authority
WO
WIPO (PCT)
Prior art keywords
video
image
action
identifying
operating body
Prior art date
Application number
PCT/CN2016/083257
Other languages
English (en)
Chinese (zh)
Inventor
杨伟樑
高志强
王梓
Original Assignee
广景视睿科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 广景视睿科技(深圳)有限公司 filed Critical 广景视睿科技(深圳)有限公司
Publication of WO2017084286A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures

Definitions

  • the present invention relates to the field of projection processing technologies, and in particular, to a system and method for multiple interactive projection.
  • A projector is a device that projects an image or video onto a screen.
  • The image or video projected onto the screen is magnified several or even tens of times while maintaining sharpness, which makes it convenient to watch and gives viewers an expansive view. Projectors are therefore very popular among users.
  • Interactive projection refers to the use of computer vision and projection display technologies so that users can interact directly, with a foot or hand, with a virtual scene in the projection area.
  • The specific principle of an interactive projection system is that an image capture device captures the user's actions, and an image analysis system then analyzes them and adjusts the projected image, creating an interactive effect between the user and the projection area.
  • In the prior art, an image capture device is usually installed on only one side of the projection area, so the user can perform interactive operations only within the capture range of that single device, which limits the scope and interactivity of user interaction.
  • The technical problem to be solved by the present invention is to provide a multi-interactive projection system and method that allow two operating bodies to operate on the projected image in different operation areas, thereby realizing multiple interactive projection.
  • A technical solution adopted by the present invention is to provide a multi-interactive projection system including a processor, a projector, a first video capture device, and a second video capture device, the processor being connected to the projector, the first video capture device, and the second video capture device respectively;
  • the projector is configured to receive a projection image sent by the processor, and project the projection image;
  • the first video capture device is configured to collect a first video image while the first operating body performs an interactive operation in the first operation area;
  • the second video capture device is configured to collect a second video image while the second operating body performs an interactive operation in the second operation area;
  • the position of each pixel in the first video image and the second video image has a mapping relationship with the position of each pixel of the projected image;
  • the processor is configured to: identify, according to the first video image, a first operation action performed by the first operating body and the first video position of the first operation action in the first video image; and acquire a first mapping position in the projected image that is mapped to the first video position;
  • the step of the processor identifying the first operation action performed by the first operating body according to the first video image includes: performing subtraction processing on two consecutive frames of the first video image according to an image difference algorithm to obtain motion data of the first operating body, and identifying the first operation action corresponding to that motion data; the step of the processor identifying the second operation action performed by the second operating body according to the second video image includes: performing subtraction processing on two consecutive frames of the second video image according to the image difference algorithm to obtain motion data of the second operating body, and identifying the second operation action corresponding to that motion data.
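The frame-subtraction step described above can be sketched as follows. This is a minimal illustration using NumPy; the grayscale representation, threshold value, and centroid summary are assumptions for illustration, since the patent does not specify an implementation:

```python
import numpy as np

def frame_difference(prev_frame: np.ndarray, curr_frame: np.ndarray,
                     threshold: int = 25) -> np.ndarray:
    """Subtract two consecutive grayscale frames and threshold the result
    to obtain a binary motion mask for the operating body."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

def motion_centroid(mask: np.ndarray):
    """Return the (row, col) centroid of the moving pixels as a simple
    'motion data' summary, or None when nothing moved."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return (float(ys.mean()), float(xs.mean()))
```

A recognizer would then classify the sequence of such motion summaries into operation actions; that classification step is not detailed in the patent.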
  • the step of the processor adjusting the projected image according to the first mapping position, the first operation action, the second mapping position, and the second operation action comprises: identifying a first operation instruction according to the first operation action, identifying a first operated object corresponding to the first mapping position in the projected image, and performing the operation indicated by the first operation instruction on the first operated object; identifying a second operation instruction according to the second operation action, identifying a second operated object corresponding to the second mapping position in the projected image, and performing the operation indicated by the second operation instruction on the second operated object; and adjusting the projected image according to the performed operations.
  • the system further includes a first voice collection device and a second voice collection device, where the first voice collection device is configured to collect a first voice command issued when the first operating body performs an interactive operation in the first operation area, and the second voice collection device is configured to collect a second voice command issued when the second operating body performs an interactive operation in the second operation area;
  • the step of the processor adjusting the projected image according to the first mapping position, the first operation action, the second mapping position, and the second operation action comprises: adjusting the projected image according to the first mapping position, the first operation action, the second mapping position, and the second operation action, in combination with the first voice command and the second voice command.
  • the first video capture device and the second video capture device are both imaging devices equipped with a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) sensor.
  • the projector is a digital light processing (DLP) micro-projection device with a zoom function.
  • Another technical solution adopted by the present invention is to provide a multi-interactive projection method, which comprises: after the projector projects the projected image, collecting a first video image while the first operating body performs an interactive operation in the first operation area, and a second video image while the second operating body performs an interactive operation in the second operation area, where the position of each pixel in the first video image and the second video image has a mapping relationship with the position of each pixel of the projected image; identifying, according to the first video image, a first operation action performed by the first operating body and the first video position of the first operation action in the first video image; acquiring a first mapping position in the projected image that is mapped to the first video position; identifying, according to the second video image, a second operation action performed by the second operating body and the second video position of the second operation action in the second video image; acquiring a second mapping position in the projected image that is mapped to the second video position; and adjusting the projected image according to the first mapping position, the first operation action, the second mapping position, and the second operation action.
  • The step of identifying the first operation action performed by the first operating body includes: performing subtraction processing on two consecutive frames of the first video image according to the image difference algorithm to obtain motion data of the first operating body, and identifying the first operation action corresponding to that motion data; the step of identifying the second operation action performed by the second operating body according to the second video image comprises: performing subtraction processing on two consecutive frames of the second video image according to the image difference algorithm to obtain motion data of the second operating body, and identifying the second operation action corresponding to that motion data.
  • the step of adjusting the projected image according to the first mapping position, the first operation action, the second mapping position, and the second operation action comprises: identifying a first operation instruction according to the first operation action, identifying a first operated object corresponding to the first mapping position in the projected image, and performing the operation indicated by the first operation instruction on the first operated object; identifying a second operation instruction according to the second operation action, identifying a second operated object corresponding to the second mapping position in the projected image, and performing the operation indicated by the second operation instruction on the second operated object; and adjusting the projected image according to the performed operations.
  • the method further includes: collecting a first voice command of the first operating body while collecting the first video image of the first operating body operating in the first operation area, and collecting a second voice command of the second operating body while collecting the second video image; the step of adjusting the projected image according to the first mapping position, the first operation action, the second mapping position, and the second operation action then comprises: adjusting the projected image according to the first mapping position, the first operation action, the second mapping position, and the second operation action, in combination with the first voice command and the second voice command.
  • The beneficial effects of the invention are as follows: unlike the prior art, the first video image of the first operating body and the second video image of the second operating body are collected; the first operation action and the first video position are identified according to the first video image, and the second operation action and the second video position are identified according to the second video image; and the projected image is adjusted according to the first mapping position mapped from the first video position, the first operation action, the second mapping position mapped from the second video position, and the second operation action, so that the two operating bodies operate on the projected image in different operation areas, thereby implementing multiple interactive projection.
  • FIG. 1 is a schematic diagram of an embodiment of a multi-interactive projection system of the present invention.
  • FIG. 2 is a flow chart of an embodiment of a multi-interactive projection of the present invention.
  • a multi-interactive projection system 20 includes a processor 21, a projector 22, a first video capture device 23, and a second video capture device 24.
  • The processor 21 is connected to the projector 22, the first video capture device 23, and the second video capture device 24.
  • The connection between the processor 21 and the projector 22, the first video capture device 23, and the second video capture device 24 may be wired or wireless, for example a WIFI, Bluetooth, 3G, or 4G wireless connection.
  • The processor 21 can run a Windows, Android, or iOS operating system; with the processor 21 running such an operating system, it is convenient to extend the functions of the multi-interactive projection system 20.
  • the projector 22 is configured to receive a projection image sent by the processor 21 and project a projection image.
  • the first video capture device 23 is configured to collect the first video image when the first operating body performs the interactive operation in the first operating region.
  • the second video capture device 24 is configured to collect a second video image when the second operating body performs an interactive operation in the second operation area.
  • The position of each pixel in the first video image and the second video image has a mapping relationship with the position of each pixel of the projected image.
  • The first operating body performing the interactive operation means that the first operating body operates on the projected image, and the second operating body performing the interactive operation means that the second operating body operates on the projected image.
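The pixel-position mapping mentioned above can be sketched as follows. The patent only states that such a mapping exists between captured-frame pixels and projected-image pixels; representing it as a 3x3 planar homography `H` is an assumption for illustration:

```python
import numpy as np

def map_video_to_projection(point, H):
    """Map an (x, y) pixel position in a captured video frame to the
    corresponding pixel position in the projected image via a 3x3
    homography H (hypothetical representation of the patent's mapping)."""
    x, y = point
    v = H @ np.array([x, y, 1.0])
    return (v[0] / v[2], v[1] / v[2])
```

In practice `H` would be obtained once by calibration, e.g. from four known point correspondences between the camera view and the projected image.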
  • The processor 21 is configured to: identify, according to the first video image, a first operation action performed by the first operating body and the first video position of the first operation action in the first video image, and acquire a first mapping position in the projected image that is mapped to the first video position; and identify, according to the second video image, a second operation action performed by the second operating body and the second video position of the second operation action in the second video image, and acquire a second mapping position in the projected image that is mapped to the second video position.
  • The first mapping position refers to the position corresponding to the first operation action in the projected image after the first operation action is mapped onto the projected image; the second mapping position refers to the position corresponding to the second operation action in the projected image after the second operation action is mapped onto the projected image.
  • The processor 21 is further configured to adjust the projected image according to the first mapping position, the first operation action, the second mapping position, and the second operation action, and to send the adjusted projected image to the projector 22 so that the projector 22 projects the adjusted image.
  • In short, after an operating body performs an operation, the projected image is adjusted according to the operation action of that operating body; the projected image changes because of the operating body's operation, realizing interactive projection. Furthermore, in the present invention the two operating bodies can operate in two different operation areas, and the projected image is adjusted according to the operation actions of both operating bodies, realizing multiple interactive projection.
  • For example, the projector 22 projects a badminton game interface; two players play in the two operation areas, the badminton game is adjusted according to the players' actions, and the badminton game interface projected by the projector 22 changes accordingly.
  • The first operation area and the second operation area may be located on the front and rear sides of the projection area of the projector 22, where the front side of the projection area is the side on which the projector 22 is located and the rear side is the side away from the projector 22.
  • The above only describes the manner in which two operating bodies operate in two different operation areas to realize multiple interactive projection. Following the technical idea of the present invention, those skilled in the art can also set up three or four operation areas, with the operating bodies operating in different operation areas, enabling multiple interactive projection.
  • The step of identifying the operation action may be performed in combination with an image difference algorithm.
  • The step of the processor 21 identifying the first operation action performed by the first operating body according to the first video image includes: the processor 21 performing subtraction processing on two consecutive frames of the first video image according to the image difference algorithm to obtain motion data of the first operating body, and identifying the first operation action corresponding to that motion data.
  • The processor 21 identifies the second operation action performed by the second operating body according to the second video image by performing subtraction processing on two consecutive frames of the second video image according to the image difference algorithm to obtain motion data of the second operating body, and identifying the second operation action corresponding to that motion data.
  • Other methods of identifying the operation actions of the operating bodies may also be adopted, for example: tracking the hand motion trajectories of the first operating body and the second operating body, and identifying the first operation action and the second operation action from those trajectories respectively.
  • The step of the processor 21 adjusting the projected image according to the first mapping position, the first operation action, the second mapping position, and the second operation action comprises: identifying a first operation instruction according to the first operation action, identifying a first operated object corresponding to the first mapping position in the projected image, and performing the operation indicated by the first operation instruction on the first operated object; identifying a second operation instruction according to the second operation action, identifying a second operated object corresponding to the second mapping position in the projected image, and performing the operation indicated by the second operation instruction on the second operated object; and adjusting the projected image according to the performed operations.
  • A correspondence between operation actions and operation instructions may be established in advance; for example, an open palm represents an open instruction, a closed palm represents a close instruction, a hand sliding to the right represents switching to the right, and so on.
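The predetermined correspondence between operation actions and operation instructions can be sketched as a simple lookup table. The gesture names and instruction strings below are illustrative, not from the patent; only the three example pairings come from the text:

```python
# Hypothetical lookup table pairing recognized operation actions with
# operation instructions (names are illustrative).
GESTURE_TO_INSTRUCTION = {
    "palm_open": "OPEN",          # open palm -> open instruction
    "palm_closed": "CLOSE",       # closed palm -> close instruction
    "slide_right": "SWITCH_RIGHT" # hand slides right -> switch rightward
}

def instruction_for(gesture):
    """Return the operation instruction for a recognized gesture,
    or None when the gesture has no predefined correspondence."""
    return GESTURE_TO_INSTRUCTION.get(gesture)
```

Keeping the correspondence in data rather than code makes it easy to add gestures for new operated objects later.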
  • the voice command of the operating body can also be collected, and the projection content is adjusted in combination with the voice command.
  • the multi-interactive projection system 20 further includes a first voice collection device 25 and a second voice collection device 26.
  • The first voice collection device 25 is configured to collect a first voice command issued when the first operating body performs an interactive operation in the first operation area.
  • The second voice collection device 26 is configured to collect a second voice command issued when the second operating body performs an interactive operation in the second operation area.
  • The processor 21 adjusting the projected image according to the first mapping position, the first operation action, the second mapping position, and the second operation action then includes: adjusting the projected image according to the first mapping position, the first operation action, the second mapping position, and the second operation action, in combination with the first voice command and the second voice command.
  • The first video capture device 23 and the second video capture device 24 are preferably imaging devices equipped with a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) sensor, and the projector 22 is preferably a digital light processing (DLP) micro-projection device with a zoom function.
  • The first video image of the first operating body and the second video image of the second operating body are collected; the first operation action and the first video position are identified according to the first video image, and the second operation action and the second video position are identified according to the second video image; the projected image is adjusted according to the first mapping position mapped from the first video position, the first operation action, the second mapping position mapped from the second video position, and the second operation action, so that the two operating bodies operate on the projected image in different operation areas, thereby implementing multiple interactive projection.
  • The present invention further provides an embodiment of a multi-interactive projection method. Referring to FIG. 2, the method includes:
  • Step S301: After the projector projects the projected image, collect a first video image while the first operating body performs an interactive operation in the first operation area, and collect a second video image while the second operating body performs an interactive operation in the second operation area, where the position of each pixel in the first video image and the second video image has a mapping relationship with the position of each pixel of the projected image.
  • The interactive operation of the first operating body in the first operation area refers to the operation performed by the first operating body on the projected image, and the interactive operation of the second operating body in the second operation area refers to the operation performed by the second operating body on the projected image. For example, if a projected image of an automobile part is projected, an operating body may perform a two-handed movement with respect to the projected image of the automobile part.
  • Step S302: Identify, according to the first video image, a first operation action performed by the first operating body and the first video position of the first operation action in the first video image.
  • Step S302 may specifically be: performing subtraction processing on two consecutive frames of the first video image according to an image difference algorithm to obtain motion data of the first operating body, and identifying the first operation action corresponding to that motion data.
  • Other manners of identifying the operation action of the first operating body, such as gesture tracking and recognition, may also be adopted.
  • Step S303 Acquire a first mapping position in the projected image that is mapped to the first video position.
  • The first mapping position is the position at which the first operation action is mapped onto the projected image.
  • Step S304: Identify, according to the second video image, a second operation action performed by the second operating body and the second video position of the second operation action in the second video image.
  • The second operation action may likewise be identified in combination with the image difference algorithm, and step S304 may specifically be: performing subtraction processing on two consecutive frames of the second video image according to the image difference algorithm to obtain motion data of the second operating body, and identifying the second operation action corresponding to that motion data.
  • Other manners of identifying the operation action of the second operating body, such as gesture tracking and recognition, may also be adopted.
  • Step S305 Acquire a second mapping position in the projected image that is mapped to the second video position.
  • the second mapping position refers to the position at which the second operational action is mapped to the projected image.
  • Step S306 Adjust the projection image according to the first mapping position, the first operation action, the second mapping position, and the second operation action.
  • Step S306 is specifically: identifying a first operation instruction according to the first operation action, identifying a first operated object corresponding to the first mapping position in the projected image, and performing the operation indicated by the first operation instruction on the first operated object; identifying a second operation instruction according to the second operation action, identifying a second operated object corresponding to the second mapping position in the projected image, and performing the operation indicated by the second operation instruction on the second operated object; and adjusting the projected image according to the performed operations.
  • The operations are performed according to the time sequence of the operations of the first operating body and the second operating body; for example, if the first operating body performs its interactive operation before the second operating body does, the projected content is first adjusted according to the interactive operation of the first operating body and then according to the interactive operation of the second operating body.
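The time-ordered handling of the two operating bodies' operations can be sketched as follows; the `Operation` record and its fields are hypothetical, since the patent only states that adjustments follow the chronological order of the operations:

```python
from dataclasses import dataclass

@dataclass
class Operation:
    timestamp: float  # capture time of the operation action (illustrative)
    body: int         # 1 = first operating body, 2 = second operating body
    instruction: str  # recognized operation instruction

def apply_in_time_order(operations):
    """Return the adjustment log produced by applying the interactive
    operations in the order in which they occurred, so an earlier
    operation adjusts the projected content before a later one."""
    log = []
    for op in sorted(operations, key=lambda o: o.timestamp):
        log.append((op.body, op.instruction))
    return log
```

A real system would replace the log with calls that adjust the projected image and resend it to the projector.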
  • A voice command of an operating body may also be collected, and the projected content adjusted in combination with the voice command: a first voice command of the first operating body is collected while collecting the first video image of the first operating body performing the interactive operation in the first operation area, and a second voice command of the second operating body is collected while collecting the second video image of the second operating body performing the interactive operation in the second operation area.
  • Step S306 is then specifically: adjusting the projected image according to the first mapping position, the first operation action, the second mapping position, and the second operation action, in combination with the first voice command and the second voice command.
  • The first video image of the first operating body and the second video image of the second operating body are collected; the first operation action and the first video position are identified according to the first video image, and the second operation action and the second video position are identified according to the second video image; the projected image is adjusted according to the first mapping position mapped from the first video position, the first operation action, the second mapping position mapped from the second video position, and the second operation action, so that the two operating bodies operate on the projected image in different operation areas, achieving multiple interactive projection.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The present invention relates to a multi-interactive projection system and method. The system comprises a processor, a projector, a first video capture device, and a second video capture device. The first video capture device captures a first video image and the second video capture device captures a second video image. The processor is used to identify a first operation action performed by a first operating body and a first video position of the first operation action; obtain a first mapping position mapped to the first video position; identify a second operation action performed by a second operating body and a second video position of the second operation action; obtain a second mapping position in a projected image that is mapped to the second video position; adjust the projected image according to the first mapping position, the first operation action, the second mapping position, and the second operation action; and transmit the adjusted projected image to the projector. By means of the above method, the present invention enables the two operating bodies to operate on the projected image in different operation areas, thereby realizing multi-interactive projection.
PCT/CN2016/083257 2015-11-20 2016-05-25 Système et procédé de projection multi-interactive WO2017084286A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510811560.XA CN105446623A (zh) 2015-11-20 2015-11-20 一种多互动投影的方法及系统
CN201510811560.X 2015-11-20

Publications (1)

Publication Number Publication Date
WO2017084286A1 true WO2017084286A1 (fr) 2017-05-26

Family

ID=55556890

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/083257 WO2017084286A1 (fr) 2015-11-20 2016-05-25 Système et procédé de projection multi-interactive

Country Status (2)

Country Link
CN (1) CN105446623A (fr)
WO (1) WO2017084286A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113247007A (zh) * 2021-06-22 2021-08-13 肇庆小鹏新能源投资有限公司 Vehicle control method and vehicle

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104699240B (zh) * 2015-02-09 2018-01-23 Lenovo (Beijing) Co., Ltd. Control method and electronic device
CN105446623A (zh) * 2015-11-20 2016-03-30 广景视睿科技(深圳)有限公司 Multi-interactive projection method and system
CN106055092A (zh) * 2016-05-18 2016-10-26 广景视睿科技(深圳)有限公司 Method and system for implementing interactive projection
CN106095098A (zh) * 2016-06-07 2016-11-09 深圳奥比中光科技有限公司 Somatosensory interaction apparatus and somatosensory interaction method
CN106293346A (zh) * 2016-08-11 2017-01-04 深圳市金立通信设备有限公司 Method and terminal for switching virtual reality scenes

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101872241A (zh) * 2009-04-26 2010-10-27 艾利维公司 Method and system for establishing a shared space for online games
CN101995943A (zh) * 2009-08-26 2011-03-30 介面光电股份有限公司 Stereoscopic image interaction system
CN103176733A (zh) * 2011-12-20 2013-06-26 西安天动数字科技有限公司 Electronic interactive fish tank system
CN105446623A (zh) * 2015-11-20 2016-03-30 广景视睿科技(深圳)有限公司 Multi-interactive projection method and system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8749557B2 (en) * 2010-06-11 2014-06-10 Microsoft Corporation Interacting with user interface via avatar
CN103218058A (zh) * 2012-01-18 2013-07-24 北京德信互动网络技术有限公司 Human-computer interaction system and method based on projection technology
CN103455141B (zh) * 2013-08-15 2016-07-06 无锡触角科技有限公司 Interactive projection system and method for calibrating its depth sensor and projector
CN203930308U (zh) * 2014-05-15 2014-11-05 上海味寻信息科技有限公司 Novel interactive projector
CN104217619B (zh) * 2014-09-19 2017-05-17 广东建业显示信息技术有限公司 Multi-user dance teaching interactive projection device

Also Published As

Publication number Publication date
CN105446623A (zh) 2016-03-30

Similar Documents

Publication Publication Date Title
WO2017084286A1 (fr) Multi-interactive projection system and method
US11860511B2 (en) Image pickup device and method of tracking subject thereof
JP7514905B2 (ja) Establishing a video conference during a call
US9270941B1 (en) Smart video conferencing system
CN104777991B (zh) Remote interactive projection system based on mobile phone
WO2016029641A1 (fr) Photograph acquisition method and apparatus
CN106600548B (zh) Fisheye camera image processing method and system
WO2016070688A1 (fr) Remote control method and system for virtual operating interface
JP5612774B2 (ja) Tracking frame initial position setting device and operation control method thereof
JP6057570B2 (ja) Apparatus and method for generating stereoscopic panoramic video
TWI547177B (zh) Viewing angle switching method and camera therefor
JP2013257686A (ja) Projection-type image display device, image projection method, and computer program
WO2020063307A1 (fr) Photographing device monitoring method, gimbal system and mobile device
US20130202158A1 (en) Image processing device, image processing method, program and recording medium
CN108377398B (zh) Infrared-based AR imaging method, system, and electronic device
TW201024908A (en) Panoramic image auto photographing method of digital photography device
KR20130117032A (ko) Focus control apparatus and method
WO2017197779A1 (fr) Method and system for implementing interactive projection
TWI451344B (zh) Gesture recognition system and gesture recognition method
CN203966055U (zh) Wireless interactive projection system
CN107368104B (zh) Arbitrary-point positioning method based on a mobile phone app and a home smart pan-tilt camera
TW201414307A (zh) Conference terminal and video processing method thereof
KR100660137B1 (ko) Input device using a laser pointer and presentation providing system using the same
KR102404130B1 (ko) Telepresence video transmitting device, telepresence video receiving device, and telepresence video providing system
WO2014048280A1 (fr) Human-machine interaction system and infrared image sensor device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16865467

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 31.8.18)

122 Ep: pct application non-entry in european phase

Ref document number: 16865467

Country of ref document: EP

Kind code of ref document: A1