WO2018195973A1 - Method for cursor positioning using a handheld terminal, handheld terminal, and electronic device - Google Patents

Method for cursor positioning using a handheld terminal, handheld terminal, and electronic device

Info

Publication number
WO2018195973A1
WO2018195973A1 (PCT/CN2017/082537)
Authority
WO
WIPO (PCT)
Prior art keywords
image
point
display screen
captured
handheld terminal
Prior art date
Application number
PCT/CN2017/082537
Other languages
English (en)
French (fr)
Inventor
王长海
马维金
Original Assignee
王长海
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 王长海 filed Critical 王长海
Priority to CN201780002600.3A priority Critical patent/CN108235749A/zh
Priority to PCT/CN2017/082537 priority patent/WO2018195973A1/zh
Publication of WO2018195973A1 publication Critical patent/WO2018195973A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04812Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser

Definitions

  • the present disclosure relates to interface interaction techniques, and more particularly to methods and apparatus for positioning on a screen using an imaging device.
  • the prior art provides an image-based localization method: specific patterns are arranged in the surroundings of a television, or a specific pattern is displayed on the screen; with these patterns, the position of the screen is located in the image captured by the camera, the projection relationship between the camera and the screen is calculated, and the cursor position is determined in turn.
  • however, this method requires patterns to be laid out in the environment, which raises the barrier to use, and displaying a specific pattern on the screen can interfere with the normal display.
  • the present disclosure proposes an easy-to-use solution that requires no special arrangement of the surroundings and no change to the displayed content: using only the reference image shown on the interface, the reference image is located within the captured image, achieving intuitive control of the cursor coordinates.
  • a method for performing cursor positioning using a handheld terminal is provided.
  • the handheld terminal is provided with a camera, and the method includes: taking a captured image with the camera, the captured image containing part or all of the video image currently played on the display screen of the electronic device; receiving the currently played video image as a reference image; and constructing a transformation matrix from the captured image and the reference image.
  • a point in the captured image is then converted into a cursor position of the display screen according to the transformation matrix.
  • a point in the captured image is a fixed point.
  • the fixed point is the center point of the acquired image.
  • one point in the acquired image is a floating point.
  • the center point of the captured image and the center point of the target image are connected by a straight line, and a point on this line is selected as the floating point, wherein the part or all of the video image currently played on the display screen contained in the captured image serves as the target image.
  • the method further includes: selecting, from a plurality of images captured by the camera, the captured image having the same time stamp as the reference image as a first captured image;
  • the constructing of the transformation matrix from the captured image and the reference image then includes:
  • constructing the transformation matrix from the first captured image and the reference image.
  • the constructing of the transformation matrix from the captured image and the reference image comprises: constructing a reference coordinate system on the display screen of the electronic terminal and a camera coordinate system on the imaging plane of the captured image; and obtaining, with the ASIFT algorithm, matched feature points of the two images in their respective coordinate systems, with more than 4 groups of feature points that are not all on one straight line; and
  • the transformation matrix is constructed from the matched feature points.
  • the handheld device is a remote control, a mobile phone, or a PAD (tablet).
  • the electronic device is a television or a computer.
  • a handheld terminal provided with a camera is provided, including:
  • a wireless receiving module configured to receive a video image currently played on a display screen of the electronic device as a reference image
  • an image capturing module, configured to take a captured image with the camera, the captured image containing part or all of the video image currently played on the display screen of the electronic device;
  • a calculation module configured to construct a transformation matrix according to the acquired image and the reference image
  • an output module, configured to convert, according to the transformation matrix, a point in the captured image into a cursor position on the display screen and output the position to the electronic device.
  • the output module selects a fixed point in the captured image to be converted into a cursor position of the display screen.
  • the fixed point is a center point of the acquired image.
  • the output module selects a floating point in the captured image to be converted into a cursor position of the display screen.
  • the output module connects the center point of the captured image and the center point of the target image by a straight line and selects a point on this line as the floating point, wherein the part or all of the video image currently played on the display screen contained in the captured image serves as the target image.
  • the calculation module comprises:
  • a comparison unit configured to compare time stamps of the captured image and the reference image to select a captured image having the same time stamp as the reference image
  • the first calculating unit is configured to calculate the acquired image and the reference image with the same time stamp to construct a transformation matrix.
  • an electronic device comprising:
  • a first receiving module configured to establish a wireless connection with the handheld terminal, and receive the captured image captured by the handheld terminal, where the captured image includes part or all of the video image currently played on the display screen of the electronic device;
  • An acquiring module configured to acquire a video image currently played by a display screen of the electronic device as a reference image
  • a first calculating module configured to obtain a transformation matrix from the reference image and the captured image, thereby establishing the transformation between matching points on the reference image and the captured image;
  • a setting module configured to: use the transformation matrix to solve a coordinate point on a reference image corresponding to a point on the acquired image, and set a cursor point on the display screen at the coordinate point.
  • in the embodiments of the present disclosure, a captured image is taken by the camera and the cursor position is set from the comparison of the captured image with the reference image, realizing cursor positioning through image technology, which is entirely different from existing mouse positioning methods.
  • Figure 1 is a scene diagram of cursor positioning on a display screen using a camera.
  • FIG. 2 is a flow chart of a method of using a camera to position an on-screen cursor in accordance with an embodiment of the present disclosure.
  • FIG. 3 is an exemplary diagram of a captured image in an embodiment of the present disclosure.
  • FIG. 4 is a schematic diagram of the principle of imaging with a camera in accordance with an embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram of a display screen in accordance with an embodiment of the present disclosure.
  • FIG. 6 is another example diagram of acquiring an image in accordance with an embodiment of the present disclosure.
  • FIG. 7 is a structural diagram of a handheld terminal according to an embodiment of the present disclosure.
  • the flowchart, block diagrams in the figures illustrate possible architectures, functions and operations of the systems, methods, and apparatus of the embodiments of the present disclosure.
  • the blocks in the flowcharts and block diagrams may represent a module, a program segment, or simply a piece of code; the modules, program segments, and code are executable instructions that implement specified logic functions. It should also be noted that the executable instructions implementing a specified logic function can be recombined to generate new modules and program segments.
  • the blocks and block diagrams of the figures are therefore merely illustrative of the processes and steps of the embodiments and are not intended to limit the invention.
  • Figure 1 is a scene diagram of cursor positioning on a display screen using a camera.
  • the top of the handheld device 200 is provided with a camera 202.
  • the handheld device 200 is used as a mouse-like pointing device.
  • the pointing direction 201 of the camera is used to control the cursor position 12 on the screen 100 so that the cursor follows the pointing direction of the camera 202.
  • the display screen 100 is a display screen of an electronic device 300 (such as a computer or a television), and the types of the display screen include, but are not limited to, a liquid crystal panel, an LED screen, and a plasma screen.
  • FIG. 2 illustrates a flow chart of a method of cursor positioning on a screen using a camera, including the following steps, in accordance with an embodiment of the present disclosure.
  • in step S11, the video image currently played on the display screen of the electronic device is received as a reference image, and a captured image is taken by the camera.
  • communication between the electronic device 300 and the handheld device 200 can be performed by wireless or wired means.
  • the electronic device 300 intercepts the video image currently played on its display screen as a reference image and transmits it to the handheld device 200, which stores the reference image upon receiving it.
  • the handheld device 200 performs real-time shooting by using the camera 202 to obtain a captured image.
  • when shooting, the camera 202 may take part or all of the display screen into the captured image, so the captured image may include part or all of the video image currently displayed.
  • the captured image shown in FIG. 3 contains the entire display screen, and thus all of the video image currently being played.
  • even if the captured image contains only part of the video image, the transformation matrix can still be calculated, and thus the implementation of the present disclosure is not affected.
  • step S12 a transformation matrix is constructed based on the acquired image and the reference image.
  • FIG. 3 is an exemplary diagram of a captured image in an embodiment of the present disclosure.
  • FIG. 4 is a schematic diagram of the principle of imaging with a camera in accordance with an embodiment of the present disclosure.
  • the camera coordinate system 40 is constructed on the imaging plane of the captured image 20, with the optical center of the camera as the origin Oc.
  • the X-axis, the Y-axis, and the Z-axis of the camera coordinate system 40 are represented by Xc, Yc, and Zc, respectively.
  • the X axis is in the horizontal direction along the acquired image 20
  • the Y axis is along the vertical direction of the acquired image 20
  • the Z axis is the direction perpendicular to the plane in which the image 20 is acquired.
  • the captured image 20 contains the video image 21 currently played by the display screen.
  • the reference image 10 is located in the reference coordinate system 11, with the upper left corner of the display screen as the origin Or, and the X, Y, and Z axes of the reference coordinate system 11 are represented by Xr, Yr, and Zr, respectively.
  • all points on the reference image 10 have a z-axis coordinate of 0.
  • the point on the reference image 10 can be expressed as P(x, y, 0), or P(x, y).
  • the reference image 10 and the acquired image 20 are compared, and the projection transformation relationship H of the reference coordinate system 11 and the camera coordinate system 40 is calculated.
  • the point P(x, y) on the reference image 10 and its imaging point Q(u, v) on the captured image 20 satisfy the transformation relationship w · (x, y, 1)^T = H · (u, v, 1)^T, where w is a homogeneous coefficient and H is the transformation matrix; the projection transformation can be abbreviated as P = H*Q.
  • to solve for the transformation matrix H, at least 4 pairs of corresponding points, not all on one straight line, are needed; the more points, the more accurate the resulting H. Thus, for each imaging point on the captured image 20, its corresponding coordinate point on the reference image can be obtained through the transformation matrix.
  • the video image currently played on the display screen included in the captured image 20 is marked as the target image 21, but the position of the target image 21 in the reference image 10 cannot be obtained in advance.
  • the calculation can be performed using the ASIFT algorithm.
  • the algorithm extracts the feature points of the reference image 10 and the acquired image 20, and matches the feature points to obtain matching points, and constructs a transformation matrix according to the matched feature points.
  • the ASIFT algorithm is affine-invariant and can reliably output enough matching points for the images, ensuring that the solved transformation matrix H is relatively accurate.
  • step S13 a point in the acquired image is converted into a cursor position of the display screen according to the transformation matrix.
  • a point is specified on the captured image; the line through this point and the optical center of the camera intersects the display screen at some point, whose coordinates are calculated as the new cursor position using the transformation matrix H described above.
  • the point coordinates are passed to the remote electronic device 300.
  • the point may also be a dynamic floating point, specified according to the position of the reference image within the captured image, as shown in FIG. 6.
  • the point reflecting the pointing direction of the camera is set to the midpoint of the line segment connecting the center point 22 (Xp, Yp) of the captured image 20 and the center point of the target image 21.
  • in principle, any point on the straight line connecting the center point 22 (Xp, Yp) of the captured image 20 and the center point of the target image 21 may be selected as the point reflecting the pointing direction of the camera.
  • the dynamic floating point allows the user to finely adjust the indicated position on the screen within a large pointing range, which suits situations far from the screen.
  • the dynamic floating point can also move the cursor across the full screen within a small pointing range, which suits the case where the device is very close to the screen and the target image occupies a large area of the captured image.
  • different captured images are obtained by continuously changing the shooting angle, direction, and position of the camera, and a new cursor position is obtained from the comparison of each captured image with the reference image. That is, as the camera's shooting angle, direction, and position keep changing, these changes are eventually reflected in each captured image, so the position on the display screen that the camera points at can be derived from the captured image and used as the cursor position.
  • FIG. 7 is a structural diagram of a handheld terminal according to an embodiment of the present disclosure.
  • the handheld terminal provided in FIG. 7 corresponds to the “method of using the camera to perform on-screen cursor positioning” shown in FIG. 2, and includes a wireless receiving module 701, an image capturing module 702, a calculating module 703, and an output module 704.
  • the wireless receiving module 701 is configured to establish wireless communication with the electronic device, and receive a video image currently played by the display screen as a reference image.
  • the image capturing module 702 captures the captured image through the camera, and the captured image includes the entire display screen, and further includes the video image currently played by the display screen.
  • the calculation module 703 is configured to obtain a transformation matrix according to the reference image and the acquired image, thereby establishing a transformation matrix between the reference image and the matching points on the acquired image.
  • the output module 704 uses the transformation matrix obtained by the calculation module 703 to solve the coordinate points on the reference image corresponding to a point on the acquired image, and transmits the coordinate point to the electronic device, so that the electronic device sets a new cursor point.
  • the output module 704 selects a fixed point in the captured image to be converted to a cursor position of the display screen.
  • the fixed point is the center point of the acquired image.
  • the output module 704 selects a floating point in the captured image to be converted into the cursor position of the display screen.
  • the output module 704 connects the center point of the captured image and the center point of the target image by a straight line and selects a point on this line as the floating point, wherein the part or all of the video image currently played on the display screen contained in the captured image serves as the target image.
  • the calculation module 703 comprises: a comparison unit and a first calculation unit.
  • the comparison unit is configured to compare the time stamps of the acquired image and the reference image to select a captured image having the same time stamp as the reference image.
  • the first calculation unit is configured to calculate the acquired image and the reference image having the same time stamp to construct a transformation matrix.
  • the handheld terminal and the method of the present disclosure correspond to each other, and the implementation details of the method apply equally to the handheld terminal, so the handheld terminal is described in relatively brief terms.
  • each of the above modules may also be deployed in whole or in part to the electronic device.
  • the present disclosure may provide an electronic device that cooperates with the handheld terminal to implement cursor positioning, including: a first receiving module, configured to establish a wireless connection with the handheld terminal and receive the captured image taken by the handheld terminal,
  • the captured image containing part or all of the video image currently played on the display screen of the electronic device;
  • an acquiring module, configured to acquire the video image currently played by the display screen of the electronic device as a reference image;
  • a first calculating module, configured to obtain a transformation matrix from the reference image and the captured image, thereby establishing the transformation between matching points on the two images; and a setting module, configured to use the transformation matrix to solve for the coordinate point on the reference image corresponding to a point on the captured image and to set the cursor point on the display screen at that coordinate point.
  • a method for performing cursor positioning using image technology is introduced, which is different from the existing mouse positioning method.
  • based on this, a product can be designed with a camera on it, and the cursor is positioned by shooting from different angles with the camera.
  • a touch screen is also provided on the product, and page scrolling or button clicks are realized by touch on the screen, so that the product fully provides the functions of a mouse.
  • the present disclosure also proposes a method of using the camera as a three-dimensional spatial attitude input device.
  • using the aforementioned transformation matrix H, the spatial attitude of the camera coordinate system 40 in the reference coordinate system 11, that is, the three-dimensional coordinates and the rotation angles about the three directions, is calculated and transmitted to the electronic device.
  • to this end, a mapping opposite to the previous one is established, from the reference coordinate system to the camera coordinate system, with transformation matrix K; as before, K can be solved from at least four points that are not collinear.
  • K can be represented by the rotation matrix and displacement vector of the camera coordinate system 40, as follows
  • rx, ry, rz are the direction vectors of the x-axis, y-axis, and z-axis of the camera coordinate system in the reference coordinate system, and the three vectors are mutually orthogonal.
  • T = (tx, ty, tz)^T is the coordinate of the origin of the camera coordinate system in the reference coordinate system.
  • the rotation matrix R and the displacement vector T thus represent the spatial attitude of the camera in the reference coordinate system 11.
  • a three-dimensional control effect can be achieved, that is, the cursor control of the three-dimensional image can be performed by the camera, thereby enriching the interaction mode between the user and the remote device.
  • the television is used as an electronic device and the mobile phone is used as a handheld device.
  • a camera is mounted on the top of the phone, or a mirror redirects the rear camera to shoot in the top direction.
  • holding the phone, the user can conveniently and intuitively control where the camera points.
  • Software is installed on both the phone and the TV to communicate with each other.
  • when the phone points at the TV screen, it takes a captured image that includes the screen.
  • at the same time, the television software controlled by the phone intercepts the currently displayed picture and transmits it to the phone as the reference image.
  • the software of the mobile terminal extracts the feature points of the two images, matches the corresponding feature points, and calculates the mapping relationship from the acquired image to the reference image.
  • the forward direction of the phone camera is chosen as the pointing position, i.e., the location imaged at the center of the captured image.
  • the position of that pixel in the reference image is calculated; this is where the phone camera points on the TV screen. The position is sent to the TV, and the mouse cursor is moved to that point to mark the pointing position.
  • the remote controller is used to control the pointing position of the mouse on the projection screen during the presentation of the conference.
  • the projected computer acts as a remote device and communicates with the remote control over a wireless network.
  • the remote control is a handheld device with a camera installed. When the camera points to the screen, the captured image containing the screen is delivered to the computer. At the same time, the computer intercepts the projected picture as a reference image.
  • after the computer obtains the two images, it extracts their feature points, matches the corresponding points, and calculates the mapping from the captured image to the reference image.
  • the forward direction of the remote control's camera is chosen as the pointing position, i.e., the location imaged at the center of the captured image.
  • the position of that pixel in the reference image is calculated; this is where the remote control points on the projection screen, and the mouse is moved to that point to mark the pointing position.
  • with left and right buttons and a scroll wheel added to the remote control, the projection screen can be operated during a presentation just like with a mouse.
  • this application uses the phone as a gamepad.
  • in this hypothetical game, building blocks can be stacked at different angles, and the gamepad is needed to specify their placement position and angle.
  • the mobile phone communicates with the computer via wired or wireless means.
  • a camera is attached to the phone.
  • the computer intercepts the currently displayed picture as a reference image and transmits it to the mobile phone.
  • the software on the phone extracts the feature points of the two images, matches the corresponding feature points, and calculates the coordinate vector and rotation matrix of the camera in the coordinate system of the reference image.
  • the coordinate vector and the rotation matrix are sent to the computer to control the position and rotation angle of the building block.
  • Individual modules or units of the system can be implemented in hardware, firmware or software.
  • the software includes, for example, an encoding program formed using various programming languages such as JAVA, C/C++/C#, and SQL.
  • Systems and methods in accordance with the present disclosure can be deployed on a single or multiple servers. For example, different modules can be deployed on different servers to form a dedicated server. Alternatively, the same functional unit, module, or system can be deployed distributed across multiple servers to reduce load stress.
  • the server includes, but is not limited to, a plurality of PCs, PC servers, blades, supercomputers, and the like connected in the same local area network and through the Internet.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Position Input By Displaying (AREA)

Abstract

A method for cursor positioning using a handheld terminal, a handheld terminal, and an electronic device (300). The handheld terminal is provided with a camera (202), and the method includes: receiving a video image (21) currently played on a display screen (100) of the electronic device (300) as a reference image (10), and taking a captured image (20) with the camera (202) (S11), the captured image (20) containing part or all of the video image (21) currently played on the display screen (100) of the electronic device (300); constructing a transformation matrix from the captured image (20) and the reference image (10) (S12); and converting, according to the transformation matrix, a point in the captured image (20) into a cursor position (12) on the display screen (100) (S13). The method realizes cursor positioning through image technology.

Description

Method for cursor positioning using a handheld terminal, handheld terminal, and electronic device

Technical Field

The present disclosure relates to interface interaction technology, and in particular to a method and apparatus for positioning on a screen using an imaging device.
Background

With the continuous development of human-computer interaction technology, interaction with electronic devices such as computers and televisions no longer depends on mouse input, and techniques that use imaging devices for human-computer interaction input are of growing practical value. For example, the prior art provides an image-based localization method: specific patterns are arranged in the surroundings of a television, or a specific pattern is displayed on the screen; with these patterns, the position of the screen is located in the image captured by a camera, the projection relationship between the camera and the screen is calculated, and the cursor position is determined in turn.

However, this method requires patterns to be laid out in the environment, which raises the barrier to use, while displaying a specific pattern on the screen interferes with the normal display.
Summary

In view of this, the present disclosure proposes an easy-to-use solution that requires no special arrangement of the surroundings and no change to the displayed content: using only the reference image shown on the interface, the reference image is located within the captured image, achieving intuitive control of the cursor coordinates.

According to a first aspect of the present disclosure, a method for cursor positioning using a handheld terminal is provided, the handheld terminal being provided with a camera, the method including:

taking a captured image with the camera, the captured image containing part or all of a video image currently played on a display screen of an electronic device;

receiving the video image currently played on the display screen of the electronic device as a reference image;

constructing a transformation matrix from the captured image and the reference image; and

converting, according to the transformation matrix, a point in the captured image into a cursor position on the display screen.
Preferably, the point in the captured image is a fixed point.

Preferably, the fixed point is the center point of the captured image.

Preferably, the point in the captured image is a floating point.

Preferably, the center point of the captured image and the center point of a target image are connected by a straight line, and a point on this line is selected as the floating point, where the part or all of the video image currently played on the display screen that is contained in the captured image serves as the target image.

Preferably, the method further includes: selecting, from a plurality of images captured by the camera, the captured image having the same time stamp as the reference image as a first captured image;

the constructing of the transformation matrix from the captured image and the reference image then includes:

constructing the transformation matrix from the first captured image and the reference image.

Preferably, the constructing of the transformation matrix from the captured image and the reference image includes:

constructing a reference coordinate system on the display screen of the electronic terminal, and a camera coordinate system on the imaging plane of the captured image;

obtaining, with the ASIFT algorithm, matched feature points of the captured image and the reference image in their respective coordinate systems, where there are more than 4 groups of feature points and they are not all on one straight line; and

constructing the transformation matrix from the matched feature points.

Preferably, the handheld device is a remote control, a mobile phone, or a PAD (tablet), and the electronic device is a television or a computer.
According to a second aspect of the present disclosure, a handheld terminal provided with a camera is provided, including:

a wireless receiving module, configured to receive a video image currently played on a display screen of an electronic device as a reference image;

an image capturing module, configured to take a captured image with the camera, the captured image containing part or all of the video image currently played on the display screen of the electronic device;

a calculation module, configured to construct a transformation matrix from the captured image and the reference image; and

an output module, configured to convert, according to the transformation matrix, a point in the captured image into a cursor position on the display screen and output it to the electronic device.

Preferably, the output module selects a fixed point in the captured image to convert into the cursor position of the display screen.

Preferably, the fixed point is the center point of the captured image.

Preferably, the output module selects a floating point in the captured image to convert into the cursor position of the display screen.

Preferably, the output module connects the center point of the captured image and the center point of the target image by a straight line and selects a point on this line as the floating point, where the part or all of the video image currently played on the display screen contained in the captured image serves as the target image.

Preferably, the calculation module includes:

a comparison unit, configured to compare the time stamps of the captured images and the reference image, to select the captured image having the same time stamp as the reference image; and

a first calculation unit, configured to compute on the captured image and reference image having the same time stamp, to construct the transformation matrix.
According to a third aspect of the present disclosure, an electronic device is provided, including:

a first receiving module, configured to establish a wireless connection with a handheld terminal and receive a captured image taken by the handheld terminal, the captured image containing part or all of a video image currently played on a display screen of the electronic device;

an acquiring module, configured to acquire the video image currently played on the display screen of the electronic device as a reference image;

a first calculating module, configured to obtain a transformation matrix from the reference image and the captured image, thereby establishing the transformation between matching points on the reference image and the captured image; and

a setting module, configured to use the transformation matrix to solve for the coordinate point on the reference image corresponding to a point on the captured image, and to set the cursor point on the display screen at that coordinate point.

In the embodiments of the present disclosure, a captured image is taken with the camera and the cursor position is set from the comparison of the captured image with the reference image, realizing cursor positioning through image technology, entirely different from existing mouse positioning methods.
Brief Description of the Drawings

The above and other objects, features, and advantages of the present disclosure will become clearer from the following description of its embodiments with reference to the drawings.

FIG. 1 is a scene diagram of cursor positioning on a display screen using a camera.

FIG. 2 is a flowchart of a method for on-screen cursor positioning using a camera according to an embodiment of the present disclosure.

FIG. 3 is an exemplary diagram of a captured image in an embodiment of the present disclosure.

FIG. 4 is a schematic diagram of the imaging principle of a camera according to an embodiment of the present disclosure.

FIG. 5 is a schematic diagram of a display screen according to an embodiment of the present disclosure.

FIG. 6 is another exemplary diagram of a captured image according to an embodiment of the present disclosure.

FIG. 7 is a structural diagram of a handheld terminal according to an embodiment of the present disclosure.
Detailed Description

The present disclosure is described below on the basis of embodiments, but it is not limited to these embodiments. Some specific details are set forth in the following description; the disclosure can nevertheless be fully understood by those skilled in the art without them. To avoid obscuring the essence of the present disclosure, well-known methods, procedures, and flows are not described in detail. The drawings are not necessarily drawn to scale.

The flowcharts and block diagrams in the drawings illustrate possible architectures, functions, and operations of the systems, methods, and apparatus of the embodiments of the present disclosure. A block in the flowcharts or block diagrams may represent a module, a program segment, or simply a piece of code, all of which are executable instructions implementing specified logic functions. It should also be noted that such executable instructions can be recombined to generate new modules and program segments. The blocks and their order in the drawings therefore merely illustrate the processes and steps of the embodiments and should not be taken as limiting the invention itself.
FIG. 1 is a scene diagram of cursor positioning on a display screen using a camera.

As shown in FIG. 1, a camera 202 is arranged on top of a handheld device 200, which is used as a mouse-like pointing device: the pointing direction 201 of the camera controls the cursor position 12 on a screen 100, so that the cursor follows the pointing direction of the camera 202. The display screen 100 is the display of an electronic device 300 (such as a computer or a television); its type includes, but is not limited to, a liquid-crystal screen, an LED screen, or a plasma screen.
FIG. 2 shows a flowchart of a method for on-screen cursor positioning using a camera according to an embodiment of the present disclosure, including the following steps.

In step S11, the video image currently played on the display screen of the electronic device is received as a reference image, and a captured image is taken with the camera.

Referring to FIG. 1, the electronic device 300 and the handheld device 200 can communicate wirelessly or by wire. The electronic device 300 intercepts the video image currently played on its display screen as a reference image and sends it to the handheld device 200, which stores it upon receipt. Meanwhile, the handheld device 200 shoots in real time with the camera 202 to obtain a captured image. When shooting, the camera 202 may take part or all of the display screen into the captured image, so the captured image may contain part or all of the video image currently played on the display. The captured image shown in FIG. 3 contains the entire display screen, and thus the entire video image currently being played. Those skilled in the art will understand, however, that even if the captured image contains only part of the video image, the transformation matrix can still be calculated, so the implementation of the present disclosure is not affected.

It is worth pointing out that in practice there may be a delay between the reference image and the captured image. This delay usually causes no problem, because the delay introduced by image transmission is generally very short, and the amount of data, and with it the delay, can be reduced by lowering the image resolution and transmitting only grayscale images. Alternatively, multiple captured images can be shot in succession within one minute, each stored with its time stamp; by comparing the time stamps of the captured images and the reference image, the captured image having the same time stamp as the reference image is selected from the set and used in the subsequent step S12. Moreover, a video image usually does not change drastically from frame to frame, and image smoothing can be introduced during processing to prevent data jumps. With the above measures, the influence of the delay on the result can essentially be eliminated.
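To make the time-stamp selection above concrete, the following is a minimal Python sketch; the frame-buffer layout, function name, and tolerance value are illustrative assumptions rather than details given in the patent:

    import bisect

    def pick_matching_frame(frames, ref_timestamp, tolerance_ms=20):
        """Select, from buffered captured frames, the one whose time stamp
        is closest to that of the reference image (within a tolerance).

        frames: list of (timestamp_ms, image) tuples sorted by time.
        Returns the matching image, or None if no frame is close enough.
        """
        timestamps = [t for t, _ in frames]
        i = bisect.bisect_left(timestamps, ref_timestamp)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(frames)]
        if not candidates:
            return None
        best = min(candidates, key=lambda j: abs(timestamps[j] - ref_timestamp))
        if abs(timestamps[best] - ref_timestamp) > tolerance_ms:
            return None
        return frames[best][1]

In practice, exact time-stamp equality is rare, so the sketch accepts the closest frame within a tolerance; the image-smoothing measure mentioned above then handles the residual jitter.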
In step S12, a transformation matrix is constructed from the captured image and the reference image.

FIG. 3 is an exemplary diagram of a captured image in an embodiment of the present disclosure.

FIG. 4 is a schematic diagram of the imaging principle of a camera according to an embodiment of the present disclosure.

Referring to FIG. 3 and FIG. 4, a camera coordinate system 40 is constructed on the imaging plane of the captured image 20, with the optical center of the camera as the origin Oc. The X, Y, and Z axes of the camera coordinate system 40 are denoted Xc, Yc, and Zc respectively: the X axis runs along the horizontal direction of the captured image 20, the Y axis along its vertical direction, and the Z axis is perpendicular to the plane of the captured image 20. The captured image 20 contains the video image 21 currently played by the display screen. In the figure, 22 (Xp, Yp) is the center point of the captured image 20. Any point on the captured image can be written Q(u, v, d), where d is a constant, the distance in pixels from the origin Oc of the camera coordinate system 40 to the captured image 20; d = 1 is commonly taken.

Referring to FIG. 4 and FIG. 5, the reference image 10 lies in a reference coordinate system 11 whose origin Or is the upper-left corner of the display screen; its X, Y, and Z axes are denoted Xr, Yr, and Zr respectively, and all points on the reference image 10 have z-coordinate 0. A point on the reference image 10 can therefore be written P(x, y, 0), or simply P(x, y).

Referring to FIG. 4, the reference image 10 and the captured image 20 are compared, and the projection transformation H between the reference coordinate system 11 and the camera coordinate system 40 is calculated.
A point P(x, y) on the reference image 10 and its imaging point Q(u, v) on the captured image 20 satisfy the transformation relationship

w · (x, y, 1)^T = H · (u, v, 1)^T

where w is a homogeneous coefficient. Let

    | h11 h12 h13 |
H = | h21 h22 h23 |
    | h31 h32 h33 |

H is the transformation matrix, and the projection transformation can be abbreviated as

P = H * Q
To solve for the transformation matrix H, at least 4 pairs of corresponding points are needed, not all on one straight line; the more points, the more accurate the resulting H. (Up to scale, H has eight degrees of freedom, and each point pair contributes two linear equations, hence the minimum of four pairs.) In this way, for every imaging point on the captured image 20, its corresponding coordinate point on the reference image can be obtained through the transformation matrix.

As shown in FIG. 4, the video image currently played on the display screen that is contained in the captured image 20 is marked as the target image 21, but the position of the target image 21 within the reference image 10 cannot be known in advance. To find the correspondence between the target image 21 and the reference image 10, the ASIFT algorithm can be used. The algorithm extracts feature points of the reference image 10 and the captured image 20 and matches them to obtain matching point pairs, from which the transformation matrix is constructed. ASIFT is affine-invariant and reliably outputs enough matching points to ensure that the solved transformation matrix H is relatively accurate.
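A minimal sketch of this step in Python with OpenCV follows. The patent specifies ASIFT; plain SIFT with a ratio test is substituted here for brevity (recent OpenCV builds also expose an affine-invariant wrapper, cv2.AffineFeature), and RANSAC-based fitting stands in for whatever exact estimation procedure an implementation uses:

    import cv2
    import numpy as np

    def build_transform_matrix(captured, reference, min_matches=4):
        """Estimate the homography H with P = H * Q, mapping a point Q of the
        captured image to its corresponding point P on the reference image."""
        sift = cv2.SIFT_create()
        kp_c, des_c = sift.detectAndCompute(captured, None)
        kp_r, des_r = sift.detectAndCompute(reference, None)
        if des_c is None or des_r is None:
            return None  # one of the images has no usable feature points

        # Keep matches that pass Lowe's ratio test.
        good = []
        for pair in cv2.BFMatcher().knnMatch(des_c, des_r, k=2):
            if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
                good.append(pair[0])
        if len(good) < min_matches:
            return None  # fewer than 4 correspondences: H is underdetermined

        # src = points in the captured image, dst = points in the reference
        # image, matching the convention P = H * Q used above.
        src = np.float32([kp_c[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp_r[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, _mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return H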
In step S13, a point in the captured image is converted into a cursor position on the display screen according to the transformation matrix.

As shown in FIG. 4, a point is specified on the captured image; the line through this point and the optical center of the camera intersects the display screen at some point, whose coordinates are calculated with the aforementioned transformation matrix H as the new cursor position and passed to the remote electronic device 300.

The cursor position the camera points at may be the coordinate point on the reference image corresponding to the center point of the captured image 20, e.g. the coordinate point 12 (Xc, Yc) corresponding to the center point 22 (Xp, Yp) in the figure, satisfying 12(Xc, Yc) = H * 22(Xp, Yp). In this case the camera faces straight ahead of the handheld device, and the user can intuitively indicate with the camera's position. If the camera is not at the front of the handheld device, another fixed point can be chosen to reflect the camera's pointing direction. The point may also be a dynamic floating point, specified according to the position of the reference image within the captured image. As shown in FIG. 6, the point reflecting the camera's pointing direction is set to the midpoint of the line segment connecting the center point 22 (Xp, Yp) of the captured image 20 and the center point of the target image 21. This is not limiting: in principle, any point on the straight line connecting the center point 22 (Xp, Yp) of the captured image 20 and the center point of the target image 21 can be chosen as the point reflecting the camera's pointing direction. The dynamic floating point allows the user to finely adjust the indicated position on the screen within a large pointing range, which suits situations far from the screen; it can also move the cursor across the full screen within a small pointing range, which suits situations very close to the screen, where the target image occupies a large area of the captured image.
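The point-to-cursor conversion of step S13, covering both the fixed center point and the floating point of FIG. 6, can be sketched as follows; the blending parameter alpha is an illustrative knob not named in the patent, with alpha = 0.5 reproducing the midpoint described above:

    import cv2
    import numpy as np

    def cursor_position(H, captured_shape, target_center=None, alpha=0.5):
        """Map a point of the captured image to screen coordinates via H.

        With target_center=None the fixed center of the captured image is
        used; otherwise a floating point is taken on the segment between
        the captured image's center and the target image's center."""
        h, w = captured_shape[:2]
        point = np.float32([w / 2.0, h / 2.0])
        if target_center is not None:
            point = (1.0 - alpha) * point + alpha * np.float32(target_center)
        # perspectiveTransform applies P = H * Q with homogeneous division
        p = cv2.perspectiveTransform(point.reshape(-1, 1, 2), H)
        return float(p[0, 0, 0]), float(p[0, 0, 1])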
In the above embodiment, different captured images are obtained by continuously changing the shooting angle, direction, and position of the camera, and a new cursor position is obtained from the comparison of each captured image with the reference image. That is, as the camera's shooting angle, direction, and position keep changing, these changes are ultimately reflected in each captured image, so the position on the display screen that the camera points at can be derived by analyzing the captured image and used as the cursor position.
FIG. 7 is a structural diagram of a handheld terminal according to an embodiment of the present disclosure. The handheld terminal of FIG. 7 corresponds to the method for on-screen cursor positioning using a camera shown in FIG. 2 and includes a wireless receiving module 701, an image capturing module 702, a calculation module 703, and an output module 704.

The wireless receiving module 701 is configured to establish wireless communication with the electronic device and receive the video image currently played on the display screen as a reference image.

The image capturing module 702 takes a captured image through the camera; the captured image contains the entire display screen, and thus the video image it currently plays.

The calculation module 703 is configured to obtain a transformation matrix from the reference image and the captured image, thereby establishing the transformation between matching points on the two images.

The output module 704 uses the transformation matrix obtained by the calculation module 703 to solve for the coordinate point on the reference image corresponding to a point on the captured image, and transmits that coordinate point to the electronic device so that the electronic device sets the new cursor point.
Preferably, the output module 704 selects a fixed point in the captured image to convert into the cursor position of the display screen; for example, the fixed point is the center point of the captured image.

Preferably, the output module 704 selects a floating point in the captured image to convert into the cursor position of the display screen. For example, the output module 704 connects the center point of the captured image and the center point of the target image by a straight line and selects a point on this line as the floating point, where the part or all of the video image currently played on the display screen contained in the captured image serves as the target image.

Preferably, the calculation module 703 includes a comparison unit and a first calculation unit.

The comparison unit is configured to compare the time stamps of the captured images and the reference image, to select the captured image having the same time stamp as the reference image.

The first calculation unit is configured to compute on the captured image and reference image having the same time stamp, to construct the transformation matrix.
Those skilled in the art will understand that the handheld terminal and the method of the present disclosure correspond to each other, and the implementation details of the method apply equally to the handheld terminal; the handheld terminal is therefore described in relatively brief terms.

It should be noted that each of the above modules may also be deployed, in whole or in part, on the electronic device side.

For example, on this basis the present disclosure may provide an electronic device that cooperates with the handheld terminal to implement cursor positioning, including: a first receiving module, configured to establish a wireless connection with the handheld terminal and receive the captured image taken by it, the captured image containing part or all of the video image currently played on the display screen of the electronic device; an acquiring module, configured to acquire the video image currently played on the display screen of the electronic device as a reference image; a first calculating module, configured to obtain a transformation matrix from the reference image and the captured image, thereby establishing the transformation between matching points on the two images; and a setting module, configured to use the transformation matrix to solve for the coordinate point on the reference image corresponding to a point on the captured image and to set the cursor point on the display screen at that coordinate point.

The embodiments of the present disclosure introduce a method of cursor positioning through image technology, different from existing mouse positioning. On this basis a product can be designed that carries a camera and positions the cursor by shooting from different angles, while a touch screen on the product provides page scrolling and button clicks by touch, so that the product fully provides the functions of a mouse.
Building on the above method, the present disclosure further proposes a method of using the camera as a three-dimensional spatial attitude input device. Using the aforementioned transformation matrix H, the spatial attitude of the camera coordinate system 40 in the reference coordinate system 11, that is, the three-dimensional coordinates and the rotation angles about the three directions, is calculated and transmitted to the electronic device.

To obtain the camera's spatial attitude, a mapping opposite to the previous one is first established, from the reference coordinate system to the camera coordinate system:

w · (u, v, d)^T = K · (x, y, 1)^T

    | k11 k12 k13 |
K = | k21 k22 k23 |
    | k31 k32 k33 |

K is the transformation matrix; in the same way as before, it can be solved from at least four points that are not collinear.
K can be expressed in terms of the rotation matrix and the displacement vector of the camera coordinate system 40. Writing

R = ( rx  ry  rz )

for the rotation matrix, where rx, ry, rz are the direction vectors of the x-axis, y-axis, and z-axis of the camera coordinate system in the reference coordinate system, the three vectors being mutually orthogonal, and

T = (tx, ty, tz)^T

for the coordinates of the origin of the camera coordinate system in the reference coordinate system, K is determined by R and T, and conversely the components of R and T can be recovered from the entries of K. [The original equation images expressing K through R and T, and those recovering R and T from K, are not recoverable from this copy.]
Square roots are encountered in solving for r13, r23, r33. In our usage scenario the camera is always in front of the display screen of the electronic device, and the z-axis of the reference coordinate system passes through the interface from the front, i.e., tz is less than zero; the useless solutions can thereby be excluded.

The rotation matrix R and the displacement vector T thus obtained represent the spatial attitude of the camera in the reference coordinate system 11. Passing this spatial attitude as input to the remote device achieves a three-dimensional control effect: cursor control of a three-dimensional scene can be performed through the camera, enriching the ways in which the user interacts with the remote device.
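As a hedged sketch of this attitude recovery: the patent works directly with its own matrix K, while the sketch below substitutes OpenCV's plane-induced homography decomposition and assumes the camera intrinsic matrix is known from a one-time calibration (intrinsics are not discussed in the patent). The homography passed in must map the screen plane to the image, i.e. the inverse of the H used for cursor mapping:

    import cv2
    import numpy as np

    def camera_pose_candidates(H_plane_to_image, intrinsics):
        """Return candidate (R, T) pairs for the camera pose relative to the
        screen plane; H_plane_to_image = np.linalg.inv(H) for the H above."""
        n, rotations, translations, normals = cv2.decomposeHomographyMat(
            H_plane_to_image, intrinsics)
        poses = []
        for R, T, N in zip(rotations, translations, normals):
            # Physical pruning, analogous to the patent's tz < 0 test: keep
            # only solutions where the screen plane faces the camera.
            if N[2, 0] > 0:
                poses.append((R, T))
        return poses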
The above embodiments can be applied in the following scenarios.

1. A mobile phone as a television remote control

In this scenario the television is the electronic device and the mobile phone the handheld device. A camera is mounted on the top of the phone, or a mirror redirects the rear camera to shoot in the top direction. Holding the phone, the user can conveniently and intuitively control where the camera points. Software installed on both the phone and the television lets them communicate with each other.

When the phone points at the television screen, it takes a captured image that includes the screen. At the same time, the television software controlled by the phone intercepts the currently displayed picture and transmits it to the phone as the reference image. The software on the phone extracts the feature points of the two images, matches corresponding feature points, and calculates the mapping from the captured image to the reference image.

The forward direction of the phone camera is chosen as the pointing position, i.e., the location imaged at the center of the captured image. The position of that pixel in the reference image is calculated; this is where the phone camera points on the television screen. The position is sent to the television, and the mouse cursor is moved to that point to mark the pointing position.
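Tying the earlier sketches together, the phone-side loop for this scenario might look as follows; the camera wrapper and the phone-TV link are hypothetical stand-ins for whatever capture and communication layer an implementation provides:

    def remote_control_loop(camera, tv_link):
        """One iteration: grab a frame, fetch the TV's current screenshot as
        the reference image, estimate H, and report the cursor position."""
        captured = camera.read_gray()          # hypothetical capture wrapper
        reference = tv_link.fetch_reference()  # hypothetical TV-side screenshot API
        H = build_transform_matrix(captured, reference)
        if H is None:
            return  # too few feature matches; skip this frame
        x, y = cursor_position(H, captured.shape)
        tv_link.send_cursor(x, y)              # the TV moves its cursor to (x, y)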
2. A conference-projection remote control

This remote control is used during a conference presentation to control the mouse pointing position on the projected picture. The projecting computer acts as the remote device and communicates with the remote control over a wireless network. The remote control is the handheld device and carries a camera. When the camera points at the screen, the captured image containing the screen is delivered to the computer; at the same time, the computer intercepts the projected picture as the reference image.

After obtaining the two images, the computer extracts their feature points, matches the corresponding points, and calculates the mapping from the captured image to the reference image.

The forward direction of the remote control's camera is chosen as the pointing position, i.e., the location imaged at the center of the captured image. The position of that pixel in the reference image is calculated; this is where the remote control points on the projection screen, and the mouse is moved to that point to mark the pointing position.

With left and right buttons and a scroll wheel added to the remote control, the projection screen can be operated during the presentation just like with a mouse.
3. A block-stacking game played on a computer with a phone

This application uses the phone as a gamepad. In this hypothetical game, building blocks can be stacked at various angles, and the gamepad is needed to specify their placement position and angle.

The phone communicates with the computer by wire or wirelessly, and carries a camera.

Pointing the phone at the computer screen, the user takes a captured image containing the screen; at the same time the computer intercepts the currently displayed picture and transmits it to the phone as the reference image.

The software on the phone extracts the feature points of the two images, matches corresponding feature points, and calculates the coordinate vector and rotation matrix of the camera in the coordinate system of the reference image. The coordinate vector and rotation matrix are sent to the computer to control the placement position and rotation angle of the blocks.

The modules and units of the system can be implemented in hardware, firmware, or software. The software includes, for example, programs written in programming languages such as JAVA, C/C++/C#, and SQL. Although the steps of the embodiments and their order are given in the methods and method figures, the executable instructions by which those steps implement the specified logic functions can be recombined to generate new steps, and the order of the steps need not be limited to that given; it can be adjusted as the functionality requires, for example by executing some steps in parallel or in reverse order.

The systems and methods according to the present disclosure can be deployed on a single server or on multiple servers. For example, different modules can be deployed on different servers to form dedicated servers, or the same functional unit, module, or system can be deployed in a distributed fashion across multiple servers to relieve load pressure. The servers include, but are not limited to, PCs, PC servers, blade servers, supercomputers, and the like connected within the same local area network or through the Internet.

The above are merely preferred embodiments of the present disclosure and are not intended to limit it; various modifications and variations will occur to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present disclosure shall fall within its scope of protection.

Claims (15)

  1. A method for cursor positioning using a handheld terminal, the handheld terminal being provided with a camera, the method comprising:
    taking a captured image with the camera, the captured image containing part or all of a video image currently played on a display screen of an electronic device;
    receiving the video image currently played on the display screen of the electronic device as a reference image;
    constructing a transformation matrix from the captured image and the reference image; and
    converting, according to the transformation matrix, a point in the captured image into a cursor position on the display screen.
  2. The method according to claim 1, wherein the point in the captured image is a fixed point.
  3. The method according to claim 2, wherein the fixed point is the center point of the captured image.
  4. The method according to claim 1, wherein the point in the captured image is a floating point.
  5. The method according to claim 4, wherein the center point of the captured image and the center point of a target image are connected by a straight line and a point on the line is selected as the floating point, the part or all of the video image currently played on the display screen contained in the captured image serving as the target image.
  6. The method according to claim 1, further comprising: selecting, from a plurality of images captured by the camera, a captured image having the same time stamp as the reference image as a first captured image;
    the constructing of the transformation matrix from the captured image and the reference image then comprising:
    constructing the transformation matrix from the first captured image and the reference image.
  7. The method according to claim 1, wherein the constructing of the transformation matrix from the captured image and the reference image comprises:
    constructing a reference coordinate system on the display screen of the electronic terminal, and a camera coordinate system on the imaging plane of the captured image;
    obtaining, with the ASIFT algorithm, matched feature points of the captured image and the reference image in their respective coordinate systems, wherein there are more than 4 groups of feature points, not all on one straight line; and
    constructing the transformation matrix from the matched feature points.
  8. The method according to claim 1, wherein the handheld device is a remote control, a mobile phone, or a PAD (tablet), and the electronic device is a television or a computer.
  9. A handheld terminal provided with a camera, comprising:
    a wireless receiving module, configured to receive a video image currently played on a display screen of an electronic device as a reference image;
    an image capturing module, configured to take a captured image with the camera, the captured image containing part or all of the video image currently played on the display screen of the electronic device;
    a calculation module, configured to construct a transformation matrix from the captured image and the reference image; and
    an output module, configured to convert, according to the transformation matrix, a point in the captured image into a cursor position on the display screen and output it to the electronic device.
  10. The handheld terminal according to claim 9, wherein the output module selects a fixed point in the captured image to convert into the cursor position of the display screen.
  11. The handheld terminal according to claim 10, wherein the fixed point is the center point of the captured image.
  12. The handheld terminal according to claim 9, wherein the output module selects a floating point in the captured image to convert into the cursor position of the display screen.
  13. The handheld terminal according to claim 12, wherein the output module connects the center point of the captured image and the center point of a target image by a straight line and selects a point on the line as the floating point, the part or all of the video image currently played on the display screen contained in the captured image serving as the target image.
  14. The handheld terminal according to claim 9, wherein the calculation module comprises:
    a comparison unit, configured to compare time stamps of the captured images and the reference image, to select a captured image having the same time stamp as the reference image; and
    a first calculation unit, configured to compute on the captured image and the reference image having the same time stamp, to construct the transformation matrix.
  15. An electronic device, comprising:
    a first receiving module, configured to establish a wireless connection with a handheld terminal and receive a captured image taken by the handheld terminal, the captured image containing part or all of a video image currently played on a display screen of the electronic device;
    an acquiring module, configured to acquire the video image currently played on the display screen of the electronic device as a reference image;
    a first calculating module, configured to obtain a transformation matrix from the reference image and the captured image, thereby establishing the transformation between matching points on the reference image and the captured image; and
    a setting module, configured to use the transformation matrix to solve for a coordinate point on the reference image corresponding to a point on the captured image, and to set a cursor point on the display screen at the coordinate point.
PCT/CN2017/082537 2017-04-28 2017-04-28 Method for cursor positioning using a handheld terminal, handheld terminal, and electronic device WO2018195973A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201780002600.3A 2017-04-28 2017-04-28 Method for cursor positioning using a handheld terminal, handheld terminal, and electronic device
PCT/CN2017/082537 2017-04-28 2017-04-28 Method for cursor positioning using a handheld terminal, handheld terminal, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/082537 WO2018195973A1 (zh) 2017-04-28 2017-04-28 利用手持终端进行光标定位的方法、手持终端和电子设备

Publications (1)

Publication Number Publication Date
WO2018195973A1 (zh)

Family

ID=62645390

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/082537 Method for cursor positioning using a handheld terminal, handheld terminal, and electronic device 2017-04-28 2017-04-28

Country Status (2)

Country Link
CN (1) CN108235749A (zh)
WO (1) WO2018195973A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111857621A * 2019-04-30 2020-10-30 陈逸曦 Display matrix based on handheld terminals as display units
CN114710601B * 2022-03-07 2024-03-12 深圳创维-Rgb电子有限公司 Screen writing method and system based on a photographing device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006130243A2 (en) * 2005-05-27 2006-12-07 Motorola, Inc. User interface controller method and apparatus for a handheld electronic device
CN102508565A * 2011-11-17 2012-06-20 Tcl集团股份有限公司 Remote control cursor positioning method and apparatus, remote control, and cursor positioning system
CN102662501A * 2012-03-19 2012-09-12 Tcl集团股份有限公司 Cursor positioning system and method, remote-controlled device, and remote control
CN102984563A * 2011-09-05 2013-03-20 富士通半导体(上海)有限公司 Smart remote-control television system and remote control method therefor

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008181198A (ja) * 2007-01-23 2008-08-07 Funai Electric Co Ltd Image display system
JP5710589B2 (ja) * 2009-04-08 2015-04-30 Qualcomm Incorporated Enhanced handheld screen-sensing pointer
CN102122359B * 2011-03-03 2013-01-23 北京航空航天大学 Image registration method and apparatus
CN102289304B * 2011-08-17 2014-05-07 Tcl集团股份有限公司 Method and system for simulating mouse movement with a remote control, and remote control
CN103369383A * 2012-03-26 2013-10-23 乐金电子(中国)研究开发中心有限公司 Control method and apparatus for a spatial remote control, spatial remote control, and multimedia terminal
CN106527762A * 2016-11-10 2017-03-22 深圳市鹰眼在线电子科技有限公司 Cursor coordinate determination method and apparatus, and mouse control system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006130243A2 (en) * 2005-05-27 2006-12-07 Motorola, Inc. User interface controller method and apparatus for a handheld electronic device
CN102984563A * 2011-09-05 2013-03-20 富士通半导体(上海)有限公司 Smart remote-control television system and remote control method therefor
CN102508565A * 2011-11-17 2012-06-20 Tcl集团股份有限公司 Remote control cursor positioning method and apparatus, remote control, and cursor positioning system
CN102662501A * 2012-03-19 2012-09-12 Tcl集团股份有限公司 Cursor positioning system and method, remote-controlled device, and remote control

Also Published As

Publication number Publication date
CN108235749A (zh) 2018-06-29

Similar Documents

Publication Publication Date Title
WO2018171429A1 (zh) Image stitching method and apparatus, terminal, and storage medium
EP1899793B1 (en) Control device for information display, corresponding system, method and program product
WO2016017932A1 (ko) Method and apparatus for providing an interface that performs motion recognition in consideration of the user's viewpoint
US20130113738A1 (en) Method and apparatus for controlling content on remote screen
US11669173B2 (en) Direct three-dimensional pointing using light tracking and relative position detection
US20220284620A1 (en) Information terminal apparatus and position recognition sharing method
CN112783700A (zh) Computer-readable medium for a network-based remote assistance system
JP2020178248A (ja) Projection control apparatus, projection control method, projection system, program, and storage medium
WO2018195973A1 (zh) Method for cursor positioning using a handheld terminal, handheld terminal, and electronic device
KR101496441B1 (ko) Apparatus and method for registering a flat display device and an image sensor, and electronic device having a flat display device and an image sensor registered by the method
KR20130130283A (ko) Smartphone gyro-sensor based front-object generation system and method for augmented reality
US20170302908A1 (en) Method and apparatus for user interaction for virtual measurement using a depth camera system
EP3293960A1 (en) Information processing device, information processing method, and program
WO2024055531A1 (zh) Illuminance meter reading recognition method, electronic device, and storage medium
WO2018193509A1 (ja) Remote work support system, remote work support method, and program
KR102588858B1 (ko) 3D tour comparison display system
US9300908B2 (en) Information processing apparatus and information processing method
US20130271371A1 (en) Accurate extended pointing apparatus and method thereof
JP2015184986A (ja) Mixed reality sharing apparatus
US20180061135A1 (en) Image display apparatus and image display method
KR100980261B1 (ko) Pointing/interface system
WO2019100547A1 (zh) Projection control method and apparatus, projection interaction system, and storage medium
CN111625210A (zh) Large-screen control method, apparatus, and device
JP2013257830A (ja) Information processing apparatus
CN108090933B (zh) Two-dimensional plane calibration method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17906949

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: 1205A 04.02.2020

122 Ep: pct application non-entry in european phase

Ref document number: 17906949

Country of ref document: EP

Kind code of ref document: A1