WO2019187167A1 - Display method for a terminal display image - Google Patents

Display method for a terminal display image

Info

Publication number
WO2019187167A1
Authority
WO
WIPO (PCT)
Prior art keywords
terminal
display
user
image
sensor
Application number
PCT/JP2018/014029
Other languages
French (fr)
Japanese (ja)
Inventor
古賀信明 (Nobuaki Koga)
Original Assignee
株式会社スペシャルエフエックススタジオ (Special FX Studio Co., Ltd.)
Application filed by 株式会社スペシャルエフエックススタジオ (Special FX Studio Co., Ltd.)
Publication of WO2019187167A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range


Abstract

[Problem] Display scrolling on a terminal such as a cellular phone has conventionally been carried out by detecting movement with an acceleration sensor or the like, but such techniques cannot be applied in a moving vehicle. [Solution] Three-dimensional measurement is carried out with a solid body near the terminal, or a portion of the user's body, as a reference solid, and the positional relationship to the terminal is continuously measured in real time with the reference solid treated as a "parent." An arbitrary virtual reference plane determined by the terminal user is treated as a "child": display data is virtually placed on the virtual reference plane at a position and size determined by the user, and the "child" is fixed to the "parent" in a three-dimensional positional relationship. When the display of the terminal overlaps the space in which the display data is virtually projected parallel to the direction of the normal of the virtual reference plane, the display data that would be virtually projected onto the display at that position is shown in real time on the terminal display, enabling the terminal user to intuitively scroll an enlarged display screen.

Description

Display method for a terminal display image
The present invention relates to terminal display control.
With the widespread use of mobile terminals, many people now use them daily, and as terminals have gained functionality, the density of the content and information they display has risen.
Moreover, as the user population ages, a growing number of users find such dense displays hard to read.
The most common remedy is simply to zoom and scroll the terminal screen; there have also been patent applications that recognize the user's eyes with a camera and change the screen magnification according to the distance between the user's viewpoint and the terminal.
JP 2011-248811 A; JP 2005-251100 A; JP H10-254614 A
The quickest way to make small characters on a mobile terminal screen easier to see is to enlarge the screen itself, but this conflicts with compactness and portability.
Most mobile terminal displays also offer zooming and scrolling. Enlarging a small screen makes it easier to read, but only part of the content is visible at a time, so viewing the whole quickly requires scrolling by swiping. Depending on the magnification, frequent scrolling becomes necessary, and an unpracticed user may trigger other operations when intending to scroll; intuitive, accurate operation becomes difficult.
In addition, with methods like those of References 1 to 3, when a mobile terminal is used aboard a vehicle it cannot be distinguished whether motion comes from the user moving the terminal, from the vehicle, or from both, so such methods cannot be used in a moving vehicle.
The inventor therefore devised a scheme in which the terminal recognizes the relative positional relationship between itself and the user's surroundings, a virtual reference plane is fixed at an arbitrary position in the space around the user, an arbitrary image is virtually placed at an arbitrary position and size on that plane, and the image is viewed by holding the terminal inside the image's virtual parallel-projection display region.
Solving the above problem requires not merely detecting absolute movement, that is, controlling the display from the amount of terminal movement detected by an acceleration sensor, but detecting the relative positional relationship of the terminal to a solid "parent" that exists in the user's space around the terminal, setting a virtual display region within that space as a "child", and fixing the "child" relative to the "parent". This provides a display environment that is easy for the user to view.
Specifically, based on information obtained from one or more cameras built into the mobile terminal combined with sensors such as angular velocity, acceleration, infrared, ultrasonic, visible light, laser, radio wave, magnetic, or GPS sensors, or from a three-dimensional depth sensor, the terminal acquires and stores three-dimensional data of either a solid in the space around the user whose feature points do not deform over time, or the face, head, or another body part of the user of the mobile terminal.
The positional relationship between that solid data and the terminal could be extracted in real time with a three-dimensional depth sensor throughout, but for power and data economy it is better to acquire the solid data once with a three-dimensional sensor such as a depth sensor, extract image feature points using the accompanying image data, and then compute the relative positional relationship between the terminal and the solid in real time from the terminal's built-in image sensor and angular velocity sensor.
If the whole environment around the user, including the user, is not itself being moved by a vehicle or the like, an acceleration sensor may be added to the above.
By grasping the three-dimensional shape of part of the user's body or of the surroundings, capturing the corresponding images, and then tracking the relative positional relationship between that shape and the terminal in real time, the virtual reference plane can be fixed at an arbitrary location and size around the terminal.
The data the user wants to display on the terminal, such as displayable images, video, or text, is virtually projected parallel to the normal direction of the virtual reference plane, and part or all of the data to be projected is displayed where the projection region intersects the display surface of the terminal.
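As an illustration (not taken from the patent itself), the parallel-projection rule can be sketched as follows; all coordinates, names, and parameters are assumptions made for the example.

```python
import numpy as np

# A minimal sketch of the parallel-projection display rule: every point on the
# terminal's display surface is projected along the virtual reference plane's
# normal onto that plane, and the virtual image is sampled there.

def project_display_to_virtual_plane(display_points, plane_origin, plane_u, plane_v):
    """Map 3-D points on the terminal display to (u, v) coords on the virtual plane.

    display_points: (N, 3) points on the display surface, world coordinates.
    plane_origin:   a point on the virtual reference plane.
    plane_u/plane_v: orthonormal in-plane axes; their cross product is the normal.
    """
    d = display_points - plane_origin              # vectors from plane origin
    # Parallel projection along the normal simply discards the normal component,
    # so the in-plane coordinates are the projections onto the plane axes.
    return np.stack([d @ plane_u, d @ plane_v], axis=-1)

# Example: a display point 0.1 m in front of the plane lands at the same (u, v)
# spot as one on the plane, which is why a tilted terminal shows an (almost)
# undistorted view to a user looking along the projection direction.
plane_o = np.array([0.0, 0.0, 0.0])
u_axis = np.array([1.0, 0.0, 0.0])
v_axis = np.array([0.0, 1.0, 0.0])
pts = np.array([[0.02, 0.03, 0.1], [0.02, 0.03, 0.0]])
print(project_display_to_virtual_plane(pts, plane_o, u_axis, v_axis))
# -> both rows give (0.02, 0.03)
```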
The term "user" in this application refers to the user of the terminal or mobile terminal.
Because real-time processing is required, acquiring only the minimum necessary data is desirable; however, if machine power is ample and power consumption can be ignored, further information acquisition and analysis may be performed.
"Real time" as used in this application also covers delays small enough not to interfere with practical use (for example, around 10 ms), even when the reflection of processing results is not strictly synchronized with actual time.
The text used in the drawings is quoted from the inventor's own book, "Analog Basic Course I" (Nobuaki Koga, page 112).
Although the mobile terminal 001 in FIGS. 3, 4, and 5 appears to lack an image sensor and the like, these are simply omitted from the explanatory drawings of the display screen.
This specification focuses mainly on mobile terminals, but the invention may equally be applied to a terminal such as a desktop or notebook personal computer instead.
Note that when acquiring the three-dimensional data of the terminal's surroundings, or of part of the terminal user's body, as described herein, it is not always necessary to move the terminal to take the measurement.
According to the present invention, even when the user's space is itself being moved by a vehicle or the like, once the user has set a virtual display region of a suitable size, holding the terminal within that region makes the corresponding portion of the virtual display content appear on the terminal's display, and moving the terminal within the region effectively allows free scrolling. From the user's point of view, however, this is not scrolling: it feels as though invisible display information fixed to a solid object around the user, or to part of the user's own body, is revealed at the corresponding position wherever the terminal's display screen (frame) is held over it. The result is very easy to view and causes little stress.
Also, by fixing the virtual reference plane to the user's head or face, the user can scroll intuitively either by moving the terminal while keeping the head still and the eyes on the terminal, or conversely by keeping the terminal still and turning the head while watching the terminal; even users with limited use of their hands can scroll easily.
FIG. 1 illustrates the case where the reference solid ("parent") is the mobile terminal user's face.
FIG. 2 shows examples in which the mobile terminal is tilted with respect to, and parallel to, the virtual reference plane ("child").
FIG. 3 shows an example of a display parallel-projected onto the mobile terminal from the virtual reference plane ("child").
FIG. 4 likewise shows an example display when the mobile terminal is tilted about the u axis.
FIG. 5 likewise shows an example display when the terminal is tilted in the u, v, and w axis directions.
FIG. 6 illustrates a shooting environment in which three-dimensional analysis is possible when the angular velocity of the image sensor and the distances between feature points are known.
For example, a mobile terminal display device holds: a program that lets the terminal's built-in computer analyze three-dimensional information of the terminal's surroundings or of part of the user's body (generating 3D modeling data) from information obtained by combining one or more built-in image sensors with sensors such as angular velocity, infrared, ultrasonic, visible light, laser, radio wave, magnetic, or GPS sensors, or from a three-dimensional sensor such as a three-dimensional depth sensor; a program that analyzes in real time the positional relationship between the terminal and the 3D modeling data generated by that acquisition method, or that extracts three or more feature points by using the 3D modeling data together with an image sensor and analyzes the positional relationship to the 3D modeling data in real time from the image sensor combined with an angular velocity sensor; and a program by which, on a virtual reference plane determined from the three-dimensional information obtained above, the user places data to be displayed (images, video, text, and so on) at an arbitrary location, the data is virtually projected parallel to the normal direction of the virtual reference plane, and the data that would be projected onto the terminal display screen where the screen intersects that projection region is displayed on the actual screen.
The embodiment described in this specification is only one way of carrying out the invention; the invention is not limited to this form as long as the claims are satisfied.
The most likely application of the present invention is mobile terminals, and since scrolling is possible by moving the face, it can also be used as a terminal operating means for people with limited use of their hands. This specification describes an example centered on the mobile terminal, which most people will use.
First, the mobile terminal acquires three-dimensional spatial information about itself and the user's surroundings, from one or more built-in cameras, from a position sensor combining ultrasonic, laser, radio wave, magnetic, infrared, visible light, acceleration, or similar sensing, or from a three-dimensional sensor such as a three-dimensional depth sensor. The three-dimensional reference solid, with a time axis, obtained by this process is designated the "parent".
At this stage, the measurement method may be chosen according to accuracy and power consumption: for example, parallax-based three-dimensional depth measurement for objects close to the terminal, such as part of the user's body, and, for objects farther than the user, depth measurement based on the time difference for light emitted from the terminal to reflect and return.
Whether the "parent" serving as the reference in the detected three-dimensional space should be something immobile around the user, such as a wall or window, or a part of the user's own body, such as the face (head), may be decided by the user, or the terminal's internal computer may decide automatically from environmental conditions and learned results.
Next, the user determines the virtual reference plane by holding the terminal at an easy-to-view position.
This virtual reference plane stands in the relationship of a "child" fixed in dependence on the "parent".
The user virtually places the display screen data at the desired position on the virtual reference plane, sets an easy-to-read size, and begins use.
The three-dimensional information of the "parent" and the terminal's position relative to it are detected in real time; unless the settings are changed, the "child" always follows the "parent" in real time on the basis of that information, as though fixed to it.
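A minimal sketch (an illustration, not the patent's code) of this "parent"/"child" dependency: the child pose is stored once as a rigid offset in the parent's frame and re-derived every frame from the freshly measured parent pose.

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# At setup time: child (virtual reference plane) pose in the parent's frame.
parent_T_child = make_pose(np.eye(3), np.array([0.0, 0.0, 0.4]))

def child_pose_world(world_T_parent):
    """Each frame: world pose of the child, given the tracked parent pose."""
    return world_T_parent @ parent_T_child

# Example: the parent (e.g. the user's head) translates; the child follows rigidly.
world_T_parent = make_pose(np.eye(3), np.array([0.1, 0.0, 0.0]))
print(child_pose_world(world_T_parent)[:3, 3])   # -> [0.1, 0.0, 0.4]
```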
The virtual display screen placed by the user on the virtual reference plane is, in effect, virtually projected parallel to the plane's normal. When the terminal's display screen intersects this parallel projection region, the same data that would be projected onto the intersected part is shown on the terminal's display. Even if the terminal is tilted with respect to the virtual reference plane, the parallel-projected virtual display data strikes the display screen obliquely, so the screen contents look distorted when viewed square-on to the screen; seen from the terminal user's position, however, they appear almost undistorted, as at V20 in FIG. 2 and 4B and 5B in FIGS. 4 and 5.
This resembles inserting a white card shaped like the terminal's display screen into an image cast by a projector in parallel projection: part of the projected image appears on the card, and as long as it is observed from on the projector's optical axis, the image on the card appears undeformed even when the card tilts with respect to the axis.
When the user's face (head) is made the "parent", as at 1A and 1B in FIG. 1, the virtual reference plane that is the "child" moves with the user's face (head) as though fixed to it, so the display contents scroll either by holding the terminal still and turning the face (head), or, if the user does not move, by moving the terminal.
When the surroundings change rapidly, reflect almost no light, or are very dark, making the face the "parent" has large advantages: the display screen's own light can illuminate the image sensor's view, and short-range parallax improves accuracy.
If instead a wall or window near the user is made the "parent", the virtual display region is fixed in space, so the display contents can be scrolled stably by moving the terminal regardless of the user's posture or movement.
In an ordinary desktop or notebook computer instead of a mobile terminal, sensors equivalent to those of the mobile terminal may be built in, or an external three-dimensional sensor may transmit signals to the computer over a wired or wireless connection. From those data, the three-dimensional information of the computer and its surroundings and of part of the user's body, for example the face or head, and their positional relationship are analyzed, and the displayed location may be changed (scrolled) according to the position of the user's face (head).
For real-time detection of the positional relationship between the terminal and its user, the power consumed by the sensors themselves and the volume of data they produce strongly affect battery life and image-processing latency, so power and data should be economized as much as possible.
A three-dimensional sensor such as a three-dimensional depth sensor generally consumes more power than other sensors, so prolonged use should be avoided where possible, especially in a mobile terminal with limited power.
A method of three-dimensional analysis that minimizes use of the three-dimensional sensor, relying instead on a two-dimensional image sensor and an angle sensor, is therefore described.
First, the solid information of the three-dimensional reference solid that will become the "parent" is acquired, together with its image data, using a three-dimensional depth sensor or the like.
At this stage, parts whose measurements are unstable during the three-dimensional measurement, and parts judged likely to move from the image-data and solid-data assessment, are excluded.
In the image-data and solid-data assessment, candidates are checked against a library and past learning results: things recognized as wrinkled, as loose hair, or as curtain-like are judged possibly mobile, while things recognized as flat surfaces such as walls, things with straight-line elements, and things with near-right angles may be judged unlikely to move.
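A minimal sketch (illustrative assumption) of such a static/mobile screening: label each detected surface patch with simple geometric cues and keep only patches unlikely to move as "parent" candidates. All field names are hypothetical.

```python
def likely_static(patch):
    """patch: dict of precomputed recognition cues; all field names are hypothetical."""
    if patch.get("wrinkled") or patch.get("hair_like") or patch.get("curtain_like"):
        return False                      # deformable things may move
    if patch.get("planar"):
        return True                       # flat, wall-like surfaces
    if patch.get("has_straight_edges") or patch.get("has_right_angles"):
        return True                       # man-made rigid structure
    return False                          # unknown: be conservative

patches = [
    {"planar": True},                                        # wall -> keep
    {"curtain_like": True},                                  # curtain -> drop
    {"has_right_angles": True, "has_straight_edges": True},  # window frame -> keep
]
print([likely_static(p) for p in patches])   # -> [True, False, True]
```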
Next, three or more feature points on the acquired "parent" image data are extracted, as at p1 to p3 in FIG. 6, and matched against the three-dimensional data corresponding to those points.
A feature point that is easy to distinguish in the image data will not always coincide with one of the three-dimensional data points.
In that case, a plane may be formed from the three three-dimensional points nearest to the unmatched image feature point, and the unmatched feature point may be assumed to lie at the center point on that plane.
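A minimal sketch (an assumption, not the patent's exact procedure) of that fallback: take the three 3-D points nearest the unmatched image feature and place the feature at their centroid, a point on the plane they span. The `project` function is a hypothetical camera projection supplied by the caller.

```python
import numpy as np

def estimate_unmatched_point(feature_xy, points3d, project):
    """feature_xy: (2,) image feature; points3d: (N, 3) measured cloud;
    project: function mapping a 3-D point to image coordinates (hypothetical)."""
    reproj = np.array([project(p) for p in points3d])   # (N, 2) reprojections
    dists = np.linalg.norm(reproj - feature_xy, axis=1)
    nearest3 = points3d[np.argsort(dists)[:3]]          # three closest 3-D points
    # The centroid lies on the plane through the three points.
    return nearest3.mean(axis=0)

# Example with a trivial orthographic "projection" that drops the z coordinate.
cloud = np.array([[0, 0, 1.0], [1, 0, 1.1], [0, 1, 0.9], [5, 5, 2.0]])
print(estimate_unmatched_point(np.array([0.3, 0.3]), cloud, lambda p: p[:2]))
```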
For the same reference solid, the analysis using one image sensor, the three feature points, and the angular velocity may also be performed at two or more locations and the average value used.
Furthermore, for those three feature points, three-dimensional analysis may be performed in the same manner using the parallax information obtained from each of two or more image sensors built into the terminal at a fixed spacing, together with the per-axis angular velocities from a three-axis angular-rate sensor likewise fixed in the terminal.
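The parallax branch rests on standard two-view geometry; a minimal sketch of depth from disparity for a rectified sensor pair, with assumed focal length and baseline values:

```python
# For a rectified stereo pair, depth Z = f * B / d, with focal length f (pixels),
# baseline B (meters, the fixed spacing between the sensors), disparity d (pixels).

def depth_from_disparity(f_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("point at infinity or mismatched feature")
    return f_px * baseline_m / disparity_px

# Example: f = 800 px, 6 cm between the two built-in sensors, 12 px disparity.
print(depth_from_disparity(800.0, 0.06, 12.0))   # -> 4.0 m
```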
Once the user has decided on the virtual reference plane and the display data to place on it, measurement switches from the three-dimensional sensor, such as a depth sensor, to analysis using the image sensor and the angular-rate sensor. The distances between the extracted feature points p1 to p3 and the image sensor are carried over at that point, as the foothold for starting the 2D-based solid analysis.
Thereafter, as shown at 6B in FIG. 6, the positional relationship between the "parent" and the terminal is computed from the on-image distances between the points ip1 to ip3 captured by the image sensor, the actual distances between those points, and the angular velocities of the image sensor about the u, v, and w axes; on that basis the position in space of the "child", the virtual reference plane 00, is fixed.
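As an illustration only: the patent recovers the pose from three known points plus the gyro, and the exact algorithm is not given. The same end result (the image sensor's pose relative to the "parent" points) can be sketched with a standard PnP solve; OpenCV's solver wants at least four correspondences, so a fourth coplanar point is added here, and the intrinsics K and all coordinates are assumed values.

```python
import numpy as np
import cv2  # OpenCV: one standard way to recover camera pose from known points

object_pts = np.array([[0.0, 0.0, 0.0],    # p1..p3 (plus a helper point) with
                       [0.2, 0.0, 0.0],    # known mutual distances, expressed
                       [0.0, 0.2, 0.0],    # in the parent's coordinate frame
                       [0.2, 0.2, 0.0]], dtype=np.float64)
image_pts = np.array([[160.0,  80.0],      # ip1.. as seen by the image sensor
                      [480.0,  80.0],
                      [160.0, 400.0],
                      [480.0, 400.0]], dtype=np.float64)
K = np.array([[800.0, 0.0, 320.0],         # assumed camera intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, distCoeffs=None)
if ok:
    R, _ = cv2.Rodrigues(rvec)             # rotation: parent frame -> camera frame
    print("terminal position in parent frame:", (-R.T @ tvec).ravel())
    # With the sample data above this is approximately [0.1, 0.1, -0.5].
```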
The following explains in more detail the example of a user who wants to read data, such as a magazine page, with much text written over an area larger than the mobile screen.
First the terminal is put into space-measurement mode to obtain three-dimensional depth in the space where it will be used. Before the three-dimensional analysis, things with no possibility of moving, such as the walls, windows, chairs, and furniture of the vehicle interior or room, are distinguished from things that may move, such as human bodies and curtains; this may be estimated from the actual image while consulting the learning results so far, or set manually at the user's discretion.
Next, three-dimensional measurement taking the time axis into account is performed to determine what is actually moving, and a solid judged "not moving" is taken as the reference solid, the "parent".
The parent-terminal positional relationship could continue to be detected with the three-dimensional depth sensor, but given the power that process consumes and the lightness of the data otherwise handled, it is practically preferable to extract feature-point candidates from the image-sensor pictures while correlating the changes in the terminal's three-axis angular velocity along the time axis, and so estimate the positions of the image sensor and the feature points.
Even points initially judged immobile are excluded whenever they contradict other data while the position data is in use.
For that reason it is desirable to list as many usable feature-point candidates on the reference solid as possible at the first measurement.
The above measurement routine is repeated continually and updated in real time to raise accuracy.
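A minimal sketch (assumption) of such an update loop: track many feature candidates, drop any whose measurements contradict the rigid-parent hypothesis, and keep refining the survivors' positions in real time. The tolerance and blending weight are illustrative values.

```python
import numpy as np

def update_candidates(candidates, observed, tol=0.02):
    """candidates: dict name -> running 3-D estimate; observed: dict name ->
    newly triangulated 3-D position this frame (both in the parent frame)."""
    kept = {}
    for name, est in candidates.items():
        if name not in observed:
            kept[name] = est                     # not seen this frame; keep as-is
            continue
        err = np.linalg.norm(observed[name] - est)
        if err > tol:
            continue                             # contradicts rigidity: exclude
        kept[name] = 0.9 * est + 0.1 * observed[name]   # refine the estimate
    return kept

cands = {"p1": np.zeros(3), "p2": np.array([0.2, 0.0, 0.0]), "p3": np.array([0.0, 0.2, 0.0])}
obs = {"p1": np.array([0.001, 0.0, 0.0]), "p2": np.array([0.3, 0.0, 0.0])}  # p2 moved
print(sorted(update_candidates(cands, obs)))   # -> ['p1', 'p3']
```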
Needless to say, an image whose lens distortion, caused by the lens attached to the terminal's built-in image sensor, would produce contradictions in the three-dimensional analysis has that distortion removed, for example by image warping, in a process before the analysis, and the three-dimensional analysis is then performed on the corrected image.
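A minimal sketch of that pre-analysis undistortion step, using OpenCV's standard camera model. The intrinsics K and distortion coefficients below are assumed values; in practice they come from a one-time calibration of the terminal camera.

```python
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.25, 0.08, 0.0, 0.0, 0.0])    # k1, k2, p1, p2, k3

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a captured image
undistorted = cv2.undistort(frame, K, dist)      # warp that removes lens distortion
# The 3-D analysis then runs on `undistorted` instead of the raw frame.
print(undistorted.shape)
```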
Once the "parent" three-dimensional information needed for the virtual space display has been obtained, the user sets the virtual reference plane 00 at any convenient place.
Next, the data the user wants to read, such as text, is displayed at an arbitrary position and size on the terminal's display.
When the "parent" is set on a solid other than part of the user's body, a portion outside the display can be read by moving the terminal in that direction; to enlarge or reduce further, the user can pinch out or pinch in on the terminal, or simply raise or lower it perpendicular to the display surface.
In that case it seems easiest to use if raising the terminal perpendicular to the virtual reference plane 00 enlarges and lowering it reduces, but depending on the user's intent this may be set the other way, so that lowering perpendicular to the virtual reference plane 00 enlarges and raising reduces.
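A minimal sketch (assumption) of mapping the terminal's perpendicular distance from the virtual reference plane to a zoom factor, with the user-selectable direction described above. The reference distance is an illustrative value.

```python
def zoom_factor(distance_m, ref_distance_m=0.35, invert=False):
    """Return a display scale: 1.0 at the reference distance, growing as the
    terminal is raised (or lowered, when invert=True)."""
    ratio = distance_m / ref_distance_m
    return 1.0 / ratio if invert else ratio

print(zoom_factor(0.70))               # raised to 2x distance -> 2.0 (enlarge)
print(zoom_factor(0.70, invert=True))  # inverted preference   -> 0.5 (reduce)
```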
In place of three-dimensional information around the terminal and user, positional information of feature points of the human body whose mutual positions barely change may be added: the inner and outer corners of the eyes, the ears, eyeglasses, or points found on the face and head.
Temporary changes of arrangement caused by changes of facial expression, as seen especially on faces, usually return to the original positions after a while, so the interval may be bridged by interpolating with substitute data; alternatively, if the positional relationship between the outer and inner eye corners, which hardly changes with expression, has not changed, then even if the relationships of other feature points collapse, this may be judged a "temporary collapse of positional relationships due to expression change" and those parts ignored.
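A minimal sketch (assumption) of that expression-change gating: if the eye-corner baseline is stable, landmarks that suddenly disagree with their rest positions are treated as a temporary expression change and ignored. Landmark names and the tolerance are hypothetical.

```python
import numpy as np

def stable_landmarks(rest, current, eye_pair=("eye_outer", "eye_inner"), tol=0.005):
    """rest/current: dict landmark name -> 3-D position (meters)."""
    base_rest = np.linalg.norm(rest[eye_pair[0]] - rest[eye_pair[1]])
    base_now = np.linalg.norm(current[eye_pair[0]] - current[eye_pair[1]])
    if abs(base_now - base_rest) > tol:
        return {}                    # head geometry itself unreliable this frame
    return {k: v for k, v in current.items()
            if np.linalg.norm(v - rest[k]) <= tol}   # drop displaced points

rest = {"eye_outer": np.zeros(3), "eye_inner": np.array([0.03, 0, 0]),
        "mouth_corner": np.array([0.02, -0.05, 0])}
now = dict(rest, mouth_corner=np.array([0.03, -0.06, 0]))   # smiling
print(sorted(stable_landmarks(rest, now)))   # -> ['eye_inner', 'eye_outer']
```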
When a part of the user's body, for example the head or face, is the "parent" 000, the terminal display screen can be scrolled by moving the user's head, and conversely it can likewise be scrolled by moving the terminal.
When generation of a correct three-dimensional space is unstable, because of deficient three-dimensional data acquisition or an unsteady measurement environment, the path (trajectory) along which the display moves may be smoothed so that the terminal's display does not become unstable.
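A minimal sketch (assumption) of one common smoothing: an exponential moving average over the estimated display position, so jitter in the pose estimate does not make the displayed content jump.

```python
import numpy as np

class PathSmoother:
    def __init__(self, alpha=0.2):
        self.alpha = alpha      # smaller alpha -> smoother but laggier path
        self.state = None

    def update(self, position):
        position = np.asarray(position, dtype=float)
        if self.state is None:
            self.state = position
        else:
            self.state = self.alpha * position + (1 - self.alpha) * self.state
        return self.state

s = PathSmoother()
for noisy in ([0.00, 0], [0.10, 0], [0.02, 0], [0.08, 0]):   # jittery estimates
    print(s.update(noisy))    # converges gently instead of jumping
```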
The terminal's computer may also be left to judge automatically, by combining learned image recognition with solid recognition, which things do not change their mutual positions. For example, it may judge that "a solid with a wrinkled shape" may move, while "a flat surface without unevenness", "an object with a 90° angle in the acquired three-dimensional data", or "an object with a straight outline" is unlikely to move.
Body parts, wind-blown curtains, and the like, for which solid information with fixed mutual point positions cannot be obtained, are delicate cases, and identifying them is very important for rapid acquisition of three-dimensional information.
In general, solid data for a small surface changes shape over time less readily than solid data for a large surface: a human face easily loses its fixed solid data through changes of expression, whereas the solid shape of the ear alone hardly changes. Likewise, even within the same body, the solid shape of, say, a knee when seated hardly changes; it is useful to reflect such facts when acquiring the data that will serve as the "parent".
When the user looks at the terminal while walking, a "walking smartphone" state may be inferred from the walking pattern obtained from an acceleration sensor or the like, and from the small amount of information judged to be still life by one or more cameras; for safety, a message to that effect may be displayed and the terminal's display interrupted.
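A minimal sketch (an assumption, not the patent's detector) of flagging such a state from the accelerometer: walking shows up as a strong periodic component around 1.5-2.5 Hz in the vertical acceleration. The band and threshold are illustrative values.

```python
import numpy as np

def looks_like_walking(accel_z, sample_hz=50.0, band=(1.2, 3.0), power_thresh=0.5):
    a = np.asarray(accel_z) - np.mean(accel_z)        # remove the gravity offset
    spectrum = np.abs(np.fft.rfft(a)) ** 2
    freqs = np.fft.rfftfreq(len(a), d=1.0 / sample_hz)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    # Fraction of non-DC power in the walking band.
    return spectrum[in_band].sum() / spectrum[1:].sum() > power_thresh

t = np.arange(0, 4, 1 / 50.0)
walking = 9.8 + 1.5 * np.sin(2 * np.pi * 2.0 * t)     # ~2 Hz step bounce
sitting = 9.8 + 0.05 * np.random.randn(len(t))
print(looks_like_walking(walking), looks_like_walking(sitting))  # -> True False
```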
The most likely use of the present invention is in mobile terminals, both for viewing data larger than the display screen finely and quickly, and, because scrolling is possible by moving the face, as a terminal operating means for people with limited use of their hands.
000 Basic solid ("parent")
001 Mobile terminal
100 Virtual reference plane ("child"), fixed relative to the reference solid ("parent"), and its parallel projection region
101 Portion displayed where the terminal's display surface intersects the parallel projection region
 00 Virtual reference plane ("child"), initially set by the user and fixed relative to the user's face or head (the reference solid, "parent")
 10, 10a, 10b, 10c Display examples shown on the mobile terminal
 11 State displayed at high density on the mobile terminal screen
 20 Display example on a mobile terminal not parallel to the virtual reference plane ("child")
V10 Display screens 10, 10a, 10b, 10c as seen by the user
V20 Display screen 20 as seen by the user
  V Line of sight
  1 Line of sight the user can move without moving the face
 1A Virtual reference plane ("child") and parallel projection region when the user looks straight ahead
 1B Virtual reference plane ("child") and parallel projection region when the user looks down
 3A Perspective view of projection from the virtual reference plane ("child") onto the terminal's display surface
 3B The user's view of the projection from the virtual reference plane ("child") onto the terminal's display surface
 3C High-density display showing the full screen content that should appear on the terminal's display surface
 4A Perspective view of projection from the virtual reference plane ("child") onto the display surface of a tilted terminal
 4B The user's view of the projection from the virtual reference plane ("child") onto the display surface of a tilted terminal
 5A Perspective view of projection from the virtual reference plane ("child") onto the display surface of a terminal tilted about all three axes
 5B The user's view of the projection from the virtual reference plane ("child") onto the display surface of a terminal tilted about all three axes
 6A Diagram of the angular-rate sensor, the image sensor, and points whose mutual distances are known and which permit three-dimensional analysis
 6B Image, captured by image sensor 30, of points in three-dimensional space whose mutual distances are known
 p1, p2, p3 Points in three-dimensional space whose mutual distances are known
 ip1, ip2, ip3 Image data of the captured points in three-dimensional space whose mutual distances are known
 u, v, w Angle axes
 30 Image sensor
301 Angle of view

Claims (4)

  1. A terminal display method for a terminal having a computer and a display device, in which:
     a three-dimensional sensor built into the terminal, or an external three-dimensional sensor connected to the terminal, performs three-dimensional measurement in real time, taking as the reference solid either a solid around the terminal that the terminal user or the computer has designated as the "parent", or a part of the terminal user's own body;
     using the three-dimensional measurement data, the "parent" is associated with a virtual reference plane that the terminal user has designated as its "child", and the positional relationship between the "parent" and the "child" is fixed;
     a display image of a position and size determined by the terminal user is virtually arranged on the virtual reference plane;
     the display image is virtually projected along the normal direction of the virtual reference plane; and
     the display image that would be virtually projected onto the portion where this projection region intersects the display screen of the terminal is displayed on the display screen of the terminal.
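    The parallel-projection rule of claim 1 can be illustrated with a short geometric sketch. This is a minimal, non-authoritative Python/NumPy illustration, not the patented implementation: each point of the terminal's display screen, whose pose in the "parent" frame would come from the real-time three-dimensional measurement, is projected along the plane normal onto the virtual reference plane ("child"), and pixels whose projection lands inside the virtually arranged display image show the corresponding part of that image. All names and numeric values below (`project_screen_to_plane`, the screen layout, the plane axes) are assumptions made for illustration.

```python
# Minimal sketch of the claim-1 projection rule, assuming NumPy only.
# All coordinates are in the "parent" (reference solid) frame; the
# screen geometry would come from the terminal's real-time 3D
# measurement and is hypothetical here.
import numpy as np

def project_screen_to_plane(screen_pts, plane_origin, u_axis, v_axis, normal):
    """Project 3D screen points along the plane normal onto the virtual
    reference plane ("child"); return (u, v) coordinates in the plane."""
    d = screen_pts - plane_origin
    # Parallel projection: drop the component along the plane normal.
    in_plane = d - np.outer(d @ normal, normal)
    return np.stack([in_plane @ u_axis, in_plane @ v_axis], axis=-1)

def visible_texels(uv, image_w, image_h):
    """Mask of screen pixels whose parallel projection lands inside the
    display image virtually arranged on the plane (claim-1 intersection)."""
    return ((uv[..., 0] >= 0) & (uv[..., 0] < image_w) &
            (uv[..., 1] >= 0) & (uv[..., 1] < image_h))

# --- toy usage: a 4x3 "screen" tilted against the virtual plane --------
normal = np.array([0.0, 0.0, 1.0])      # "child" plane normal
u_axis = np.array([1.0, 0.0, 0.0])
v_axis = np.array([0.0, 1.0, 0.0])
origin = np.zeros(3)

xs, ys = np.meshgrid(np.linspace(-0.1, 0.5, 4), np.linspace(-0.1, 0.3, 3))
screen = np.stack([xs, ys, 0.2 + 0.1 * xs], axis=-1)   # tilted screen pixels

uv = project_screen_to_plane(screen.reshape(-1, 3), origin, u_axis, v_axis, normal)
mask = visible_texels(uv, image_w=0.4, image_h=0.4)
print(mask.reshape(3, 4))               # which screen pixels show the image
```

    In this reading, tilting the terminal (as in reference numerals 4A-5B) changes only `screen_pts`, so the image appears fixed to the virtual plane while the terminal acts as a movable window onto it.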
  2. A terminal display method for a terminal having a computer and a display device, in which:
     after a three-dimensional sensor built into the terminal, or an external three-dimensional sensor connected to the terminal, has performed a three-dimensional measurement of a solid around the terminal or of a part of the terminal user's own body,
     three-dimensional analysis is performed in real time, based on the information of the reference solid that the terminal user or the computer has designated as the "parent", using images obtained from an image sensor built into the terminal together with data obtained from a three-axis angular velocity sensor fixed to that image sensor;
     using the three-dimensional analysis data, the "parent" is associated with a virtual reference plane that the user has designated as its "child", and the positional relationship of the "child" to the "parent" is fixed;
     a display image of a position and size determined by the terminal user is virtually arranged on the virtual reference plane;
     the display image is virtually projected along the normal direction of the virtual reference plane; and
     the display image that would be virtually projected onto the portion where this projection region intersects the display screen of the terminal is displayed on the display screen of the terminal.
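    Claim 2 replaces continuous depth sensing with an image sensor plus a three-axis angular velocity sensor fixed to it. Below is a minimal sketch of such a tracking loop under stated assumptions: `sensor_stream`, the 200 Hz rate, and the stubbed image-based correction are illustrative, not the disclosed method. Orientation is propagated by integrating the gyro with the Rodrigues formula; each camera frame would then correct the drift against the stored "parent" model.

```python
# Sketch of a gyro-plus-image tracking loop (assumed reading of claim 2).
import numpy as np

def gyro_rotation(omega, dt):
    """Incremental rotation from an angular velocity (rad/s) over dt,
    via the Rodrigues formula."""
    angle = np.linalg.norm(omega) * dt
    if angle < 1e-12:
        return np.eye(3)
    k = np.asarray(omega) / np.linalg.norm(omega)
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def sensor_stream(n=5):
    """Stub for the time-synchronized gyro + image feed (hypothetical)."""
    rng = np.random.default_rng(0)
    for _ in range(n):
        yield rng.normal(scale=0.1, size=3), None   # (omega, image frame)

R = np.eye(3)   # image-sensor orientation relative to the "parent"
for omega, frame in sensor_stream():
    R = R @ gyro_rotation(omega, dt=1 / 200)        # 200 Hz gyro assumed
    # A real implementation would match `frame` features against the
    # stored 3D data of the "parent" here and correct drift in R (and
    # the translation, which the gyro alone cannot observe).
print(R)
```

    The design point, consistent with the power-saving motivation, is that the depth sensor runs only for the initial measurement; the cheap gyro-plus-camera loop maintains the pose afterwards.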
  3. A method of detecting the positional relationship in three-dimensional space between an image sensor and three or more photographable points in three-dimensional space lying on a solid, the distances between the points being known, in which:
     the image sensor, which photographs the points in real time, and a sensor that measures the three-axis angular velocity of the image sensor in real time are synchronized on the time axis; and
     the positional relationship between the three or more points and the image sensor is detected from an image in which the three or more points appear and from the angular velocity of the image sensor that captured that image.
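    One concrete (assumed) reading of claim 3: if the synchronized gyro supplies the sensor's orientation, each imaged point (ip1, ip2, ip3) becomes a bearing ray, and the unknown depths of the points p1, p2, p3 along their rays must reproduce the known mutual distances — the classic three-point resection system. The sketch below, using NumPy and SciPy's `least_squares`, solves that small system; the function names and toy geometry are illustrative, and three-point resection can in general admit multiple solutions.

```python
# Depths of three known-distance points from their bearing rays
# (assumed reading of claim 3; bearings would come from the image
# points and the gyro-derived orientation).
import numpy as np
from scipy.optimize import least_squares

def solve_depths(bearings, d12, d13, d23, s0=1.0):
    """Depths s1, s2, s3 of three points along unit bearing rays from
    the sensor, given the known mutual distances."""
    b1, b2, b3 = bearings

    def residuals(s):
        s1, s2, s3 = s
        return [np.linalg.norm(s1 * b1 - s2 * b2) - d12,
                np.linalg.norm(s1 * b1 - s3 * b3) - d13,
                np.linalg.norm(s2 * b2 - s3 * b3) - d23]

    return least_squares(residuals, x0=[s0] * 3, bounds=(1e-6, np.inf)).x

# toy check: place three points, then recover their ranges from bearings
pts = np.array([[0.0, 0.0, 2.0], [0.3, 0.0, 2.2], [0.0, 0.4, 1.9]])
bearings = pts / np.linalg.norm(pts, axis=1, keepdims=True)
d = lambda i, j: np.linalg.norm(pts[i] - pts[j])
print(solve_depths(bearings, d(0, 1), d(0, 2), d(1, 2)))
# expected for this geometry: roughly [2.0, 2.22, 1.94] (the true ranges)
```

    With the depths known, the points' positions relative to the sensor follow directly as s_i * b_i, which is the positional relationship the claim detects.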
  4. A method of acquiring in real time the positional relationship in three-dimensional space between a reference solid and a terminal, using two or more image sensors built into and fixed in the terminal and a three-axis angular velocity sensor fixed in the same terminal, from the parallax in the image data in which the two or more image sensors have captured three or more points on the reference solid, together with the angular velocity information obtained from the three-axis angular velocity sensor.
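    For claim 4, a minimal sketch under a simplifying rectified-stereo assumption (not the disclosed implementation): two image sensors with focal length f and baseline B turn the parallax (disparity) of each matched point into a depth z = f·B/disparity, the pixel coordinates give x and y, and the three-axis gyro supplies the orientation used to express the triangulated points relative to the reference solid ("parent"). All camera parameters and pixel coordinates below are made up for illustration, and the full rigid alignment (including translation) to the stored reference solid is omitted.

```python
# Stereo parallax + gyro sketch (assumed reading of claim 4).
import numpy as np

def triangulate_rectified(xl, xr, y, f, B):
    """Back-project matched pixel pairs (xl, xr on the shared row y) from
    a rectified stereo pair into sensor-frame 3D points."""
    disparity = xl - xr
    z = f * B / disparity            # depth from stereo parallax
    x = xl * z / f
    y3 = y * z / f
    return np.stack([x, y3, z], axis=-1)

# matched pixel coordinates of three points on the reference solid
xl = np.array([120.0, 80.0, -40.0])   # left-image columns (pixels)
xr = np.array([100.0, 55.0, -70.0])   # right-image columns
y = np.array([10.0, -30.0, 25.0])     # shared rows after rectification
P_sensor = triangulate_rectified(xl, xr, y, f=800.0, B=0.06)

R_gyro = np.eye(3)                    # orientation from the 3-axis gyro
P_parent = P_sensor @ R_gyro.T        # rotate into the "parent" frame
print(P_parent)
```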
PCT/JP2018/014029 2018-03-31 2018-03-31 Terminal display image display method WO2019187167A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-070487 2018-03-31
JP2018070487A JP2019185076A (en) 2018-03-31 2018-03-31 Method for displaying display image on terminal

Publications (1)

Publication Number Publication Date
WO2019187167A1 (en) 2019-10-03

Family

ID=68057989

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/014029 WO2019187167A1 (en) 2018-03-31 2018-03-31 Terminal display image display method

Country Status (2)

Country Link
JP (1) JP2019185076A (en)
WO (1) WO2019187167A1 (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003281504A (en) * 2002-03-22 2003-10-03 Canon Inc Image pickup portion position and attitude estimating device, its control method and composite reality presenting system
JP2012208260A (en) * 2011-03-29 2012-10-25 Sony Corp Information processing terminal, information processing method, and program
JP2016514384A (en) * 2013-01-30 2016-05-19 クアルコム,インコーポレイテッド Real-time 3D reconstruction using power efficient depth sensor
JP2017508171A (en) * 2013-11-25 2017-03-23 クアルコム,インコーポレイテッド Power efficient use of depth sensors on mobile devices
JP2015141700A (en) * 2014-01-30 2015-08-03 京セラ株式会社 Display device and display method
JP2015176246A (en) * 2014-03-13 2015-10-05 株式会社Nttドコモ display device and program
JP2016146103A (en) * 2015-02-09 2016-08-12 カシオ計算機株式会社 Display device, information display method, and information display program

Also Published As

Publication number Publication date
JP2019185076A (en) 2019-10-24

Similar Documents

Publication Publication Date Title
US20210407203A1 (en) Augmented reality experiences using speech and text captions
US20200209961A1 (en) Visibility improvement method based on eye tracking, machine-readable storage medium and electronic device
US9411413B2 (en) Three dimensional user interface effects on a display
US20230082063A1 (en) Interactive augmented reality experiences using positional tracking
US11854147B2 (en) Augmented reality guidance that generates guidance markers
US11954268B2 (en) Augmented reality eyewear 3D painting
US20210407205A1 (en) Augmented reality eyewear with speech bubbles and translation
US20210409628A1 (en) Visual-inertial tracking using rolling shutter cameras
US11587255B1 (en) Collaborative augmented reality eyewear with ego motion alignment
US11741679B2 (en) Augmented reality environment enhancement
KR102159767B1 (en) Visibility improvement method based on eye tracking, machine-readable storage medium and electronic device
US20210406542A1 (en) Augmented reality eyewear with mood sharing
WO2019187167A1 (en) Terminal display image display method
US20230007227A1 (en) Augmented reality eyewear with x-ray effect
KR20230124077A (en) Augmented reality precision tracking and display
KR20200111144A (en) Visibility improvement method based on eye tracking, machine-readable storage medium and electronic device
US20240144611A1 (en) Augmented reality eyewear with speech bubbles and translation
US20240069642A1 (en) Scissor hand gesture for a collaborative object
US20240036336A1 (en) Magnified overlays correlated with virtual markers
US20240079031A1 (en) Authoring tools for creating interactive ar experiences
US20240071020A1 (en) Real-world responsiveness of a collaborative object
KR20200127312A (en) Apparatus and method for shopping clothes using holographic images
KR20160014091A (en) Holography touch technology and Projector touch technology
KR20160085416A (en) Holography touch method and Projector touch method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18912168

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 18912168

Country of ref document: EP

Kind code of ref document: A1