CN111522442A - interaction method and device for ARKit augmented reality environment on iOS device - Google Patents

Info

Publication number
CN111522442A
Authority
CN
China
Prior art keywords
virtual object
user
rear camera
gesture
virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010275962.3A
Other languages
Chinese (zh)
Inventor
周红桥
魏一雄
张燕龙
郭磊
陈亮希
周金文
田富君
陈兴玉
张红旗
李广
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 38 Research Institute
Original Assignee
CETC 38 Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 38 Research Institute filed Critical CETC 38 Research Institute
Priority to CN202010275962.3A priority Critical patent/CN111522442A/en
Publication of CN111522442A publication Critical patent/CN111522442A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107Static hand or arm
    • G06V40/113Recognition of static hand signs

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the invention provide an interaction method and device for an ARKit augmented reality environment on an iOS device, wherein the method comprises the following steps: 1) acquiring the current position and orientation information of the rear camera of the iOS device; 2) acquiring a target sight line of the rear camera, wherein the target sight line is a line of sight preset by the user within the viewing angle range of the rear camera; 3) judging whether a virtual object is focused according to the intersection state between the target sight line and the bounding box of the virtual object, and applying a visual mark, wherein the virtual object is a virtual object within the viewing angle range corresponding to the current position and orientation information of the rear camera. By applying the embodiments of the invention, whether a virtual object displayed on the mobile terminal is selected can be judged from this result, and the selected object can then be operated on with gestures.

Description

interaction method and device for ARKit augmented reality environment on iOS device
Technical Field
The invention relates to the technical field of virtual reality, and in particular to an interaction method for an ARKit augmented reality environment on an iOS device.
Background
Augmented Reality (AR) is a technology that "seamlessly" integrates real-world information with virtual-world information. Built around computer-based recognition of the real-world scene, AR presents virtual content against the real world as a background, thereby enhancing the real world as perceived by the user. In practice, auditory, gustatory, olfactory and tactile stimuli can also be generated and fused into the real scene.
AR requires specialized hardware and software to achieve this enhanced experience of virtual-real fusion. Typical AR devices include Google Glass, released by Google in 2012, Microsoft HoloLens, released by Microsoft in 2015, and Magic Leap One, announced by Magic Leap in 2017. In contrast to such dedicated AR devices, Google and Apple have focused on the Android and iOS devices (mobile phones and tablets) already in wide use: by launching the ARCore and ARKit development platforms, they let developers conveniently create AR applications for the huge existing base of Android and iOS mobile devices.
However, the inventors have found that as the user changes the pose of the mobile device, all real and virtual objects within the viewing angle range of the rear camera are displayed on the device screen. Because there is no way to focus on an individual virtual object, the user cannot select a specific virtual object to operate on.
Disclosure of Invention
The technical problem to be solved by the present invention is how to "focus" on a specific virtual object.
The invention solves the technical problems through the following technical means:
the invention provides an interaction method of an ARKit augmented reality environment on iOS equipment, which comprises the following steps:
1) acquiring current position and orientation information of a rear camera of the iOS device;
2) acquiring a target sight of the rear camera, wherein the target sight is a sight preset by a user within a visual angle range of the rear camera;
3) and judging whether the virtual object is focused or not according to the intersection state of the target sight line and the bounding box of the virtual object, and carrying out visual marking, wherein the virtual object is a virtual object within a visual angle range corresponding to the current position and orientation information of the rear camera.
Optionally, the step 3) includes:
acquiring all virtual objects in a visual angle range according to the current position and the visual angle range corresponding to the orientation information of the rear camera, taking the rear camera as a starting point, taking the extending direction of a target sight line of the rear camera as a direction, emitting virtual rays outwards, and judging which bounding box of the virtual objects the virtual rays intersect;
determining that the virtual object is focused, and/or displaying a "focus" marker, upon intersection of the virtual ray with a bounding box of the virtual object;
determining that the virtual object is unfocused when the virtual ray does not intersect a bounding box of the virtual object.
Through the 'focusing' mark, a virtual object focused on the screen of the mobile device terminal can be determined, so that the virtual object can be further operated by gestures.
Optionally, the method further includes:
capturing user-defined gesture instances input by the user with the camera, acquiring the meaning corresponding to each gesture instance, and labeling each gesture instance with its meaning to obtain a plurality of gesture samples;
training a first machine learning model using the gesture samples.
Optionally, the gesture instances include:
one or a combination of gestures demonstrated by the user with a hand, gestures drawn by the user on paper, and gestures printed by the user on paper.
Optionally, the gesture samples corresponding to each label include: gesture instances captured from at least two angles.
Optionally, the method further includes:
sensing a touch trajectory input by the user on the screen, acquiring the meaning corresponding to the touch trajectory, and labeling the touch trajectory with that meaning to obtain a plurality of trajectory samples;
training a second machine learning model using the trajectory samples.
The embodiment of the invention provides an interaction device for an ARKit augmented reality environment on an iOS device, comprising:
an acquisition module for acquiring the current position and orientation information of the rear camera of the iOS device;
an obtaining module for acquiring a target sight line of the rear camera, wherein the target sight line is a line of sight preset by the user within the viewing angle range of the rear camera;
and a judging module for judging whether a virtual object is focused according to the intersection state between the target sight line and the bounding box of the virtual object, and applying a visual mark, wherein the virtual object is a virtual object within the viewing angle range corresponding to the current position and orientation information of the rear camera.
Optionally, the judging module is configured to:
acquire all virtual objects within the viewing angle range corresponding to the current position and orientation information of the rear camera, emit a virtual ray outward with the rear camera as its starting point and the extending direction of the target sight line as its direction, and judge which virtual object's bounding box the virtual ray intersects;
when the virtual ray intersects the bounding box of a virtual object, determine that the virtual object is focused and display a "focus" mark;
when the virtual ray does not intersect the bounding box of a virtual object, determine that the virtual object is not focused.
Optionally, the device further comprises:
a labeling module for capturing user-defined gesture instances input by the user with the camera, acquiring the meaning corresponding to each gesture instance, and labeling each gesture instance with its meaning to obtain a plurality of gesture samples;
a first training module for training a first machine learning model using the gesture samples.
Optionally, the gesture instances include:
one or a combination of gestures demonstrated by the user with a hand, gestures drawn by the user on paper, and gestures printed by the user on paper.
Optionally, the gesture samples corresponding to each label include: gesture instances captured from at least two angles.
Optionally, the device further comprises:
a sensing module for sensing a touch trajectory input by the user on the screen, acquiring the meaning corresponding to the touch trajectory, and labeling the touch trajectory with that meaning to obtain a plurality of trajectory samples;
a second training module for training a second machine learning model using the trajectory samples.
The advantages of the invention are as follows:
By applying the embodiments of the invention, a geometric algorithm can determine, from the intersection state between the target sight line and the bounding box of a virtual object, whether the virtual object lies on the target sight line, and from this result whether the virtual object is selected.
Drawings
Fig. 1 is a schematic flowchart of an interaction method for an ARKit augmented reality environment on an iOS device according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an interaction apparatus in an ARKit augmented reality environment on an iOS device according to an embodiment of the present invention;
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the embodiments of the present invention, and it is obvious that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
Fig. 1 is a schematic flowchart of an interaction method for an ARKit augmented reality environment on an iOS device according to an embodiment of the present invention, as shown in fig. 1, the method includes:
s101: acquiring current position and orientation information of a rear camera of the iOS device;
acquiring pose and orientation information of a virtual object from terminal equipment, such as a gyroscope and other equipment on a mobile phone, wherein the pose information comprises: the position of the mobile phone, the inclination angle of the mobile phone and the inclination direction of the mobile phone; the orientation information includes: the direction of the viewing angle centerline of the camera, etc.
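A minimal Swift sketch of this step, assuming an ARKit session is already running (the helper name and return type are illustrative, not prescribed by the embodiment):

```swift
import ARKit
import simd

// Sketch: read the rear camera's current position and orientation from an ARKit session.
// ARKit itself fuses the camera images with the gyroscope/IMU readings mentioned above.
func currentCameraPose(in session: ARSession) -> (position: simd_float3, forward: simd_float3)? {
    guard let frame = session.currentFrame else { return nil }
    let transform = frame.camera.transform                        // 4x4 world transform of the camera
    let position = simd_make_float3(transform.columns.3.x,
                                    transform.columns.3.y,
                                    transform.columns.3.z)        // position of the device/camera
    // The camera looks down its local -Z axis, so the negated third column gives the
    // direction of the viewing-angle centerline in world coordinates.
    let forward = -simd_make_float3(transform.columns.2.x,
                                    transform.columns.2.y,
                                    transform.columns.2.z)
    return (position, simd_normalize(forward))
}
```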
S102: acquiring a target sight line of the rear camera, wherein the target sight line is a line of sight preset by the user within the viewing angle range of the rear camera;
The target sight line is a line of sight preset by the user on the terminal device, such as a mobile phone; it either lies along the normal direction within the viewing angle range of the rear camera or makes a preset included angle with that normal direction, and the included angle can be adjusted according to actual requirements.
S103: judging whether a virtual object is focused according to the intersection state between the target sight line and the bounding box of the virtual object, and applying a visual mark, wherein the virtual object is a virtual object within the viewing angle range corresponding to the current position and orientation information of the rear camera.
For example, all virtual objects within the viewing angle range corresponding to the current position and orientation information of the rear camera can be acquired. A virtual ray is emitted outward with the rear camera as its starting point and the extending direction of the centerline of the target sight line as its direction. The virtual ray consists of a sequence of points, and the ray is judged to intersect the bounding box of a virtual object when the coordinates of a point on the ray fall inside the region enclosed by the points of that bounding box;
when the virtual ray intersects the bounding box of a virtual object, the virtual object is judged to be focused and a "focus" mark, such as a ring, is displayed;
when the virtual ray does not intersect the bounding box of a virtual object, the virtual object is judged to be not focused and no mark is displayed.
In practical applications, the "focus" mark may be displayed on the surface of the virtual object facing the rear camera, directly above the object, and so on.
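A Swift sketch of this intersection test, assuming an ARSCNView-based scene, the currentCameraPose helper sketched under S101, a naming convention in which virtual-object nodes are prefixed "virtual", and a hypothetical "focusRing" child node used as the visual mark:

```swift
import ARKit
import SceneKit

// Sketch: cast a virtual ray from the rear camera along the target sight line and
// mark the virtual objects whose bounding boxes it crosses as "focused".
func updateFocus(in sceneView: ARSCNView, maxDistance: Float = 10.0) {
    guard let pose = currentCameraPose(in: sceneView.session) else { return }
    let from = SCNVector3(pose.position.x, pose.position.y, pose.position.z)
    let tip = pose.position + pose.forward * maxDistance
    let to = SCNVector3(tip.x, tip.y, tip.z)

    // Restricting the hit test to bounding boxes matches the bounding-box
    // intersection judgment described above.
    let hits = sceneView.scene.rootNode.hitTestWithSegment(
        from: from, to: to,
        options: [SCNHitTestOption.boundingBoxOnly.rawValue: true])

    for node in sceneView.scene.rootNode.childNodes where node.name?.hasPrefix("virtual") == true {
        let isFocused = hits.contains { $0.node === node || $0.node.parent === node }
        // Show or hide the ring marker child node as the visual "focus" mark.
        node.childNode(withName: "focusRing", recursively: false)?.isHidden = !isFocused
    }
}
```

An equivalent ray/axis-aligned bounding box (slab) test could also be written by hand against each node's boundingBox if SceneKit's segment hit test is not used.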
By applying the embodiments of the invention, a geometric algorithm can determine, from the intersection state between the target sight line and the bounding box of a virtual object, whether the virtual object lies on the target sight line, and from this result whether the virtual object is selected, that is, whether it is focused.
Example 2
Embodiment 2 of the invention adds the following steps to Embodiment 1:
capturing user-defined gesture instances input by the user with the camera, acquiring the meaning corresponding to each gesture instance, and labeling each gesture instance with its meaning to obtain a plurality of gesture samples;
Illustratively, the sources of gesture instances may include one or a combination of gestures demonstrated by the user with a hand, gestures drawn by the user on paper, and gestures printed by the user on paper. For gestures demonstrated with a hand, the user makes a self-defined gesture within the shooting range of the camera, and the camera captures an image of that gesture instance.
Gesture instances fall into two classes: standard gestures and custom gestures, wherein
standard gestures may include predefined touch gestures such as a single-finger tap, a double tap and a simultaneous multi-finger tap;
custom gestures may include, for example, drawing a circle, an X, a check mark or a cross (O, X, V, Cross, and so on).
Each gesture instance is labeled with its meaning; the gesture instance together with its meaning forms a gesture sample, and the gesture samples are used to train the first machine learning model. In general, the first machine learning model may be an existing gesture recognition model or an existing image classification model.
In use, the user makes a self-defined gesture; the iOS device acquires the gesture information input by the user, recognizes it with the first machine learning model, and then executes the corresponding operation.
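Assuming the first machine learning model has been exported as a Core ML image classifier whose labels are the gesture meanings (the model handle and label names are assumptions), recognition of a rear-camera frame might be sketched as follows:

```swift
import Vision
import CoreML

// Sketch: classify one camera frame against the trained gesture model and hand the
// best label (the gesture's meaning) and its confidence back to the caller.
func classifyGesture(in pixelBuffer: CVPixelBuffer,
                     using model: VNCoreMLModel,
                     onResult: @escaping (String, Float) -> Void) {
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let best = (request.results as? [VNClassificationObservation])?.first else { return }
        onResult(best.identifier, best.confidence)   // e.g. ("circle", 0.92)
    }
    request.imageCropAndScaleOption = .centerCrop
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
}
```

The pixel buffer can be taken from the ARKit session's current frame (ARFrame.capturedImage), and the returned label is then mapped to the operation to execute on the currently focused virtual object.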
The embodiment of the invention can learn the user-defined gesture by using a machine learning technology, thereby providing a gesture interaction function, expanding the interaction capability of the ARKit augmented reality environment and enhancing the user interaction experience.
Further, in order to improve the robustness of the system, the gesture samples corresponding to each label include: gesture instances captured from at least two angles.
For example, for a gesture of drawing a circle in the air, the circle the user draws may differ from one occasion to the next, and the pose of the phone may also differ, which can bias the recognition result of the first machine learning model; collecting gesture instances from different angles as samples therefore improves the robustness of the model.
Example 3
Embodiment 3 of the invention adds the following steps to Embodiment 1:
sensing a touch trajectory input by the user on the screen, acquiring the meaning corresponding to the touch trajectory, and labeling the touch trajectory with that meaning to obtain a plurality of trajectory samples; for example, the user draws a custom gesture within a grid displayed on the iOS device;
training a second machine learning model using the trajectory samples.
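A minimal sketch of collecting such touch trajectories (the class name and callback are illustrative assumptions); each finished stroke, tagged with the meaning supplied by the user, becomes one trajectory sample:

```swift
import UIKit

// Sketch: record the touch trajectory the user draws on the screen and hand the
// finished stroke to the caller, which labels it with its meaning and stores it
// as a trajectory sample for the second machine learning model.
final class TrajectoryRecorder: UIView {
    private(set) var currentStroke: [CGPoint] = []
    var onStrokeFinished: (([CGPoint]) -> Void)?

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        currentStroke = []                                   // start a new trajectory
    }
    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        currentStroke.append(touch.location(in: self))       // sample the trajectory point
    }
    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        onStrokeFinished?(currentStroke)                     // one labeled sample per stroke
    }
}
```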
By applying this embodiment of the invention, custom gestures drawn by the user on the iOS device screen can be recognized, improving the interactivity of the device.
Example 5
Corresponding to embodiment 1 of the present invention, embodiment 5 of the present invention provides an interactive apparatus for an ARKit augmented reality environment on an iOS device, where the apparatus includes:
an acquisition module 201 for acquiring the current position and orientation information of the rear camera of the iOS device;
an obtaining module 202 for acquiring a target sight line of the rear camera, wherein the target sight line is a line of sight preset by the user within the viewing angle range of the rear camera;
and a judging module 203 for judging whether a virtual object is focused according to the intersection state between the target sight line and the bounding box of the virtual object, and applying a visual mark, wherein the virtual object is a virtual object within the viewing angle range corresponding to the current position and orientation information of the rear camera.
By applying this embodiment of the invention, a geometric algorithm can determine, from the intersection state between the target sight line and the bounding box of a virtual object, whether the virtual object lies on the target sight line, and from this result whether the virtual object is selected.
In a specific implementation manner of the embodiment of the present invention, the judging module 203 is configured to:
acquire all virtual objects within the viewing angle range corresponding to the current position and orientation information of the rear camera, emit a virtual ray outward with the rear camera as its starting point and the extending direction of the target sight line as its direction, and judge which virtual object's bounding box the virtual ray intersects;
when the virtual ray intersects the bounding box of a virtual object, determine that the virtual object is focused and display a "focus" mark;
when the virtual ray does not intersect the bounding box of a virtual object, determine that the virtual object is not focused and display no mark.
In another specific implementation manner of the embodiment of the present invention, the judging module 203 is configured to:
form a point cloud on the surface of the virtual object from the positions of the points on the virtual object's bounding box that the virtual rays strike, and obtain the portion of the virtual object lying within the view range of the rear camera from the coordinates of the bounding-box points corresponding to the point cloud.
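A rough sketch of this variant, reusing the segment hit test and camera-pose helper from the sketches above (grid density and naming are assumptions): segments are cast through a grid of screen points, and the bounding-box hit points belonging to the given virtual object are collected as the point cloud of its visible portion.

```swift
import ARKit
import SceneKit

// Sketch: cast segments through a grid of screen points and gather the bounding-box
// hit points of the given virtual object as a point cloud of its visible portion.
func visibleHitPoints(of node: SCNNode, in sceneView: ARSCNView, grid: Int = 16) -> [SCNVector3] {
    guard let pose = currentCameraPose(in: sceneView.session) else { return [] }
    let origin = SCNVector3(pose.position.x, pose.position.y, pose.position.z)
    let size = sceneView.bounds.size
    var cloud: [SCNVector3] = []
    for ix in 0..<grid {
        for iy in 0..<grid {
            // Screen point mapped onto the far clipping plane (z = 1 in unproject space).
            let screen = SCNVector3(Float(size.width) * Float(ix) / Float(grid - 1),
                                    Float(size.height) * Float(iy) / Float(grid - 1),
                                    1.0)
            let far = sceneView.unprojectPoint(screen)
            let hits = sceneView.scene.rootNode.hitTestWithSegment(
                from: origin, to: far,
                options: [SCNHitTestOption.boundingBoxOnly.rawValue: true])
            if let hit = hits.first(where: { $0.node === node || $0.node.parent === node }) {
                cloud.append(hit.worldCoordinates)            // point on the bounding box
            }
        }
    }
    return cloud
}
```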
In a specific implementation manner of the embodiment of the present invention, the apparatus further includes:
a labeling module for capturing user-defined gesture instances input by the user with the camera, acquiring the meaning corresponding to each gesture instance, and labeling each gesture instance with its meaning to obtain a plurality of gesture samples;
a first training module for training a first machine learning model using the gesture samples.
In a specific implementation manner of the embodiment of the present invention, the gesture instances include:
one or a combination of gestures demonstrated by the user with a hand, gestures drawn by the user on paper, and gestures printed by the user on paper.
In a specific implementation manner of the embodiment of the present invention, the gesture samples corresponding to each label include: gesture instances captured from at least two angles.
In a specific implementation manner of the embodiment of the present invention, the apparatus further includes:
a sensing module for sensing a touch trajectory input by the user on the screen, acquiring the meaning corresponding to the touch trajectory, and labeling the touch trajectory with that meaning to obtain a plurality of trajectory samples;
a second training module for training a second machine learning model using the trajectory samples.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. An interaction method for an ARKit augmented reality environment on an iOS device, the method comprising:
1) acquiring the current position and orientation information of the rear camera of the iOS device;
2) acquiring a target sight line of the rear camera, wherein the target sight line is a line of sight preset by the user within the viewing angle range of the rear camera;
3) judging whether a virtual object is focused according to the intersection state between the target sight line and the bounding box of the virtual object, and applying a visual mark, wherein the virtual object is a virtual object within the viewing angle range corresponding to the current position and orientation information of the rear camera.
2. The method of claim 1, wherein the step 3) comprises:
acquiring all virtual objects within the viewing angle range corresponding to the current position and orientation information of the rear camera, emitting a virtual ray outward with the rear camera as its starting point and the extending direction of the target sight line as its direction, and judging which virtual object's bounding box the virtual ray intersects;
when the virtual ray intersects the bounding box of a virtual object, determining that the virtual object is focused, and/or displaying a "focus" mark;
when the virtual ray does not intersect the bounding box of a virtual object, determining that the virtual object is not focused.
3. The method of claim 1, further comprising:
capturing user-defined gesture instances input by the user with the camera, acquiring the meaning corresponding to each gesture instance, and labeling each gesture instance with its meaning to obtain a plurality of gesture samples;
training a first machine learning model using the gesture samples.
4. The method of claim 3, wherein the gesture instances comprise:
one or a combination of gestures demonstrated by the user with a hand, gestures drawn by the user on paper, and gestures printed by the user on paper.
5. The method as claimed in claim 3, wherein the gesture samples corresponding to each label comprise: gesture instances captured from at least two angles.
6. The method of claim 1, further comprising:
sensing a touch trajectory input by the user on the screen, acquiring the meaning corresponding to the touch trajectory, and labeling the touch trajectory with that meaning to obtain a plurality of trajectory samples;
training a second machine learning model using the trajectory samples.
7. An interactive apparatus of an ARKit augmented reality environment on an iOS device, the apparatus comprising:
an acquisition module for acquiring the current position and orientation information of the rear camera of the iOS device;
an obtaining module for acquiring a target sight line of the rear camera, wherein the target sight line is a line of sight preset by the user within the viewing angle range of the rear camera;
and a judging module for judging whether a virtual object is focused according to the intersection state between the target sight line and the bounding box of the virtual object, and applying a visual mark, wherein the virtual object is a virtual object within the viewing angle range corresponding to the current position and orientation information of the rear camera.
8. The apparatus as claimed in claim 7, wherein the judging module is configured to:
acquire all virtual objects within the viewing angle range corresponding to the current position and orientation information of the rear camera, emit a virtual ray outward with the rear camera as its starting point and the extending direction of the target sight line as its direction, and judge which virtual object's bounding box the virtual ray intersects;
when the virtual ray intersects the bounding box of a virtual object, determine that the virtual object is focused and display a "focus" mark;
when the virtual ray does not intersect the bounding box of a virtual object, determine that the virtual object is not focused.
9. The interaction device of the ARKit augmented reality environment on the iOS device according to claim 8, further comprising a labeling module for capturing user-defined gesture instances input by the user with the camera, acquiring the meaning corresponding to each gesture instance, and labeling each gesture instance with its meaning to obtain a plurality of gesture samples;
training a first machine learning model using the gesture samples.
10. The apparatus of claim 9, wherein the gesture instances comprise:
one or a combination of gestures demonstrated by the user with a hand, gestures drawn by the user on paper, and gestures printed by the user on paper.
CN202010275962.3A 2020-04-09 2020-04-09 interaction method and device for ARKit augmented reality environment on iOS device Pending CN111522442A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010275962.3A CN111522442A (en) 2020-04-09 2020-04-09 interaction method and device for ARKit augmented reality environment on iOS device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010275962.3A CN111522442A (en) 2020-04-09 2020-04-09 interaction method and device for ARKit augmented reality environment on iOS device

Publications (1)

Publication Number Publication Date
CN111522442A true CN111522442A (en) 2020-08-11

Family

ID=71902519

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010275962.3A Pending CN111522442A (en) 2020-04-09 2020-04-09 interaction method and device for ARKit augmented reality environment on iOS device

Country Status (1)

Country Link
CN (1) CN111522442A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102542611A (en) * 2010-12-27 2012-07-04 新奥特(北京)视频技术有限公司 Three-dimensional object pickup method
US20140033136A1 (en) * 2012-07-25 2014-01-30 Luke St. Clair Custom Gestures
CN102968642A (en) * 2012-11-07 2013-03-13 百度在线网络技术(北京)有限公司 Trainable gesture recognition method and device based on gesture track eigenvalue
CN105407211A (en) * 2015-10-20 2016-03-16 上海斐讯数据通信技术有限公司 Base-on-touch-button system and base-on-touch-button method for gesture identification
CN105892632A (en) * 2015-11-16 2016-08-24 乐视致新电子科技(天津)有限公司 Method and device for judging the selection of UI (User Interface) widgets of virtual reality application
CN105988583A (en) * 2015-11-18 2016-10-05 乐视致新电子科技(天津)有限公司 Gesture control method and virtual reality display output device
CN107957775A (en) * 2016-10-18 2018-04-24 阿里巴巴集团控股有限公司 Data object exchange method and device in virtual reality space environment
CN110334641A (en) * 2019-07-01 2019-10-15 安徽磐众信息科技有限公司 A kind of simple sign language real-time identifying system and method based on SSD neural network

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114089836A (en) * 2022-01-20 2022-02-25 中兴通讯股份有限公司 Labeling method, terminal, server and storage medium
CN114089836B (en) * 2022-01-20 2023-02-28 中兴通讯股份有限公司 Labeling method, terminal, server and storage medium

Similar Documents

Publication Publication Date Title
US20220382379A1 (en) Touch Free User Interface
US11947729B2 (en) Gesture recognition method and device, gesture control method and device and virtual reality apparatus
TWI654539B (en) Virtual reality interaction method, device and system
EP2480955B1 (en) Remote control of computer devices
US9651782B2 (en) Wearable tracking device
RU2439653C2 (en) Virtual controller for display images
CN111766937B (en) Virtual content interaction method and device, terminal equipment and storage medium
US9122353B2 (en) Kind of multi-touch input device
US10372229B2 (en) Information processing system, information processing apparatus, control method, and program
WO2001052230A1 (en) Method and system for interacting with a display
CN114138121B (en) User gesture recognition method, device and system, storage medium and computing equipment
US10621766B2 (en) Character input method and device using a background image portion as a control region
US20120293555A1 (en) Information-processing device, method thereof and display device
CN111448542A (en) Displaying applications in a simulated reality environment
US10401947B2 (en) Method for simulating and controlling virtual sphere in a mobile device
CN111766936A (en) Virtual content control method and device, terminal equipment and storage medium
CN111522442A (en) interaction method and device for ARKit augmented reality environment on iOS device
US11314981B2 (en) Information processing system, information processing method, and program for displaying assistance information for assisting in creation of a marker
CN113963355B (en) OCR character recognition method, device, electronic equipment and storage medium
CN113256767B (en) Bare-handed interactive color taking method and color taking device
CN113434046A (en) Three-dimensional interaction system, method, computer device and readable storage medium
CN112416121A (en) Intelligent interaction method and device based on object and gesture induction and storage medium
US11869145B2 (en) Input device model projecting method, apparatus and system
CN110363161B (en) Reading assisting method and system
Yong et al. Smart Human–Computer Interaction Interactive Virtual Control with Color-Marked Fingers for Smart City

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20200811