CN108121442B - Operation method and device of three-dimensional space display interface and terminal equipment - Google Patents


Info

Publication number
CN108121442B
CN108121442B (application CN201711200382.2A)
Authority
CN
China
Prior art keywords
instruction
user
operation instruction
receiving
gesture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201711200382.2A
Other languages
Chinese (zh)
Other versions
CN108121442A (en)
Inventor
曾良军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201711200382.2A priority Critical patent/CN108121442B/en
Priority to PCT/CN2018/081355 priority patent/WO2019001060A1/en
Publication of CN108121442A publication Critical patent/CN108121442A/en
Application granted granted Critical
Publication of CN108121442B publication Critical patent/CN108121442B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on GUIs based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04815: Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20: Movements or behaviour, e.g. gesture recognition
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223: Execution procedure of a spoken command

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides an operation method and apparatus for a three-dimensional space display interface, and a terminal device. The method comprises the following steps: receiving an object operation instruction issued by a user, where the object operation instruction comprises a voice instruction, a gesture instruction, or an operation instruction; determining, from the current three-dimensional space display interface, the object targeted by the aiming point at the moment the object operation instruction is received; and selecting the object and performing a control operation on it. According to the invention, the user selects an object by using the aiming point in coordination with an object operation instruction; this operation mode is fast and accurate, closely matches the user's intention, and improves the user experience of operating a three-dimensional space display interface.

Description

Operation method and device of three-dimensional space display interface and terminal equipment
Technical Field
The invention relates to the technical field of three-dimensional space display, and in particular to an operation method and apparatus for a three-dimensional space display interface and to a terminal device.
Background
In conventional operation of a three-dimensional space display interface, a user can select an object by holding the aiming point on it for a fixed time. However, this mode forces the user to wait, which easily causes operation fatigue, and it is also prone to misjudging a selection instruction against the user's intention. The user can also select an object by issuing a voice signal; however, if the voice signal is long or the environmental noise is high, recognition accuracy drops, the wrong object is selected, and the user experience is poor.
For the problem that the existing operation modes of the three-dimensional space display interface are inconvenient and inaccurate, resulting in poor user experience, no effective solution has yet been proposed.
Disclosure of Invention
In view of this, the present invention provides an operation method and apparatus for a three-dimensional space display interface, and a terminal device, so that the three-dimensional space display interface can be operated quickly and accurately and the user experience of such operation is improved.
In a first aspect, an embodiment of the present invention provides an operation method for a three-dimensional space display interface, applied to a terminal device. The method comprises the following steps: receiving an object operation instruction issued by a user, the object operation instruction comprising a voice instruction, a gesture instruction, or an operation instruction; determining, from the current three-dimensional space display interface, the object targeted by the aiming point when the object operation instruction is received; and selecting the object and performing a control operation on it.
With reference to the first aspect, an embodiment of the present invention provides a first possible implementation of the first aspect, where, when the object operation instruction comprises a voice instruction, receiving the object operation instruction issued by the user comprises: receiving a voice signal issued by the user through a microphone; judging whether the voice signal contains a preset keyword; and if so, determining the keyword contained in the voice signal as the object operation instruction.
With reference to the first aspect, an embodiment of the present invention provides a second possible implementation of the first aspect, where, when the object operation instruction comprises a gesture instruction, receiving the object operation instruction issued by the user comprises: receiving a gesture video signal of the user through a camera device; judging whether the gesture video signal contains a preset gesture feature; and if so, determining the gesture feature contained in the gesture video signal as the object operation instruction.
With reference to the first aspect, an embodiment of the present invention provides a third possible implementation of the first aspect, where, when the object operation instruction comprises an operation instruction, receiving the object operation instruction issued by the user comprises: receiving an operation instruction issued by the user through a handle, and determining the operation instruction as the object operation instruction.
With reference to the first aspect, an embodiment of the present invention provides a fourth possible implementation of the first aspect, where the method further comprises: detecting the motion state of the user's head through a sensor device; and controlling the aiming point to move in the three-dimensional space display interface according to the motion state.
In a second aspect, an embodiment of the present invention provides an operating apparatus for a three-dimensional space display interface, the apparatus being disposed in a terminal device. The apparatus includes: an instruction receiving module, configured to receive an object operation instruction issued by a user, the object operation instruction comprising a voice instruction, a gesture instruction, or an operation instruction; an object determination module, configured to determine, from the current three-dimensional space display interface, the object targeted by the aiming point when the object operation instruction is received; and an object selection module, configured to select the object and perform a control operation on it.
With reference to the second aspect, an embodiment of the present invention provides a first possible implementation of the second aspect, where the instruction receiving module is further configured to: receive a voice signal issued by the user through a microphone; judge whether the voice signal contains a preset keyword; and if so, determine the keyword contained in the voice signal as the object operation instruction.
With reference to the second aspect, an embodiment of the present invention provides a second possible implementation of the second aspect, where the instruction receiving module is further configured to: receive a gesture video signal of the user through a camera device; judge whether the gesture video signal contains a preset gesture feature; and if so, determine the gesture feature contained in the gesture video signal as the object operation instruction.
With reference to the second aspect, an embodiment of the present invention provides a third possible implementation of the second aspect, where the instruction receiving module is further configured to: receive an operation instruction issued by the user through a handle, and determine the operation instruction as the object operation instruction.
In a third aspect, an embodiment of the present invention provides a terminal device, which includes a processor and a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions capable of being executed by the processor, and the processor executes the machine-executable instructions to implement the operation method of the three-dimensional space display interface.
The embodiments of the invention have the following beneficial effects:
According to the operation method and apparatus of the three-dimensional space display interface and the terminal device, when an object operation instruction issued by a user, such as a voice instruction, a gesture instruction, or an operation instruction, is received, the object targeted by the aiming point is determined from the current three-dimensional space display interface, and the object is selected and a control operation is performed on it. In this method, the user selects an object by using the aiming point in coordination with an object operation instruction; this operation mode is fast and accurate, closely matches the user's intention, and improves the user experience of operating a three-dimensional space display interface.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the invention.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart of an operation method of a three-dimensional display interface according to an embodiment of the present invention;
FIG. 2 is a flowchart of another method for operating a three-dimensional display interface according to an embodiment of the present invention;
FIG. 3 is a flowchart of another method for operating a three-dimensional display interface according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an operation device of a three-dimensional display interface according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the existing operation method of a three-dimensional space display interface, an object can be selected by means of a timed aiming point: the user moves the aiming point onto an object, and when the aiming point has stayed on the object longer than a set time threshold, the terminal device treats this as a selection instruction and selects the object. However, this timed-aiming mode requires the user to wait, which easily causes operation fatigue. Moreover, when the user simply stops moving, the aiming point may come to rest on an object the user has no intention of selecting, causing the selection instruction to be misjudged against the user's intention and making the operating experience poor.
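The timed aiming-point selection described above can be sketched as follows. This is an illustrative reconstruction, not code from the patent; the two-second threshold is an assumed value:

```python
class DwellSelector:
    """Illustrative sketch of dwell-time selection: an object is selected
    once the aiming point has rested on it longer than a set threshold."""

    def __init__(self, dwell_seconds=2.0):  # threshold is an assumed value
        self.dwell = dwell_seconds
        self.current = None   # object currently under the aiming point
        self.since = None     # timestamp when the aiming point reached it

    def update(self, aimed_object, now):
        """Call every frame with the object under the aiming point (or None).
        Returns the selected object once the dwell threshold is exceeded."""
        if aimed_object != self.current:
            self.current, self.since = aimed_object, now
            return None
        if aimed_object is not None and now - self.since >= self.dwell:
            return aimed_object
        return None
```

Note that a user who merely stops moving the head still runs this timer, which is exactly the misjudgment the background section points out.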
The user can also select an object by issuing a voice signal; for example, the user may say 'select the pink box in the upper left corner'. The terminal device then has to extract features from the voice signal to obtain the selection instruction it represents. However, the recognition accuracy for longer voice signals is low, and the wrong object is often selected; in addition, when the user's environment is noisy, the accuracy of voice recognition drops further, so the experience of operating by voice alone is still poor.
In consideration of the problem that the experience degree of a user is poor due to the fact that the existing three-dimensional space display interface is low in operation mode convenience and accuracy, the embodiment of the invention provides an operation method and device of a three-dimensional space display interface and terminal equipment; the technology can be applied to scenes such as VR (Virtual Reality) or AR (Augmented Reality), and particularly can be applied to terminal equipment such as VR glasses, AR helmets, mobile phones with VR or AR playing functions, tablet computers and the like; the techniques may be implemented in associated software or hardware, as described by way of example below.
Referring to fig. 1, a flowchart of an operation method of a three-dimensional display interface is shown, where the method is applied to a terminal device; the method comprises the following steps:
step S102, receiving an object operation instruction sent by a user; the object operation instruction comprises a voice instruction, a gesture instruction or an operation instruction;
step S104, determining, from the current three-dimensional space display interface, the object targeted by the aiming point when the object operation instruction is received;
and step S106, selecting the object and carrying out control operation on the object.
For example, after a user puts on a terminal device such as VR glasses or a mobile phone, the VR virtual scene played in the device changes as the user's head translates and rotates, and the object targeted by the aiming point in the VR interface changes accordingly. When the user wants to operate an object A in the VR scene, the user first moves the head so that the aiming point rests on object A, and then issues an object operation instruction. After receiving the object operation instruction, the terminal device selects object A and then performs the subsequent control operation on it; for example, if object A is an article, the article may be moved, opened, and so on.
From the terminal device's perspective, when the user stops the aiming point on object A, the device cannot tell whether the user wants to select object A or has merely stopped moving the head. Only when an object operation instruction issued by the user is received does the terminal device select the object on which the aiming point currently rests and then perform the subsequent control operation on object A.
According to the operation method of the three-dimensional space display interface provided by the embodiment of the invention, when an object operation instruction issued by a user, such as a voice instruction, a gesture instruction, or an operation instruction, is received, the object targeted by the aiming point is determined from the current three-dimensional space display interface, and the object is selected and a control operation is performed on it. In this method, the user selects an object by using the aiming point in coordination with an object operation instruction; this operation mode is fast and accurate, closely matches the user's intention, and improves the user experience of operating a three-dimensional space display interface.
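Steps S102 to S106 can be sketched as follows. The scene representation, object names, and distance tolerance are hypothetical, chosen only to make the flow concrete:

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    position: tuple  # (x, y) coordinates in the display interface

class Interface3D:
    """Minimal sketch of the aiming-point + instruction selection flow."""

    def __init__(self, objects):
        self.objects = objects
        self.aim_point = (0.0, 0.0)  # e.g. fixed at the interface centre
        self.selected = None

    def object_at_aim_point(self, tolerance=0.1):
        """Step S104: find the object the aiming point currently overlaps."""
        for obj in self.objects:
            dx = obj.position[0] - self.aim_point[0]
            dy = obj.position[1] - self.aim_point[1]
            if dx * dx + dy * dy <= tolerance * tolerance:
                return obj
        return None

    def on_operation_instruction(self, instruction):
        """Steps S102/S106: on any instruction (voice, gesture, or handle),
        select the object the aiming point rests on."""
        if instruction in ("select", "confirm"):
            self.selected = self.object_at_aim_point()
        return self.selected

ui = Interface3D([SceneObject("object_a", (0.0, 0.05)),
                  SceneObject("object_b", (0.8, 0.2))])
picked = ui.on_operation_instruction("select")
```

Here `object_a` lies within the tolerance of the centred aiming point, so it is the one selected when the instruction arrives.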
Referring to fig. 2, a flow chart of another operation method of a three-dimensional display interface is shown, and the method is applied to a terminal device; the method is implemented on the basis of the method shown in fig. 1, and the object operation instruction including a voice instruction is taken as an example for explanation; the method comprises the following steps:
step S202, detecting the motion state of the head of the user through a sensor device;
step S204, controlling the aiming point to move in the three-dimensional space display interface according to the motion state;
the sensor device may be a nine-axis sensor, and the nine-axis sensor detects the motion state of the head of the user, such as parallel movement, horizontal rotation, up-down movement, and the like; the aiming point in the three-dimensional space display interface can be fixed at a set position of the interface, for example, the center of the three-dimensional space display interface; when a user looks like, when the head of the user moves, the virtual image played by the three-dimensional space display interface changes along with the movement of the head of the user, and the position of the aiming point is unchanged; therefore, the virtual image and the aiming point move relatively, and the pair of objects overlapping the aiming point in the virtual image changes.
Step S206, receiving a voice signal sent by a user through a microphone;
step S208, judging whether the voice signal contains a preset keyword; if yes, go to step S210; if not, ending;
generally, in the actual implementation of step S208, the terminal device first needs to filter and denoise the received voice signal, extract feature data from the denoised voice signal, match the feature data with the keyword library, and use the keyword corresponding to the feature data successfully matched as the keyword corresponding to the voice signal.
Generally, to improve the accuracy of speech recognition, the keywords may be kept short and few in number, such as "select" or "determine". Short keywords not only ensure the accuracy of voice recognition but also improve its efficiency, allowing a quick response to the user's instruction.
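Assuming the denoising and feature-matching stages described above already yield recognised text, the keyword decision of steps S208 and S210 reduces to a lookup against a small keyword library; the keyword set below follows the examples in the text, and the recogniser itself is taken as given:

```python
KEYWORDS = {"select", "determine", "confirm"}  # short, few in number

def extract_instruction(recognized_text):
    """Return the first preset keyword found in the recognised speech as the
    object operation instruction, or None if no keyword is present (the flow
    then ends, per step S208)."""
    for word in recognized_text.lower().split():
        if word in KEYWORDS:
            return word
    return None
```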
Step S210, determining keywords contained in the voice signal as an object operation instruction;
step S212, determining, from the current three-dimensional space display interface, the object targeted by the aiming point when the object operation instruction is received;
and step S214, selecting the object and performing control operation on the object.
In this mode, the user selects an object by using the aiming point in coordination with a voice signal; the operation is fast and accurate, closely matches the user's intention, and improves the user experience of operating a three-dimensional space display interface.
Referring to fig. 3, a flow chart of another operation method of a three-dimensional display interface is shown, and the method is applied to a terminal device; the method is implemented on the basis of the method shown in fig. 1, and the object operation instruction including a gesture instruction is taken as an example for explanation; the method comprises the following steps:
step S302, detecting the motion state of the head of the user through a sensor device;
step S304, controlling the aiming point to move in the three-dimensional space display interface according to the motion state;
step S306, receiving a gesture video signal sent by a user through a camera device;
step S308, judging whether the gesture video signal contains preset gesture features; if yes, go to step S310; if not, ending;
generally, in the actual implementation of step S308, the terminal device first needs to perform filtering and noise reduction on the received gesture video signal, perform edge detection and other processing on the noise-reduced gesture video signal, extract feature data, match the feature data with the gesture feature library, and use the gesture feature corresponding to the feature data that is successfully matched as the gesture feature corresponding to the gesture video signal.
Generally, to improve the accuracy of gesture recognition, the gesture features may be kept simple, highly distinguishable, and few in number: for example, a dynamic gesture such as a "v" sign or a click gesture, or a static hand pose such as extending a specific finger. Simple gesture features not only ensure the accuracy of gesture recognition but also improve its efficiency, allowing a quick response to the user's instruction.
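Assuming upstream filtering and edge detection already produce a feature vector, the matching of steps S308 and S310 can be sketched as a nearest-template lookup. The feature encoding (per-finger extension values) and the threshold are invented purely for illustration:

```python
# Hypothetical templates: per-finger extension features for preset gestures.
GESTURE_LIBRARY = {
    "v_sign": [1.0, 1.0, 0.0, 0.0],
    "point":  [1.0, 0.0, 0.0, 0.0],
}

def match_gesture(features, threshold=0.5):
    """Return the closest preset gesture whose template lies within the
    threshold, or None (in which case the flow ends, per step S308)."""
    best_name, best_dist = None, float("inf")
    for name, template in GESTURE_LIBRARY.items():
        dist = sum((a - b) ** 2 for a, b in zip(features, template)) ** 0.5
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None
```

Keeping the library small and the templates well separated is what makes this nearest-template decision reliable, mirroring the point above about using few, highly distinguishable gestures.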
Step S310, determining the gesture characteristics contained in the gesture video signal as an object operation instruction;
step S312, determining, from the current three-dimensional space display interface, the object targeted by the aiming point when the object operation instruction is received;
in step S314, the object is selected, and a control operation is performed on the object.
In this mode, the user selects an object by using the aiming point in coordination with a gesture video signal; the operation is fast and accurate, closely matches the user's intention, and improves the user experience of operating a three-dimensional space display interface.
In addition, an embodiment of the present invention provides another operation method for a three-dimensional space display interface, explained by taking the case in which the object operation instruction comprises an operation instruction. Here, the step of receiving the object operation instruction issued by the user specifically comprises: receiving an operation instruction issued by the user through a handle, and determining the operation instruction as the object operation instruction.
Specifically, the handle is provided with a key that issues a "confirm" or "select" instruction; when the user presses the key, an operation instruction is issued and the object targeted by the aiming point is selected. The handle can be a VR handle or an AR operating handle.
In this mode, the user selects an object by using the aiming point in coordination with an operation instruction; the operation is fast and accurate, closely matches the user's intention, and improves the user experience of operating a three-dimensional space display interface.
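Unlike the voice and gesture paths, the handle path needs no recognition stage: a key event maps directly to an object operation instruction. The button names below are illustrative, not taken from any particular handle:

```python
# Hypothetical mapping from handle keys to object operation instructions.
BUTTON_TO_INSTRUCTION = {"trigger": "select", "menu": "confirm"}

def handle_event_to_instruction(button):
    """Map a handle key press directly to an object operation instruction;
    unmapped keys produce no instruction."""
    return BUTTON_TO_INSTRUCTION.get(button)
```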
Corresponding to the above method embodiment, refer to a schematic structural diagram of an operation device of a three-dimensional space display interface shown in fig. 4; the device is arranged on the terminal equipment; the device comprises the following parts:
an instruction receiving module 40, configured to receive an object operation instruction sent by a user; the object operation instruction comprises a voice instruction, a gesture instruction or an operation instruction;
an object determination module 41, configured to determine, from the current three-dimensional space display interface, the object targeted by the aiming point when the object operation instruction is received;
and the object selection module 42 is used for selecting the object and performing control operation on the object.
Further, the instruction receiving module is further configured to: receiving a voice signal sent by a user through a microphone; judging whether the voice signal contains preset keywords or not; if yes, determining the keywords contained in the voice signal as the object operation instruction.
Further, the instruction receiving module is further configured to: receiving a gesture video signal sent by a user through a camera device; judging whether the gesture video signal contains preset gesture features or not; if yes, the gesture features contained in the gesture video signal are determined as the object operation instructions.
Further, the instruction receiving module is further configured to: and receiving an operation instruction sent by a user through the handle, and determining the operation instruction as an object operation instruction.
According to the operating apparatus of the three-dimensional space display interface provided by the embodiment of the invention, when an object operation instruction issued by a user, such as a voice instruction, a gesture instruction, or an operation instruction, is received, the object targeted by the aiming point is determined from the current three-dimensional space display interface, and the object is selected and a control operation is performed on it. With this apparatus, the user selects an object by using the aiming point in coordination with an object operation instruction; the operation mode is fast and accurate, closely matches the user's intention, and improves the user experience of operating a three-dimensional space display interface.
Referring to fig. 5, a schematic structural diagram of a terminal device is shown. The device comprises a memory 100 and a processor 101; the memory 100 stores one or more computer instructions, which are executed by the processor to implement the operation method of the three-dimensional space display interface described in any one or more of the foregoing embodiments.
Further, the terminal device shown in fig. 5 also includes a bus 102 and a communication interface 103; the processor 101, the communication interface 103, and the memory 100 are connected through the bus 102.
The Memory 100 may include a high-speed Random Access Memory (RAM) and may further include a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. The communication connection between the network element of the system and at least one other network element is realized through at least one communication interface 103 (which may be wired or wireless), and the internet, a wide area network, a local network, a metropolitan area network, and the like can be used. The bus 102 may be an ISA bus, PCI bus, EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 5, but this does not indicate only one bus or one type of bus.
The processor 101 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 101. The Processor 101 may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; the device can also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present disclosure may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present disclosure may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in the memory 100, and the processor 101 reads the information in the memory 100, and completes the steps of the method of the foregoing embodiment in combination with the hardware thereof.
The computer program product for the operation method and apparatus of a three-dimensional space display interface and the terminal device provided in the embodiments of the present invention includes a computer-readable storage medium storing program code. The instructions included in the program code may be used to execute the method described in the foregoing method embodiments; for specific implementation, refer to the method embodiments, which are not described herein again.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present invention, used to illustrate its technical solutions rather than to limit them, and the protection scope of the present invention is not limited thereto. Although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art may still modify the technical solutions described in the foregoing embodiments, readily conceive of changes to them, or make equivalent substitutions for some of their technical features, within the technical scope of the present disclosure; such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present invention and shall be covered by its protection scope. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (7)

1. An operation method of a three-dimensional space display interface, characterized in that the method is applied to a terminal device; the method comprises the following steps:
receiving an object operation instruction issued by a user; wherein the object operation instruction comprises a voice instruction, a gesture instruction, or an operation instruction;
determining, from the current three-dimensional space display interface, the object at which an aiming point is aimed when the object operation instruction is received;
selecting the object, and performing a control operation on the object;
when the object operation instruction comprises a gesture instruction, the step of receiving the object operation instruction issued by the user comprises:
receiving, through a camera device, a gesture video signal of the user;
judging whether the gesture video signal contains a preset gesture feature;
if yes, determining the gesture feature contained in the gesture video signal as the object operation instruction;
the method further comprises: detecting, by a sensor device, a motion state of the user's head; and controlling the aiming point to move in the three-dimensional space display interface according to the motion state.
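The flow of claim 1 can be sketched as a minimal model: a head-tracked aiming point moves over a 3D interface, and whatever instruction arrives operates on the object the point is aimed at. All class, method, and object names below are hypothetical illustrations, not taken from the patent, and angular ray picking is a deliberate simplification:

```python
import math
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    yaw: float    # horizontal angular position of the object, degrees
    pitch: float  # vertical angular position of the object, degrees

class DisplayInterface3D:
    """Illustrative model of claim 1: a sensor-driven aiming point
    selects the object that an incoming instruction operates on."""

    def __init__(self, objects):
        self.objects = objects
        self.aim_yaw = 0.0
        self.aim_pitch = 0.0

    def on_head_motion(self, d_yaw, d_pitch):
        # The aiming point follows the head's motion state (claim 1's
        # "controlling the aiming point to move ... according to the motion state").
        self.aim_yaw += d_yaw
        self.aim_pitch += d_pitch

    def aimed_object(self, tolerance=5.0):
        # The aimed object is the one angularly closest to the aiming
        # point, within a tolerance (a stand-in for real ray picking).
        best = min(self.objects,
                   key=lambda o: math.hypot(o.yaw - self.aim_yaw,
                                            o.pitch - self.aim_pitch))
        dist = math.hypot(best.yaw - self.aim_yaw,
                          best.pitch - self.aim_pitch)
        return best if dist <= tolerance else None

    def on_instruction(self, instruction):
        # On receiving any object operation instruction (voice, gesture,
        # or handle), select and operate on the currently aimed object.
        target = self.aimed_object()
        if target is None:
            return None
        return f"{instruction} -> {target.name}"

ui = DisplayInterface3D([SceneObject("menu", 0, 0),
                         SceneObject("player", 30, 10)])
ui.on_head_motion(28, 12)          # head turns toward "player"
result = ui.on_instruction("open") # "open -> player"
```

Note that the instruction source is irrelevant here: voice, gesture, and handle inputs all converge on the same `on_instruction` entry point, which is what lets the claims share the "determine aimed object, then operate" core.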
2. The method according to claim 1, wherein when the object operation instruction comprises a voice instruction, the step of receiving the object operation instruction issued by the user comprises:
receiving, through a microphone, a voice signal of the user;
judging whether the voice signal contains a preset keyword;
if yes, determining the keyword contained in the voice signal as the object operation instruction.
3. The method according to claim 1, wherein when the object operation instruction comprises an operation instruction, the step of receiving the object operation instruction issued by the user comprises:
receiving, through a handle, an operation instruction issued by the user, and determining the operation instruction as the object operation instruction.
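The voice path of claim 2 reduces to keyword spotting over an already-transcribed signal. A minimal sketch, assuming a hypothetical keyword table (the patent only says "preset keywords") and assuming speech recognition has already produced text:

```python
# Hypothetical preset keyword table; the patent does not enumerate keywords.
PRESET_KEYWORDS = {"open", "close", "select", "delete"}

def voice_to_instruction(transcribed_text):
    """Claim 2 in miniature: scan the recognized voice signal for a
    preset keyword; the first keyword found becomes the object
    operation instruction, otherwise the signal yields no instruction."""
    for word in transcribed_text.lower().split():
        if word in PRESET_KEYWORDS:
            return word
    return None

voice_to_instruction("please OPEN the menu")   # yields "open"
voice_to_instruction("nothing relevant here")  # yields None
```

The "if yes / if no" structure of the claim maps directly to the keyword hit and the `None` fall-through; a real system would, of course, run this on the output of an actual speech recognizer rather than on raw text.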
4. An operation device of a three-dimensional space display interface, characterized in that the device is provided on a terminal device; the device comprises:
an instruction receiving module, configured to receive an object operation instruction issued by a user; wherein the object operation instruction comprises a voice instruction, a gesture instruction, or an operation instruction;
an object determination module, configured to determine, from the current three-dimensional space display interface, the object at which an aiming point is aimed when the object operation instruction is received;
an object selection module, configured to select the object and perform a control operation on the object;
the instruction receiving module is further configured to: receive, through a camera device, a gesture video signal of the user; judge whether the gesture video signal contains a preset gesture feature; and if yes, determine the gesture feature contained in the gesture video signal as the object operation instruction;
the device further comprises a movement module configured to: detect, by a sensor device, a motion state of the user's head; and control the aiming point to move in the three-dimensional space display interface according to the motion state.
5. The device according to claim 4, wherein the instruction receiving module is further configured to:
receive, through a microphone, a voice signal of the user;
judge whether the voice signal contains a preset keyword;
and if yes, determine the keyword contained in the voice signal as the object operation instruction.
6. The device according to claim 4, wherein the instruction receiving module is further configured to:
receive, through a handle, an operation instruction issued by the user, and determine the operation instruction as the object operation instruction.
7. A terminal device comprising a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor to perform the method of any one of claims 1 to 3.
CN201711200382.2A 2017-06-26 2017-11-24 Operation method and device of three-dimensional space display interface and terminal equipment Expired - Fee Related CN108121442B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201711200382.2A CN108121442B (en) 2017-11-24 2017-11-24 Operation method and device of three-dimensional space display interface and terminal equipment
PCT/CN2018/081355 WO2019001060A1 (en) 2017-06-26 2018-03-30 Application display method and device, three-dimensional space display interface operation method and device, display method and device, application content display method and device, and terminal device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711200382.2A CN108121442B (en) 2017-11-24 2017-11-24 Operation method and device of three-dimensional space display interface and terminal equipment

Publications (2)

Publication Number Publication Date
CN108121442A CN108121442A (en) 2018-06-05
CN108121442B true CN108121442B (en) 2020-05-08

Family

ID=62227873

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711200382.2A Expired - Fee Related CN108121442B (en) 2017-06-26 2017-11-24 Operation method and device of three-dimensional space display interface and terminal equipment

Country Status (1)

Country Link
CN (1) CN108121442B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110060537A (en) * 2019-03-22 2019-07-26 珠海超凡视界科技有限公司 A kind of virtual reality drives training device and its man-machine interaction method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9904055B2 (en) * 2014-07-25 2018-02-27 Microsoft Technology Licensing, Llc Smart placement of virtual objects to stay in the field of view of a head mounted display
CN105955470A (en) * 2016-04-26 2016-09-21 乐视控股(北京)有限公司 Control method and device of helmet display
CN106980383A (en) * 2017-03-31 2017-07-25 哈尔滨工业大学 A kind of dummy model methods of exhibiting, module and the virtual human body anatomical model display systems based on the module


Similar Documents

Publication Publication Date Title
CN109409277B (en) Gesture recognition method and device, intelligent terminal and computer storage medium
CN109240576B (en) Image processing method and device in game, electronic device and storage medium
KR102078427B1 (en) Augmented reality with sound and geometric analysis
US20170192500A1 (en) Method and electronic device for controlling terminal according to eye action
US10438086B2 (en) Image information recognition processing method and device, and computer storage medium
CN107818301B (en) Method and device for updating biological characteristic template and electronic equipment
KR101486177B1 (en) Method and apparatus for providing hand detection
CN104239879B (en) The method and device of separating character
CN108596079B (en) Gesture recognition method and device and electronic equipment
EP2998960A1 (en) Method and device for video browsing
CN112445341B (en) Keyboard perspective method and device of virtual reality equipment and virtual reality equipment
CN112818909A (en) Image updating method and device, electronic equipment and computer readable medium
CN106258009A (en) A kind of gather the method for fingerprint, fingerprint capturer and terminal
CN106648042B (en) Identification control method and device
US20170192653A1 (en) Display area adjusting method and electronic device
CN107291238B (en) Data processing method and device
CN112462937A (en) Local perspective method and device of virtual reality equipment and virtual reality equipment
CN108121442B (en) Operation method and device of three-dimensional space display interface and terminal equipment
CN109413470B (en) Method for determining image frame to be detected and terminal equipment
WO2020001016A1 (en) Moving image generation method and apparatus, and electronic device and computer-readable storage medium
CN112990197A (en) License plate recognition method and device, electronic equipment and storage medium
CN107155002B (en) Cursor moving method and system
CN106650727B (en) Information display method and AR equipment
CN115883959A (en) Picture content control method for privacy protection and related product
US20210304452A1 (en) Method and system for providing avatar service

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200508

Termination date: 20201124