WO2018076927A1 - Operation method and apparatus applicable to a space system, and storage medium - Google Patents

Operation method and apparatus applicable to a space system, and storage medium

Info

Publication number
WO2018076927A1
Authority
WO
WIPO (PCT)
Prior art keywords
input operation
attribute information
determining
operation object
tracking
Prior art date
Application number
PCT/CN2017/100230
Other languages
English (en)
French (fr)
Inventor
刘阳
Original Assignee
中国移动通信有限公司研究院
中国移动通信集团公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国移动通信有限公司研究院, 中国移动通信集团公司 filed Critical 中国移动通信有限公司研究院
Priority to US16/314,425 priority Critical patent/US20190325657A1/en
Priority to JP2019503449A priority patent/JP2019527435A/ja
Priority to EP17866241.7A priority patent/EP3470959A4/en
Publication of WO2018076927A1 publication Critical patent/WO2018076927A1/zh


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras

Definitions

  • The present invention relates to electronic technologies, and in particular to an operation method and apparatus applicable to a space system, and a storage medium.
  • With the commercialization of three-dimensional technologies, and especially as technologies such as VR (virtual reality) and AR (augmented reality) mature, the user interface is extending further into three dimensions.
  • Some operations originally designed for two-dimensional interfaces such as mobile phones need to be further extended before they are applicable to three-dimensional operation.
  • In a three-dimensional system, users need to operate freely, in a way that matches their spatial awareness and usage habits.
  • The existing solution mainly reuses the drag-and-select interaction common in two-dimensional interfaces and relies on gesture-based dragging in a two-dimensional interface.
  • This single manipulation mode leaves the user with less choice, and less enjoyment of control, during interface operation.
  • Moreover, manipulation in three-dimensional space needs to match the user's manipulation experience more closely: for example, the positions along the three axes x, y and z in three-dimensional space all change, accompanied by changes in angular velocity, and reflecting these position and angular-velocity changes in three-dimensional space is one of the user's requirements for three-dimensional interaction.
  • To solve at least one problem existing in the prior art, the embodiments of the present invention provide an operation method and apparatus applicable to a space system, and a storage medium, which can provide an interaction manner suitable for object operations in three-dimensional space.
  • An embodiment of the present invention provides an operation method applicable to a space system, where the method includes: tracking, by a tracking component, an operation object to detect an input operation of the operation object; determining attribute information of the input operation; determining whether the attribute information of the input operation satisfies a preset condition; if the attribute information of the input operation satisfies the preset condition, determining a current position of the operation object represented by three-dimensional coordinates; and if the current position of the operation object corresponds to a target object represented by three-dimensional contour points, processing the target object according to the attribute information of the input operation.
  • An embodiment of the present invention provides an operation device applicable to a space system, where the device includes a detecting unit, a first determining unit, a judging unit, a second determining unit, and a processing unit, where:
  • the detecting unit is configured to track, by a tracking component, the operation object to detect an input operation of the operation object;
  • the first determining unit is configured to determine attribute information of the input operation
  • the judging unit is configured to determine whether the attribute information of the input operation meets a preset condition;
  • the second determining unit is configured to determine a current location of the operation object represented by three-dimensional coordinates if the attribute information of the input operation satisfies the preset condition;
  • the processing unit is configured to process the target object according to the attribute information of the input operation if the current position of the operation object corresponds to a target object represented by three-dimensional contour points.
  • the embodiment of the present invention further provides a computer storage medium, wherein the computer storage medium stores computer executable instructions, and the computer executable instructions are used to execute an operation method applicable to a space system provided by an embodiment of the present invention.
  • An embodiment of the present invention provides a method and apparatus for operating a space system, and a storage medium, in which an operation object is tracked by a tracking component to detect an input operation of the operation object; attribute information of the input operation is determined; it is determined whether the attribute information of the input operation satisfies a preset condition; if the attribute information of the input operation satisfies the preset condition, a current position of the operation object represented by three-dimensional coordinates is determined; and if the current position of the operation object corresponds to a target object represented by three-dimensional contour points, the target object is processed according to the attribute information of the input operation; in this way, an interaction manner suitable for object operations in three-dimensional space can be provided.
  • FIG. 1 is a schematic flowchart of the implementation of an operation method applicable to a space system according to an embodiment of the present invention;
  • FIG. 2 is a schematic structural diagram of a computing device according to an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of an operation device applicable to a space system according to an embodiment of the present invention;
  • FIG. 4 is a schematic diagram of a hardware entity of a computing device according to an embodiment of the present invention.
  • Embodiments of the present invention provide an operation method applicable to a space system. The method is applied to a computing device, and the functions implemented by the method can be implemented by a processor in the computing device calling program code; the program code can be saved in a computer storage medium, so the computing device includes at least a processor and a storage medium.
  • In specific embodiments, the computing device can be any type of electronic device with information processing capability; for example, the electronic device can be a mobile phone, tablet computer, desktop computer, personal digital assistant, navigator, digital phone, video phone, television set, or the like.
  • FIG. 1 is a schematic diagram of an implementation process of an operation method applicable to a space system according to an embodiment of the present invention. As shown in FIG. 1 , the method includes:
  • Step S101 Tracking, by the tracking component, the operation object to detect an input operation of the operation object
  • the operation object includes a finger, a hand, and an eyeball.
  • the tracking component is a component of a computing device, which may be a camera during implementation.
  • Step S102 determining attribute information of the input operation
  • Step S103 determining whether the attribute information of the input operation meets a preset condition
  • the attribute information of the input operation includes a duration of the operation object corresponding to the input operation, a posture of the input operation, an operation distance, an operation direction, an acceleration a of the operation object, and a direction of the acceleration.
  • Step S103 mainly uses the duration of the operation object corresponding to the input operation and the posture of the input operation.
  • Correspondingly, the preset condition includes a preset time threshold or a posture of the operation object. For example, it is determined whether the duration of the input operation is greater than a preset time threshold; if it is greater, the preset condition is satisfied, and if it is smaller, the preset condition is not satisfied.
  • As another example, if the preset condition is a gesture such as a double click, it is determined whether the posture of the input operation is a double-click gesture; if so, the preset condition is satisfied, and if not, the preset condition is not satisfied.
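  • As a minimal sketch of this check (the field names and the threshold value below are illustrative assumptions, not part of this disclosure), the preset-condition test of step S103 could be written as:
    from dataclasses import dataclass

    @dataclass
    class InputAttributes:
        duration_s: float   # dwell time of the operation object, in seconds
        gesture: str        # recognized posture, e.g. "double_click"

    DWELL_THRESHOLD_S = 1.0  # assumed preset time threshold

    def meets_preset_condition(attrs: InputAttributes) -> bool:
        # Condition 1: the duration exceeds the preset time threshold.
        if attrs.duration_s > DWELL_THRESHOLD_S:
            return True
        # Condition 2: the posture is a confirming gesture such as a double click.
        return attrs.gesture == "double_click"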
  • Step S104 if the attribute information of the input operation satisfies the preset condition, determining a current location of the operation object represented by using three-dimensional coordinates;
  • Step S105 If the current position of the operation object corresponds to a target object represented by a three-dimensional contour point, the target object is processed according to the attribute information of the input operation.
  • In other embodiments of the present invention, the method further includes: step S106, if the current position of the operation object does not correspond to a target object represented by three-dimensional contour points, tracking, by the tracking component, the operation object to detect an input operation of the operation object.
  • the method further includes: outputting prompt information, where the prompt information is used to indicate that the target object is currently not determined.
  • In other embodiments of the present invention, the method further includes: determining whether the current position of the operation object corresponds to a target object represented by three-dimensional contour points, and if the current position of the operation object corresponds to a target object represented by three-dimensional contour points, processing the target object according to the attribute information of the input operation.
  • Determining whether the current position of the operation object corresponds to a target object represented by three-dimensional contour points further includes:
  • Step S111 using a three-dimensional contour point to represent an interaction object in the current scene
  • the scene may be any scene displayed on the computing device, such as a game playing scene, and in each scene, an object for interacting with the user's input operation is provided.
  • Step S112 determining whether the current location of the operation object is within a range represented by a three-dimensional contour point of the interaction object;
  • Step S113 If the current position of the operation object is within a range represented by the three-dimensional contour point of the interaction object, the interaction object is determined as the target object.
  • Step S114 if the current position of the operation object is not within the range represented by the three-dimensional contour point of the interaction object, it is determined that the current position of the operation object does not correspond to the target object represented by the three-dimensional contour point.
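  • A minimal sketch of this hit test (the names are illustrative assumptions, and the range spanned by an object's contour points is approximated here by their axis-aligned bounding box):
    from typing import Optional, Sequence, Tuple

    Point3 = Tuple[float, float, float]

    def inside_contour_range(pos: Point3, contour: Sequence[Point3]) -> bool:
        # Step S112: is the position within the range spanned by the contour points?
        return all(
            min(p[i] for p in contour) <= pos[i] <= max(p[i] for p in contour)
            for i in range(3)
        )

    def find_target_object(pos: Point3, scene_objects: dict) -> Optional[str]:
        # scene_objects maps interaction-object name -> list of 3D contour points
        for name, contour in scene_objects.items():
            if inside_contour_range(pos, contour):
                return name   # step S113: this interaction object is the target object
        return None           # step S114: no target object at the current position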
  • In the process of implementation, the method further includes: determining the current position of the operation object, which further includes: determining, by the tracking component, an initial position of the operation object represented by three-dimensional coordinates; moving an operation cursor in the space system to the initial position; tracking, by the tracking component, the operation object to determine a target position of the operation object represented by three-dimensional coordinates, and moving the operation cursor to the target position; and determining the initial position or the target position of the operation object as the current position of the operation object.
  • In this embodiment of the present invention, processing the target object according to the attribute information of the input operation includes:
  • Step S141A: tracking, by the tracking component, the operation object to determine an acceleration a of the operation object and the direction of the acceleration;
  • Step S142A acquiring a damping coefficient z set in the space system
  • Step S143A moving the target object in the space system according to the acceleration a, the direction of the acceleration, and the damping coefficient z.
  • Here, if the operation object is a hand, the input operation is a gesture. Assume first that the interaction object has a certain mass m. The computing device analyzes the acceleration a of the hand from the motion of the user's hand and, according to the formula f = k*m*a (k is a coefficient, which can be a variable), converts the acceleration into a "force" f acting on the module. In addition, the space can be assumed to have a certain damping effect with damping coefficient z; the force received by the interaction object is then F = f - m*z = k*m*a - m*z = m(ka - z), and in general k can be set to 1.
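  • A minimal sketch of steps S141A to S143A under these formulas (the time step, the starting state, and applying the net force F = m(ka - z) with k = 1 along the tracked acceleration direction are illustrative assumptions):
    import numpy as np

    def move_target(position, velocity, mass, accel, accel_dir, z,
                    k=1.0, dt=1.0 / 60):
        direction = np.asarray(accel_dir, dtype=float)
        direction = direction / np.linalg.norm(direction)
        net_force = mass * (k * accel - z)           # F = k*m*a - m*z = m(ka - z)
        velocity = velocity + (net_force / mass) * direction * dt
        position = position + velocity * dt          # step S143A: move the target
        return position, velocity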
  • In this embodiment of the present invention, processing the target object according to the attribute information of the input operation may alternatively include:
  • Step S141B projecting an operation surface of the input operation onto the target object
  • Step S142B determining an operation direction and an operation distance of the input operation
  • Step S143B moving the target object according to the operation direction and the operation distance.
  • Here, if the operation object is a hand, the input operation is a gesture. Given that the interaction object has a certain mass m, the user's gesture can form a virtual action surface on the module (this surface can be the cut plane of the hand's projection). Through this action surface, the hand can come into virtual contact with the module to produce pushing, patting, pulling, flicking and other effects.
  • The action surface can be displayed on the module so as to give the user precise feedback.
  • It should be noted that the case where the operation object is an eyeball is similar to the case where the operation object is a hand.
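  • A minimal sketch of steps S141B to S143B (the plane-projection step and all names are illustrative assumptions; the returned contact point stands in for the action surface that can be displayed as feedback):
    import numpy as np

    def project_onto_plane(point, plane_origin, plane_normal):
        # Step S141B: project the hand point onto the target's action plane.
        n = np.asarray(plane_normal, dtype=float)
        n = n / np.linalg.norm(n)
        return point - np.dot(point - plane_origin, n) * n

    def apply_gesture(target_center, hand_point, op_direction, op_distance):
        contact = project_onto_plane(hand_point, target_center, op_direction)
        d = np.asarray(op_direction, dtype=float)
        d = d / np.linalg.norm(d)                     # step S142B: operation direction
        new_center = target_center + d * op_distance  # step S143B: move the target
        return new_center, contact                    # contact can be shown as feedback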
  • In this embodiment of the present invention, an operation object is tracked by a tracking component to detect an input operation of the operation object; attribute information of the input operation is determined; it is determined whether the attribute information of the input operation satisfies a preset condition; if the attribute information of the input operation satisfies the preset condition, a current position of the operation object represented by three-dimensional coordinates is determined; and if the current position of the operation object corresponds to a target object represented by three-dimensional contour points, the target object is processed according to the attribute information of the input operation; in this way, an interaction manner suitable for object operations in three-dimensional space can be provided.
  • This proposal provides an operation method suitable for a space system; the method achieves natural human-machine interaction in three-dimensional space based on natural gestures and eye tracking, and includes the following steps:
  • Step S201: the system of the three-dimensional interface in which the user is located is given three-dimensional coordinates, that is, each spatial point has specific x, y and z coordinate values, and non-coincident points have different values.
  • Here, the digitization can be user-centered, or a coordinate system can be set independently.
  • Step S202 digitizing the coordinates of all the operation objects (setting the center of gravity to be (x1, y1, z1) and the contour value of the entire object);
  • Step S203: the user selects the object to be operated through natural interaction manners, including but not limited to gestures, eye tracking (eye tracker, i.e., determining the selected object by the focus point of the eyes), finger curvature, and the like.
  • In the process of implementation, as a first example, the cursor is moved among the interaction objects in the three-dimensional interface by tracking the motion of the user's hand: when the hand swings left and right, the cursor moves left and right; when the hand swings back and forth, the cursor moves back and forth; and when the hand swings up and down, the cursor moves up and down.
  • After the cursor is moved onto the target object, the selection of the object is confirmed by a gesture such as a double click or by dwell time.
  • As a second example, eye-tracking technology is used to track the user's eyeball and pupil to determine the interaction object in three-dimensional space that the user needs to select, and the selection of the object is confirmed by a gesture such as a double click or by dwell time.
  • As a third example, the position of the cursor in the x-y directions in three-dimensional space is determined by the position of the hand, and the movement of the cursor forward and backward (along the z axis) is judged by the curvature of the hand or fingers; the interaction object in three-dimensional space that the user needs to select is thereby determined, and its selection is confirmed by a gesture such as a double click or by dwell time.
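  • A minimal sketch of the cursor control in this third example (the normalized tracker ranges and the workspace scale are illustrative assumptions):
    def update_cursor(hand_x, hand_y, curvature, workspace=1.0):
        # hand_x, hand_y in [-1, 1]: normalized hand position from the tracker.
        # curvature in [0, 1]: 0 = flat hand, 1 = fully curled fingers.
        cx = hand_x * workspace                 # x-y follow the hand position
        cy = hand_y * workspace
        cz = (1.0 - curvature) * workspace      # straightening the hand pushes
        return cx, cy, cz                       # the cursor deeper along z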
  • Step S204: the user swings a gesture. Given that the interaction object has a certain mass m (this mass can be related to the volume of the interaction object), the acceleration a is analyzed from the motion of the user's hand and converted into a "force" acting on the interaction object, f = k*m*a (k is a coefficient, which can be a variable).
  • In addition, the space can be assumed to have a certain damping effect with damping coefficient z; the force received by the interaction object is then F = f - m*z = k*m*a - m*z = m(ka - z), and in general k can be set to 1.
  • Step S205: according to the gesture, the motion of the hand is projected onto the interaction object, and according to the moving direction of the hand, an action surface is formed on the interaction object (this surface can be the cut plane of the hand's projection), respectively producing pushing, patting, pulling, flicking and other effects.
  • This action surface can be displayed on the interactive object to provide accurate feedback to the user.
  • Step S206: depending on the position of the action surface, the interaction object can be pushed in pure translation, or can rotate about its center of gravity while advancing in the direction of the applied force, consistent with behavior in a force field (possibly without gravity). Its movement direction, velocity and acceleration, and its rotation direction, angular velocity and angular acceleration, are related to the position of the action surface, m, a, the shape of the interaction object, and the damping coefficient z.
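  • A minimal rigid-body sketch of this behavior (the inertia value and the damping model are illustrative assumptions; an action surface offset from the center of gravity both translates the object and spins it about the center of gravity):
    import numpy as np

    def step_object(com, vel, ang_vel, contact_point, force_vec,
                    mass=1.0, inertia=1.0, z=0.1, dt=1.0 / 60):
        r = contact_point - com                 # lever arm from the center of gravity
        torque = np.cross(r, force_vec)         # off-center push produces rotation
        vel = vel + (force_vec / mass - z * vel) * dt              # damped translation
        ang_vel = ang_vel + (torque / inertia - z * ang_vel) * dt  # damped rotation
        com = com + vel * dt
        return com, vel, ang_vel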
  • Step S207: following these rules, the user can fairly freely control the motion of the interaction objects in the interface space.
  • Step S208: at the same time, this system and method are suitable for two-dimensional interfaces, that is, the field of view is reduced by one dimension and the entire activity is projected onto a plane.
  • As shown in FIG. 2, the computing device includes: a spatial three-dimensional system module 201, an identification and tracking system 202, an execution module 203, and a display module 204, wherein:
  • the spatial three-dimensional system module 201 is configured to structure and coordinate the user and all interface elements to determine the spatial location and relationship of the person and each module;
  • the identification tracking system 202 is configured to analyze the user's operational intent for the spatial interface by tracking the output of the user's various natural interactions and communicate this information to the execution module.
  • the execution module 203 is configured to generate a movement command according to the identification tracking system and deliver the process and the result to the display module.
  • a display module 204 is configured to display the results in an entire spatial three-dimensional system.
  • In the process of implementation, the hardware portion of the identification and tracking system can be implemented using a tracking component such as a camera, and the display module can be implemented using the display screen of the computing device.
  • the spatial three-dimensional system module, the software part of the identification tracking system, and the execution module can all constitute the device in the third embodiment of the present invention, that is, implemented by a processor in the computing device.
  • It can be seen from the above embodiments that the user's gesture can form a virtual action surface on the interaction object (this surface can be the cut plane of the hand's projection); through this action surface, the hand can come into virtual contact with the module to produce pushing, patting, pulling, flicking and other effects. The action surface can be displayed on the module so as to give the user precise feedback.
  • Based on the foregoing embodiments, an embodiment of the present invention provides an operation device applicable to a space system. Each unit included in the device, and each module included in each unit, can be implemented by a processor in a computing device, or by a specific logic circuit. In the process of implementation, the processor can be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), or a field programmable gate array (FPGA).
  • As shown in FIG. 3, the device 300 includes a detecting unit 301, a first determining unit 302, a judging unit 303, a second determining unit 304, and a processing unit 305, wherein:
  • the detecting unit 301 is configured to track, by a tracking component, the operation object to detect an input operation of the operation object;
  • the first determining unit 302 is configured to determine attribute information of the input operation
  • the judging unit 303 is configured to determine whether the attribute information of the input operation meets a preset condition;
  • the second determining unit 304 is configured to determine, if the attribute information of the input operation satisfies the preset condition, a current location of the operation object represented by using three-dimensional coordinates;
  • the processing unit 305 is configured to process the target object according to the attribute information of the input operation if the current position of the operation object corresponds to a target object represented by a three-dimensional contour point.
  • In other embodiments of the present invention, the attribute information of the input operation includes the duration of the operation object corresponding to the input operation, the posture of the input operation, an operation distance, an operation direction, an acceleration a of the operation object, and the direction of the acceleration.
  • In other embodiments of the present invention, the processing unit is configured to trigger the detecting unit to track, by the tracking component, the operation object to detect an input operation of the operation object if the current position of the operation object does not correspond to a target object represented by three-dimensional contour points.
  • the processing unit includes a representation module, a judging module, a first determining module, and a processing module, wherein:
  • the representation module is configured to use a three-dimensional contour point to represent an interaction object in the current scene
  • the judging module is configured to determine whether a current location of the operation object is within a range represented by a three-dimensional contour point of the interaction object;
  • the first determining module is configured to determine the interactive object as a target object if a current location of the operation object is within a range represented by a three-dimensional contour point of the interaction object;
  • the processing module is configured to process the target object according to the attribute information of the input operation.
  • In other embodiments of the present invention, the apparatus further includes a third determining unit, configured to determine that the current position of the operation object does not correspond to a target object represented by three-dimensional contour points if the current position of the operation object is not within the range represented by the three-dimensional contour points of the interaction object.
  • the apparatus further includes a fourth determining unit, which includes a second determining module, a first moving module, a third determining module, and a fourth determining module, wherein:
  • the second determining module is configured to determine, by using the tracking component, an initial position of the operation object represented by the three-dimensional coordinates;
  • the first moving module is configured to move an operation cursor in the space system to the initial position
  • the third determining module is configured to track, by the tracking component, the operation object to determine a target position of the operation object represented by three-dimensional coordinates, and to move the operation cursor to the target position;
  • the fourth determining module is configured to determine an initial position or a target position of the operation object as a current position of the operation object.
  • the processing unit includes a fifth determining module, an obtaining module, and a second moving module, where:
  • the fifth determining module is configured to track, by the tracking component, the operation object to determine an acceleration a of the operation object and the direction of the acceleration, if the current position of the operation object corresponds to a target object represented by three-dimensional contour points;
  • the acquiring module is configured to acquire a damping coefficient z set in the space system
  • the second movement module is configured to move the target object in the space system according to the acceleration a, the direction of the acceleration, and the damping coefficient z.
  • the processing unit includes a projection module, a sixth determining module, and a third moving module, where:
  • the projection module is configured to project an operation surface of the input operation onto the target object if the current position of the operation object corresponds to a target object represented by three-dimensional contour points;
  • the sixth determining module is configured to determine an operation direction and an operation distance of the input operation
  • the third movement module is configured to move the target object according to the operation direction and the operation distance.
  • the operational object includes a finger, a hand, an eyeball.
  • In the embodiments of the present invention, if the above operation method applicable to the space system is implemented in the form of a software function module and sold or used as a stand-alone product, it may also be stored in a computer-readable storage medium.
  • Based on such an understanding, the technical solution of the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the embodiments of the present invention.
  • The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disc. Thus, the embodiments of the present invention are not limited to any specific combination of hardware and software.
  • the embodiment of the invention provides a computer storage medium, wherein the computer storage medium stores computer executable instructions, and the computer executable instructions are used to execute an operation method applicable to a space system provided by an embodiment of the invention.
  • FIG. 4 is a schematic diagram of a hardware entity of the computing device in the embodiment of the present invention. As shown in FIG. 4, the hardware entity of the computing device 400 includes a processor 401, a communication interface 402, and a memory 403, wherein:
  • Processor 401 typically controls the overall operation of computing device 400.
  • Communication interface 402 can enable a computing device to communicate with other terminals or servers over a network.
  • the memory 403 is configured to store instructions and applications executable by the processor 401, and may also cache data to be processed or already processed by the processor 401 and by the modules in the computing device 400 (e.g., image data, audio data, voice communication data, and video communication data); it can be implemented by flash memory (FLASH) or random access memory (RAM).
  • Embodiments of the subject matter described in the specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in the specification and their structural equivalents, or combinations of one or more of them.
  • Embodiments of the subject matter described in the specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on one or more computer storage media for execution by, or to control the operation of, a data processing apparatus.
  • Alternatively or additionally, the computer instructions can be encoded on an artificially generated propagated signal (e.g., a machine-generated electrical, optical, or electromagnetic signal) that is generated to encode information for transmission to a suitable receiver apparatus for execution by a data processing apparatus.
  • the computer storage medium can be, or be included in, a computer readable storage device, a computer readable storage medium, a random or sequential access memory array or device, or a combination of one or more of the above.
  • the computer storage medium is not a propagated signal, the computer storage medium can be a source or a target of computer program instructions that are encoded in a manually generated propagated signal.
  • the computer storage medium can also be or be included in one or more separate components or media (eg, multiple CDs, disks, or other storage devices).
  • computer storage media can be tangible.
  • the operations described in the specification can be implemented as operations by data processing apparatus on data stored on or received from one or more computer readable storage devices.
  • The term "client" or "server" includes all types of apparatus, devices, and machines for processing data, including, for example, a programmable processor, a computer, a system on a chip, or multiples or combinations of the foregoing.
  • the device can include dedicated logic circuitry, such as a Field Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC).
  • In addition to hardware, the apparatus can also include code that creates an execution environment for the computer program in question, for example, code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, a virtual machine, or a combination of one or more of them.
  • The apparatus and execution environment can realize various different computing model infrastructures, such as web services, distributed computing, and grid computing infrastructures.
  • A computer program (also referred to as a program, software, software application, script, or code) can be written in any form of programming language, including assembly or interpreted languages and declarative or procedural languages, and can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment.
  • a computer program can, but does not necessarily, correspond to a file in a file system.
  • A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
  • the computer program can be deployed to be executed on one or more computers located at one site or distributed across multiple sites and interconnected by a communication network.
  • the processes and logic flows described in the specification can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating input data and generating output.
  • the above described processes and logic flows can also be performed by dedicated logic circuitry, and the apparatus can also be implemented as dedicated logic circuitry, such as an FPGA or ASIC.
  • processors suitable for the execution of a computer program include, for example, a general purpose microprocessor and a special purpose microprocessor, and any one or more processors of any type of digital computer.
  • Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data.
  • Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, one or more mass storage devices for storing data (e.g., magnetic disks, magneto-optical disks, or optical disks). However, a computer need not have such devices.
  • Moreover, a computer can be embedded in another device, e.g., a mobile phone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a global positioning system (GPS) receiver, or a portable storage device (e.g., a universal serial bus (USB) flash drive), to name just a few.
  • Devices suitable for storing computer program instructions and data include all forms of non-volatile memory, media, and storage devices, including, for example, semiconductor storage devices (e.g., EPROM, EEPROM, and flash memory devices), magnetic disks (e.g., internal hard disks or removable disks), magneto-optical disks, and CD-ROM and DVD-ROM discs.
  • the processor and memory can be supplemented by or included in dedicated logic circuitry.
  • To provide for interaction with a user, embodiments of the subject matter described in the specification can be implemented on a computer that includes a display device, a keyboard, and a pointing device (e.g., a mouse or trackball, or a touch screen or touch pad).
  • The display device is, for example, a cathode ray tube (CRT), a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, a thin-film transistor (TFT) display, a plasma display, another flexible configuration, or any other monitor for displaying information to the user.
  • The user can provide input to the computer through the keyboard and the pointing device. Other types of devices can also be used to provide for interaction with the user; for example, feedback provided to the user can be any form of sensory feedback, such as visual feedback, auditory feedback, or haptic feedback, and input from the user can be received in any form, including acoustic input, speech input, or touch input.
  • the computer can interact with the user by transmitting and receiving documents from the device used by the user; for example, transmitting the web page to a web browser on the user's client in response to a request received from the web browser.
  • Embodiments of the subject matter described in the specification can be implemented in a computing system that includes a back-end component (e.g., a data server), or a middleware component (e.g., an application server), or a front-end component (e.g., a client computer with a graphical user interface or web browser through which a user can interact with an embodiment of the subject matter described herein), or any combination of one or more of the above back-end, middleware, or front-end components.
  • The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include local area networks (LANs) and wide area networks (WANs), inter-networks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks).
  • The features described in this application can be implemented on a smart television module (or a connected television module, hybrid television module, etc.).
  • the smart TV module can include processing circuitry configured to integrate more traditional television program sources (eg, program sources received via cable, satellite, air, or other signals) with Internet connectivity.
  • the smart TV module can be physically integrated into a television set or can include stand-alone devices such as set top boxes, Blu-ray or other digital media players, game consoles, hotel television systems, and other ancillary equipment.
  • the smart TV module can be configured to enable viewers to search for and find videos, movies, pictures or other content on the network, on local cable channels, on satellite television channels, or on local hard drives.
  • A set-top box (STB) or set-top unit (STU) can include an information appliance that contains a tuner and connects to a television set and an external signal source, tuning the signal into content that is then displayed on the television screen or another playback device.
  • the smart TV module can be configured to provide a home screen or a top screen including icons for a variety of different applications (eg, web browsers and multiple streaming services, connecting cable or satellite media sources, other network "channels", etc.).
  • the smart TV module can also be configured to provide electronic programming to the user.
  • a companion application of the smart television module can be run on the mobile computing device to provide the user with additional information related to the available programs, thereby enabling the user to control the smart television module and the like.
  • In alternative embodiments, this feature can be implemented on a laptop or other personal computer (PC), a smartphone, another mobile phone, a handheld computer, a tablet PC, or another computing device.
  • In the embodiments of the present invention, an operation object is tracked by a tracking component to detect an input operation of the operation object; attribute information of the input operation is determined; if the attribute information of the input operation satisfies the preset condition, a current position of the operation object represented by three-dimensional coordinates is determined; and if the current position of the operation object corresponds to a target object represented by three-dimensional contour points, the target object is processed according to the attribute information of the input operation; in this way, an interaction manner suitable for object operations in three-dimensional space can be provided.

Abstract

An operation method and apparatus applicable to a space system, and a storage medium. The method includes: tracking, by a tracking component, an operation object to detect an input operation of the operation object (S101); determining attribute information of the input operation (S102); determining whether the attribute information of the input operation satisfies a preset condition (S103); if the attribute information of the input operation satisfies the preset condition, determining a current position of the operation object represented by three-dimensional coordinates (S104); and if the current position of the operation object corresponds to a target object represented by three-dimensional contour points, processing the target object according to the attribute information of the input operation (S105).

Description

Operation method and apparatus applicable to a space system, and storage medium
Cross-Reference to Related Applications
This application is filed based on, and claims priority to, Chinese Patent Application No. 201610939290.5, filed on October 24, 2016, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to electronic technologies, and in particular to an operation method and apparatus applicable to a space system, and a storage medium.
Background
With the commercialization of three-dimensional technologies, and especially as technologies such as VR (Virtual Reality) and AR (Augmented Reality) mature, user interfaces are extending further into three dimensions. Some operations originally designed for two-dimensional interfaces such as mobile phones need to be further extended before they are applicable to three-dimensional operation. A three-dimensional system needs to let users operate freely, in a way that matches their spatial awareness and usage habits.
Existing solutions mainly reuse the drag-and-select interaction common in two-dimensional interfaces and rely on gesture-based dragging. This single manipulation mode leaves users with less choice, and less enjoyment of control, during interface operation. Moreover, when a user operates, manipulation in three-dimensional space needs to match the user's manipulation experience more closely: for example, the positions along all three axes x, y and z change, accompanied by changes in angular velocity, and reflecting these position and angular-velocity changes in three-dimensional space is one of the user's requirements for three-dimensional interaction. At the same time, although there is strong demand for gesture operation suited to three-dimensional space, two-dimensional interfaces will remain in use for a long time; therefore, how to provide a gesture-based interaction manner that accommodates both three-dimensional space and two-dimensional interfaces has become a problem to be urgently solved.
Summary
In view of this, to solve at least one problem existing in the prior art, embodiments of the present invention provide an operation method and apparatus applicable to a space system, and a storage medium, capable of providing an interaction manner suitable for object operations in three-dimensional space.
The technical solution of the embodiments of the present invention is implemented as follows:
An embodiment of the present invention provides an operation method applicable to a space system, the method including:
tracking, by a tracking component, an operation object to detect an input operation of the operation object;
determining attribute information of the input operation;
determining whether the attribute information of the input operation satisfies a preset condition;
if the attribute information of the input operation satisfies the preset condition, determining a current position of the operation object represented by three-dimensional coordinates;
if the current position of the operation object corresponds to a target object represented by three-dimensional contour points, processing the target object according to the attribute information of the input operation.
An embodiment of the present invention provides an operation device applicable to a space system, the device including a detecting unit, a first determining unit, a judging unit, a second determining unit and a processing unit, wherein:
the detecting unit is configured to track, by a tracking component, an operation object to detect an input operation of the operation object;
the first determining unit is configured to determine attribute information of the input operation;
the judging unit is configured to determine whether the attribute information of the input operation satisfies a preset condition;
the second determining unit is configured to determine a current position of the operation object represented by three-dimensional coordinates if the attribute information of the input operation satisfies the preset condition;
the processing unit is configured to process the target object according to the attribute information of the input operation if the current position of the operation object corresponds to a target object represented by three-dimensional contour points.
An embodiment of the present invention further provides a computer storage medium storing computer-executable instructions, the computer-executable instructions being used to execute the operation method applicable to a space system provided by the embodiments of the present invention.
Embodiments of the present invention provide an operation method and apparatus applicable to a space system, and a storage medium, in which an operation object is tracked by a tracking component to detect an input operation of the operation object; attribute information of the input operation is determined; it is determined whether the attribute information of the input operation satisfies a preset condition; if the attribute information of the input operation satisfies the preset condition, a current position of the operation object represented by three-dimensional coordinates is determined; and if the current position of the operation object corresponds to a target object represented by three-dimensional contour points, the target object is processed according to the attribute information of the input operation. In this way, an interaction manner suitable for object operations in three-dimensional space can be provided.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of the implementation of an operation method applicable to a space system according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a computing device according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an operation device applicable to a space system according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a hardware entity of a computing device according to an embodiment of the present invention.
Detailed Description
The technical solution of the present invention is further elaborated below with reference to the accompanying drawings and specific embodiments.
Embodiment One
An embodiment of the present invention provides an operation method applicable to a space system. The method is applied to a computing device, and the functions implemented by the method can be implemented by a processor in the computing device calling program code; the program code can be saved in a computer storage medium, so the computing device includes at least a processor and a storage medium. Here, in specific embodiments, the computing device can be any type of electronic device with information processing capability; for example, the electronic device can be a mobile phone, a tablet computer, a desktop computer, a personal digital assistant, a navigator, a digital phone, a video phone, a television set, or the like.
FIG. 1 is a schematic flowchart of the implementation of the operation method applicable to a space system according to this embodiment of the present invention. As shown in FIG. 1, the method includes:
Step S101: tracking, by a tracking component, the operation object to detect an input operation of the operation object;
Here, the operation object includes a finger, a hand, or an eyeball.
Here, the tracking component is a component of the computing device; in the process of implementation, the tracking component can be a camera.
Step S102: determining attribute information of the input operation;
Step S103: determining whether the attribute information of the input operation satisfies a preset condition;
Here, the attribute information of the input operation includes the duration of the operation object corresponding to the input operation, the posture of the input operation, an operation distance, an operation direction, an acceleration a of the operation object, and the direction of the acceleration. Step S103 mainly uses the duration of the operation object corresponding to the input operation and the posture of the input operation; correspondingly, the preset condition includes a preset time threshold or a posture of the operation object. For example, it is determined whether the duration of the input operation is greater than a preset time threshold; if it is greater, the preset condition is satisfied, and if it is smaller, the preset condition is not satisfied. As another example, if the preset condition is a gesture such as a double click, it is determined whether the posture of the input operation is a double-click gesture; if so, the preset condition is satisfied, and otherwise it is not.
Step S104: if the attribute information of the input operation satisfies the preset condition, determining a current position of the operation object represented by three-dimensional coordinates;
Step S105: if the current position of the operation object corresponds to a target object represented by three-dimensional contour points, processing the target object according to the attribute information of the input operation.
In other embodiments of the present invention, the method further includes: step S106, if the current position of the operation object does not correspond to a target object represented by three-dimensional contour points, tracking, by the tracking component, the operation object to detect an input operation of the operation object. In the process of implementation, the method further includes: outputting prompt information, the prompt information being used to indicate that no target object has currently been determined.
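As a minimal sketch tying these steps together (the tracker interface and the helpers show_prompt and process_target are hypothetical placeholders, and meets_preset_condition and find_target_object are as sketched in the section above), the overall flow of steps S101 to S106 could look like:

    def operation_loop(tracker, scene_objects):
        while True:
            frame = tracker.get_frame()                      # S101: track the operation object
            attrs = frame.input_attributes                   # S102: duration, posture, ...
            if not meets_preset_condition(attrs):            # S103: check the preset condition
                continue                                     # not satisfied: keep tracking
            pos = frame.position_3d                          # S104: current 3D position
            target = find_target_object(pos, scene_objects)  # hit test of steps S111 to S114
            if target is None:
                show_prompt("no target object determined")   # S106: prompt and keep tracking
                continue
            process_target(target, attrs)                    # S105: process the target object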
In other embodiments of the present invention, the method further includes: determining whether the current position of the operation object corresponds to a target object represented by three-dimensional contour points, and if the current position of the operation object corresponds to a target object represented by three-dimensional contour points, processing the target object according to the attribute information of the input operation. Determining whether the current position of the operation object corresponds to a target object represented by three-dimensional contour points further includes:
Step S111: using three-dimensional contour points to represent the interaction objects in the current scene;
Here, the scene can be any scene displayed on the computing device, such as a game scene; in each scene, objects are provided for interacting with the user's input operations.
Step S112: determining whether the current position of the operation object is within the range represented by the three-dimensional contour points of an interaction object;
Step S113: if the current position of the operation object is within the range represented by the three-dimensional contour points of the interaction object, determining the interaction object as the target object.
Step S114: if the current position of the operation object is not within the range represented by the three-dimensional contour points of the interaction object, determining that the current position of the operation object does not correspond to a target object represented by three-dimensional contour points.
In the process of implementation, the method further includes: determining the current position of the operation object, which further includes: determining, by the tracking component, an initial position of the operation object represented by three-dimensional coordinates; moving an operation cursor in the space system to the initial position; tracking, by the tracking component, the operation object to determine a target position of the operation object represented by three-dimensional coordinates, and moving the operation cursor to the target position; and determining the initial position or the target position of the operation object as the current position of the operation object.
In this embodiment of the present invention, processing the target object according to the attribute information of the input operation includes:
Step S141A: tracking, by the tracking component, the operation object to determine an acceleration a of the operation object and the direction of the acceleration;
Step S142A: acquiring a damping coefficient z set in the space system;
Step S143A: moving the target object in the space system according to the acceleration a, the direction of the acceleration, and the damping coefficient z.
Here, if the operation object is a hand, the input operation is a gesture. Assume first that the interaction object has a certain mass m. The computing device analyzes the acceleration a of the hand from the motion of the user's hand and, according to the formula f = k*m*a (k is a coefficient, which can be a variable), converts the acceleration into a "force" f acting on the module. In addition, the space can be assumed to have a certain damping effect with damping coefficient z. The force received by the interaction object is then F = f - m*z = k*m*a - m*z = m(ka - z); in general, k can be set to 1.
In this embodiment of the present invention, processing the target object according to the attribute information of the input operation may alternatively include:
Step S141B: projecting an operation surface of the input operation onto the target object;
Step S142B: determining an operation direction and an operation distance of the input operation;
Step S143B: moving the target object according to the operation direction and the operation distance.
Here, if the operation object is a hand, the input operation is a gesture. Given that the interaction object has a certain mass m, the user's gesture can form a virtual action surface on the module (this surface can be the cut plane of the hand's projection). Through this action surface, the hand can come into virtual contact with the module to produce pushing, patting, pulling, flicking and other effects. The action surface can be displayed on the module so as to give the user precise feedback. It should be noted that the case where the operation object is an eyeball is similar to the case where the operation object is a hand.
In this embodiment of the present invention, an operation object is tracked by a tracking component to detect an input operation of the operation object; attribute information of the input operation is determined; it is determined whether the attribute information of the input operation satisfies a preset condition; if the attribute information of the input operation satisfies the preset condition, a current position of the operation object represented by three-dimensional coordinates is determined; and if the current position of the operation object corresponds to a target object represented by three-dimensional contour points, the target object is processed according to the attribute information of the input operation. In this way, an interaction manner suitable for object operations in three-dimensional space can be provided.
Embodiment Two
This proposal provides an operation method suitable for a space system; the method achieves natural human-machine interaction in three-dimensional space based on natural gestures and eye tracking, and includes the following steps:
Step S201: the system of the three-dimensional interface in which the user is located is given three-dimensional coordinates, that is, each spatial point has specific x, y and z coordinate values, and non-coincident points have different values.
Here, the digitization can be user-centered, or a coordinate system can be set independently.
Step S202: the coordinates of all operation objects are digitized (setting the center of gravity to (x1, y1, z1), together with the contour values of the entire object);
Step S203: the user selects the object to be operated through natural interaction manners, including but not limited to gestures, eye tracking (eye tracker, i.e., determining the selected object by the focus point of the eyes), finger curvature, and the like.
Here, in the process of implementation, as a first example, the cursor is moved among the interaction objects in the three-dimensional interface by tracking the motion of the user's hand: when the hand swings left and right, the cursor moves left and right; when the hand swings back and forth, the cursor moves back and forth; and when the hand swings up and down, the cursor moves up and down. After the cursor is moved onto the target object, the selection of the object is confirmed by a gesture such as a double click or by dwell time. As a second example, eye-tracking technology is used to track the user's eyeball and pupil to determine the interaction object in three-dimensional space that the user needs to select, and the selection of the object is confirmed by a gesture such as a double click or by dwell time. As a third example, the position of the cursor in the x-y directions in three-dimensional space is selected by the position of the hand, and the movement of the cursor forward and backward (along the z axis) is judged by the curvature of the hand or fingers; the interaction object in three-dimensional space that the user needs to select is thereby determined, and its selection is confirmed by a gesture such as a double click or by dwell time.
Step S204: the user swings a gesture. Given that the interaction object has a certain mass m (this mass can be related to the volume of the interaction object), the acceleration a is analyzed from the motion of the user's hand and converted into a "force" acting on the interaction object, f = k*m*a (k is a coefficient, which can be a variable). In addition, the space can be assumed to have a certain damping effect with damping coefficient z. The force received by the interaction object is then F = f - m*z = k*m*a - m*z = m(ka - z); in general, k can be set to 1.
Step S205: according to the gesture, the motion of the hand is projected onto the interaction object, and according to the moving direction of the hand, an action surface is formed by the hand on the interaction object (this surface can be the cut plane of the hand's projection), respectively producing pushing, patting, pulling, flicking and other effects. The action surface can be displayed on the interaction object so as to give the user precise feedback.
Step S206: depending on the position of the action surface, the interaction object can be pushed in pure translation, or can rotate about its center of gravity while advancing in the direction of the applied force, consistent with behavior in a force field (possibly without gravity). Its movement direction, velocity and acceleration, and its rotation direction, angular velocity and angular acceleration, are related to the position of the action surface, m, a, the shape of the interaction object, and the damping coefficient z.
Step S207: following these rules, the user can fairly freely control the motion of the interaction objects in the interface space.
Step S208: at the same time, this system and method are suitable for two-dimensional interfaces, that is, the field of view is reduced by one dimension and the entire activity is projected onto a plane.
FIG. 2 is a schematic structural diagram of a computing device according to an embodiment of the present invention. As shown in FIG. 2, the computing device includes: a spatial three-dimensional system module 201, an identification and tracking system 202, an execution module 203 and a display module 204, wherein:
the spatial three-dimensional system module 201 is configured to structure and coordinate the user and all interface elements, determining the spatial positions and mutual relationships of the person and each module;
the identification and tracking system 202 is configured to analyze the user's operational intent with respect to the spatial interface by tracking the output of the user's various natural interactions, and to transmit this information to the execution module;
the execution module 203 is configured to generate movement commands according to the identification and tracking system, and to deliver the process and results to the display module;
the display module 204 is configured to display these results in the entire spatial three-dimensional system.
In the process of implementation, the hardware portion of the identification and tracking system can be implemented using a tracking component such as a camera, and the display module can be implemented using the display screen of the computing device. The spatial three-dimensional system module, the software portion of the identification and tracking system, and the execution module can together constitute the device of Embodiment Three of the present invention, that is, they are implemented by a processor in the computing device. It can be seen from the above embodiments that the user's gesture can form a virtual action surface on the interaction object (this surface can be the cut plane of the hand's projection); through this action surface, the hand can come into virtual contact with the module to produce pushing, patting, pulling, flicking and other effects. The action surface can be displayed on the module so as to give the user precise feedback.
Embodiment Three
Based on the foregoing embodiments, an embodiment of the present invention provides an operation device applicable to a space system. Each unit included in the device, and each module included in each unit, can be implemented by a processor in a computing device, or by a specific logic circuit. In the process of implementation, the processor can be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), a field programmable gate array (FPGA), or the like.
FIG. 3 is a schematic structural diagram of the operation device applicable to a space system according to this embodiment of the present invention. As shown in FIG. 3, the device 300 includes a detecting unit 301, a first determining unit 302, a judging unit 303, a second determining unit 304 and a processing unit 305, wherein:
the detecting unit 301 is configured to track, by a tracking component, an operation object to detect an input operation of the operation object;
the first determining unit 302 is configured to determine attribute information of the input operation;
the judging unit 303 is configured to determine whether the attribute information of the input operation satisfies a preset condition;
the second determining unit 304 is configured to determine a current position of the operation object represented by three-dimensional coordinates if the attribute information of the input operation satisfies the preset condition;
the processing unit 305 is configured to process the target object according to the attribute information of the input operation if the current position of the operation object corresponds to a target object represented by three-dimensional contour points.
In other embodiments of the present invention, the attribute information of the input operation includes the duration of the operation object corresponding to the input operation, the posture of the input operation, an operation distance, an operation direction, an acceleration a of the operation object, and the direction of the acceleration.
In other embodiments of the present invention, the processing unit is configured to trigger the detecting unit to track, by the tracking component, the operation object to detect an input operation of the operation object if the current position of the operation object does not correspond to a target object represented by three-dimensional contour points.
In other embodiments of the present invention, the processing unit includes a representation module, a judging module, a first determining module and a processing module, wherein:
the representation module is configured to use three-dimensional contour points to represent the interaction objects in the current scene;
the judging module is configured to determine whether the current position of the operation object is within the range represented by the three-dimensional contour points of an interaction object;
the first determining module is configured to determine the interaction object as the target object if the current position of the operation object is within the range represented by the three-dimensional contour points of the interaction object;
the processing module is configured to process the target object according to the attribute information of the input operation.
In other embodiments of the present invention, the device further includes a third determining unit, configured to determine that the current position of the operation object does not correspond to a target object represented by three-dimensional contour points if the current position of the operation object is not within the range represented by the three-dimensional contour points of the interaction object.
In other embodiments of the present invention, the device further includes a fourth determining unit, which includes a second determining module, a first moving module, a third determining module and a fourth determining module, wherein:
the second determining module is configured to determine, by the tracking component, an initial position of the operation object represented by three-dimensional coordinates;
the first moving module is configured to move an operation cursor in the space system to the initial position;
the third determining module is configured to track, by the tracking component, the operation object to determine a target position of the operation object represented by three-dimensional coordinates, and to move the operation cursor to the target position;
the fourth determining module is configured to determine the initial position or the target position of the operation object as the current position of the operation object.
In other embodiments of the present invention, the processing unit includes a fifth determining module, an acquiring module and a second moving module, wherein:
the fifth determining module is configured to track, by the tracking component, the operation object to determine an acceleration a of the operation object and the direction of the acceleration, if the current position of the operation object corresponds to a target object represented by three-dimensional contour points;
the acquiring module is configured to acquire a damping coefficient z set in the space system;
the second moving module is configured to move the target object in the space system according to the acceleration a, the direction of the acceleration, and the damping coefficient z.
In other embodiments of the present invention, the processing unit includes a projection module, a sixth determining module and a third moving module, wherein:
the projection module is configured to project an operation surface of the input operation onto the target object if the current position of the operation object corresponds to a target object represented by three-dimensional contour points;
the sixth determining module is configured to determine an operation direction and an operation distance of the input operation;
the third moving module is configured to move the target object according to the operation direction and the operation distance.
In other embodiments of the present invention, the operation object includes a finger, a hand, or an eyeball.
It should be pointed out here that the above description of the device embodiment is similar to the description of the method embodiment and has beneficial effects similar to those of the method embodiment. For technical details not disclosed in the device embodiment of the present invention, refer to the description of the method embodiment of the present invention.
In the embodiments of the present invention, if the above operation method applicable to a space system is implemented in the form of a software function module and sold or used as a stand-alone product, it can also be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the embodiments of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes a number of instructions for causing a computer device (which can be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the embodiments of the present invention. The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disc. Thus, the embodiments of the present invention are not limited to any specific combination of hardware and software.
An embodiment of the present invention provides a computer storage medium storing computer-executable instructions, the computer-executable instructions being used to execute the operation method applicable to a space system provided by the embodiments of the present invention.
It should be noted that the computing device can be implemented as a computing device such as a computer or a server. FIG. 4 is a schematic diagram of a hardware entity of the computing device in an embodiment of the present invention. As shown in FIG. 4, the hardware entity of the computing device 400 includes a processor 401, a communication interface 402 and a memory 403, wherein:
the processor 401 typically controls the overall operation of the computing device 400;
the communication interface 402 enables the computing device to communicate with other terminals or servers over a network;
the memory 403 is configured to store instructions and applications executable by the processor 401, and can also cache data to be processed or already processed by the processor 401 and by the modules in the computing device 400 (e.g., image data, audio data, voice communication data and video communication data); it can be implemented by flash memory (FLASH) or random access memory (RAM).
It should be noted that the implementations and operations of the subject matter described in the specification can be realized in digital electronic circuitry, or in computer software, firmware or hardware, including the structures disclosed in this specification and their structural equivalents, or combinations of one or more of them. Implementations of the subject matter described in the specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on one or more computer storage media for execution by, or to control the operation of, a data processing apparatus. Alternatively or additionally, the computer instructions can be encoded on an artificially generated propagated signal (e.g., a machine-generated electrical, optical or electromagnetic signal) that is generated to encode information for transmission to a suitable receiver apparatus for execution by a data processing apparatus. A computer storage medium can be, or be included in, a computer-readable storage device, a computer-readable storage carrier, a random or sequential access memory array or device, or a combination of one or more of them. Moreover, while a computer storage medium is not a propagated signal, a computer storage medium can be a source or destination of computer program instructions encoded in an artificially generated propagated signal. A computer storage medium can also be, or be included in, one or more separate components or media (e.g., multiple CDs, disks or other storage devices). Thus, the computer storage medium can be tangible.
The operations described in the specification can be implemented as operations performed by a data processing apparatus on data stored on one or more computer-readable storage devices or received from other sources.
The term "client" or "server" includes all kinds of apparatus, devices and machines for processing data, including, for example, a programmable processor, a computer, a system on a chip, or multiples or combinations of the foregoing. The apparatus can include special-purpose logic circuitry, e.g., a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC). In addition to hardware, the apparatus can also include code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, a virtual machine, or a combination of one or more of them. The apparatus and execution environment can realize various different computing model infrastructures, such as web services, distributed computing and grid computing infrastructures.
A computer program (also known as a program, software, software application, script or code) can be written in any form of programming language (including assembly or interpreted languages and declarative or procedural languages), and can be deployed in any form (including as a stand-alone program or as a module, component, subroutine, object or other unit suitable for use in a computing environment). A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs or portions of code). A computer program can be deployed to be executed on one or more computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in the specification can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output. The processes and logic flows can also be performed by special-purpose logic circuitry, and the apparatus can also be implemented as special-purpose logic circuitry, e.g., an FPGA or an ASIC.
Processors suitable for the execution of a computer program include, for example, both general-purpose and special-purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memories for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, one or more mass storage devices for storing data (e.g., magnetic disks, magneto-optical disks or optical disks). However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile phone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a global positioning system (GPS) receiver or a portable storage device (e.g., a universal serial bus (USB) flash drive), to name just a few. Devices suitable for storing computer program instructions and data include all forms of non-volatile memory, media and storage devices, including, for example, semiconductor storage devices (e.g., EPROM, EEPROM and flash memory devices), magnetic disks (e.g., internal hard disks or removable disks), magneto-optical disks, and CD-ROM and DVD-ROM discs. The processor and the memory can be supplemented by, or incorporated in, special-purpose logic circuitry.
To provide for interaction with a user, implementations of the subject matter described in the specification can be implemented on a computer that includes a display device, a keyboard and a pointing device (e.g., a mouse or trackball, or a touch screen or touch pad). The display device is, for example, a cathode ray tube (CRT), a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, a thin-film transistor (TFT) display, a plasma display, another flexible configuration, or any other monitor for displaying information to the user. The user can provide input to the computer through the keyboard and the pointing device. Other kinds of devices can also be used to provide for interaction with the user; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback or haptic feedback, and input from the user can be received in any form, including acoustic input, speech input or touch input. In addition, the computer can interact with the user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on the user's client in response to requests received from the web browser.
Implementations of the subject matter described in the specification can be implemented in a computing system that includes a back-end component (e.g., a data server), or a middleware component (e.g., an application server), or a front-end component (e.g., a client computer with a graphical user interface or web browser through which a user can interact with an implementation of the subject matter described in this application), or any combination of one or more of the above back-end, middleware or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include local area networks (LANs) and wide area networks (WANs), inter-networks (e.g., the Internet) and peer-to-peer networks (e.g., ad hoc peer-to-peer networks).
The features described in this application can be implemented on a smart television module (or a connected television module, hybrid television module, etc.). The smart television module can include processing circuitry configured to integrate Internet connectivity with more traditional television program sources (e.g., program sources received via cable, satellite, over the air or other signals). The smart television module can be physically integrated into a television set, or can comprise a separate device such as a set-top box, a Blu-ray or other digital media player, a game console, a hotel television system or other companion device. The smart television module can be configured to enable viewers to search for and find videos, movies, pictures or other content on the web, on a local cable channel, on a satellite television channel or stored on a local hard drive. A set-top box (STB) or set-top unit (STU) can include an information appliance that contains a tuner and connects to a television set and an external signal source, tuning the signal into content that is then displayed on the television screen or another playback device. The smart television module can be configured to provide a home screen or top-level screen including icons for a variety of different applications (e.g., a web browser and a number of streaming media services, connected cable or satellite media sources, other web "channels", etc.). The smart television module can also be configured to provide electronic programming to the user. A companion application of the smart television module can run on a mobile computing device to provide the user with additional information about available programs and to enable the user to control the smart television module, etc. In alternative embodiments, the features can be implemented on a laptop or other personal computer (PC), a smartphone, another mobile phone, a handheld computer, a tablet PC or other computing device.
While the specification contains many specific implementation details, these should not be construed as limiting the scope of any claims, but rather as descriptions of features specific to particular implementations. Certain features described in the specification in the context of separate implementations can also be implemented in combination in a single implementation. Conversely, various features described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features of a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or a variation of a sub-combination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing can be advantageous. Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Thus, particular implementations of the subject matter have been described. Other implementations are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking or parallel processing can be used.
Industrial Applicability
In the embodiments of the present invention, an operation object is tracked by a tracking component to detect an input operation of the operation object; attribute information of the input operation is determined; if the attribute information of the input operation satisfies the preset condition, a current position of the operation object represented by three-dimensional coordinates is determined; and if the current position of the operation object corresponds to a target object represented by three-dimensional contour points, the target object is processed according to the attribute information of the input operation. In this way, an interaction manner suitable for object operations in three-dimensional space can be provided.

Claims (11)

  1. An operation method applicable to a space system, the method comprising:
    tracking, by a tracking component, an operation object to detect an input operation of the operation object;
    determining attribute information of the input operation;
    determining whether the attribute information of the input operation satisfies a preset condition;
    if the attribute information of the input operation satisfies the preset condition, determining a current position of the operation object represented by three-dimensional coordinates;
    if the current position of the operation object corresponds to a target object represented by three-dimensional contour points, processing the target object according to the attribute information of the input operation.
  2. The method according to claim 1, wherein the attribute information of the input operation comprises a duration of the operation object corresponding to the input operation, a posture of the input operation, an operation distance, an operation direction, an acceleration a of the operation object, and a direction of the acceleration.
  3. The method according to claim 1, further comprising:
    if the current position of the operation object does not correspond to a target object represented by three-dimensional contour points, tracking, by the tracking component, the operation object to detect an input operation of the operation object.
  4. The method according to any one of claims 1 to 3, wherein, if the current position of the operation object corresponds to a target object represented by three-dimensional contour points, processing the target object according to the attribute information of the input operation comprises:
    using three-dimensional contour points to represent interaction objects in a current scene;
    determining whether the current position of the operation object is within a range represented by the three-dimensional contour points of an interaction object;
    if the current position of the operation object is within the range represented by the three-dimensional contour points of the interaction object, determining the interaction object as the target object;
    processing the target object according to the attribute information of the input operation.
  5. The method according to claim 4, further comprising:
    if the current position of the operation object is not within the range represented by the three-dimensional contour points of the interaction object, determining that the current position of the operation object does not correspond to a target object represented by three-dimensional contour points;
    tracking, by the tracking component, the operation object to detect an input operation of the operation object.
  6. The method according to any one of claims 1 to 3, wherein determining the current position of the operation object comprises:
    determining, by the tracking component, an initial position of the operation object represented by three-dimensional coordinates;
    moving an operation cursor in the space system to the initial position;
    tracking, by the tracking component, the operation object to determine a target position of the operation object represented by three-dimensional coordinates, and moving the operation cursor to the target position;
    determining the initial position or the target position of the operation object as the current position of the operation object.
  7. The method according to any one of claims 1 to 3, wherein processing the target object according to the attribute information of the input operation comprises:
    tracking, by the tracking component, the operation object to determine an acceleration a of the operation object and a direction of the acceleration;
    acquiring a damping coefficient z set in the space system;
    moving the target object in the space system according to the acceleration a, the direction of the acceleration, and the damping coefficient z.
  8. The method according to any one of claims 1 to 3, wherein processing the target object according to the attribute information of the input operation comprises:
    projecting an operation surface of the input operation onto the target object;
    determining an operation direction and an operation distance of the input operation;
    moving the target object according to the operation direction and the operation distance.
  9. The method according to any one of claims 1 to 3, wherein the operation object comprises a finger, a hand, or an eyeball.
  10. An operation device applicable to a space system, the device comprising a detecting unit, a first determining unit, a judging unit, a second determining unit, and a processing unit, wherein:
    the detecting unit is configured to track, by a tracking component, an operation object to detect an input operation of the operation object;
    the first determining unit is configured to determine attribute information of the input operation;
    the judging unit is configured to determine whether the attribute information of the input operation satisfies a preset condition;
    the second determining unit is configured to determine a current position of the operation object represented by three-dimensional coordinates if the attribute information of the input operation satisfies the preset condition;
    the processing unit is configured to process the target object according to the attribute information of the input operation if the current position of the operation object corresponds to a target object represented by three-dimensional contour points.
  11. A computer storage medium storing computer-executable instructions, the computer-executable instructions being used to execute the operation method applicable to a space system according to any one of claims 1 to 9.
PCT/CN2017/100230 2016-10-24 2017-09-01 Operation method and apparatus applicable to a space system, and storage medium WO2018076927A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US16/314,425 US20190325657A1 (en) 2016-10-24 2017-09-01 Operating method and device applicable to space system, and storage medium
JP2019503449A JP2019527435A (ja) 2016-10-24 2017-09-01 空間システムに適用できる操作方法及び装置、記憶媒体
EP17866241.7A EP3470959A4 (en) 2016-10-24 2017-09-01 OPERATING METHOD AND DEVICE APPLICABLE TO A SPACE SYSTEM, AND STORAGE MEDIUM

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610939290.5 2016-10-24
CN201610939290.5A CN107977071B (zh) 2016-10-24 2016-10-24 一种适用于空间系统的操作方法及装置

Publications (1)

Publication Number Publication Date
WO2018076927A1 true WO2018076927A1 (zh) 2018-05-03

Family

ID=62004085

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/100230 WO2018076927A1 (zh) 2016-10-24 2017-09-01 一种适用于空间系统的操作方法及装置、存储介质

Country Status (5)

Country Link
US (1) US20190325657A1 (zh)
EP (1) EP3470959A4 (zh)
JP (1) JP2019527435A (zh)
CN (1) CN107977071B (zh)
WO (1) WO2018076927A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102982557A (zh) * 2012-11-06 2013-03-20 桂林电子科技大学 基于深度相机的空间手势姿态指令处理方法
CN103064514A (zh) * 2012-12-13 2013-04-24 航天科工仿真技术有限责任公司 沉浸式虚拟现实系统中的空间菜单的实现方法
US20140125678A1 (en) * 2012-07-11 2014-05-08 GeriJoy Inc. Virtual Companion
CN104423578A (zh) * 2013-08-25 2015-03-18 何安莉 交互式输入系统和方法

Family Cites Families (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2558984B2 (ja) * 1991-03-12 1996-11-27 松下電器産業株式会社 Three-dimensional information conversation system
US6176782B1 (en) * 1997-12-22 2001-01-23 Philips Electronics North America Corp. Motion-based command generation technology
US6147674A (en) * 1995-12-01 2000-11-14 Immersion Corporation Method and apparatus for designing force sensations in force feedback computer applications
US6972734B1 (en) * 1999-06-11 2005-12-06 Canon Kabushiki Kaisha Mixed reality apparatus and mixed reality presentation method
CN201025527Y (zh) * 2007-03-02 2008-02-20 吴常熙 Integrated input device
US20080266323A1 (en) * 2007-04-25 2008-10-30 Board Of Trustees Of Michigan State University Augmented reality user interaction system
US8321075B2 (en) * 2008-02-25 2012-11-27 Sri International Mitigating effects of biodynamic feedthrough on an electronic control device
US20110066406A1 (en) * 2009-09-15 2011-03-17 Chung Yuan Christian University Method for Generating Real-Time Haptic Response Information for a Haptic Simulating Device
US8457353B2 (en) * 2010-05-18 2013-06-04 Microsoft Corporation Gestures and gesture modifiers for manipulating a user-interface
JP5597837B2 (ja) * 2010-09-08 2014-10-01 株式会社バンダイナムコゲームス Program, information storage medium, and image generation device
JP5738569B2 (ja) * 2010-10-15 2015-06-24 任天堂株式会社 Image processing program, apparatus, system, and method
US9201467B2 (en) * 2011-01-26 2015-12-01 Sony Corporation Portable terminal having user interface function, display method, and computer program
US9857868B2 (en) * 2011-03-19 2018-01-02 The Board Of Trustees Of The Leland Stanford Junior University Method and system for ergonomic touch-free interface
JP5840399B2 (ja) * 2011-06-24 2016-01-06 株式会社東芝 Information processing apparatus
JP2013069224A (ja) * 2011-09-26 2013-04-18 ソニー株式会社 Motion recognition device, motion recognition method, operation device, electronic apparatus, and program
JP6360050B2 (ja) * 2012-07-13 2018-07-18 ソフトキネティック ソフトウェア Method and system for human-computer gesture-based simultaneous interaction using singular points of interest on the hand
US8970479B1 (en) * 2012-07-31 2015-03-03 Rawles Llc Hand gesture detection
CN202854704U (zh) * 2012-08-20 2013-04-03 深圳市维尚视界立体显示技术有限公司 Human-computer interaction device for 3D display
US9552673B2 (en) * 2012-10-17 2017-01-24 Microsoft Technology Licensing, Llc Grasping virtual objects in augmented reality
US9459697B2 (en) * 2013-01-15 2016-10-04 Leap Motion, Inc. Dynamic, free-space user interactions for machine control
CN113568506A (zh) * 2013-01-15 2021-10-29 超级触觉资讯处理有限公司 Dynamic user interactions for display control and customized gesture interpretation
US20150277699A1 (en) * 2013-04-02 2015-10-01 Cherif Atia Algreatly Interaction method for optical head-mounted display
US20140354602A1 (en) * 2013-04-12 2014-12-04 Impression.Pi, Inc. Interactive input system and method
US9245388B2 (en) * 2013-05-13 2016-01-26 Microsoft Technology Licensing, Llc Interactions of virtual objects with surfaces
US9645654B2 (en) * 2013-12-04 2017-05-09 Leap Motion, Inc. Initializing predictive information for free space gesture control and communication
US9978202B2 (en) * 2014-02-14 2018-05-22 Igt Canada Solutions Ulc Wagering gaming apparatus for detecting user interaction with game components in a three-dimensional display
JP6282188B2 (ja) * 2014-07-04 2018-02-21 クラリオン株式会社 Information processing apparatus
US9989352B2 (en) * 2015-03-20 2018-06-05 Chuck Coleman Playing surface collision detection system
WO2017024142A1 (en) * 2015-08-04 2017-02-09 Google Inc. Input via context sensitive collisions of hands with objects in virtual reality

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140125678A1 (en) * 2012-07-11 2014-05-08 GeriJoy Inc. Virtual Companion
CN102982557A (zh) * 2012-11-06 2013-03-20 桂林电子科技大学 基于深度相机的空间手势姿态指令处理方法
CN103064514A (zh) * 2012-12-13 2013-04-24 航天科工仿真技术有限责任公司 沉浸式虚拟现实系统中的空间菜单的实现方法
CN104423578A (zh) * 2013-08-25 2015-03-18 何安莉 交互式输入系统和方法

Also Published As

Publication number Publication date
EP3470959A1 (en) 2019-04-17
JP2019527435A (ja) 2019-09-26
CN107977071B (zh) 2020-02-28
EP3470959A4 (en) 2019-08-14
CN107977071A (zh) 2018-05-01
US20190325657A1 (en) 2019-10-24

Similar Documents

Publication Publication Date Title
KR102278822B1 (ko) Performing virtual reality input
US10606609B2 (en) Context-based discovery of applications
CN109313812B (zh) Shared experiences with contextual enhancement
KR101925658B1 (ko) Volumetric video representation techniques
US9672660B2 (en) Offloading augmented reality processing
US10803642B2 (en) Collaborative virtual reality anti-nausea and video streaming techniques
US20170084084A1 (en) Mapping of user interaction within a virtual reality environment
AU2018203946B2 (en) Collaborative review of virtual reality video
US20150185825A1 (en) Assigning a virtual user interface to a physical object
US20150187137A1 (en) Physical object discovery
US20160034058A1 (en) Mobile Device Input Controller For Secondary Display
EP3286601B1 (en) A method and apparatus for displaying a virtual object in three-dimensional (3d) space
US10846901B2 (en) Conversion of 2D diagrams to 3D rich immersive content
WO2018076927A1 (zh) 一种适用于空间系统的操作方法及装置、存储介质
US11695843B2 (en) User advanced media presentations on mobile devices using multiple different social media apps
GB2566142A (en) Collaborative virtual reality anti-nausea and video streaming techniques
CN113039508A Evaluating alignment of inputs and outputs of a virtual environment
Budhiraja Interaction Techniques using Head Mounted Displays and Handheld Devices for Outdoor Augmented Reality
CN117075770A Interaction control method and apparatus based on extended reality, electronic device, and storage medium
Yamanaka et al. Web-and mobile-based environment for designing and presenting spatial audiovisual content

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17866241

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2017866241

Country of ref document: EP

Effective date: 20190109

ENP Entry into the national phase

Ref document number: 2019503449

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE