WO2020154971A1 - Electronic device and control method therefor - Google Patents

Electronic device and control method therefor

Info

Publication number
WO2020154971A1
WO2020154971A1 (PCT/CN2019/073976)
Authority
WO
WIPO (PCT)
Prior art keywords
graphic object
active area
gesture
operating environment
user
Prior art date
Application number
PCT/CN2019/073976
Other languages
French (fr)
Inventor
Xiao Song ZHANG
Original Assignee
Siemens Aktiengesellschaft
Siemens Ltd., China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siemens Aktiengesellschaft and Siemens Ltd., China
Priority to PCT/CN2019/073976
Publication of WO2020154971A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/147Digital output to display device ; Cooperation and interconnection of the display device with other functional units using display panels

Definitions

  • the present invention relates to an electronic device and a control method therefor.
  • Augmented reality (AR) technology may realize real-time determination of shooting positions and angles of a camera and real-time addition of a graphic object to an image captured by the camera.
  • Electronic devices such as Microsoft’s HoloLens may realize such AR technology.
  • the user may view a graphical operating interface provided by HoloLens and may control a cursor in the graphical operating interface by moving the head.
  • the present invention is intended to address the foregoing and/or other problems and provide an electronic device and a control method therefor.
  • an electronic device includes: a control unit and an operating environment generation unit configured to generate an operating environment for a user to operate the electronic device, where the operating environment generation unit includes: a tag generation unit configured to generate a tag in the operating environment; a graphic object generation unit configured to generate a graphic object in the operating environment; an active area generation unit configured to generate an active area in the operating environment corresponding to the graphic object, where an area in the operating environment occupied by the active area overlaps an area in the operating environment occupied by the graphic object corresponding to the active area, where the control unit is configured to activate the graphic object corresponding to the active area when the tag in the operating environment is positioned in the occupied area of the active area in the operating environment and outside the occupied area of the graphic object corresponding to the active area in the operating environment. Therefore, the graphic object may be activated even if the user is unable to accurately control the tag to be positioned at the graphic object he/she desires to activate.
  • the electronic device further includes a display unit configured to display the operating environment generated by the operating environment generation unit, the tag generated by the tag generation unit, and the graphic object generated by the graphic object generation unit.
  • the display unit may not display the active area. In other words, the active area may be invisible to the user.
  • the active area generation unit is configured to generate the active area corresponding to the graphic object as one that includes a main active area portion and a peripheral active area portion, where the main active area portion has a shape the same as that of the corresponding graphic object, and an area in the operating environment occupied by the main active area portion overlaps an area in the operating environment occupied by the corresponding graphic object, and the area in the operating environment occupied by the peripheral active area portion is at the periphery of the area in the operating environment occupied by the main active area portion.
  • the control unit is configured to determine whether the graphic object is in a to-be-activated state according to an operating logic of the operating environment, and control the active area generation unit to generate the peripheral active area portion for the graphic object in the to-be-activated state upon determining that the graphic object is in the to-be-activated state. Accordingly, generation of the active area, e.g., the peripheral active area portion may be controlled according to the operating logic. Therefore, it is possible to avoid faulty operations caused by the peripheral active area portion while saving computing power.
  • the electronic device further includes a gesture sensing unit configured to sense a gesture of the user operating the electronic device and send gesture information on the sensed gesture of the user to the control unit.
  • the control unit is configured to determine the gesture of the user according to the gesture information sensed by the gesture sensing unit and determine a position of the tag in the operating environment according to the gesture of the user.
  • in the case that the position of the tag in the operating environment is in an area of the graphic object in the operating environment and thus the graphic object is activated, when the control unit determines that the gesture of the user is a deactivation gesture according to the gesture information sensed by the gesture sensing unit, the control unit deactivates the graphic object, where the deactivation gesture includes a first motion performed by the user within a time less than a first predetermined time with an amplitude such that the position of the tag in the operating environment is changed from a position inside the area of the graphic object in the operating environment to a position outside the area of the graphic object in the operating environment. Therefore, the graphic object may be deactivated even if the gesture of the user is such that the tag leaves the graphic object but remains in the peripheral active area portion corresponding to the graphic object.
  • the graphic object generation unit is configured to generate a first graphic object and a second graphic object adjacent to each other in the operating environment, where in the case that the first graphic object is activated, when the control unit determines that the gesture of the user is an object switching gesture according to the gesture information sensed by the gesture sensing unit, the control unit deactivates the first graphic object and activates the second graphic object, where the object switching gesture includes a second motion performed by the user within a time less than a second predetermined time in a direction towards the second graphic object with an amplitude greater than a predetermined amplitude and a third motion performed immediately after the second motion within a time less than a third predetermined time in a direction opposed to the direction of the second motion with an amplitude the same as the amplitude of the second motion.
  • in the case that the first graphic object is activated, when the control unit determines that the gesture of the user is a gesture other than the object switching gesture according to the gesture information sensed by the gesture sensing unit, the control unit maintains the activation of the first graphic object.
  • the electronic device is augmented reality (AR) equipment.
  • the electronic device includes a body including the operating environment generation unit and the control unit; and a head-mounted component which is configured to house the body and may be worn on a head of the user operating the electronic device.
  • a method for controlling an electronic device including: generating, in an operating environment for a user to operate the electronic device, an active area corresponding to a graphic object in the operating environment, where the active area overlaps the graphic object corresponding to the active area; and activating the graphic object corresponding to the active area when a tag is positioned in the active area and outside the graphic object corresponding to the active area.
  • the step of generating the active area includes generating the active area corresponding to the graphic object as one that includes a main active area portion and a peripheral active area portion, where the main active area portion has a shape the same as that of the corresponding graphic object, and an area in the operating environment occupied by the main active area portion overlaps an area in the operating environment occupied by the corresponding graphic object, and the area in the operating environment occupied by the peripheral active area portion is at the periphery of the area in the operating environment occupied by the main active area portion.
  • the step of generating the active area includes determining whether the graphic object is in a to-be-activated state according to an operating logic of the operating environment, and generating the peripheral active area portion for the graphic object in the to-be-activated state upon determining that the graphic object is in the to-be-activated state.
  • the method further includes: sensing a gesture of the user operating the electronic device to obtain gesture information on the sensed gesture; and determining the gesture of the user according to the gesture information and determining a position of the tag in the operating environment according to the gesture of the user.
  • the method further includes: in the case that the position of the tag in the operating environment is in an area of the graphic object in the operating environment and thus the graphic object is activated, when the gesture of the user is determined to be a deactivation gesture according to the gesture information sensed by the gesture sensing unit, deactivating the graphic object, where the deactivation gesture includes a first motion performed by the user within a time less than a first predetermined time with an amplitude such that the position of the tag in the operating environment is changed from a position inside the area of the graphic object in the operating environment to a position outside the area of the graphic object in the operating environment.
  • the step of generating the graphic object includes generating a first graphic object and a second graphic object adjacent to each other in the operating environment; and the method further includes: in the case that the first graphic object is activated, when the gesture of the user is determined to be an object switching gesture according to the gesture information sensed by the gesture sensing unit, deactivating the first graphic object and activating the second graphic object, where the object switching gesture includes a second motion performed by the user within a time less than a second predetermined time in a direction towards the second graphic object with an amplitude greater than a predetermined amplitude and a third motion performed immediately after the second motion within a time less than a third predetermined time in a direction opposed to the direction of the second motion with an amplitude the same as the amplitude of the second motion.
  • the method further includes generating the operating environment for the user to operate the electronic device; generating the tag in the operating environment; and generating the graphic object in the operating environment.
  • an electronic equipment including at least one processor; and a memory connected to the at least one processor, the memory having instructions stored therein which when executed by the at least one processor cause the electronic equipment to perform the method as described above.
  • a non-transitory machine readable medium where the non-transitory machine readable medium has computer executable instructions stored thereon which when executed cause at least one processor to perform the method as described above.
  • a computer program includes computer executable instructions which when executed cause at least one processor to perform the method as described above.
  • the graphic object may be activated even if the user is unable to accurately control the tag to be positioned at the graphic object he/she desires to activate.
  • FIG. 1 is a schematic diagram of an electronic device according to an exemplary embodiment
  • FIG. 2 is a schematic diagram of operations of an electronic device according to an exemplary embodiment
  • FIG. 3 is a schematic diagram of operations of an electronic device according to an exemplary embodiment
  • FIG. 4 is a flowchart illustrating a method for controlling an electronic device according to an exemplary embodiment
  • FIG. 5 is a block diagram of an electronic equipment according to an exemplary embodiment.
  • FIG. 1 is a schematic diagram of an electronic device according to an exemplary embodiment. As shown in FIG. 1, the electronic device according to the exemplary embodiment includes a control unit 100 and an operating environment generation unit 300.
  • the control unit 100 may be implemented to include a general-purpose or special-purpose processing device, e.g., a central processing unit (CPU) , a programmable logic controller (PLC) , etc.
  • the control unit 100 may implement functions and operations that will be described in detail below by execution of particular programs or codes.
  • the operating environment generation unit 300 may generate an operating environment for a user to operate the electronic device, e.g., a virtual operating environment. Such operating environment may provide a virtual operating space for the user, may include a preset operating logic, and may incorporate functions of the electronic device. Therefore, the user may control and use the electronic device by performing operations in such operating environment.
  • the operating environment generation unit 300 may be implemented to include a general-purpose or special-purpose processing device, e.g., a central processing unit (CPU) , a graphics processing unit (GPU) , a programmable logic controller (PLC) , etc.
  • the operating environment generation unit 300 may be implemented with the control unit 100 by one or more general-purpose or special-purpose processing devices.
  • the operating environment generation unit 300 may implement functions and operations that will be described in detail below by execution of particular programs or codes.
  • the electronic device may be implemented as an augmented reality (AR) device.
  • the augmented reality device may include a camera (not shown) to capture surroundings around the user and may include a display unit 500 (which will be described in detail below) to display the captured surroundings in real time.
  • the augmented reality device may display the operating environment generated by the operating environment generation unit 300 and the captured surroundings together on the display unit 500.
  • the electronic device may be implemented as a wearable electronic device.
  • the electronic device may be worn by the user on his/her head.
  • the electronic device may include a body 10 and a head-mounted component 30.
  • the body 10 may include the control unit 100, the operating environment generation unit 300, and/or the display unit 500.
  • the head-mounted component 30 may house the body and may be worn on a head of the user operating the electronic device.
  • the head-mounted component 30 may include a headband that may encircle the head.
  • the operating environment generated by the operating environment generation unit 300 may include a tag and a graphic object that may be activated by the tag.
  • the operating environment generation unit 300 may include a tag generation unit 310, a graphic object generation unit 330, and an active area generation unit 350.
  • the tag generation unit 310 may generate a tag in the operating environment, e.g., a cursor as shown in FIG. 1.
  • for a tag generated by the tag generation unit 310, its position in the operating environment may be changed according to a gesture of the user, which will be described in more detail below.
  • the graphic object generation unit 330 may generate the graphic object in the operating environment, as shown in A and B in FIG. 1.
  • the graphic object may be defined to incorporate predetermined functions or purposes according to a predetermined operating logic of the operating environment.
  • a graphic object A and a graphic object B may be defined as “buttons” , and thus triggering (e.g., "pressing” ) of the button A and the button B may be defined to perform different functions.
  • the button B may be defined as a "next step” button. That is, when the button B is “pressed” , a next step of an ongoing program in the operating environment is performed.
  • the button A may be defined as a "cancel” button. That is, when the button A is “pressed” , an ongoing program in the operating environment is canceled or stopped.
  • the active area generation unit 350 may generate an active area in the operating environment corresponding to the graphic object.
  • the active area may correspond to the graphic object.
  • the active area may overlap the graphic object corresponding to the active area.
  • the control unit 100 may activate the graphic object B and may control the electronic device to provide a feedback for the user to inform the user of activation of the graphic object B (e.g., being "selected” ) .
  • the feedback may include a visual feedback (e.g., a change in a shape and/or color of the graphic object B) , an audible feedback (e.g., a warning tone) and/or a tactile feedback (e.g., a vibration) and/or a combination thereof.
  • the electronic device may further include a display unit 500 to display the operating environment generated by the operating environment generation unit 300, the tag generated by the tag generation unit 310 and the graphic object generated by the graphic object generation unit 330.
  • the active area generated by the active area generation unit 350 may be invisible to the user. For example, the active area may not be displayed on the display unit 500.
  • FIG. 2 is a schematic diagram of operations of an electronic device according to an exemplary embodiment.
  • the active area generation unit 350 may generate the active area corresponding to the graphic object as one that includes a main active area portion and a peripheral active area portion.
  • the main active area portion may have a shape the same as that of the corresponding graphic object and may overlap the corresponding graphic object.
  • the peripheral active area portion may be located at the periphery of the main active area portion.
  • the main active area portion corresponding to the graphic object B may be the same as the graphic object B, and is therefore represented by the solid line.
  • the peripheral active area portion may be located between the solid line and the dashed line.
  • while the active area is described as including the main active area portion and the peripheral active area portion, it may be understood by a person skilled in the art that the main active area portion and the peripheral active area portion may be mutually independent active areas respectively corresponding to the graphic object, and alternatively, they may be a portion overlapping the graphic object and a portion at the periphery of the overlapped portion in the same active area.
  • the active area is a specific area in the operating environment. That is, when the tag is in the active area, the graphic object corresponding to the active area is activated. Therefore, in the case that the active area generated by the active area generation unit 350 includes the main active area portion and the peripheral active area portion, when the tag is at the graphic object, the tag may be in the main active area portion and therefore may activate the graphic object; and when the tag is outside the graphic object but remains in the peripheral active area portion as shown in FIG. 2, the graphic object may also be activated. In other words, when the control unit 100 determines that the tag in the operating environment is positioned in the active area and outside the graphic object corresponding to the active area, the control unit 100 may activate the graphic object corresponding to the active area.
  • the active area including the peripheral active area portion may allow the user to activate the graphic object by only positioning the tag close to the graphic object without necessarily positioning the tag at the graphic object. Therefore, when it is difficult for the user to accurately control the tag to arrive at and/or keep at the graphic object as a result of a relatively small size of the graphic object or a disease or for other reasons, the user may still easily activate the graphic object.
  • the control unit 100 may control the active area generation unit 350 to generate the active area.
  • the control unit 100 may determine whether the graphic object is in a to-be-activated state according to a current operating logic of the operating environment.
  • the control unit 100 may control the active area generation unit 350 to generate the active area, e.g., the peripheral active area portion for the graphic object in the to-be-activated state.
  • the control unit 100 may determine that the graphic object B (i.e., the "next step" button) is in the to-be-activated state according to an operating logic of the installation program being executed.
  • the control unit 100 may control the active area generation unit 350 to generate the active area including the peripheral active area portion for the graphic object B.
  • the electronic device may further include a gesture sensing unit 700.
  • the gesture sensing unit 700 may sense a gesture of the user operating the electronic device.
  • the gesture of the user may include motions of various body parts of the user.
  • the gesture sensing unit 700 may sense a motion of the electronic device worn on the user’s head that moves with the user’s head, as the gesture of the user.
  • the gesture sensing unit 700 may include various sensors, such as an acceleration sensor, a geomagnetic sensor, a gyroscope, etc.
  • the gesture sensing unit 700 may send the sensed gesture information on the gesture of the user to the control unit 100.
  • the control unit 100 may determine the gesture of the user according to the sensed gesture information and may determine the position of the tag in the operating environment according to the determined gesture of the user.
  • the control unit 100 may generate a tag position control command according to the determined gesture of the user, and send the tag position control command to the tag generation unit 310.
  • the tag generation unit 310 may change the position of the tag in the operating environment according to the tag position control command.
  • the gesture sensing unit 700 may sense a motion of the electronic device worn on the user’s head that moves as the user turns his/her head to the left and send the sensed gesture information on the left turning of the user’s head to the control unit.
  • the gesture information includes an acceleration of the motion of the user’s head, a duration of the motion, etc.
  • the control unit 100 may determine that the gesture of the user is left turning of the head according to the received gesture information, and therefore may generate a tag position control command for controlling the tag to move to the left. Then, the control unit 100 may send the command to the tag generation unit 310.
  • the tag generation unit 310 may control the tag to move to the left in the operating environment according to the command, thus, for example, positioning the tag in the peripheral active area portion of the active area corresponding to a graphic object.
  • the control unit 100 may activate the graphic object corresponding to the peripheral active area portion based on the fact that the tag is positioned in the peripheral active area portion, as shown in FIG. 2.
  • the control unit 100 may control the operating environment according to the specific gesture of the user sensed by the gesture sensing unit 700.
  • the specific gesture of the user may include a deactivation gesture. That is, when the tag is positioned at the graphic object and thus the graphic object is activated, the user may perform a first motion within a time less than a first predetermined time, such that the gesture sensing unit 700 may sense such gesture of the user and send gesture information on such sensed deactivation gesture to the control unit 100, and the control unit 100 may thus control the tag generation unit 310 to position the tag outside the graphic object.
  • the control unit 100 may determine that the gesture of the user is the deactivation gesture according to information corresponding to the motion of the user sensed by the gesture sensing unit 700 and information which indicates that the tag generation unit 310 causes the tag to change its position as a function of the gesture. Therefore, the control unit 100 may deactivate the graphic object according to the user’s deactivation gesture even if the tag is positioned in the peripheral active area portion of the active area.
  • FIG. 3 is a schematic diagram of operations of an electronic device according to an exemplary embodiment.
  • the control unit 100 may switch activation of one graphic object to activation of another graphic object according to a specific object switching gesture of a user.
  • a graphic object generation unit 330 may generate a first graphic object A and a second graphic object B adjacent to each other, and the second graphic object B is initially in an activated state.
  • the user may perform a second motion within a time less than a second predetermined time in a direction towards the first graphic object with an amplitude greater than a predetermined amplitude, and perform a third motion immediately after the second motion within a time less than a third predetermined time in a direction opposed to the direction of the second motion with an amplitude the same as the amplitude of the second motion.
  • the second predetermined time and the third predetermined time may be 0.5 seconds
  • the second motion may be left turning of a head with an amplitude of approximately 30° in a direction towards the first graphic object A
  • the third motion may be right turning of the head with an amplitude of approximately 30° in a direction opposed to the direction of the second motion.
  • the control unit 100 may determine that the user’s gesture is the object switching gesture according to information corresponding to the motion of the user sensed by the gesture sensing unit 700, and thus may deactivate the second graphic object B and may activate the first graphic object A.
  • the control unit 100 may deactivate the current graphic object and activate the graphic object the user desires to activate.
  • when the control unit 100 determines that the gesture of the user is a gesture other than the object switching gesture, the control unit 100 may maintain the activated state of the currently activated graphic object.
  • a method for controlling an electronic device according to an exemplary embodiment will be described below. Such control method may be performed by the electronic device as described in the above exemplary embodiments, and therefore, repetitive descriptions of the same technical features are omitted here.
  • FIG. 4 is a flowchart illustrating a method for controlling an electronic device according to an exemplary embodiment.
  • an active area may be generated for a graphic object in an operating environment of the electronic device.
  • the operating environment may be accessed by a user so that the user can operate the electronic device.
  • the operating environment may be a visualized operating environment, and in such example, the electronic device may be implemented as an augmented reality (AR) device.
  • the control method may further include steps of generating the operating environment for the user to operate the electronic device, generating a tag (e.g., a cursor) in the operating environment, and generating a graphic object in the operating environment.
  • the active area may overlap the graphic object corresponding to the active area.
  • the active area corresponding to the graphic object may be generated as one that includes a main active area portion and a peripheral active area portion.
  • the main active area portion may have a shape the same as that of the corresponding graphic object and may overlap the corresponding graphic object.
  • the peripheral active area portion may be positioned at the periphery of the main active area portion, and therefore does not overlap the corresponding graphic object.
  • the graphic object corresponding to the active area may be activated when the tag (e.g., a cursor) in the operating environment is positioned in the active area and outside the graphic object corresponding to the active area.
  • the graphic object may still be activated even if the tag is not positioned at the graphic object at this time.
  • the active area including the peripheral active area portion may be generated for the graphic object in a specific situation. For example, whether the graphic object is in a to-be-activated state may be determined according to an operating logic of the operating environment, and the peripheral active area portion may be generated for the graphic object in the to-be-activated state upon determining that the graphic object is in the to-be-activated state.
  • the position of the tag in the operating environment may be changed according to a gesture of the user operating the electronic device.
  • the control method according to the exemplary embodiment may further include sensing the gesture of the user operating the electronic device to obtain gesture information on the sensed gesture, determining the gesture of the user according to the gesture information, and determining the position of the tag in the operating environment according to the gesture of the user.
  • activation of the graphic object may be controlled according to the gesture of the user.
  • when the gesture of the user is determined to be a deactivation gesture according to the gesture information, the graphic object may be deactivated.
  • the deactivation gesture may include a first motion performed by the user within a time less than a first predetermined time with an amplitude such that the position of the tag in the operating environment is changed from a position inside the area of the graphic object in the operating environment to a position outside the area of the graphic object in the operating environment.
  • the first predetermined time may be 1 second.
  • activation of different graphic objects may be switched according to the gesture of the user.
  • when the gesture of the user is determined to be an object switching gesture according to the gesture information, the first graphic object may be deactivated, and a second graphic object may be activated.
  • the object switching gesture may include a second motion performed by the user within a time less than a second predetermined time in a direction towards the second graphic object with an amplitude greater than a predetermined amplitude, and a third motion performed immediately after the second motion within a time less than a third predetermined time in a direction opposed to the direction of the second motion with an amplitude the same as the amplitude of the second motion.
  • FIG. 5 is a block diagram of an electronic equipment according to an exemplary embodiment.
  • the electronic equipment 1000 may include at least one processor 1010 and a memory 1030.
  • the processor 1010 may execute at least one computer readable instruction (i.e., an element implemented in the form of software as described above) stored or encoded in a computer readable storage medium (i.e., the memory 1030) .
  • computer executable instructions are stored in the memory 1030 which when executed cause the at least one processor 1010 to implement or perform the method described above with reference to FIG. 4.
  • a program product such as a non-transitory machine readable medium may also be provided.
  • the non-transitory machine readable medium may have instructions (i.e., elements implemented in the form of software as described above) which when executed by a machine cause the machine to execute various operations and functions described above in various embodiments of the present application with reference to FIGS. 1-4.
  • a computer program including computer executable instructions which when executed cause at least one processor to execute various operations and functions as described above in various embodiments of the present application with reference to FIGS. 1-4.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An electronic device and a control method. The electronic device includes a control unit (100) and an operating environment generation unit (300) configured to generate an operating environment for a user to operate the electronic device. The operating environment generation unit (300) includes a tag generation unit (310) configured to generate a tag in the operating environment; a graphic object generation unit (330) configured to generate a graphic object in the operating environment, and an active area generation unit (350) configured to generate an active area in the operating environment corresponding to the graphic object, where the active area overlaps the graphic object corresponding to the active area, where the control unit (100) is configured to activate the graphic object corresponding to the active area when the tag is positioned in the active area and outside the graphic object corresponding to the active area. Therefore, the graphic object may be activated even if the user is unable to accurately control the tag to be positioned at the graphic object he/she desires to activate.

Description

ELECTRONIC DEVICE AND CONTROL METHOD THEREFOR
BACKGROUND
Technical Field
The present invention relates to an electronic device and a control method therefor.
Related Art
Augmented reality (AR) technology may realize real-time determination of shooting positions and angles of a camera and real-time addition of a graphic object to an image captured by the camera. Electronic devices such as Microsoft’s HoloLens may realize such AR technology. For example, when a user wears an electronic device on his/her body, e.g., when the user wears HoloLens on his/her head, the user may view a graphical operating interface provided by HoloLens and may control a cursor in the graphical operating interface by moving the head. Therefore, when the user desires to select a very small object such as a button in the graphical operating interface, he/she may need to accurately control movement of the head so as to move the cursor to the very small object he/she desires to select and maintain the cursor on the object. In addition, for some users who are unable to restrain trembles of their bodies (including heads) due to diseases or for other reasons, it may be difficult to perform such accurate control.
SUMMARY
The present invention is intended to address the foregoing and/or other problems and provide an electronic device and a control method therefor.
According to an exemplary embodiment, an electronic device includes: a control unit and an operating environment generation unit configured to generate an operating environment for a user to operate the electronic device, where the operating environment generation unit includes: a tag generation unit configured to generate a tag in the operating environment; a graphic object generation unit configured to generate a graphic object in the operating environment; an active area generation unit configured to generate an active area in the operating environment corresponding to the graphic object, where an area in the  operating environment occupied by the active area overlaps an area in the operating environment occupied by the graphic object corresponding to the active area, where the control unit is configured to activate the graphic object corresponding to the active area when the tag in the operating environment is positioned in the occupied area of the active area in the operating environment and outside the occupied area of the graphic object corresponding to the active area in the operating environment. Therefore, the graphic object may be activated even if the user is unable to accurately control the tag to be positioned at the graphic object he/she desires to activate.
The electronic device further includes a display unit configured to display the operating environment generated by the operating environment generation unit, the tag generated by the tag generation unit, and the graphic object generated by the graphic object generation unit. For example, the display unit may not display the active area. In other words, the active area may be invisible to the user.
The active area generation unit is configured to generate the active area corresponding to the graphic object as one that includes a main active area portion and a peripheral active area portion, where the main active area portion has a shape the same as that of the corresponding graphic object, and an area in the operating environment occupied by the main active area portion overlaps an area in the operating environment occupied by the corresponding graphic object, and the area in the operating environment occupied by the peripheral active area portion is at the periphery of the area in the operating environment occupied by the main active area portion.
The control unit is configured to determine whether the graphic object is in a to-be-activated state according to an operating logic of the operating environment, and control the active area generation unit to generate the peripheral active area portion for the graphic object in the to-be-activated state upon determining that the graphic object is in the to-be-activated state. Accordingly, generation of the active area, e.g., the peripheral active area portion may be controlled according to the operating logic. Therefore, it is possible to avoid faulty operations caused by the peripheral active area portion while saving computing power.
The electronic device further includes a gesture sensing unit configured to sense a gesture of the user operating the electronic device and send gesture information on the sensed gesture of the user to the control unit. The control unit is configured to determine the gesture of the user according to the gesture information sensed by the gesture sensing unit and determine a position of the tag in the operating environment according to the gesture of the user.
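As a rough illustration of this mapping from sensed gesture information to the tag position, the Python sketch below converts head yaw and pitch angles into a clamped cursor position in the operating environment. The class name, the linear mapping, and the field-of-view constant are illustrative assumptions, not part of the disclosed implementation.

class TagPositioner:
    """Illustrative mapping from head orientation to the tag (cursor) position."""

    def __init__(self, width, height, degrees_per_view=60.0):
        self.width = width                        # operating-environment width (assumed units)
        self.height = height                      # operating-environment height
        self.degrees_per_view = degrees_per_view  # assumed angular span of the view

    def tag_position(self, yaw_deg, pitch_deg):
        # Negative yaw (left head turn) moves the tag to the left, and so on.
        x = (0.5 + yaw_deg / self.degrees_per_view) * self.width
        y = (0.5 - pitch_deg / self.degrees_per_view) * self.height
        x = max(0, min(self.width - 1, int(x)))
        y = max(0, min(self.height - 1, int(y)))
        return x, y

positioner = TagPositioner(1280, 720)
print(positioner.tag_position(yaw_deg=-15.0, pitch_deg=0.0))  # head turned left -> tag moves left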
In the case that the position of the tag in the operating environment is in an area of the graphic object in the operating environment and thus the graphic object is activated, when the control unit determines that the gesture of the user is a deactivation gesture according to the gesture information sensed by the gesture sensing unit, the control unit deactivates the graphic object, where the deactivation gesture includes a first motion performed by the user within a time less than a first predetermined time with an amplitude such that the position of the tag in the operating environment is changed from a position inside the area of the graphic object in the operating environment to a position outside the area of the graphic object in the operating environment. Therefore, the graphic object may be deactivated even if the gesture of the user is such that the tag leaves the graphic object but remains in the peripheral active area portion corresponding to the graphic object.
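Pictured as a timing-and-geometry test, the deactivation gesture could be checked as in the Python sketch below. The rectangle helper, the data layout, and the use of 1 second (mentioned later as an example of the first predetermined time) are assumptions made only for illustration.

FIRST_PREDETERMINED_TIME = 1.0  # seconds; example value from the description

def inside(rect, point):
    """rect = (x, y, width, height); point = (px, py)."""
    x, y, w, h = rect
    px, py = point
    return x <= px <= x + w and y <= py <= y + h

def is_deactivation_gesture(object_rect, tag_before, tag_after, motion_duration):
    """A quick motion that carries the tag from inside the graphic object to outside it."""
    return (motion_duration < FIRST_PREDETERMINED_TIME
            and inside(object_rect, tag_before)
            and not inside(object_rect, tag_after))

# Example: the tag leaves button B (placed at (100, 100), 80 x 40) within 0.3 s.
print(is_deactivation_gesture((100, 100, 80, 40), (120, 110), (250, 110), 0.3))  # True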
The graphic object generation unit is configured to generate a first graphic object and a second graphic object adjacent to each other in the operating environment, where in the case that the first graphic object is activated, when the control unit determines that the gesture of the user is an object switching gesture according to the gesture information sensed by the gesture sensing unit, the control unit deactivates the first graphic object and activates the second graphic object, where the object switching gesture includes a second motion performed by the user within a time less than a second predetermined time in a direction towards the second graphic object with an amplitude greater than a predetermined amplitude and a third motion performed immediately after the second motion within a time less than a third predetermined time in a direction opposed to the direction of the second motion with an amplitude the same as the amplitude of the second motion. In addition, in the case that the first graphic object is activated, when the control unit determines that the gesture of the user is a gesture other than the object switching gesture according to the gesture information sensed by the gesture sensing unit, the control unit maintains the activation of the first graphic object.
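The object switching gesture can likewise be approximated as a check on two consecutive motions, as in the sketch below. The 0.5-second windows and roughly 30° amplitude follow example values given later in the description; the amplitude tolerance and the motion representation are assumptions.

SECOND_PREDETERMINED_TIME = 0.5  # seconds; example value
THIRD_PREDETERMINED_TIME = 0.5   # seconds; example value
PREDETERMINED_AMPLITUDE = 25.0   # degrees; assumed threshold below the ~30 degree turns
AMPLITUDE_TOLERANCE = 5.0        # degrees; assumed tolerance for "the same amplitude"

def is_object_switching_gesture(motion2, motion3, direction_to_target):
    """Each motion is a dict: {'direction': +1 or -1, 'amplitude': degrees, 'duration': seconds}."""
    towards_target = motion2['direction'] == direction_to_target
    fast_enough = (motion2['duration'] < SECOND_PREDETERMINED_TIME
                   and motion3['duration'] < THIRD_PREDETERMINED_TIME)
    large_enough = motion2['amplitude'] > PREDETERMINED_AMPLITUDE
    opposed = motion3['direction'] == -motion2['direction']
    same_amplitude = abs(motion3['amplitude'] - motion2['amplitude']) <= AMPLITUDE_TOLERANCE
    return towards_target and fast_enough and large_enough and opposed and same_amplitude

# Example: a quick ~30 degree left turn towards the other object, then a matching right turn back.
motion2 = {'direction': -1, 'amplitude': 30.0, 'duration': 0.3}
motion3 = {'direction': +1, 'amplitude': 30.0, 'duration': 0.3}
print(is_object_switching_gesture(motion2, motion3, direction_to_target=-1))  # True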
The electronic device is augmented reality (AR) equipment. For example, the electronic device includes a body including the operating environment generation unit and the control unit; and a head-mounted component which is configured to house the body and may be worn on a head of the user operating the electronic device.
According to another exemplary embodiment, a method for controlling an electronic device is provided, the method including: generating, in an operating environment for a user to operate the electronic device, an active area corresponding to a graphic object in the operating environment, where the active area overlaps the graphic object corresponding to the active area; and activating the graphic object corresponding to the active area when a tag is positioned in the active area and outside the graphic object corresponding to the active area.
The step of generating the active area includes generating the active area corresponding to the graphic object as one that includes a main active area portion and a peripheral active area portion, where the main active area portion has a shape the same as that of the corresponding graphic object, and an area in the operating environment occupied by the main active area portion overlaps an area in the operating environment occupied by the corresponding graphic object, and the area in the operating environment occupied by the peripheral active area portion is at the periphery of the area in the operating environment occupied by the main active area portion.
The step of generating the active area includes determining whether the graphic object is in a to-be-activated state according to an operating logic of the operating environment, and generating the peripheral active area portion for the graphic object in the to-be-activated state upon determining that the graphic object is in the to-be-activated state.
The method further includes: sensing a gesture of the user operating the electronic device to obtain gesture information on the sensed gesture; and determining the gesture of  the user according to the gesture information and determining a position of the tag in the operating environment according to the gesture of the user.
The method further includes: in the case that the position of the tag in the operating environment is in an area of the graphic object in the operating environment and thus the graphic object is activated, when the gesture of the user is determined to be a deactivation gesture according to the gesture information sensed by the gesture sensing unit, deactivating the graphic object, where the deactivation gesture includes a first motion performed by the user within a time less than a first predetermined time with an amplitude such that the position of the tag in the operating environment is changed from a position inside the area of the graphic object in the operating environment to a position outside the area of the graphic object in the operating environment.
The step of generating the graphic object includes generating a first graphic object and a second graphic object adjacent to each other in the operating environment; and the method further includes: in the case that the first graphic object is activated, when the gesture of the user is determined to be an object switching gesture according to the gesture information sensed by the gesture sensing unit, deactivating the first graphic object and activating the second graphic object, where the object switching gesture includes a second motion performed by the user within a time less than a second predetermined time in a direction towards the second graphic object with an amplitude greater than a predetermined amplitude and a third motion performed immediately after the second motion within a time less than a third predetermined time in a direction opposed to the direction of the second motion with an amplitude the same as the amplitude of the second motion.
The method further includes generating the operating environment for the user to operate the electronic device; generating the tag in the operating environment; and generating the graphic object in the operating environment.
According to another exemplary embodiment, an electronic equipment is provided, the electronic equipment including at least one processor; and a memory connected to the at least one processor, the memory having instructions stored therein which when executed by the at least one processor cause the electronic equipment to perform the method as  described above.
According to another exemplary embodiment, a non-transitory machine readable medium is provided, where the non-transitory machine readable medium has computer executable instructions stored thereon which when executed cause at least one processor to perform the method as described above.
According to another exemplary embodiment, a computer program is provided, where the computer program includes computer executable instructions which when executed cause at least one processor to perform the method as described above.
According to the exemplary embodiments, therefore, the graphic object may be activated even if the user is unable to accurately control the tag to be positioned at the graphic object he/she desires to activate.
BRIEF DESCRIPTION OF THE DRAWINGS
The following figures are intended to give schematic illustrations and explanations of the present invention but are not intended to limit the scope of the present invention. In the drawings:
FIG. 1 is a schematic diagram of an electronic device according to an exemplary embodiment;
FIG. 2 is a schematic diagram of operations of an electronic device according to an exemplary embodiment;
FIG. 3 is a schematic diagram of operations of an electronic device according to an exemplary embodiment;
FIG. 4 is a flowchart illustrating a method for controlling an electronic device according to an exemplary embodiment;
FIG. 5 is a block diagram of an electronic equipment according to an exemplary embodiment.
Reference signs in the drawings:
100 control unit;
300 operating environment generation unit;
500 display unit;
700 gesture sensing unit.
DETAILED DESCRIPTION
Specific embodiments of the present invention are now described with reference to the drawings for a clearer understanding of the technical features, objectives and effects of the present invention.
FIG. 1 is a schematic diagram of an electronic device according to an exemplary embodiment. As shown in FIG. 1, the electronic device according to the exemplary embodiment includes a control unit 100 and an operating environment generation unit 300.
The control unit 100 may be implemented to include a general-purpose or special-purpose processing device, e.g., a central processing unit (CPU) , a programmable logic controller (PLC) , etc. The control unit 100 may implement functions and operations that will be described in detail below by execution of particular programs or codes.
The operating environment generation unit 300 may generate an operating environment for a user to operate the electronic device, e.g., a virtual operating environment. Such operating environment may provide a virtual operating space for the user, may include a preset operating logic, and may incorporate functions of the electronic device. Therefore, the user may control and use the electronic device by performing operations in such operating environment. Here, the operating environment generation unit 300 may be implemented to include a general-purpose or special-purpose processing device, e.g., a central processing unit (CPU) , a graphics processing unit (GPU) , a programmable logic controller (PLC) , etc. In one exemplary embodiment, the operating environment generation unit 300 may be implemented with the control unit 100 by one or more general-purpose or special-purpose processing devices. The operating environment generation unit 300 may  implement functions and operations that will be described in detail below by execution of particular programs or codes.
In one exemplary embodiment, the electronic device may be implemented as an augmented reality (AR) device. As such, the augmented reality device may include a camera (not shown) to capture surroundings around the user and may include a display unit 500 (which will be described in detail below) to display the captured surroundings in real time. Also, the augmented reality device may display the operating environment generated by the operating environment generation unit 300 and the captured surroundings together on the display unit 500.
Additionally, in another exemplary embodiment, the electronic device may be implemented as a wearable electronic device. For example, the electronic device may be worn by the user on his/her head. As shown in FIG. 1, the electronic device may include a body 10 and a head-mounted component 30. The body 10 may include the control unit 100, the operating environment generation unit 300, and/or the display unit 500. The head-mounted component 30 may house the body and may be worn on a head of the user operating the electronic device. For example, as shown in FIG. 1, the head-mounted component 30 may include a headband that may encircle the head.
The operating environment generated by the operating environment generation unit 300 may include a tag and a graphic object that may be activated by the tag. To this end, the operating environment generation unit 300 may include a tag generation unit 310, a graphic object generation unit 330, and an active area generation unit 350.
The tag generation unit 310 may generate a tag in the operating environment, e.g., a cursor as shown in FIG. 1. In one exemplary embodiment, for the tag generated by the tag generation unit 310, its position in the operating environment may be changed according to a gesture of the user, which will be described in more detail below.
The graphic object generation unit 330 may generate the graphic object in the operating environment, as shown in A and B in FIG. 1. The graphic object may be defined to incorporate predetermined functions or purposes according to a predetermined operating  logic of the operating environment. In one exemplary embodiment, a graphic object A and a graphic object B may be defined as "buttons" , and thus triggering (e.g., "pressing" ) of the button A and the button B may be defined to perform different functions. For example, the button B may be defined as a "next step" button. That is, when the button B is "pressed" , a next step of an ongoing program in the operating environment is performed. Similarly, the button A may be defined as a "cancel" button. That is, when the button A is "pressed" , an ongoing program in the operating environment is canceled or stopped.
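As a small illustration of this button behavior (assumed names, not taken from the patent itself), the graphic objects can be modeled as identifiers bound to callbacks, so that "pressing" button A or button B runs different functions:

def next_step():
    print("performing the next step of the ongoing program")

def cancel():
    print("canceling or stopping the ongoing program")

buttons = {
    "A": cancel,     # button A is defined as a "cancel" button
    "B": next_step,  # button B is defined as a "next step" button
}

def press(button_id):
    buttons[button_id]()  # triggering ("pressing") the button performs its function

press("B")  # -> performing the next step of the ongoing program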
The active area generation unit 350 may generate an active area in the operating environment corresponding to the graphic object. The active area may correspond to the graphic object. For example, the active area may overlap the graphic object corresponding to the active area. Here, when the user desires to perform a function corresponding to one graphic object (e.g., graphic object B) , he/she may control the cursor to move to the graphic object B. At this time, the cursor may be in the active area because the active area overlaps the graphic object B. When the control unit 100 determines that the cursor is in the active area, the control unit 100 may activate the graphic object B and may control the electronic device to provide a feedback for the user to inform the user of activation of the graphic object B (e.g., being "selected" ) . Here, the feedback may include a visual feedback (e.g., a change in a shape and/or color of the graphic object B) , an audible feedback (e.g., a warning tone) and/or a tactile feedback (e.g., a vibration) and/or a combination thereof. In this manner, when it is determined that the user has selected the graphic object B according to the feedback, the user may further trigger (e.g., "press" ) the graphic object B to perform a function corresponding to the graphic object. In other words, when the tag is in the active area, the graphic object corresponding to the active area may be activated, which will be described in detail below with reference to FIG. 2.
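The activation-and-feedback flow described above might look like the following sketch. The data structure, the rectangular active area, and the feedback call are assumptions used only for illustration; the description leaves the concrete representation open.

from dataclasses import dataclass

@dataclass
class GraphicObject:
    name: str
    active_area: tuple  # (x, y, width, height) occupied by the active area
    activated: bool = False

def contains(area, point):
    x, y, w, h = area
    px, py = point
    return x <= px <= x + w and y <= py <= y + h

def give_feedback(obj):
    # Visual, audible and/or tactile feedback could be combined here.
    print(f"{obj.name} activated: highlight it, play a tone and/or vibrate")

def update_activation(obj, cursor):
    if contains(obj.active_area, cursor):
        if not obj.activated:
            obj.activated = True
            give_feedback(obj)
    else:
        obj.activated = False

button_b = GraphicObject("graphic object B", active_area=(90, 90, 100, 60))
update_activation(button_b, cursor=(120, 110))  # cursor inside the active area -> feedback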
In addition, the electronic device according to the exemplary embodiment may further include a display unit 500 to display the operating environment generated by the operating environment generation unit 300, the tag generated by the tag generation unit 310 and the graphic object generated by the graphic object generation unit 330. Moreover, the active area generated by the active area generation unit 350 may be invisible to the user. For  example, the active area may not be displayed on the display unit 500.
FIG. 2 is a schematic diagram of operations of an electronic device according to an exemplary embodiment. In FIG. 2, what is shown by a solid line is a graphic object B in an operating environment, and what is shown by a dashed line is an active area in the operating environment corresponding to the graphic object B. In particular, the active area generation unit 350 may generate the active area corresponding to the graphic object as one that includes a main active area portion and a peripheral active area portion. The main active area portion may have a shape the same as that of the corresponding graphic object and may overlap the corresponding graphic object. The peripheral active area portion may be located at the periphery of the main active area portion. For example, with reference to FIG. 2, the main active area portion corresponding to the graphic object B may be the same as the graphic object B, and is therefore represented by the solid line. In addition, the peripheral active area portion may be located between the solid line and the dashed line. Here, while the active area is described as including the main active area portion and the peripheral active area portion, it may be understood by a person skilled in the art that the main active area portion and the peripheral active area portion may be mutually independent active areas each corresponding to the graphic object, or alternatively, they may be a portion of the same active area that overlaps the graphic object and a portion of that active area located at the periphery of the overlapping portion.
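Continuing the illustrative sketch above, one possible way to construct an active area whose main portion coincides with the graphic object and whose peripheral portion is a surrounding band is to expand the object's bounds by an assumed margin; for non-rectangular objects, a dilation of the object's outline would play the same role.

```python
def make_active_area(obj_bounds: Rect, margin: float) -> Rect:
    """Active area = main portion (the object's own bounds) plus a peripheral
    band of width `margin` around it."""
    return Rect(obj_bounds.x - margin, obj_bounds.y - margin,
                obj_bounds.w + 2 * margin, obj_bounds.h + 2 * margin)


def in_peripheral_portion(obj: GraphicObject, px: float, py: float) -> bool:
    """True when the tag is inside the active area but outside the object itself."""
    return obj.active_area.contains(px, py) and not obj.bounds.contains(px, py)
```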
As stated above, the active area is a specific area in the operating environment. That is, when the tag is in the active area, the graphic object corresponding to the active area is activated. Therefore, in the case that the active area generated by the active area generation unit 350 includes the main active area portion and the peripheral active area portion, when the tag is at the graphic object, the tag may be in the main active area portion and therefore may activate the graphic object; and when the tag is outside the graphic object but remains in the peripheral active area portion as shown in FIG. 2, the graphic object may also be activated. In other words, when the control unit 100 determines that the tag in the operating environment is positioned in the active area and outside the graphic object corresponding to the active area, the control unit 100 may activate the graphic object corresponding to the active area. Therefore, the active area including the peripheral active area portion may allow the user to activate the graphic object by merely positioning the tag close to the graphic object, without necessarily positioning the tag at the graphic object. Therefore, when it is difficult for the user to accurately control the tag to arrive at and/or remain at the graphic object, for example because of a relatively small size of the graphic object, a physical impairment or other reasons, the user may still easily activate the graphic object.
In another exemplary embodiment, the control unit 100 may control the active area generation unit 350 to generate the active area. In particular, the control unit 100 may determine whether the graphic object is in a to-be-activated state according to a current operating logic of the operating environment. When the graphic object is determined to be in the to-be-activated state, the control unit 100 may control the active area generation unit 350 to generate the active area, e.g., the peripheral active area portion, for the graphic object in the to-be-activated state. For example, in the case that an installation program is being executed in the operating environment, the control unit 100 may determine that the graphic object B (i.e., the "next step" button) is in the to-be-activated state according to an operating logic of the installation program being executed. In this case, the control unit 100 may control the active area generation unit 350 to generate the active area including the peripheral active area portion for the graphic object B.
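Again purely as a sketch building on the code above, the conditional generation of the peripheral active area portion might look as follows; the function names and the margin value are invented for illustration, and the set of to-be-activated object names stands in for whatever the current operating logic (e.g., the installation program) reports.

```python
def refresh_active_areas(objects, to_be_activated_names, margin=0.05):
    """Grant the peripheral portion only to objects that the current operating
    logic marks as to-be-activated; all other objects keep an active area that
    merely coincides with their own bounds."""
    for obj in objects:
        if obj.name in to_be_activated_names:
            obj.active_area = make_active_area(obj.bounds, margin)
        else:
            obj.active_area = obj.bounds


# Example: while an installation step is running, only the "next step" button
# (button B) is expected to be pressed, so only it receives the enlarged area.
refresh_active_areas([button_b], to_be_activated_names={"button B"})
```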
Returning to FIG. 1, the electronic device may further include a gesture sensing unit 700. The gesture sensing unit 700 may sense a gesture of the user operating the electronic device. Here, the gesture of the user may include motions of various body parts of the user. For example, when the electronic device is implemented as a device for wearing on the user's head, such as Microsoft's HoloLens, the gesture sensing unit 700 may sense, as the gesture of the user, a motion of the electronic device that is worn on the user's head and moves with the user's head. To this end, the gesture sensing unit 700 may include various sensors, such as an acceleration sensor, a geomagnetic sensor, a gyroscope, etc.
The gesture sensing unit 700 may send the sensed gesture information on the gesture of the user to the control unit 100. Upon receiving the gesture information sensed by the gesture sensing unit 700, the control unit 100 may determine the gesture of the user according to the sensed gesture information and may determine the position of the tag in the operating environment according to the determined gesture of the user. For example, the control unit 100 may generate a tag position control command according to the determined gesture of the user, and send the tag position control command to the tag generation unit 310. The tag generation unit 310 may change the position of the tag in the operating environment according to the tag position control command. For example, when the electronic device is implemented as a device for wearing on the user's head, such as Microsoft's HoloLens, the gesture sensing unit 700 may sense a motion of the electronic device worn on the user's head as the user turns his/her head to the left and send the sensed gesture information on the left turning of the user's head to the control unit 100. Here, the gesture information may include an acceleration of the motion of the user's head, a duration of the motion, etc. Then, the control unit 100 may determine, according to the received gesture information, that the gesture of the user is a left turning of the head, and therefore may generate a tag position control command for controlling the tag to move to the left. Then, the control unit 100 may send the command to the tag generation unit 310. The tag generation unit 310 may control the tag to move to the left in the operating environment according to the command, thereby, for example, positioning the tag in the peripheral active area portion of the active area corresponding to a graphic object. At this time, the control unit 100 may activate the graphic object corresponding to the active area based on the fact that the tag is positioned in the peripheral active area portion, as shown in FIG. 2.
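As a hypothetical example of how such a tag position control command might be derived from the sensed head motion, the sketch below maps an assumed yaw angular velocity and motion duration onto a horizontal tag displacement; the gain value is an arbitrary illustration, not a parameter of the disclosed device.

```python
def move_tag(tag_pos, yaw_velocity_dps, duration_s, gain=0.01):
    """Map a sensed head rotation onto a horizontal tag displacement.
    A negative yaw velocity (a left turn) moves the tag to the left."""
    x, y = tag_pos
    return (x + gain * yaw_velocity_dps * duration_s, y)


# A quick left turn of the head (about -60 deg/s for 0.3 s) nudges the tag left.
tag = move_tag((0.50, 0.45), yaw_velocity_dps=-60.0, duration_s=0.3)
print(tag)   # approximately (0.32, 0.45)
```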
According to another exemplary embodiment, the control unit 100 may control the operating environment according to a specific gesture of the user sensed by the gesture sensing unit 700. Here, the specific gesture of the user may include a deactivation gesture. That is, when the tag is positioned at the graphic object and thus the graphic object is activated, the user may perform a first motion within a time less than a first predetermined time, such that the gesture sensing unit 700 senses this gesture of the user and sends gesture information on the sensed deactivation gesture to the control unit 100, and the control unit 100 may thus control the tag generation unit 310 to position the tag outside the graphic object. For example, in the exemplary embodiment described above with reference to FIG. 2, the user may quickly move his/her head to the left within a time less than 1 second, such that the tag is moved out of the graphic object with this motion of the user. Here, the first predetermined time may be 1 second, and the first motion may be a left turning of the head with an amplitude sufficient to move the tag out of the graphic object. At this time, the control unit 100 may determine that the gesture of the user is the deactivation gesture according to the information corresponding to the motion of the user sensed by the gesture sensing unit 700 and the information indicating that the tag generation unit 310 has changed the position of the tag as a function of the gesture. Therefore, the control unit 100 may deactivate the graphic object according to the user's deactivation gesture even if the tag is still positioned in the peripheral active area portion of the active area.
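A minimal sketch of how such a deactivation gesture could be recognized, under the assumption that the control unit already knows the motion duration and whether the tag was inside the graphic object before and after the motion, is given below; the threshold simply reuses the 1-second example above.

```python
FIRST_PREDETERMINED_TIME = 1.0   # seconds, the example value used above


def is_deactivation_gesture(motion_duration_s, tag_was_in_object, tag_now_in_object):
    """A motion shorter than the first predetermined time whose amplitude carries
    the tag from inside the graphic object to outside it."""
    return (motion_duration_s < FIRST_PREDETERMINED_TIME
            and tag_was_in_object
            and not tag_now_in_object)


print(is_deactivation_gesture(0.4, True, False))   # True  -> deactivate the object
print(is_deactivation_gesture(1.5, True, False))   # False -> too slow, keep it active
```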
FIG. 3 is a schematic diagram of operations of an electronic device according to an exemplary embodiment. According to the exemplary embodiment as shown in FIG. 3, the control unit 100 may switch activation of one graphic object to activation of another graphic object according to a specific object switching gesture of a user. As shown in FIG. 3, the graphic object generation unit 330 may generate a first graphic object A and a second graphic object B adjacent to each other, and the second graphic object B is initially in an activated state. At this time, the user may perform a second motion within a time less than a second predetermined time in a direction towards the first graphic object with an amplitude greater than a predetermined amplitude, and perform a third motion immediately after the second motion within a time less than a third predetermined time in a direction opposed to the direction of the second motion with an amplitude the same as the amplitude of the second motion. Here, the second predetermined time and the third predetermined time may each be 0.5 seconds, the second motion may be a left turning of the head with an amplitude of approximately 30° in a direction towards the first graphic object A, and the third motion may be a right turning of the head with an amplitude of approximately 30° in a direction opposed to the direction of the second motion. The control unit 100 may determine that the user's gesture is the object switching gesture according to the information corresponding to the motion of the user sensed by the gesture sensing unit 700, and thus may deactivate the second graphic object B and activate the first graphic object A. In other words, when the user quickly turns his/her head from a currently activated graphic object in a direction towards a graphic object he/she desires to activate and quickly resumes the initial position of the head immediately afterwards, the control unit 100 may deactivate the current graphic object and activate the graphic object the user desires to activate. Alternatively, if the control unit 100 determines that the gesture of the user is a gesture other than the object switching gesture according to the gesture information sensed by the gesture sensing unit 700, the control unit 100 may maintain the activated state of the currently activated graphic object.
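The object switching gesture can likewise be sketched as a check on two consecutive motions; the sketch below is illustrative only, assumes the motions are already summarized as (duration, signed amplitude) pairs, and reuses the 0.5-second and roughly 30° example values from above.

```python
SECOND_PREDETERMINED_TIME = 0.5   # seconds, example values from the description
THIRD_PREDETERMINED_TIME = 0.5
PREDETERMINED_AMPLITUDE = 20.0    # degrees; the example turns are roughly 30 degrees


def is_object_switching_gesture(second_motion, third_motion, tolerance_deg=2.0):
    """second_motion / third_motion are (duration_s, signed_amplitude_deg) pairs.
    A positive amplitude is taken as a turn towards the object to be activated."""
    d2, a2 = second_motion
    d3, a3 = third_motion
    return (d2 < SECOND_PREDETERMINED_TIME
            and d3 < THIRD_PREDETERMINED_TIME
            and a2 > PREDETERMINED_AMPLITUDE             # quick turn towards the target
            and a3 < 0                                   # immediately back the other way
            and abs(abs(a3) - a2) <= tolerance_deg)      # with (about) the same amplitude


# A ~30 degree turn towards object A and an equal turn back, each under 0.5 s.
print(is_object_switching_gesture((0.3, 30.0), (0.3, -30.0)))   # True -> switch to A
```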
A method for controlling an electronic device according to an exemplary embodiment will be described below. Such control method may be performed by the electronic device as described in the above exemplary embodiments, and therefore, repetitive descriptions of the same technical features are omitted here.
FIG. 4 is a flowchart illustrating a method for controlling an electronic device according to an exemplary embodiment. As shown in FIG. 4, at operation S410, an active area may be generated for a graphic object in an operating environment of the electronic device. Here, the operating environment may be accessed by a user so that the user can operate the electronic device. For example, the operating environment may be a visualized operating environment, and in such an example, the electronic device may be implemented as an augmented reality (AR) device. As such, the control method may further include steps of generating the operating environment for the user to operate the electronic device, generating a tag (e.g., a cursor) in the operating environment, and generating a graphic object in the operating environment.
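For orientation, the two operations of FIG. 4 can be summarized in a short sketch that reuses the helper functions assumed earlier; it is a schematic outline of the flow, not the claimed method itself.

```python
def control_method(objects, tag_pos, margin=0.05):
    """S410: generate an active area (overlapping the object) for each graphic
    object; S430: activate the object whose active area contains the tag, even
    when the tag lies outside the object itself."""
    for obj in objects:                                        # operation S410
        obj.active_area = make_active_area(obj.bounds, margin)
    activated = []
    for obj in objects:                                        # operation S430
        if obj.active_area.contains(*tag_pos):
            obj.activated = True
            activated.append(obj.name)
    return activated


print(control_method([button_b], tag_pos=(0.37, 0.37)))   # ['button B']
```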
In particular, the active area may overlap the graphic object corresponding to the active area. For example, the active area corresponding to the graphic object may be generated as one that includes a main active area portion and a peripheral active area portion. In such case, the main active area portion may have a shape the same as that of the corresponding graphic object and may overlap the corresponding graphic object. The peripheral active area portion may be positioned at the periphery of the main active area portion, and therefore does not overlap the corresponding graphic object.
Then, at operation S430, the graphic object corresponding to the active area may be activated when the tag (e.g., a cursor) in the operating environment is positioned in the  active area and outside the graphic object corresponding to the active area. In other words, when the tag is positioned in the peripheral active area portion of the active area, the graphic object may still be activated even if the tag is not positioned at the graphic object at this time.
In another exemplary embodiment, the active area including the peripheral active area portion may be generated for the graphic object in a specific situation. For example, whether the graphic object is in a to-be-activated state may be determined according to an operating logic of the operating environment, and the peripheral active area portion may be generated for the graphic object in the to-be-activated state upon determining that the graphic object is in the to-be-activated state.
In the exemplary embodiment described above, the position of the tag in the operating environment may be changed according to a gesture of the user operating the electronic device. As such, the control method according to the exemplary embodiment may further include sensing the gesture of the user operating the electronic device to obtain gesture information on the sensed gesture, determining the gesture of the user according to the gesture information, and determining the position of the tag in the operating environment according to the gesture of the user.
In addition, activation of the graphic object may be controlled according to the gesture of the user. For example, in the case that the graphic object is activated, when the gesture of the user is determined to be a deactivation gesture according to the gesture information, the graphic object may be deactivated. Here, the deactivation gesture may include a first motion performed by the user within a time less than a first predetermined time with an amplitude such that the position of the tag in the operating environment is changed from a position inside the area of the graphic object in the operating environment to a position outside the area of the graphic object in the operating environment. In one example, the first predetermined time may be 1 second.
In addition, activation of different graphic objects may be switched according to the gesture of the user. For example, in the case that a first graphic object is activated, when the gesture of the user is determined to be an object switching gesture according to the gesture  information sensed by the gesture sensing unit, the first graphic object may be deactivated, and a second graphic object may be activated. Here, the object switching gesture may include a second motion performed by the user within a time less than a second predetermined time in a direction towards the second graphic object with an amplitude greater than a predetermined amplitude, and a third motion performed immediately after the second motion within a time less than a third predetermined time in a direction opposed to the direction of the second motion with an amplitude the same as the amplitude of the second motion.
An electronic device and a control method therefor are described above with reference to FIGS. 1-4, and the method may be implemented in hardware, software or a combination of hardware and software. FIG. 5 is a block diagram of an electronic equipment according to an exemplary embodiment. According to the current exemplary embodiment, the electronic equipment 1000 may include at least one processor 1010 and a memory 1030. The processor 1010 may execute at least one computer readable instruction (i.e., an element implemented in the form of software as described above) stored or encoded in a computer readable storage medium (i.e., the memory 1030).
In one embodiment, computer executable instructions are stored in the memory 1030 which when executed cause the at least one processor 1010 to implement or perform the method described above with reference to FIG. 4.
It should be understood that the computer executable instructions stored in the memory 1030 when executed cause the at least one processor 1010 to perform various operations and functions described above in various embodiments with reference to FIGS. 1-4.
According to one embodiment, a program product such as a non-transitory machine readable medium is provided. The non-transitory machine readable medium may have instructions (i.e., elements implemented in the form of software as described above) stored thereon which, when executed by a machine, cause the machine to execute various operations and functions described above in various embodiments of the present application with reference to FIGS. 1-4.
According to one embodiment, a computer program is provided, including computer executable instructions which when executed cause at least one processor to execute various operations and functions as described above in various embodiments of the present application with reference to FIGS. 1-4.
It should be understood that, while the present Description is presented in terms of various embodiments, this does not mean that each embodiment contains only one independent technical solution; the Description is organized in this manner only for clarity. A person skilled in the art should consider the Description as a whole, and technical solutions in the various embodiments may also be combined as appropriate to form other implementations that may be understood by a person skilled in the art.
The foregoing describes merely exemplary implementations of the present invention and is not intended to limit the scope of the present invention. All equivalent variations, modifications and combinations made by any person skilled in the art without departing from the concepts and principles of the present invention shall fall within the protection scope of the present invention.

Claims (20)

  1. An electronic device, comprising a control unit (100) and an operating environment generation unit (300) configured to generate an operating environment for a user to operate the electronic device, wherein
    the operating environment generation unit comprises:
    a tag generation unit (310) configured to generate a tag in the operating environment;
    a graphic object generation unit (330) configured to generate a graphic object in the operating environment;
    an active area generation unit (350) configured to generate an active area in the operating environment corresponding to the graphic object, wherein the active area overlaps the graphic object corresponding to the active area,
    wherein the control unit is configured to activate the graphic object corresponding to the active area when the tag is positioned in the active area and outside the graphic object corresponding to the active area.
  2. The electronic device according to claim 1, further comprising:
    a display unit (500) configured to display the operating environment generated by the operating environment generation unit and the graphic object generated by the graphic object generation unit.
  3. The electronic device according to claim 1, wherein the active area generation unit is configured to generate the active area corresponding to the graphic object as one that comprises a main active area portion and a peripheral active area portion, wherein the main active area portion has a shape the same as that of the corresponding graphic object and overlaps the corresponding graphic object, and the peripheral active area portion is located at the periphery of the main active area portion.
  4. The electronic device according to claim 3, wherein the control unit is configured to determine whether the graphic object is in a to-be-activated state according to an operating  logic of the operating environment, and control the active area generation unit to generate the peripheral active area portion for the graphic object in the to-be-activated state upon determining that the graphic object is in the to-be-activated state.
  5. The electronic device according to claim 1, further comprising:
    a gesture sensing unit (700) configured to sense a gesture of the user operating the electronic device and send gesture information on the sensed gesture of the user to the control unit, wherein
    the control unit is configured to determine the gesture of the user according to the gesture information sensed by the gesture sensing unit and determine a position of the tag in the operating environment according to the gesture of the user.
  6. The electronic device according to claim 5, wherein in the case that the tag is in the graphic object and thus the graphic object is activated, when the control unit determines that the gesture of the user is a deactivation gesture according to the gesture information sensed by the gesture sensing unit, the control unit deactivates the graphic object, wherein the deactivation gesture comprises a first motion performed by the user within a time less than a first predetermined time with an amplitude such that the tag is changed from being positioned in the graphic object to being positioned outside the graphic object.
  7. The electronic device according to claim 5, wherein
    the graphic object generation unit is configured to generate a first graphic object and a second graphic object adjacent to each other in the operating environment, wherein
    in the case that the first graphic object is activated, when the control unit determines that the gesture of the user is an object switching gesture according to the gesture information sensed by the gesture sensing unit, the control unit deactivates the first graphic object and activates the second graphic object, wherein the object switching gesture comprises a second motion performed by the user within a time less than a second predetermined time in a direction towards the second graphic object with an amplitude greater than a predetermined amplitude and a third motion performed immediately after the second motion within a time less than a third predetermined time in a direction opposed to  the direction of the second motion with an amplitude the same as the amplitude of the second motion.
  8. The electronic device according to claim 7, wherein in the case that the first graphic object is activated, when the control unit determines that the gesture of the user is a gesture other than the object switching gesture according to the gesture information sensed by the gesture sensing unit, the control unit maintains the activation of the first graphic object.
  9. The electronic device according to claim 1, wherein the electronic device is an augmented reality equipment.
  10. The electronic device according to claim 1, wherein the electronic device comprises:
    a body (10) comprising the operating environment generation unit and the control unit; and
    a head-mounted component (30) which is configured to house the body and is capable of being worn on a head of the user operating the electronic device.
  11. A method for controlling an electronic device, comprising:
    generating, in an operating environment for a user to operate the electronic device, an active area corresponding to a graphic object in the operating environment, wherein the active area overlaps the graphic object corresponding to the active area;
    activating the graphic object corresponding to the active area when a tag is positioned in the active area and outside the graphic object corresponding to the active area.
  12. The method according to claim 11, wherein the step of generating the active area comprises generating the active area corresponding to the graphic object as one that comprises a main active area portion and a peripheral active area portion, wherein the main active area portion has a shape the same as that of the corresponding graphic object and overlaps the corresponding graphic object, and the peripheral active area portion is located at the periphery of the main active area portion.
  13. The method according to claim 12, wherein the step of generating the active area  comprises determining whether the graphic object is in a to-be-activated state according to an operating logic of the operating environment, and generating the peripheral active area portion for the graphic object in the to-be-activated state upon determining that the graphic object is in the to-be-activated state.
  14. The method according to claim 11, further comprising:
    sensing a gesture of the user operating the electronic device to obtain gesture information on the sensed gesture;
    determining the gesture of the user according to the gesture information and determining a position of the tag in the operating environment according to the gesture of the user.
  15. The method according to claim 14, further comprising:
    in the case that the tag is in the graphic object and thus the graphic object is activated, when the gesture of the user is determined to be a deactivation gesture according to the sensed gesture information, deactivating the graphic object, wherein the deactivation gesture comprises a first motion performed by the user within a time less than a first predetermined time with an amplitude such that the tag is changed from being positioned in the graphic object to being positioned outside the graphic object.
  16. The method according to claim 14, wherein
    the step of generating the graphic object comprises generating a first graphic object and a second graphic object adjacent to each other in the operating environment;
    the method further comprises: in the case that the first graphic object is activated, when the gesture of the user is determined to be an object switching gesture according to the gesture information sensed by the gesture sensing unit, deactivating the first graphic object and activating the second graphic object, wherein the object switching gesture comprises a second motion performed by the user within a time less than a second predetermined time in a direction towards the second graphic object with an amplitude greater than a predetermined amplitude and a third motion performed immediately after the second motion within a time less than a third predetermined time in a direction opposed to  the direction of the second motion with an amplitude the same as the amplitude of the second motion.
  17. The method according to claim 11, further comprising:
    generating the operating environment for the user to operate the electronic device;
    generating the tag in the operating environment;
    generating the graphic object in the operating environment.
  18. An electronic equipment, comprising:
    at least one processor; and
    a memory connected to the at least one processor, the memory having instructions stored therein which when executed by the at least one processor cause the electronic equipment to perform the method according to any of claims 11-17.
  19. A non-transitory machine readable medium, having computer executable instructions stored thereon which when executed cause at least one processor to perform the method according to any of claims 11-17.
  20. A computer program, comprising computer executable instructions which when executed cause at least one processor to perform the method according to any of claims 11-17.
PCT/CN2019/073976 2019-01-30 2019-01-30 Electronic device and control method therefor WO2020154971A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/073976 WO2020154971A1 (en) 2019-01-30 2019-01-30 Electronic device and control method therefor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/073976 WO2020154971A1 (en) 2019-01-30 2019-01-30 Electronic device and control method therefor

Publications (1)

Publication Number Publication Date
WO2020154971A1 true WO2020154971A1 (en) 2020-08-06

Family

ID=71840759

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/073976 WO2020154971A1 (en) 2019-01-30 2019-01-30 Electronic device and control method therefor

Country Status (1)

Country Link
WO (1) WO2020154971A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170090566A1 (en) * 2012-01-04 2017-03-30 Tobii Ab System for gaze interaction
CN103309608A (en) * 2012-03-14 2013-09-18 索尼公司 Visual feedback for highlight-driven gesture user interfaces
CN103593876A (en) * 2012-08-17 2014-02-19 北京三星通信技术研究有限公司 Electronic device, and method for controlling object in virtual scene in the electronic device
CN109144598A (en) * 2017-06-19 2019-01-04 天津锋时互动科技有限公司深圳分公司 Electronics mask man-machine interaction method and system based on gesture
CN108008873A (en) * 2017-11-10 2018-05-08 亮风台(上海)信息科技有限公司 A kind of operation method of user interface of head-mounted display apparatus

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113535064A (en) * 2021-09-16 2021-10-22 北京亮亮视野科技有限公司 Virtual label marking method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US11397463B2 (en) Discrete and continuous gestures for enabling hand rays
KR102098316B1 (en) Teleportation in an augmented and/or virtual reality environment
JP6093473B1 (en) Information processing method and program for causing computer to execute information processing method
JP2022535316A (en) Artificial reality system with sliding menu
KR20220040493A (en) Devices, methods, and graphical user interfaces for interacting with three-dimensional environments
TW202105133A (en) Virtual user interface using a peripheral device in artificial reality environments
CN109690447B (en) Information processing method, program for causing computer to execute the information processing method, and computer
JP2022535315A (en) Artificial reality system with self-tactile virtual keyboard
US20230315197A1 (en) Gaze timer based augmentation of functionality of a user input device
US11194400B2 (en) Gesture display method and apparatus for virtual reality scene
EP3639120A1 (en) Displacement oriented interaction in computer-mediated reality
JP2022534639A (en) Artificial Reality System with Finger Mapping Self-Tactile Input Method
WO2018020735A1 (en) Information processing method and program for causing computer to execute information processing method
JP6220937B1 (en) Information processing method, program for causing computer to execute information processing method, and computer
WO2020154971A1 (en) Electronic device and control method therefor
JP2007506165A (en) 3D space user interface for virtual reality graphic system control by function selection
KR102242703B1 (en) A smart user equipment connected to a head mounted display and control method therefor
TW202119175A (en) Human computer interaction system and human computer interaction method
JP6140871B1 (en) Information processing method and program for causing computer to execute information processing method
JP6159455B1 (en) Method, program, and recording medium for providing virtual space
JP2018110871A (en) Information processing method, program enabling computer to execute method and computer
JP2018026105A (en) Information processing method, and program for causing computer to implement information processing method
WO2020012997A1 (en) Information processing device, program, and information processing method
EP3700641B1 (en) Methods and systems for path-based locomotion in virtual reality
JP6290493B2 (en) Information processing method, program for causing computer to execute information processing method, and computer

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19913783

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19913783

Country of ref document: EP

Kind code of ref document: A1