WO2022262292A1 - Method and apparatus for guiding an operating body to perform mid-air operations - Google Patents

Method and apparatus for guiding an operating body to perform mid-air operations

Info

Publication number
WO2022262292A1
Authority
WO
WIPO (PCT)
Prior art keywords
menu
screen
operating body
icon
preset
Prior art date
Application number
PCT/CN2022/075031
Other languages
English (en)
French (fr)
Inventor
龙纪舟
莫飞龙
李少朋
徐亮
孙浚凯
Original Assignee
深圳地平线机器人科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳地平线机器人科技有限公司
Priority to JP2022567841A (published as JP2023534589A)
Publication of WO2022262292A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482Interaction with lists of selectable items, e.g. menus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback

Definitions

  • The present disclosure relates to the field of computer technology, and in particular to a method, apparatus, computer-readable storage medium, and electronic device for guiding an operating body to perform mid-air operations.
  • The mid-air operation method can be applied in scenarios such as AR (Augmented Reality)/VR (Virtual Reality), smart phones, and smart home appliances: when it is inconvenient for people to operate a control panel directly by hand, a machine can be controlled through mid-air operations, which makes people's lives more convenient.
  • Existing mid-air operation schemes based on gesture recognition usually perform gesture recognition on hand images captured by a camera and carry out corresponding operations according to the gesture type, movement trajectory, and the like.
  • Embodiments of the present disclosure provide a method, an apparatus, a computer-readable storage medium, and an electronic device for guiding an operating body to perform mid-air operations.
  • An embodiment of the present disclosure provides a method for guiding an operating body to perform mid-air operations, the method including: displaying a graphical menu of a first preset shape on a screen, and displaying an icon that has a mapping relationship with the operating body and has a second preset shape; detecting a mid-air operation performed by the operating body in space relative to the screen; in response to the mid-air operation, detecting a physical quantity of movement of the icon on the screen; and triggering, based on the physical quantity of movement and the target functional area in the graphical menu corresponding to the icon, the menu function corresponding to the target functional area.
  • Another embodiment provides an apparatus for guiding an operating body to perform mid-air operations, the apparatus including: a display module, configured to display a graphical menu of a first preset shape on the screen and to display an icon that has a mapping relationship with the operating body and has a second preset shape; a first detection module, configured to detect a mid-air operation performed by the operating body in space relative to the screen; a second detection module, configured to detect, in response to the mid-air operation, the physical quantity of movement of the icon on the screen; and a triggering module, configured to trigger, based on the physical quantity of movement and the target functional area in the graphical menu corresponding to the icon, the menu function corresponding to the target functional area.
  • Another embodiment provides a computer-readable storage medium that stores a computer program used to execute the above method for guiding an operating body to perform mid-air operations.
  • Another embodiment provides an electronic device including: a processor and a memory for storing instructions executable by the processor, the processor being configured to read the executable instructions from the memory and execute them to implement the above method for guiding an operating body to perform mid-air operations.
  • FIG. 1A is a diagram of an exemplary system architecture to which the present disclosure is applicable.
  • FIG. 1B is a schematic diagram of a screen included in the system architecture to which the present disclosure is applicable.
  • FIG. 2 is a schematic flowchart of a method for guiding an operating body to perform mid-air operations provided by an exemplary embodiment of the present disclosure.
  • FIG. 3 is a schematic flowchart of a method for guiding an operating body to perform mid-air operations provided by another exemplary embodiment of the present disclosure.
  • FIG. 4 is a schematic flowchart of a method for guiding an operating body to perform mid-air operations provided by another exemplary embodiment of the present disclosure.
  • FIG. 5 is a schematic flowchart of a method for guiding an operating body to perform mid-air operations provided by another exemplary embodiment of the present disclosure.
  • FIG. 6A, FIG. 6B, and FIG. 6C are exemplary schematic diagrams of triggering target functional areas in the method for guiding an operating body to perform mid-air operations provided by another exemplary embodiment of the present disclosure.
  • FIG. 7 is a schematic flowchart of a method for guiding an operating body to perform mid-air operations provided by another exemplary embodiment of the present disclosure.
  • FIG. 8 is an exemplary schematic diagram of an adjustment control interface for continuous adjustment in the method for guiding an operating body to perform mid-air operations provided by another exemplary embodiment of the present disclosure.
  • FIG. 9 is a schematic structural diagram of an apparatus for guiding an operating body to perform mid-air operations provided by an exemplary embodiment of the present disclosure.
  • FIG. 10 is a schematic structural diagram of an apparatus for guiding an operating body to perform mid-air operations provided by another exemplary embodiment of the present disclosure.
  • FIG. 11 is a structural diagram of an electronic device provided by an exemplary embodiment of the present disclosure.
  • In the present disclosure, "plural" may refer to two or more, and "at least one" may refer to one, two, or more.
  • The term "and/or" in the present disclosure merely describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone.
  • The character "/" in the present disclosure generally indicates that the objects before and after it are in an "or" relationship.
  • Embodiments of the present disclosure may be applied to electronic devices such as terminal devices, computer systems, servers, etc., which may operate with numerous other general purpose or special purpose computing system environments or configurations.
  • Examples of well known terminal devices, computing systems, environments and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick client computers, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, minicomputer systems, mainframe computer systems, and distributed cloud computing technology environments including any of the foregoing, among others.
  • Electronic devices such as terminal devices, computer systems, servers, etc. may be described in the general context of computer system-executable instructions, such as program modules, being executed by the computer system.
  • program modules may include routines, programs, objects, components, logic and data structures, etc., that perform particular tasks or implement particular abstract data types.
  • The computer system/server may be implemented in distributed cloud computing environments where tasks are performed by remote processing devices linked through a communications network, and program modules may be located in both local and remote computer system storage media including memory storage devices.
  • FIG. 1A shows an exemplary system architecture 100 to which a method or apparatus for guiding an operating body to perform mid-air operations according to an embodiment of the present disclosure can be applied.
  • FIG. 1B shows a schematic diagram of a screen 1011 on the terminal device 101.
  • a system architecture 100 may include a terminal device 101 , a network 102 , a server 103 and a data collection device 104 .
  • the network 102 is a medium for providing a communication link between the terminal device 101 and the server 103 .
  • Network 102 may include various connection types, such as wires, wireless communication links, or fiber optic cables, among others.
  • the user can use the terminal device 101 to interact with the server 103 through the network 102 to receive or send messages and the like.
  • Various communication client applications may be installed on the terminal device 101, such as audio and video playback applications, navigation applications, game applications, search applications, web browser applications, or instant messaging tools.
  • The terminal device 101 may be any of various electronic devices, including but not limited to mobile terminals such as a vehicle-mounted terminal, a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (Personal Digital Assistant), a PAD (tablet computer), and a PMP (Portable Media Player), as well as fixed terminals such as a digital TV and a desktop computer.
  • the terminal device 101 generally includes a screen 1011 as shown in FIG. 1B , on which a graphical menu and an icon corresponding to the real-time position of the operating body can be displayed. The user performs human-computer interaction with the terminal device 101 or the server 103 through the screen 1011 .
  • the server 103 may be a server that provides various services, for example, a background server that analyzes the posture and position of the operating body uploaded by the terminal device 101 in real time.
  • the background server can respond to the user's air operation, obtain the processing result (such as a command indicating to trigger the menu function), and feed back the processing result to the terminal device.
  • the data collection device 104 may be various devices for collecting data such as the position or posture of the operating body, such as a monocular camera, a binocular stereo camera, a laser radar, or a three-dimensional structured light imaging device.
  • The method for guiding an operating body to perform mid-air operations provided by the embodiments of the present disclosure may be executed by the server 103 or by the terminal device 101.
  • Correspondingly, the apparatus for guiding an operating body to perform mid-air operations may be provided in the server 103 or in the terminal device 101.
  • The numbers of terminal devices, networks, and servers in FIG. 1A are merely illustrative; there may be any number of terminal devices, networks, and servers according to implementation needs.
  • the above system architecture may not include a network, but only include servers or terminal devices.
  • Fig. 2 is a schematic flowchart of a method for guiding an operating body to perform mid-air operations provided by an exemplary embodiment of the present disclosure. This embodiment can be applied to an electronic device (the terminal device 101 or the server 103 shown in FIG. 1A). As shown in FIG. 2, the method includes the following steps:
  • Step 201, displaying a graphical menu of a first preset shape on the screen, and displaying an icon that has a mapping relationship with the operating body and has a second preset shape.
  • In this embodiment, the electronic device may display a graphical menu of a first preset shape on the screen 1011 shown in FIG. 1B, and display an icon that has a mapping relationship with the operating body and has a second preset shape.
  • The above-mentioned screen may be the screen of any type of electronic device; for example, it may be a central control screen in a vehicle, or a smart TV or smart phone placed indoors, and the like.
  • the first preset shape of the graphical menu can be any shape, for example, it can be a circular menu 10111 shown in FIG. 1B , a rectangular menu not shown in FIG. 1B , and so on.
  • the graphic menu includes at least one functional area, and each functional area can be triggered to execute a corresponding menu function. Examples include popping up submenus, performing volume adjustments, changing tracks, and controlling the on/off of specific devices, among others.
  • The operating body may be any of various hardware entities used to perform mid-air operations on the controlled device, or a specific body part of the user.
  • For example, the operating body may be a hardware entity that can output its position information to the above-mentioned electronic device in real time and has a preset shape (for example, a V shape), or another object with a specific shape.
  • The electronic device can detect the position of the operating body in space in real time, map the detected position to a corresponding position on the screen, and display an icon 10112 of the second preset shape, as shown in FIG. 1B, at that corresponding position on the screen, so that the position of the icon moves with the movement of the operating body.
  • The second preset shape can be any of various shapes, such as the droplet with a trail shown in FIG. 1B, or a spherical, dot, or pointer shape not shown in FIG. 1B.
  • The mapping relationship between the icon and the operating body means that the position of the icon on the screen is the position to which the position of the operating body in space is mapped.
  • Accordingly, the position of the icon on the screen moves along with the movement of the operating body.
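  • As an illustration of the mapping described above, the following Python sketch projects a detected 3D position onto screen coordinates. It is not taken from the disclosure: the detection-volume bounds, the screen resolution, and the linear mapping are all assumptions.

```python
# Minimal sketch: map the operating body's 3D position (sensor space)
# to a 2D screen position. Volume bounds and screen size are
# illustrative assumptions, not values from the disclosure.

def map_to_screen(pos_3d,
                  volume_min=(-0.4, -0.3), volume_max=(0.4, 0.3),
                  screen_w=1920, screen_h=1080):
    """Project (x, y, z) onto screen pixels, ignoring the depth axis z
    (roughly parallel to the camera's optical axis)."""
    x, y, _z = pos_3d
    nx = (x - volume_min[0]) / (volume_max[0] - volume_min[0])
    ny = (y - volume_min[1]) / (volume_max[1] - volume_min[1])
    nx = min(max(nx, 0.0), 1.0)  # clamp so the icon stays on screen
    ny = min(max(ny, 0.0), 1.0)
    return int(nx * (screen_w - 1)), int(ny * (screen_h - 1))

print(map_to_screen((0.0, 0.0, 0.6)))  # -> roughly the screen center
```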
  • Step 202, detecting the mid-air operation performed by the operating body in space relative to the screen.
  • In this embodiment, the electronic device may detect, in real time and based on various methods, the mid-air operation performed by the operating body in space relative to the screen.
  • The mid-air operation may be an operation in which a user interacts with a controlled device (such as a car audio-visual device, an air conditioner, or a TV) through the operating body, without contact with the screen.
  • Mid-air operations may include, but are not limited to: operations based on the movement trajectory of the operating body, operations based on the moving direction of the operating body, operations based on the moving distance of the operating body, and operations based on the posture of the operating body (such as gestures).
  • The electronic device can obtain the data to be recognized collected from the operating body by the data collection device 104 shown in FIG. 1A and recognize the data, so as to determine the mid-air operation performed by the operating body according to the recognition result.
  • the data collection device 104 may be a monocular camera, and the electronic device may recognize the image frames collected by the monocular camera in real time, and determine the position of the operating body in the image frame or the posture of the operating body.
  • the data collection device 104 may be a binocular stereo camera, and the electronic device may recognize binocular images collected by the binocular stereo camera in real time to determine the position of the operating body in the three-dimensional space.
  • Step 203, in response to the mid-air operation, detecting the physical quantity of movement of the icon on the screen.
  • In this embodiment, the electronic device may detect the physical quantity of movement of the icon on the screen in response to the mid-air operation.
  • The physical quantity of movement may be a physical quantity representing a specific feature of the icon's movement on the screen, such as the moving distance, moving speed, or moving direction of the icon.
  • Step 204, triggering, based on the physical quantity of movement and the target functional area in the graphical menu corresponding to the icon, the menu function corresponding to the target functional area.
  • In this embodiment, the electronic device may trigger the menu function corresponding to the target functional area based on the physical quantity of movement and the target functional area in the graphical menu corresponding to the icon.
  • the graphic menu may include at least one functional area, and each functional area may correspond to a menu function.
  • The menu functions can be preset; for example, a menu function may pop up a submenu under the triggered functional area, or control the controlled device to perform a corresponding function (such as playing a video, adjusting the volume, adjusting the air-conditioner temperature, or controlling a window switch, etc.).
  • The menu function may be triggered when the icon moves into the target functional area, or when the icon stays in the target functional area for a preset duration. As shown in FIG. 1B, when the icon 10112 stays in the target functional area labeled music for the preset duration, a submenu 10113 of the music function pops up.
  • The target functional area is the functional area, among the at least one functional area, that corresponds to the icon.
  • For example, when it is determined based on the physical quantity of movement that the icon has moved into a certain functional area, that functional area is the target functional area.
  • Alternatively, if it is determined based on the physical quantity of movement that the movement track of the icon conforms to the preset movement track corresponding to a certain functional area, the icon is determined to correspond to that functional area, and that functional area is the target functional area.
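  • A minimal sketch of how the target functional area could be resolved for a circular menu like the one in FIG. 1B: hit-test the icon's position against angular sectors. The four labels follow the vehicle example given later in this description; the radii and the even sector split are assumptions.

```python
import math

AREAS = ["music", "seat", "customization", "air conditioning"]  # example labels

def target_functional_area(icon_xy, menu_center, inner_r=60, outer_r=240):
    """Return the functional area under the icon, or None. The central
    disc (r < inner_r) is treated as the icon's initial position."""
    dx = icon_xy[0] - menu_center[0]
    dy = icon_xy[1] - menu_center[1]
    r = math.hypot(dx, dy)
    if not inner_r <= r <= outer_r:
        return None
    angle = math.atan2(dy, dx) % (2 * math.pi)         # 0 .. 2*pi
    sector = int(angle // (2 * math.pi / len(AREAS)))  # 0 .. len-1
    return AREAS[min(sector, len(AREAS) - 1)]
```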
  • the method may also include:
  • Detecting a menu pop-up operation performed by the operating body relative to the screen, and executing the above step 201 when the menu pop-up operation is detected; or detecting speech uttered by the user, and executing the above step 201 if a preset wake-up word (such as a keyword like "pop-up menu" or "hello") included in the speech is detected.
  • the menu pop-up operation is an operation in which the operating body moves in a specific way or presents a special gesture, etc., to trigger the pop-up of a graphical menu.
  • the method of detecting the voice uttered by the user may be an existing voice recognition method.
  • a speech recognition model based on a neural network can be used to convert the user's speech signal into text, and then determine the preset wake-up word from the text.
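  • A hedged sketch of that wake-word path: `transcribe` below stands in for any neural speech-to-text model and is a placeholder, not an API named in the disclosure; the wake words come from the example above.

```python
WAKE_WORDS = ("pop-up menu", "hello")  # example wake words from the text

def should_pop_up_menu(audio_chunk, transcribe) -> bool:
    """Run speech-to-text on one audio chunk and check for a wake word."""
    text = transcribe(audio_chunk).lower()
    return any(word in text for word in WAKE_WORDS)
```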
  • the electronic device may perform the following steps to detect the menu pop-up operation:
  • Step 301 detecting preset actions made by the operating body relative to the screen.
  • the preset action may be a static action (such as maintaining a posture at a certain position), or a dynamic action (such as moving with a certain trajectory or changing posture).
  • the electronic device may detect the motion of the operating body based on an existing motion detection method (for example, an motion detection method based on deep learning).
  • Step 302 determine the duration of the preset action.
  • For example, gesture recognition may be performed on the hand, and if the gesture is a preset gesture, the duration of the gesture is determined. For another example, it may be determined whether the movement track of the operating body is a preset track, and if so, the duration of the movement track is determined.
  • Step 303 if the duration is greater than or equal to the first preset duration, it is determined that a menu pop-up operation performed by the operating body relative to the screen is detected.
  • For example, if the preset gesture is a V-shaped gesture and the user maintains the V-shaped gesture for a duration greater than or equal to the first preset duration, it is determined that a menu pop-up operation has been performed, and a graphical menu then pops up on the screen.
  • This implementation method determines whether the operating body performs a menu pop-up operation by detecting the duration of the preset action performed by the operating body, which can increase the complexity of the menu pop-up operation and reduce the probability of user misoperation.
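  • Steps 301-303 amount to a dwell timer on the recognized action. The sketch below assumes one gesture-recognition result per camera frame and an illustrative 0.8 s first preset duration.

```python
import time

class MenuPopupDetector:
    """Pop the menu up once a preset action (e.g. a held V-shaped
    gesture) has persisted for the first preset duration."""

    def __init__(self, first_preset_duration=0.8):  # seconds, assumed
        self.first_preset_duration = first_preset_duration
        self._started_at = None

    def update(self, is_preset_action: bool, now=None) -> bool:
        now = time.monotonic() if now is None else now
        if not is_preset_action:
            self._started_at = None  # action interrupted: reset the timer
            return False
        if self._started_at is None:
            self._started_at = now   # action just began: start timing
        return now - self._started_at >= self.first_preset_duration
```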
  • the operating body is a user's hand. Based on this, the electronic device can detect the menu pop-up operation performed by the operator according to the following steps:
  • First, gesture recognition is performed on the user's hand to obtain a gesture recognition result. If the gesture indicated by the gesture recognition result is a preset gesture and the user's hand is within the preset spatial range, it is determined that the user's hand has performed a menu pop-up operation relative to the screen.
  • the preset spatial range may be the detection range of the data acquisition device 104 as shown in FIG. 1A , or may be a preset range in three-dimensional space corresponding to the display range of the screen.
  • the preset gesture can be a static gesture (such as a V-shaped gesture or a fist gesture, etc.), or a dynamic gesture (such as moving according to a certain trajectory with a certain gesture, or switching between several preset gestures).
  • This implementation method detects whether the operating body performs a menu pop-up operation through gesture recognition, and the implementation method and the user's operation method are simpler, which is conducive to quickly popping up a graphical menu.
  • the electronic device may detect the menu pop-up operation performed by the operator according to the following steps:
  • a first moving trajectory of the operating body in space is determined.
  • the method for determining the moving track of the operating body can be implemented based on existing technologies.
  • For example, the data acquisition device 104 shown in FIG. 1A may be a camera, and the electronic device may perform track recognition on multiple image frames of the operating body captured by the camera to determine the first movement trajectory made by the operating body.
  • If the first movement trajectory is the first preset trajectory and is located within the preset spatial range, it is determined that the operating body has performed a menu pop-up operation relative to the screen.
  • the first preset trajectory may be a circular or wavy trajectory in space or the like.
  • This implementation recognizes the movement track of the operating body and pops up the graphical menu when the track matches the first preset track. It only needs to perform track recognition on the operating body, without posture recognition, so the implementation is simple and the recognition accuracy is high, which is conducive to efficiently recognizing menu pop-up operations.
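  • One way such track recognition might look, assuming the first preset trajectory is a circle: check that the recorded points sit at a near-constant distance from their centroid. The sample count and 15% tolerance are illustrative choices, not values from the disclosure.

```python
import math

def is_circular_track(points, tolerance=0.15):
    """Return True if the (x, y) track approximates a circle."""
    if len(points) < 8:          # too few samples to judge
        return False
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    radii = [math.hypot(x - cx, y - cy) for x, y in points]
    mean_r = sum(radii) / len(radii)
    if mean_r == 0:
        return False
    # For a circle, every sample stays close to the mean radius.
    return all(abs(r - mean_r) / mean_r <= tolerance for r in radii)
```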
  • step 201 may include:
  • Step 2011, determining the two-dimensional coordinate position on the screen to which the current three-dimensional coordinate position of the operating body in space is mapped.
  • In this embodiment, the electronic device may use an existing spatial position detection method to determine the three-dimensional coordinate position of the operating body (for example, through depth image recognition or laser point cloud recognition), and then determine the two-dimensional coordinate position to which the operating body is currently mapped on the screen based on the preset correspondence between three-dimensional coordinate positions in space and two-dimensional coordinate positions on the screen.
  • Step 2012 displaying an icon with a second preset shape at the two-dimensional coordinate position.
  • Step 2013 Determine the target position for displaying the graphic menu of the first preset shape on the screen based on the two-dimensional coordinate position.
  • the electronic device may use the above-mentioned two-dimensional coordinate position as a reference position, and use a preset correspondence between the position of the graphic menu and the reference position to determine a target position for displaying the graphic menu.
  • the aforementioned reference position may be the target position, or a fixed position corresponding to the pre-divided grid area where the reference position is located may be the target position.
  • Step 2014 displaying a graphic menu at the target position.
  • the target position may be used as the center of the circle, so as to display a circular graphic menu on the screen.
  • In this implementation, the two-dimensional coordinate position of the icon is determined first, and the position of the graphical menu is then determined based on that position, so that the position of the graphical menu is associated with the real-time spatial position of the operating body. The user can thereby control the display position of the graphical menu, which facilitates the user's operation and improves the flexibility of mid-air operations.
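  • A small sketch of steps 2013-2014 under stated assumptions: the menu's target position is either the icon's position itself or the fixed center of the pre-divided grid cell containing it; the grid and screen sizes are illustrative.

```python
def menu_target_position(icon_xy, screen_w=1920, screen_h=1080,
                         grid_cols=4, grid_rows=3, snap_to_grid=False):
    """Derive the menu's target position from the icon's 2D position."""
    if not snap_to_grid:
        return icon_xy  # the reference position itself is the target
    cell_w, cell_h = screen_w / grid_cols, screen_h / grid_rows
    col = min(int(icon_xy[0] // cell_w), grid_cols - 1)
    row = min(int(icon_xy[1] // cell_h), grid_rows - 1)
    # Fixed center of the grid cell containing the reference position.
    return (int((col + 0.5) * cell_w), int((row + 0.5) * cell_h))
```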
  • step 201 may include:
  • Step 2015 determine the preset initial position mapped on the screen by the operating body.
  • the preset initial position may be a fixed position, such as the center of the screen.
  • Alternatively, the preset initial position may be the position on the screen to which the position of the operating body is first mapped when the method for guiding the operating body to perform mid-air operations is started.
  • Step 2016, displaying an icon of a second preset shape at a preset initial position.
  • Step 2017, determine the menu display position based on the preset initial position.
  • In this embodiment, the electronic device may use the preset initial position as the reference position and determine the menu display position for displaying the graphical menu according to the preset correspondence between the position of the graphical menu and the reference position.
  • the aforementioned reference position may be the menu display position, or a fixed position corresponding to the pre-divided grid area where the reference position is located may be the menu display position.
  • Step 2018 displaying a graphic menu of a first preset shape at the menu display position.
  • steps 2015-2018 may be combined with the menu pop-up operation described in the above optional embodiment, that is, when a menu pop-up operation is detected, step 2015-step 2018 is executed.
  • For example, when the user's gesture is a preset gesture, the above icon is displayed at the center of the screen (i.e., the preset initial position), and a circular graphical menu is displayed with the center of the screen as the center of the circle.
  • This implementation determines the menu display position based on the preset initial position, which enriches the ways of popping up the graphical menu. It also allows the graphical menu to be displayed at a fixed position on the screen, so that the display position of the graphical menu does not change with the movement of the operating body; the user does not need to look for the graphical menu on the screen, which makes mid-air operations easier.
  • step 203 may be performed as follows:
  • the electronic device may determine in real time the two-dimensional coordinates of the operating body currently mapped on the screen according to the preset mapping relationship between the space where the operating body is located and the display range of the screen.
  • Based on the amount of change of the three-dimensional coordinates of the operating body during the mid-air operation, the amount of change of the two-dimensional coordinates on the screen is determined.
  • For example, one coordinate axis of the three-dimensional coordinate system of the space where the operating body is located (such as the axis parallel to the optical axis of the camera) can be ignored, and the amount of change of the two-dimensional coordinates on the screen can be determined according to the coordinate changes on the other two axes and the above mapping relationship.
  • Based on the amount of change of the two-dimensional coordinates, the physical quantity of movement of the icon on the screen is determined.
  • For example, physical quantities such as the moving direction of the icon (which may be indicated by the angle between the straight-line movement track and the horizontal line), the moving speed, or the movement track may be used as the physical quantity of movement.
  • In this implementation, the three-dimensional coordinate variation of the operating body is used to determine the two-dimensional coordinate variation mapped onto the screen, and thus the physical quantity of movement. This accurately tracks the position of the operating body and thereby accurately determines the target functional area in the graphical menu, helping users perform mid-air operations more precisely.
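  • The physical quantities of movement can be derived from successive mapped 2D positions, as in this sketch; the 30 fps frame interval is an assumption.

```python
import math

def movement_quantities(prev_xy, cur_xy, dt=1 / 30):
    """Distance (pixels), speed (pixels/s), and direction (degrees,
    the angle between the straight-line track and the horizontal)."""
    dx = cur_xy[0] - prev_xy[0]
    dy = cur_xy[1] - prev_xy[1]
    distance = math.hypot(dx, dy)
    return {"distance": distance,
            "speed": distance / dt,
            "direction": math.degrees(math.atan2(dy, dx))}
```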
  • the electronic device may trigger the menu function corresponding to the target function area in any of the following three manners:
  • Mode 1: if the icon moves into the target functional area in the graphical menu and the continuous stay time of the icon in the target functional area is greater than or equal to the second preset duration, the menu function corresponding to the target functional area is triggered.
  • For example, the method for guiding an operating body to perform mid-air operations is applied to a vehicle, and the graphical menu displayed on the central control screen is a disc-shaped menu including four functional areas, labeled music, seat, customization, and air conditioning; the central area of the graphical menu is the initial position of the above icon, and the icon corresponding to the position of the operating body is a drop-shaped icon.
  • When the icon moves into the functional area labeled music, this functional area is the target functional area, and timing starts at the same time. If the icon stays in the target functional area for the second preset duration, the corresponding menu function is triggered.
  • For example, a submenu for adjusting the volume and switching music pops up, as shown in the figure.
  • Mode 2: if the icon is located at a preset position in the target functional area, the menu function corresponding to the target functional area is triggered.
  • For example, when the icon touches the ring-shaped edge area of the menu, the functional area corresponding to the contacted ring area (the area labeled music in the figure) is taken as the target functional area, and the submenu for adjusting the volume and switching music shown in the figure pops up.
  • Mode 3: if the second movement track of the icon in the target functional area matches the second preset track, the menu function corresponding to the target functional area is triggered.
  • This implementation provides a variety of schemes for triggering the menu function of the target functional area, which makes the user's mid-air operations more flexible and allows the user to choose a trigger mode suited to their own habits.
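  • Mode 1, the dwell-time trigger, can be sketched as a small state machine fed once per frame with the functional area currently under the icon (e.g. from a hit test such as the one above); the 1.0 s second preset duration is illustrative.

```python
import time

class DwellTrigger:
    """Fire a functional area's menu function after the icon has
    stayed in that area for the second preset duration."""

    def __init__(self, second_preset_duration=1.0):  # seconds, assumed
        self.second_preset_duration = second_preset_duration
        self._area = None
        self._entered_at = None

    def update(self, area, now=None):
        """`area` is the area under the icon (None if outside any).
        Returns the area to trigger, or None."""
        now = time.monotonic() if now is None else now
        if area != self._area:                # entered a new area
            self._area, self._entered_at = area, now
            return None
        if area is not None and now - self._entered_at >= self.second_preset_duration:
            self._entered_at = float("inf")   # fire only once per entry
            return area
        return None
```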
  • step 204 may be performed as follows:
  • Step 2041 determine whether the target function area has a corresponding submenu.
  • If there is no submenu, step 2042 is executed; if there is a submenu, step 2043 is executed.
  • Step 2042 trigger the execution of the function corresponding to the target function area based on the physical quantity of movement.
  • the method for triggering the function of the target functional area may include but not limited to any one of the three above-mentioned optional implementation manners.
  • For example, if the function corresponding to the target functional area is starting navigation, the navigation function is started when the physical quantity of movement meets the trigger condition.
  • Step 2043 triggering the display of a submenu on the screen based on the moving physical quantity, and executing step 2044.
  • the method for triggering the display of the submenu may include, but not limited to, any one of the three above-mentioned optional implementation manners. As shown in FIG. 6A , FIG. 6B or FIG. 6C , if the target functional area marked as music has a corresponding submenu, the submenu will pop up.
  • Step 2044 again detecting the moving physical quantity of the icon on the screen, and determining the target function area in the submenu corresponding to the icon.
  • the method of detecting the moving physical quantity is the same as the above-mentioned step 203, and the method of determining the target functional area is the same as the method of determining the target functional area described in the above-mentioned step 204, which will not be repeated here.
  • Step 2045 based on the re-detected moving physical quantity and the target function area in the submenu, continue to execute the triggering step.
  • This implementation, by cyclically executing the triggering steps, realizes mid-air operation when the graphical menu is set to include multi-level submenus, so that the functions realized through mid-air operations are richer, which is beneficial to expanding the graphical menu.
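  • The cyclic triggering of steps 2041-2045 can be pictured as walking a menu tree until a leaf function is reached. In this sketch `pick_area` stands in for whichever trigger mode is in use, and the tree contents are illustrative.

```python
def run_menu(menu, pick_area, execute):
    """Descend through submenus until a leaf function is triggered."""
    node = menu
    while True:
        area = pick_area(node)       # target functional area for the icon
        entry = node[area]
        if isinstance(entry, dict):  # has a submenu: display it, loop again
            node = entry
        else:                        # no submenu: trigger the function
            execute(entry)
            return

# Illustrative tree echoing the vehicle example above.
menu = {"music": {"volume": "adjust_volume", "track": "switch_music"},
        "seat": "seat_settings"}
```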
  • the above step 204 may also include the following substeps:
  • In response to determining that the function corresponding to the target functional area is a preset continuous adjustment function, an adjustment control interface corresponding to the continuous adjustment function is displayed on the screen.
  • For example, as shown in FIG. 8, a bar-shaped adjustment control interface for adjusting the volume is displayed on the screen.
  • Then, based on the movement track of the two-dimensional coordinate position to which the operating body is currently mapped on the screen, the continuous adjustment function is executed and the movement of the control point on the adjustment control interface is controlled.
  • For example, the control point 801 shown in FIG. 8 may be moved up and down according to the amount of change of the two-dimensional coordinates to which the operating body is mapped on the screen, so as to adjust the volume.
  • the adjustment control interface may be in the shape of a circular knob. When the moving track of the operating body mapped on the screen is circular, the knob may be rotated following the movement of the operating body to adjust the volume.
  • This implementation provides a mid-air operation method for continuously adjusting a specific function, thereby enabling precise adjustment for specific applications, enriching mid-air operation methods, and improving the accuracy of mid-air adjustment.
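  • For the bar-shaped volume interface of FIG. 8, the continuous adjustment can be sketched as mapping the vertical change of the operating body's mapped position onto the control point; the 0-100 range and the sensitivity are assumptions.

```python
class VolumeBar:
    """Control point of a bar-shaped adjustment interface."""

    def __init__(self, value=50.0, pixels_per_unit=4.0):
        self.value = value                      # current volume, 0..100
        self.pixels_per_unit = pixels_per_unit  # mapping sensitivity

    def on_move(self, dy_pixels):
        """Positive dy = mapped position moved down = lower volume."""
        self.value -= dy_pixels / self.pixels_per_unit
        self.value = min(max(self.value, 0.0), 100.0)
        return self.value
```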
  • The electronic device may also highlight the target functional area.
  • For example, the target functional area may be highlighted or enlarged. By highlighting the target functional area, the user can intuitively see where the target functional area currently is, which helps improve the accuracy of mid-air operations.
  • Fig. 9 is a schematic structural diagram of an apparatus for guiding an operating body to perform mid-air operations provided by an exemplary embodiment of the present disclosure.
  • The apparatus for guiding an operating body to perform mid-air operations includes: a display module 901, configured to display a graphical menu of a first preset shape on the screen and to display an icon that has a mapping relationship with the operating body and has a second preset shape; a first detection module 902, configured to detect the mid-air operation performed by the operating body in space relative to the screen; a second detection module 903, configured to detect, in response to the mid-air operation, the physical quantity of movement of the icon on the screen; and a triggering module 904, configured to trigger, based on the physical quantity of movement and the target functional area in the graphical menu corresponding to the icon, the menu function corresponding to the target functional area.
  • the display module 901 may display a graphic menu of a first preset shape on the screen, and display an icon having a mapping relationship with the operating body and having a second preset shape.
  • the above-mentioned screen may be a screen of any type of electronic device, for example, it may be a central control screen on a vehicle, a smart TV or a smart phone placed indoors, and the like.
  • the first preset shape of the graphic menu can be any shape, for example, it can be a circular menu, a rectangular menu and so on.
  • the graphic menu includes at least one functional area, and each functional area can be triggered to execute a corresponding menu function. Examples include popping up submenus, performing volume adjustments, changing tracks, or controlling the on/off of specific devices, and more.
  • The above-mentioned operating body may be any of various hardware entities used to perform mid-air operations on the controlled device, or a specific body part of the user.
  • For example, the operating body may be a hardware entity that outputs position information to the above-mentioned apparatus in real time, or another object with a specific shape.
  • The display module 901 can detect the position of the operating body in space in real time, map the detected position to a corresponding position on the screen, and display an icon of the second preset shape at that corresponding position, thereby presenting the effect that the position of the icon moves with the movement of the operating body.
  • the second preset shape can be various shapes, such as spherical, dot, or pointer.
  • The first detection module 902 may detect, in real time and based on various methods, the mid-air operation performed by the operating body in space relative to the screen.
  • The mid-air operation may be an operation in which a user interacts with a controlled device (such as a car audio-visual device, an air conditioner, or a TV) through the operating body, without contact with the screen.
  • Mid-air operations may include, but are not limited to: operations based on the movement trajectory of the operating body, operations based on the moving direction of the operating body, operations based on the moving distance of the operating body, and operations based on the posture of the operating body (such as gestures).
  • The first detection module 902 can obtain the data to be recognized collected from the operating body by the data collection device 104 shown in FIG. 1A and recognize the data, so as to detect the mid-air operation performed by the operating body.
  • the data collection device 104 may be a monocular camera, and the first detection module 902 may identify the image frame collected by the monocular camera in real time, and determine the position of the operating body in the image frame or the posture of the operating body.
  • the data collection device 104 may be a binocular stereo camera, and the first detection module 902 may identify binocular images collected by the binocular stereo camera in real time to determine the position of the operating body in the three-dimensional space.
  • The second detection module 903 may detect the physical quantity of movement of the icon on the screen in response to the mid-air operation.
  • the moving physical quantity may be a physical quantity representing a specific feature of the icon moving on the screen, such as the moving distance, moving speed, or moving direction of the icon.
  • the triggering module 904 can trigger the menu function corresponding to the target function area based on the moving physical quantity and the target function area corresponding to the icon in the graphic menu.
  • the graphic menu may include at least one functional area, and each functional area may correspond to a menu function.
  • The menu functions can be preset; for example, a menu function may pop up a submenu under the triggered functional area, or control the controlled device to perform a corresponding function (such as playing a video, adjusting the volume, adjusting the air-conditioner temperature, or controlling a window switch, etc.).
  • the menu function is triggered when the icon moves to the target function area, or the menu function is triggered when the icon stays in the target function area for a preset duration.
  • the target functional area may be a functional area corresponding to an icon in the at least one functional area.
  • For example, when the icon moves into a certain functional area, it is determined that the icon corresponds to that functional area, and that functional area is the target functional area.
  • Alternatively, if the movement track of the icon conforms to the preset movement track corresponding to a certain functional area, it is determined that the icon corresponds to that functional area, and that functional area is the target functional area.
  • FIG. 10 is a schematic structural diagram of an apparatus for guiding an operating body to perform mid-air operations provided by another exemplary embodiment of the present disclosure.
  • In some optional implementations, the apparatus further includes: a third detection module 905, configured to detect a menu pop-up operation performed by the operating body relative to the screen and, when the menu pop-up operation is detected, execute the step of displaying a graphical menu of the first preset shape on the screen and displaying an icon that has a mapping relationship with the operating body and has a second preset shape; or, a fourth detection module 906, configured to detect speech uttered by the user and, if a preset wake-up word included in the speech is detected, execute the step of displaying a graphical menu of the first preset shape on the screen and displaying an icon that has a mapping relationship with the operating body and has a second preset shape.
  • The third detection module 905 includes: a first detection unit 9051, configured to detect a preset action made by the operating body relative to the screen; a first determination unit 9052, configured to determine the duration of the preset action; and a second determination unit 9053, configured to determine, if the duration is greater than or equal to the first preset duration, that a menu pop-up operation performed by the operating body relative to the screen is detected.
  • In some optional implementations, the operating body is the user's hand, and the third detection module 905 includes: a recognition unit 9054, configured to perform gesture recognition on the user's hand to obtain a gesture recognition result; and a third determination unit 9055, configured to determine that the user's hand has performed a menu pop-up operation relative to the screen if the gesture indicated by the gesture recognition result is a preset gesture and the user's hand is within the preset spatial range.
  • Alternatively, the third detection module 905 includes: a fourth determination unit 9056, configured to determine the first movement trajectory of the operating body in space; and a fifth determination unit 9057, configured to determine that the operating body has performed a menu pop-up operation relative to the screen if the first movement trajectory is the first preset trajectory and is located within the preset spatial range.
  • The display module 901 includes: a sixth determination unit 9011, configured to determine the two-dimensional coordinate position on the screen to which the current three-dimensional coordinate position of the operating body in space is mapped; a first display unit 9012, configured to display an icon of the second preset shape at the two-dimensional coordinate position; a seventh determination unit 9013, configured to determine, based on the two-dimensional coordinate position, the target position for displaying the graphical menu of the first preset shape on the screen; and a second display unit 9014, configured to display the graphical menu at the target position.
  • Alternatively, the display module 901 includes: an eighth determination unit 9015, configured to determine the preset initial position to which the operating body is mapped on the screen; a third display unit 9016, configured to display an icon of the second preset shape at the preset initial position; a ninth determination unit 9017, configured to determine the menu display position based on the preset initial position; and a fourth display unit 9018, configured to display a graphical menu of the first preset shape at the menu display position.
  • The second detection module 903 includes: a tenth determination unit 9031, configured to determine the two-dimensional coordinates on the screen to which the current three-dimensional coordinates of the operating body in space are mapped; an eleventh determination unit 9032, configured to determine the amount of change of the two-dimensional coordinates on the screen based on the amount of change of the three-dimensional coordinates during the mid-air operation; and a twelfth determination unit 9033, configured to determine the physical quantity of movement of the icon on the screen based on the amount of change of the two-dimensional coordinates.
  • The triggering module 904 includes: a first triggering unit 9041, configured to trigger the menu function corresponding to the target functional area if the icon moves into the target functional area in the graphical menu and the continuous stay time of the icon in the target functional area is greater than or equal to the second preset duration; or, a second triggering unit 9042, configured to trigger the menu function corresponding to the target functional area if it is determined that the icon is located at a preset position in the target functional area; or, a third triggering unit 9043, configured to trigger the menu function corresponding to the target functional area if it is determined that the second movement track of the icon in the target functional area matches the second preset track.
  • Alternatively, the triggering module 904 includes: a fourth triggering unit 9044, configured to perform the following triggering steps based on the target functional area: determining whether the target functional area has a corresponding submenu and, if there is no submenu, triggering execution of the function corresponding to the target functional area based on the physical quantity of movement; a fifth triggering unit 9045, configured to trigger the display of the submenu on the screen based on the physical quantity of movement if there is a submenu; and a second detection unit 9046, configured to detect again the physical quantity of movement of the icon on the screen, determine the target functional area in the submenu corresponding to the icon, and continue to execute the triggering steps based on the re-detected physical quantity of movement and the target functional area in the submenu.
  • The triggering module 904 may further include: a fifth display unit 9047, configured to display on the screen, in response to determining that the function corresponding to the target functional area is a preset continuous adjustment function, the adjustment control interface corresponding to the continuous adjustment function; and an execution unit 9048, configured to execute the continuous adjustment function based on the movement track of the two-dimensional coordinate position to which the operating body is currently mapped on the screen, and to control the movement of the control point on the adjustment control interface.
  • the triggering module is further used to: highlight the target functional area.
  • The apparatus for guiding an operating body to perform mid-air operations provided by the above embodiment of the present disclosure displays a graphical menu and an icon that has a mapping relationship with the operating body on the screen, then detects the mid-air operation performed by the operating body in space relative to the screen while detecting the physical quantity of movement of the icon on the screen, and finally triggers, based on the physical quantity of movement and the target functional area in the graphical menu corresponding to the icon, the menu function corresponding to the target functional area.
  • In this way, when the user performs mid-air operations, the graphical menu and icon displayed on the screen intuitively guide the user in using the operating body, so that the user clearly understands how to trigger various functions with the operating body, which helps the user perform mid-air operations accurately.
  • At the same time, with the assistance of the graphical menu, the number of functions triggered by mid-air operations can be expanded, thereby making mid-air operations more convenient for the user.
  • In the embodiments of the present disclosure, the electronic device may be either or both of the terminal device 101 and the server 103 shown in FIG. 1A, or a stand-alone device independent of them, which can communicate with them to receive the collected input signals.
  • FIG. 11 illustrates a block diagram of an electronic device according to an embodiment of the present disclosure.
  • an electronic device 1100 includes one or more processors 1101 and a memory 1102 .
  • The processor 1101 may be a central processing unit (CPU) or another form of processing unit with data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 1100 to perform desired functions.
  • Memory 1102 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or nonvolatile memory.
  • The volatile memory may include, for example, a random access memory (RAM) and/or a cache memory (cache).
  • The non-volatile memory may include, for example, a read-only memory (ROM), a hard disk, a flash memory, and the like.
  • One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 1101 may execute the program instructions to implement the method for guiding an operating body to perform mid-air operations of the various embodiments of the present disclosure described above and/or other desired functions.
  • Various contents such as images, control instructions, and the like can also be stored in the computer-readable storage medium.
  • the electronic device 1100 may further include: an input device 1103 and an output device 1104, and these components are interconnected through a bus system and/or other forms of connection mechanisms (not shown).
  • the input device 1103 may be a camera, a lidar, a mouse, a keyboard, a microphone and other devices, and is used for inputting information such as images and instructions.
  • the input device 1103 may be a communication network connector for receiving input information such as images and instructions from the terminal device 101 and the server 103 .
  • the output device 1104 can output various information to the outside, including information such as graphic menus.
  • the output device 1104 may include, for example, a display, a speaker, a printer, a communication network and its connected remote output devices, and the like.
  • the electronic device 1100 may further include any other appropriate components.
  • Embodiments of the present disclosure may also be a computer program product, which includes computer program instructions that, when executed by a processor, cause the processor to perform the steps of the method for guiding an operating body to perform mid-air operations according to various embodiments of the present disclosure described in the above "Exemplary Method" section of this specification.
  • The program code of the computer program product for performing the operations of the embodiments of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server to execute.
  • Embodiments of the present disclosure may also be a computer-readable storage medium on which computer program instructions are stored; when executed by a processor, the computer program instructions cause the processor to perform the steps of the method for guiding an operating body to perform mid-air operations according to various embodiments of the present disclosure described in the above "Exemplary Method" section of this specification.
  • the computer readable storage medium may employ any combination of one or more readable media.
  • the readable medium may be a readable signal medium or a readable storage medium.
  • The readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof.
  • readable storage media include: electrical connection with one or more conductors, portable disk, hard disk, random access memory (RAM), read only memory (ROM), erasable Type programmable read-only memory ((Erasable Programmable Read-Only Memory, Eprom) or flash memory), optical fiber, portable compact disk read-only memory (Compact Disc Read-Only Memory, Cd-Rom), optical storage device, magnetic storage device, Or any suitable combination of the above.
  • The methods and apparatuses of the present disclosure may be implemented in many ways.
  • For example, the methods and apparatuses of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware.
  • The above order of the steps of the method is for illustration only; the steps of the method of the present disclosure are not limited to the order specifically described above unless otherwise specifically stated.
  • The present disclosure can also be implemented as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the method according to the present disclosure.
  • Accordingly, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
  • Each component or each step can be decomposed and/or recombined; these decompositions and/or recombinations should be regarded as equivalent solutions of the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present disclosure disclose a method and apparatus for guiding an operating body to perform air operations, a computer-readable storage medium, and an electronic device. The method includes: displaying on a screen a graphical menu of a first preset shape, and displaying an icon that has a mapping relationship with an operating body and has a second preset shape; detecting an air operation performed by the operating body in space relative to the screen; in response to the air operation, detecting a movement physical quantity of the icon on the screen; and, based on the movement physical quantity and a target functional region in the graphical menu corresponding to the icon, triggering the menu function corresponding to the target functional region. Embodiments of the present disclosure can intuitively guide the user to perform air operations with the operating body, make clear to the user how to trigger various functions with the operating body, and help the user perform air operations precisely; at the same time, with the aid of the graphical menu, the number of functions that can be triggered by air operations can be expanded, making air operations more convenient for the user.

Description

Method and apparatus for guiding an operating body to perform air operations — Technical Field
The present disclosure relates to the field of computer technology, and in particular to a method and apparatus for guiding an operating body to perform air operations, a computer-readable storage medium, and an electronic device.
Background
With the development of artificial intelligence technology, air operations on images displayed on a screen are used in more and more application fields. Air operation methods can be applied in AR (Augmented Reality)/VR (Virtual Reality), smartphone, and smart home appliance scenarios; when it is inconvenient for people to operate a control panel directly by hand, air operations can be used to control a machine, which makes life more convenient.
Existing air operation solutions based on gesture recognition usually perform gesture recognition on hand images captured by a camera, and perform corresponding operations according to the category of the gesture, its movement trajectory, and the like.
Summary
Embodiments of the present disclosure provide a method and apparatus for guiding an operating body to perform air operations, a computer-readable storage medium, and an electronic device.
Embodiments of the present disclosure provide a method for guiding an operating body to perform air operations, the method including: displaying on a screen a graphical menu of a first preset shape, and displaying an icon that has a mapping relationship with the operating body and has a second preset shape; detecting an air operation performed by the operating body in space relative to the screen; in response to the air operation, detecting a movement physical quantity of the icon on the screen; and, based on the movement physical quantity and a target functional region in the graphical menu corresponding to the icon, triggering the menu function corresponding to the target functional region.
According to another aspect of the embodiments of the present disclosure, there is provided an apparatus for guiding an operating body to perform air operations, the apparatus including: a display module configured to display on a screen a graphical menu of a first preset shape and to display an icon that has a mapping relationship with the operating body and has a second preset shape; a first detection module configured to detect an air operation performed by the operating body in space relative to the screen; a second detection module configured to detect, in response to the air operation, a movement physical quantity of the icon on the screen; and a trigger module configured to trigger, based on the movement physical quantity and a target functional region in the graphical menu corresponding to the icon, the menu function corresponding to the target functional region.
According to another aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium storing a computer program for executing the above method for guiding an operating body to perform air operations.
According to another aspect of the embodiments of the present disclosure, there is provided an electronic device including: a processor; and a memory for storing processor-executable instructions; the processor being configured to read the executable instructions from the memory and execute the instructions to implement the above method for guiding an operating body to perform air operations.
Based on the method, apparatus, computer-readable storage medium, and electronic device provided by the above embodiments of the present disclosure, a graphical menu and an icon having a mapping relationship with the operating body are displayed on the screen; the air operation performed by the operating body in space relative to the screen is then detected, and the movement physical quantity of the icon on the screen is detected at the same time; finally, based on the movement physical quantity and the target functional region in the graphical menu corresponding to the icon, the menu function corresponding to the target functional region is triggered. In this way, when the user performs air operations, the graphical menu and icon displayed on the screen intuitively guide the user, making clear how to trigger various functions with the operating body, which helps the user perform air operations precisely; at the same time, by adjusting the number of menu items displayed on the graphical menu, the number of menu functions that can be triggered by air operations can be expanded, making air operations more convenient for the user.
The technical solutions of the present disclosure are described in further detail below with reference to the accompanying drawings and embodiments.
Brief Description of the Drawings
The above and other objects, features, and advantages of the present disclosure will become more apparent through a more detailed description of the embodiments of the present disclosure in conjunction with the accompanying drawings. The drawings are intended to provide a further understanding of the embodiments of the present disclosure, constitute a part of the specification, serve to explain the present disclosure together with the embodiments, and do not limit the present disclosure. In the drawings, the same reference numerals generally denote the same components or steps.
Fig. 1A is a diagram of a system to which the present disclosure is applicable.
Fig. 1B is a schematic diagram of a screen included in the system architecture to which the present disclosure is applicable.
Fig. 2 is a schematic flowchart of a method for guiding an operating body to perform air operations provided by an exemplary embodiment of the present disclosure.
Fig. 3 is a schematic flowchart of a method for guiding an operating body to perform air operations provided by another exemplary embodiment of the present disclosure.
Fig. 4 is a schematic flowchart of a method for guiding an operating body to perform air operations provided by another exemplary embodiment of the present disclosure.
Fig. 5 is a schematic flowchart of a method for guiding an operating body to perform air operations provided by another exemplary embodiment of the present disclosure.
Figs. 6A, 6B, and 6C are exemplary schematic diagrams of triggering a target functional region in a method for guiding an operating body to perform air operations provided by another exemplary embodiment of the present disclosure.
Fig. 7 is a schematic flowchart of a method for guiding an operating body to perform air operations provided by another exemplary embodiment of the present disclosure.
Fig. 8 is an exemplary schematic diagram of an adjustment control interface for continuous adjustment in a method for guiding an operating body to perform air operations provided by another exemplary embodiment of the present disclosure.
Fig. 9 is a schematic structural diagram of an apparatus for guiding an operating body to perform air operations provided by an exemplary embodiment of the present disclosure.
Fig. 10 is a schematic structural diagram of an apparatus for guiding an operating body to perform air operations provided by another exemplary embodiment of the present disclosure.
Fig. 11 is a structural diagram of an electronic device provided by an exemplary embodiment of the present disclosure.
Detailed Description
Hereinafter, exemplary embodiments according to the present disclosure will be described in detail with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure, and it should be understood that the present disclosure is not limited by the exemplary embodiments described here.
It should be noted that, unless otherwise specifically stated, the relative arrangement of components and steps, numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure.
Those skilled in the art will appreciate that terms such as "first" and "second" in the embodiments of the present disclosure are only used to distinguish different steps, devices, modules, and the like, and represent neither any particular technical meaning nor a necessary logical order between them.
It should also be understood that, in the embodiments of the present disclosure, "a plurality of" may refer to two or more, and "at least one" may refer to one, two, or more.
It should also be understood that any component, data, or structure mentioned in the embodiments of the present disclosure may generally be understood as one or more, unless explicitly defined otherwise or the context suggests the contrary.
In addition, the term "and/or" in the present disclosure merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may indicate three cases: A alone, both A and B, and B alone. In addition, the character "/" in the present disclosure generally indicates an "or" relationship between the preceding and following associated objects.
It should also be understood that the description of the embodiments in the present disclosure emphasizes the differences between the embodiments; for their identical or similar parts, the embodiments may refer to one another, and, for brevity, these are not repeated.
Meanwhile, it should be understood that, for convenience of description, the dimensions of the parts shown in the drawings are not drawn according to actual proportional relationships.
The following description of at least one exemplary embodiment is merely illustrative and is in no way intended to limit the present disclosure or its application or use.
Techniques, methods, and devices known to those of ordinary skill in the relevant art may not be discussed in detail, but where appropriate, such techniques, methods, and devices should be regarded as part of the specification.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further discussed in subsequent drawings.
Embodiments of the present disclosure can be applied to electronic devices such as terminal devices, computer systems, and servers, which can operate together with numerous other general-purpose or special-purpose computing system environments or configurations. Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, and servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, minicomputer systems, mainframe computer systems, and distributed cloud computing environments including any of the above systems.
Electronic devices such as terminal devices, computer systems, and servers may be described in the general context of computer-system-executable instructions (such as program modules) executed by a computer system. Generally, program modules may include routines, programs, object programs, components, logic, data structures, and the like, which perform specific tasks or implement specific abstract data types. The computer system/server may be implemented in a distributed cloud computing environment in which tasks are performed by remote processing devices linked through a communication network, and program modules may be located on local or remote computing system storage media including storage devices.
Application Overview
In existing air operation methods, during human-computer interaction the screen gives no prompt on how to operate; the user does not know how to perform gesture operations, nor which functions correspond to different operation modes (for example, hand shapes), and only receives a success prompt after a blind operation succeeds, so suitable guidance is lacking. In addition, current air operation methods impose strict requirements on the speed, amplitude, and hand shape of the operating body, and the user must operate with standard movements. Finally, in current air operation methods one movement or posture can correspond to only one function, resulting in low extensibility of air operations.
Exemplary System
Fig. 1A shows an exemplary system architecture 100 to which the method or apparatus for guiding an operating body to perform air operations of embodiments of the present disclosure can be applied, and Fig. 1B shows a schematic diagram of a screen 1011 on a terminal device 101.
As shown in Fig. 1A, the system architecture 100 may include a terminal device 101, a network 102, a server 103, and a data acquisition device 104. The network 102 is a medium for providing a communication link between the terminal device 101 and the server 103. The network 102 may include various connection types, such as wired or wireless communication links, or optical fiber cables.
A user may use the terminal device 101 to interact with the server 103 through the network 102 to receive or send messages and the like. Various communication client applications may be installed on the terminal device 101, such as audio/video playback applications, navigation applications, game applications, search applications, web browser applications, or instant messaging tools.
The terminal device 101 may be any of various electronic devices, including but not limited to mobile terminals such as in-vehicle terminals, mobile phones, laptop computers, digital broadcast receivers, PDAs (Personal Digital Assistants), PADs (Portable Android Devices, i.e. tablet computers), and PMPs (Portable Media Players), as well as fixed terminals such as digital televisions (TVs) and desktop computers. The terminal device 101 typically includes a screen 1011 as shown in Fig. 1B, on which a graphical menu and an icon corresponding to the real-time position of the operating body can be displayed. The user performs human-computer interaction with the terminal device 101 or the server 103 through the screen 1011.
The server 103 may be a server providing various services, for example a back-end server that analyzes the posture and position of the operating body uploaded in real time by the terminal device 101. The back-end server can respond to the user's air operation, obtain a processing result (for example, a command instructing that a menu function be triggered), and feed the processing result back to the terminal device.
The data acquisition device 104 may be any of various devices for acquiring data such as the position or posture of the operating body, for example a monocular camera, a binocular stereo camera, a lidar, or a three-dimensional structured-light imaging device.
It should be noted that the method for guiding an operating body to perform air operations provided by the embodiments of the present disclosure may be executed by the server 103 or by the terminal device 101; correspondingly, the apparatus for guiding an operating body to perform air operations may be provided in the server 103 or in the terminal device 101.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1A are merely illustrative. There may be any number of terminal devices, networks, and servers according to implementation needs. When the data required for recognizing the operating body does not need to be obtained remotely, the above system architecture may include no network and only a server or a terminal device.
Exemplary Method
Fig. 2 is a schematic flowchart of a method for guiding an operating body to perform air operations provided by an exemplary embodiment of the present disclosure. This embodiment can be applied to an electronic device (such as the terminal device 101 or the server 103 shown in Fig. 1A). As shown in Fig. 2, the method includes the following steps:
Step 201: display on a screen a graphical menu of a first preset shape, and display an icon that has a mapping relationship with the operating body and has a second preset shape.
In this embodiment, the electronic device may display, on the screen 1011 shown in Fig. 1B, a graphical menu of a first preset shape, and display an icon that has a mapping relationship with the operating body and has a second preset shape. The screen may be the screen of any type of electronic device, for example a center console screen in a vehicle, an indoor smart TV, a smartphone, and so on.
The first preset shape of the graphical menu may be any shape, for example the circular menu 10111 shown in Fig. 1B, or a rectangular menu not shown in Fig. 1B, and so on. The graphical menu includes at least one functional region, and each functional region can be triggered to execute a corresponding menu function, for example popping up a submenu, adjusting the volume, changing tracks, or switching a specific device on or off.
The operating body may be any of various hardware entities that perform air operations on the controlled device, or a specific body part of the user. For example, the operating body may be a body part such as the user's hand or head; it may also be a hardware entity with a preset shape, such as a handle, that can output position information to the electronic device in real time (the preset shape may be, for example, a V shape); it may also be another object with a specific shape.
The electronic device may detect the position of the operating body in space in real time, map the detected position to a corresponding position on the screen, and display at that position an icon 10112 of the second preset shape as shown in Fig. 1B, thus presenting the effect that the icon moves as the operating body moves. The second preset shape may be any of various shapes, for example the droplet shape with a trailing tail shown in Fig. 1B, or a ball, dot, or pointer shape not shown in Fig. 1B.
In this case, the mapping relationship between the icon and the operating body means that the position of the icon on the screen is the position on the screen to which the position of the operating body in space is mapped. Correspondingly, as the operating body moves, the position of the icon on the screen moves with it.
Step 202: detect the air operation performed by the operating body in space relative to the screen.
In this embodiment, the electronic device may detect, in real time and based on various methods, the air operation performed by the operating body in space relative to the screen. The air operation may be an operation in which the user interacts with the controlled device (for example, an in-vehicle audio/video device, air conditioner, or TV) through the operating body without touching the screen. As examples, air operations may include, but are not limited to: operations based on the movement trajectory of the operating body, operations based on its movement direction, operations based on its movement distance, and operations based on its posture (for example, gestures).
Typically, the electronic device may obtain the to-be-recognized data acquired for the operating body by the data acquisition device 104 shown in Fig. 1A, and recognize the data, thereby determining the air operation performed by the operating body from the recognition result. As an example, the data acquisition device 104 may be a monocular camera, and the electronic device may recognize the image frames acquired in real time by the monocular camera to determine the position of the operating body in the image frames or the posture of the operating body. As another example, the data acquisition device 104 may be a binocular stereo camera, and the electronic device may recognize the binocular images acquired in real time by the binocular stereo camera to determine the position of the operating body in three-dimensional space.
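To make the recognition loop concrete, the following is a minimal sketch of per-frame acquisition and recognition for the monocular-camera case, assuming OpenCV for capture; `recognize_hand` is a hypothetical stand-in for whatever detection model an implementation actually uses:

```python
import cv2  # OpenCV is assumed here only for frame capture; any source works

def recognize_hand(frame):
    """Hypothetical recognizer: returns (position, gesture) for the operating
    body in this frame, or None if none is visible. A real system would run
    a detection/keypoint model here."""
    raise NotImplementedError

def capture_air_operations(camera_index=0):
    """Yield (position, gesture) results frame by frame from a monocular camera."""
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = recognize_hand(frame)
            if result is not None:
                yield result  # downstream code maps this to screen coordinates
    finally:
        cap.release()
```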
Step 203: in response to the air operation, detect the movement physical quantity of the icon on the screen.
In this embodiment, the electronic device may, in response to the air operation, detect the movement physical quantity of the icon on the screen. The movement physical quantity may be a physical quantity representing a particular feature of the icon's movement on the screen, such as the icon's movement distance, movement speed, or movement direction.
Step 204: based on the movement physical quantity and the target functional region in the graphical menu corresponding to the icon, trigger the menu function corresponding to the target functional region.
In this embodiment, the electronic device may trigger the menu function corresponding to the target functional region based on the movement physical quantity and the target functional region in the graphical menu corresponding to the icon. The graphical menu may include at least one functional region, and each functional region may correspond to one menu function. The menu functions may be preset; for example, a menu function may be popping up the submenu under the triggered functional region, or controlling the controlled device to execute a corresponding function (for example, playing a video, adjusting the volume, adjusting the air-conditioner temperature, or controlling the car windows). The menu function can be triggered in a variety of ways; for example, the menu function is triggered as soon as the icon moves into the target functional region, or the menu function is triggered when the icon's dwell time within the target functional region reaches a preset duration. As shown in Fig. 1B, when the dwell time of the icon 10112 in the target functional region labeled Music reaches the preset duration, the lower-level submenu 10113 of the music function pops up.
The target functional region may be the functional region, among the at least one functional region, that has a correspondence with the icon. As an example, when the icon moves into a certain functional region, it is determined that the icon corresponds to that functional region, and that functional region is the target functional region; in this case, if it is determined based on the movement physical quantity that the icon has moved into a certain functional region, that functional region is the target functional region. As another example, when the movement trajectory of the icon matches the preset movement trajectory corresponding to a certain functional region, it is determined that the icon corresponds to that functional region, and that functional region is the target functional region; in this case, if it is determined based on the movement physical quantity that the icon's movement trajectory matches the preset movement trajectory corresponding to a certain functional region, that functional region is the target functional region.
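As a rough illustration of the dwell-time variant of step 204, the sketch below resolves the target functional region with a point-in-region test and triggers its menu function after a continuous dwell; the `Region` structure and the one-second threshold are invented for the example:

```python
import time
from dataclasses import dataclass
from typing import Callable

@dataclass
class Region:
    name: str
    contains: Callable[[float, float], bool]  # e.g. a sector test for a disc menu
    on_trigger: Callable[[], None]            # the menu function to run

def dwell_trigger(regions, icon_positions, dwell_threshold=1.0):
    """icon_positions: stream of (x, y) screen coordinates over time.
    Triggers a region's menu function once the icon has stayed inside it
    continuously for dwell_threshold seconds (the 'second preset duration')."""
    current, entered_at = None, None
    for x, y in icon_positions:
        hit = next((r for r in regions if r.contains(x, y)), None)
        if hit is not current:                 # entered a new region (or left one)
            current, entered_at = hit, time.monotonic()
        elif hit is not None and time.monotonic() - entered_at >= dwell_threshold:
            hit.on_trigger()                   # dwelled long enough: trigger
            current, entered_at = None, None   # re-arm after triggering
```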
Through the method provided by the above embodiments of the present disclosure, a graphical menu and an icon having a mapping relationship with the operating body are displayed on the screen; the air operation performed by the operating body in space relative to the screen is then detected, and the movement physical quantity of the icon on the screen is detected at the same time; finally, based on the movement physical quantity and the target functional region in the graphical menu corresponding to the icon, the menu function corresponding to the target functional region is triggered. Thus, when the user performs air operations, the graphical menu and icon displayed on the screen intuitively guide the user, making clear how to trigger various functions with the operating body, which helps the user perform air operations precisely; at the same time, by adjusting the number of menu items displayed on the graphical menu, the number of menu functions that can be triggered by air operations can be expanded, making air operations more convenient for the user.
In some optional implementations, the method may further include:
detecting a menu pop-up operation performed by the operating body relative to the screen, and executing the above step 201 when the menu pop-up operation is detected; or, detecting speech uttered by the user, and executing the above step 201 if a preset wake word (for example, a keyword such as "pop up menu" or "hello") is detected in the speech.
The menu pop-up operation is an operation, such as the operating body moving in a specific way or assuming a special posture, that triggers the pop-up of the graphical menu. The method for detecting the user's speech may be an existing speech recognition method; for example, a speech recognition model implemented with a neural network may be used to convert the user's speech signal into text, and the preset wake word may then be identified in the text.
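A minimal sketch of the wake-word branch, assuming a speech-to-text function `transcribe` exists upstream (a hypothetical stand-in, not a specific library API):

```python
WAKE_WORDS = ("pop up menu", "hello")  # example keywords; configurable in practice

def transcribe(audio_chunk):
    """Hypothetical speech-to-text call; a real system would run a
    neural-network ASR model here, as the text suggests."""
    raise NotImplementedError

def should_pop_up_menu(audio_chunk):
    """Return True if the transcribed speech contains a preset wake word."""
    text = transcribe(audio_chunk).lower()
    return any(word in text for word in WAKE_WORDS)
```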
By detecting a menu pop-up operation of the operating body or detecting the user's speech, this implementation enables diverse ways of popping up the graphical menu, improves the flexibility of the user's air operations, and makes air operations more convenient.
In some optional implementations, as shown in Fig. 3, the electronic device may detect the menu pop-up operation through the following steps:
Step 301: detect a preset action performed by the operating body relative to the screen.
The preset action may be a static action (for example, holding a posture at a certain position) or a dynamic action (for example, moving along a certain trajectory or changing posture). The electronic device may detect the action of the operating body based on existing action detection methods (for example, deep-learning-based action detection methods).
Step 302: determine the duration of the preset action.
For example, when the operating body is a hand, gesture recognition may be performed on the hand; if the gesture is a preset gesture, the duration of that gesture is determined. As another example, it may be determined whether the movement trajectory of the operating body is a preset trajectory; if so, the duration of that movement trajectory is determined.
Step 303: if the duration is greater than or equal to a first preset duration, determine that a menu pop-up operation performed by the operating body relative to the screen has been detected.
As an example, suppose the preset gesture is a V-shaped gesture; if the user's hand holds the V-shaped gesture for at least the first preset duration, it is determined that a menu pop-up operation has been performed, and the graphical menu then pops up on the screen.
By using the duration of the operating body's preset action to determine whether a menu pop-up operation has been performed, this implementation increases the complexity of the menu pop-up operation and reduces the probability of accidental operation by the user.
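Steps 301-303 amount to a small state machine over per-frame action labels; the sketch below assumes an upstream classifier emits one label per frame, and the label name and threshold are invented for the example:

```python
import time

def detect_menu_popup(action_labels, preset_action="v_gesture",
                      first_preset_duration=0.8):
    """action_labels: iterable of per-frame action labels from an upstream
    classifier (hypothetical). Returns True once the preset action has been
    held continuously for at least first_preset_duration seconds."""
    started = None
    for label in action_labels:
        if label == preset_action:
            if started is None:
                started = time.monotonic()     # step 302: start timing
            if time.monotonic() - started >= first_preset_duration:
                return True                    # step 303: duration reached
        else:
            started = None                     # action interrupted; restart
    return False
```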
In some optional implementations, the operating body is the user's hand. On this basis, the electronic device may detect the menu pop-up operation performed by the operating body as follows.
First, perform gesture recognition on the user's hand to obtain a gesture recognition result. The method for performing gesture recognition on the user's hand can be implemented with existing techniques and is not repeated here.
Then, if the gesture indicated by the gesture recognition result is a preset gesture and the user's hand is within a preset spatial range, determine that the user's hand has performed a menu pop-up operation relative to the screen.
The preset spatial range may be the detection range of the data acquisition device 104 shown in Fig. 1A, or a preset range in three-dimensional space corresponding to the display range of the screen. The preset gesture may be a static gesture (for example, a V-shaped gesture or a fist) or a dynamic gesture (for example, moving along a certain trajectory with a certain gesture, or switching among several preset gestures).
By using gesture recognition to detect whether the operating body performs a menu pop-up operation, this implementation is simpler both to implement and to operate, which helps pop up the graphical menu quickly.
In some optional implementations, the electronic device may detect the menu pop-up operation performed by the operating body as follows.
First, determine a first movement trajectory of the operating body in space. The method for determining the movement trajectory of the operating body can be implemented based on existing techniques; for example, the data acquisition device 104 shown in Fig. 1A may be a camera, and the electronic device may perform trajectory recognition on multiple image frames of the operating body captured by the camera to determine the first movement trajectory.
Then, if the first movement trajectory is a first preset trajectory and lies within the preset spatial range, determine that the operating body has performed a menu pop-up operation relative to the screen.
As an example, the first preset trajectory may be a circle or a wavy line in space.
By recognizing the movement trajectory of the operating body and popping up the graphical menu when the trajectory is the first preset trajectory, this implementation only requires trajectory recognition and no posture recognition of the operating body; it is simple to implement and highly accurate, which helps recognize the menu pop-up operation efficiently.
In some optional implementations, as shown in Fig. 4, step 201 may include:
Step 2011: determine the two-dimensional coordinate position on the screen to which the current three-dimensional coordinate position of the operating body in space is mapped.
The electronic device may use existing spatial position detection methods to determine the three-dimensional coordinate position of the operating body (for example, via depth-image recognition or laser point cloud recognition), and then, based on a preset correspondence between three-dimensional coordinate positions and two-dimensional coordinate positions on the screen, determine the two-dimensional coordinate position on the screen to which the operating body is currently mapped.
Step 2012: display the icon with the second preset shape at the two-dimensional coordinate position.
Step 2013: determine, based on the two-dimensional coordinate position, the target position at which the graphical menu of the first preset shape is displayed on the screen.
Typically, the electronic device may take the above two-dimensional coordinate position as a reference position and determine the target position for displaying the graphical menu using a preset correspondence between the position of the graphical menu and the reference position. For example, the reference position itself may serve as the target position, or a fixed position corresponding to the pre-divided grid cell in which the reference position lies may serve as the target position.
Step 2014: display the graphical menu at the target position.
As an example, when the first preset shape is a circle, the target position may be used as the center of the circle, so that a circular graphical menu is displayed on the screen.
By first determining the two-dimensional coordinate position of the icon and then determining the position of the graphical menu based on it, this implementation associates the position of the graphical menu with the real-time spatial position of the operating body, allowing the user to control where the graphical menu is displayed, which makes operation more convenient and increases the flexibility of air operations.
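A sketch of steps 2011-2014 under the simplest assumption: a preset axis-aligned operating box in space mapped linearly onto the screen, combined with the grid-snapping variant of menu placement mentioned above; all bounds and sizes are invented for the example:

```python
def map_3d_to_screen(pos3d, box_min=(-0.3, -0.2, 0.3), box_max=(0.3, 0.2, 0.9),
                     screen_w=1920, screen_h=1080):
    """Linearly map the operating body's (x, y, z) inside a preset spatial box
    to a 2D screen position (step 2011); depth is not used for the 2D position."""
    x, y, _ = pos3d
    u = (x - box_min[0]) / (box_max[0] - box_min[0]) * screen_w
    v = (y - box_min[1]) / (box_max[1] - box_min[1]) * screen_h
    return (min(max(u, 0.0), screen_w), min(max(v, 0.0), screen_h))

def menu_target_position(icon_pos, grid=(4, 3), screen_w=1920, screen_h=1080):
    """Step 2013, grid variant: snap the menu to the fixed center of the
    pre-divided grid cell containing the reference (icon) position."""
    cell_w, cell_h = screen_w / grid[0], screen_h / grid[1]
    col = min(int(icon_pos[0] // cell_w), grid[0] - 1)
    row = min(int(icon_pos[1] // cell_h), grid[1] - 1)
    return ((col + 0.5) * cell_w, (row + 0.5) * cell_h)
```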
In some optional implementations, as shown in Fig. 5, step 201 may include:
Step 2015: determine a preset initial position mapped by the operating body on the screen.
The preset initial position may be a fixed position, for example the center of the screen. It may also be the position on the screen to which the position of the operating body is first mapped when the method for guiding the operating body to perform air operations starts executing.
Step 2016: display the icon of the second preset shape at the preset initial position.
Step 2017: determine a menu display position based on the preset initial position.
Typically, the electronic device may take the preset initial position as a reference position and determine the menu display position using a preset correspondence between the position of the graphical menu and the reference position; that is, the correspondence between the graphical menu's position and the reference position can be preset, and the electronic device determines the menu display position for the graphical menu according to this correspondence. For example, the reference position itself may serve as the menu display position, or a fixed position corresponding to the pre-divided grid cell in which the reference position lies may serve as the menu display position.
Step 2018: display the graphical menu of the first preset shape at the menu display position.
It should be noted that steps 2015-2018 may be combined with the menu pop-up operation described in the optional embodiments above; that is, steps 2015-2018 are executed when the menu pop-up operation is detected. For example, when the detected user gesture is the preset gesture, the icon is displayed at the screen center (i.e., the preset initial position), and a circular graphical menu is displayed with the screen center as its center.
By determining the menu display position based on the preset initial position, this implementation enriches the ways of popping up the graphical menu and enables the graphical menu to be displayed at a fixed position on the screen; the menu's display position does not change as the operating body moves, so the user does not need to search for the graphical menu on the screen, making air operations more convenient.
In some optional implementations, step 203 may be performed as follows.
First, determine the two-dimensional coordinates on the screen to which the current three-dimensional coordinates of the operating body in space are mapped.
Specifically, the electronic device may determine in real time the two-dimensional coordinates on the screen to which the operating body is currently mapped, according to a preset mapping relationship between the space in which the operating body is located and the display range of the screen.
Then, based on the change in the three-dimensional coordinates during the air operation, determine the change in the two-dimensional coordinates on the screen.
Typically, one coordinate axis of the three-dimensional coordinate system of the space in which the operating body is located (for example, the axis parallel to the camera's optical axis) can be ignored, and the change in the two-dimensional coordinates on the screen is determined from the coordinate changes on the other two axes and the above mapping relationship.
Finally, determine the movement physical quantity of the icon on the screen based on the change in the two-dimensional coordinates.
From the change in the two-dimensional coordinates, physical quantities such as the icon's movement direction (which can be represented by the angle between the straight-line movement trajectory and the horizontal line), movement speed, or movement trajectory can be determined as the movement physical quantity.
By using the change in the operating body's three-dimensional coordinates to determine the change in the mapped two-dimensional coordinates and thus the movement physical quantity, this implementation can accurately track the position of the operating body and precisely determine the target functional region in the graphical menu, which helps the user perform air operations more precisely.
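For instance, the movement direction angle, distance, and speed described above could be derived from successive mapped 2D positions as follows (a sketch; timestamped samples are assumed):

```python
import math

def movement_quantities(samples):
    """samples: list of (t, x, y) mapped screen positions over the operation.
    Returns movement distance, speed, and direction angle in degrees from
    the horizontal, as described above."""
    (t0, x0, y0), (t1, x1, y1) = samples[0], samples[-1]
    dx, dy = x1 - x0, y1 - y0
    distance = math.hypot(dx, dy)
    speed = distance / (t1 - t0) if t1 > t0 else 0.0
    direction_deg = math.degrees(math.atan2(dy, dx))
    return distance, speed, direction_deg
```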
In some optional implementations, in step 204 the electronic device may trigger the menu function corresponding to the target functional region in any of the following three ways.
Way 1: if the icon moves into the target functional region of the graphical menu and the icon's continuous dwell time within the target functional region is greater than or equal to a second preset duration, the menu function corresponding to the target functional region is triggered.
As an example, as shown in Fig. 6A, the method for guiding an operating body to perform air operations is applied in a vehicle. The graphical menu displayed on the center console screen is a disc-shaped menu containing four functional regions, labeled Music, Seat, Custom, and Air Conditioning; the central region of the graphical menu is the initial position of the icon, and the icon corresponding to the position of the operating body is a droplet-shaped icon. When the icon moves into the functional region labeled Music, that region becomes the target functional region and timing starts; if the icon's dwell time in the target functional region reaches the second preset duration, the corresponding menu function is triggered, for example popping up the submenu for adjusting the volume and switching music shown in the figure.
Way 2: if the icon is located at a preset position within the target functional region, the menu function corresponding to the target functional region is triggered.
As an example, as shown in Fig. 6B, when the icon touches the outer ring region of the disc-shaped graphical menu, the functional region corresponding to the touched ring segment is taken as the target functional region (the region labeled Music in the figure), and the submenu for adjusting the volume and switching music shown in the figure pops up.
Way 3: if the icon's second movement trajectory within the target functional region matches a second preset trajectory, the menu function corresponding to the target functional region is triggered.
As an example, as shown in Fig. 6C, when the icon's movement trajectory within the target functional region (the region labeled Music in the figure) makes one round trip as indicated by the arrows in the figure, the submenu for adjusting the volume and switching music shown in the figure pops up.
This implementation provides multiple schemes for triggering the menu function of the target functional region, giving the user greater flexibility in air operations and making it easy to choose a triggering mode that suits the user's habits.
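Way 3 requires some form of trajectory matching; one deliberately simple stand-in, shown below, detects a single back-and-forth along the dominant movement axis, in the spirit of Fig. 6C. It is a sketch, not the patent's prescribed matcher, and the pixel threshold is invented:

```python
def is_round_trip(points, min_travel=40.0):
    """points: (x, y) samples of the icon inside the target region. Returns
    True if the icon moved away and back along its dominant axis by at least
    min_travel pixels each way -- a crude 'one round trip' test."""
    if len(points) < 3:
        return False
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    # take the axis with the larger excursion as the movement axis
    coords = xs if max(xs) - min(xs) >= max(ys) - min(ys) else ys
    peak = max(coords, key=lambda c: abs(c - coords[0]))  # farthest point out
    outward = abs(peak - coords[0])
    back = abs(coords[-1] - peak)
    return outward >= min_travel and back >= min_travel
```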
In some optional implementations, as shown in Fig. 7, step 204 may be performed as follows.
Execute the following trigger steps (steps 2041-2045) based on the target functional region:
Step 2041: determine whether the target functional region has a corresponding submenu.
If it has no submenu, execute step 2042; if it has a submenu, execute step 2043.
Step 2042: trigger execution of the function corresponding to the target functional region based on the movement physical quantity.
The method for triggering the function of the target functional region may include, but is not limited to, any of the three ways in the above optional implementations. As an example, when the function corresponding to the target functional region is starting navigation, the navigation function is started if the movement physical quantity meets the trigger condition.
Step 2043: trigger display of the submenu on the screen based on the movement physical quantity, then execute step 2044.
The method for triggering display of the submenu may include, but is not limited to, any of the three ways in the above optional implementations. As shown in Fig. 6A, Fig. 6B, or Fig. 6C, the target functional region labeled Music has a corresponding submenu, so that submenu pops up.
Step 2044: detect the movement physical quantity of the icon on the screen again, and determine the target functional region in the submenu corresponding to the icon.
The method for detecting the movement physical quantity is the same as in step 203 above, and the method for determining the target functional region is the same as that described in step 204 above; they are not repeated here.
Step 2045: continue executing the trigger steps based on the newly detected movement physical quantity and the target functional region in the submenu.
The loop ends when, after several iterations, the current target functional region no longer has a submenu. By executing the trigger steps in a loop, this implementation realizes air operations when the graphical menu is configured with multiple levels of submenus, making the functions achievable by air operations richer and facilitating extension of the graphical menu.
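The looped trigger steps 2041-2045 map naturally onto a walk down a menu tree; the sketch below uses an invented node structure, and `pick_target` stands in for re-detecting the movement physical quantity and resolving the target region:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class MenuNode:
    name: str
    action: Optional[Callable[[], None]] = None                # leaf: function to execute
    children: List["MenuNode"] = field(default_factory=list)   # non-leaf: submenu items

def display_submenu(node):
    print(f"showing submenu of {node.name}")  # placeholder for real UI code

def run_trigger_steps(root, pick_target):
    """pick_target(nodes) stands in for step 2044: detect the movement physical
    quantity again and resolve the target region among `nodes`. The loop ends
    when a node without a submenu is triggered (steps 2041-2045)."""
    node = pick_target(root.children)
    while node.children:            # step 2043: has a submenu, so show it
        display_submenu(node)
        node = pick_target(node.children)
    node.action()                   # step 2042: no submenu, execute the function
```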
In some optional implementations, the above step 204 may further include the following sub-steps.
First, if the function corresponding to the target functional region is a preset continuous adjustment function, display on the screen the adjustment control interface corresponding to the continuous adjustment function.
As shown in Fig. 8, when the target functional region corresponds to the function of continuously adjusting the volume, a bar-shaped adjustment control interface for adjusting the volume is displayed on the screen.
Then, execute the continuous adjustment function based on the movement trajectory of the two-dimensional coordinate position to which the operating body is currently mapped on the screen, and control the movement of the control point on the adjustment control interface.
Specifically, the control point 801 shown in Fig. 8 can be moved up and down to adjust the volume according to the change in the two-dimensional coordinates to which the operating body is mapped on the screen. As another example, the adjustment control interface may take the shape of a circular knob; when the operating body's movement trajectory mapped onto the screen is circular, the knob can rotate following the operating body's movement to adjust the volume.
This implementation provides an air operation method for continuously adjusting a specific function, enabling precise adjustment for specific applications, enriching the modes of air operation, and improving the precision of adjustment performed by air operations.
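A sketch of the bar-style continuous adjustment of Fig. 8: the icon's vertical movement drags the control point, and the value follows. The bar coordinates and the `set_volume` setter are invented for the example:

```python
def continuous_adjust(y_positions, bar_top=200.0, bar_bottom=800.0,
                      set_volume=lambda v: None):
    """y_positions: stream of the icon's mapped y coordinate while the
    adjustment interface is active. Drags the control point along the bar
    and maps its position to a 0-100 value through the (assumed) setter."""
    for y in y_positions:
        y = min(max(y, bar_top), bar_bottom)           # clamp to the bar
        ratio = (bar_bottom - y) / (bar_bottom - bar_top)
        set_volume(round(ratio * 100))                 # hypothetical device call
        yield y                                        # new control-point position
```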
In some optional implementations, in the above step 204 the electronic device may also highlight the target functional region, for example by highlighting or enlarging it. Highlighting the target functional region lets the user see the position of the current target functional region intuitively, which helps improve the precision of air operations.
Exemplary Apparatus
Fig. 9 is a schematic structural diagram of an apparatus for guiding an operating body to perform air operations provided by an exemplary embodiment of the present disclosure. This embodiment can be applied to an electronic device. As shown in Fig. 9, the apparatus includes: a display module 901 configured to display on a screen a graphical menu of a first preset shape and to display an icon that has a mapping relationship with the operating body and has a second preset shape; a first detection module 902 configured to detect an air operation performed by the operating body in space relative to the screen; a second detection module 903 configured to detect, in response to the air operation, the movement physical quantity of the icon on the screen; and a trigger module 904 configured to trigger, based on the movement physical quantity and the target functional region in the graphical menu corresponding to the icon, the menu function corresponding to the target functional region.
In this embodiment, the display module 901 may display on the screen a graphical menu of a first preset shape, and display an icon that has a mapping relationship with the operating body and has a second preset shape. The screen may be the screen of any type of electronic device, for example a center console screen in a vehicle, an indoor smart TV, a smartphone, and so on.
The first preset shape of the graphical menu may be any shape, for example a circular menu, a rectangular menu, and so on. The graphical menu includes at least one functional region, and each functional region can be triggered to execute a corresponding menu function, for example popping up a submenu, adjusting the volume, changing tracks, or switching a specific device on or off.
The operating body may be any of various hardware entities that perform air operations on the controlled device, or a specific body part of the user. For example, the operating body may be a body part such as the user's hand or head; it may also be a hardware entity, such as a handle, that can output position information to the apparatus in real time; it may also be another object with a specific shape.
The display module 901 may detect the position of the operating body in space in real time, map the detected position to a corresponding position on the screen, and display an icon of the second preset shape at that position, thus presenting the effect that the icon moves as the operating body moves. The second preset shape may be any of various shapes, for example a ball, dot, or pointer shape.
In this embodiment, the first detection module 902 may detect, in real time and based on various methods, the air operation performed by the operating body in space relative to the screen. The air operation may be an operation in which the user interacts with the controlled device (for example, an in-vehicle audio/video device, air conditioner, or TV) through the operating body without touching the screen. As examples, air operations may include, but are not limited to: operations based on the movement trajectory of the operating body, operations based on its movement direction, operations based on its movement distance, and operations based on its posture (for example, gestures).
Typically, the first detection module 902 may obtain the to-be-recognized data acquired for the operating body by the data acquisition device 104 shown in Fig. 1A, and recognize the data, thereby detecting the air operation performed by the operating body. As an example, the data acquisition device 104 may be a monocular camera, and the first detection module 902 may recognize the image frames acquired in real time by the monocular camera to determine the position of the operating body in the image frames or the posture of the operating body. As another example, the data acquisition device 104 may be a binocular stereo camera, and the first detection module 902 may recognize the binocular images acquired in real time by the binocular stereo camera to determine the position of the operating body in three-dimensional space.
In this embodiment, the second detection module 903 may, in response to the air operation, detect the movement physical quantity of the icon on the screen. The movement physical quantity may be a physical quantity representing a particular feature of the icon's movement on the screen, such as the icon's movement distance, movement speed, or movement direction.
In this embodiment, the trigger module 904 may trigger the menu function corresponding to the target functional region based on the movement physical quantity and the target functional region in the graphical menu corresponding to the icon. The graphical menu may include at least one functional region, and each functional region may correspond to one menu function. The menu functions may be preset; for example, a menu function may be popping up the submenu under the triggered functional region, or controlling the controlled device to execute a corresponding function (for example, playing a video, adjusting the volume, adjusting the air-conditioner temperature, or controlling the car windows). The menu function can be triggered in a variety of ways; for example, the menu function is triggered as soon as the icon moves into the target functional region, or when the icon's dwell time within the target functional region reaches a preset duration.
The target functional region may be the functional region, among the at least one functional region, that has a correspondence with the icon. As an example, when the icon moves into a certain functional region, it is determined that the icon corresponds to that functional region, and that functional region is the target functional region. As another example, when the movement trajectory of the icon matches the preset movement trajectory corresponding to a certain functional region, it is determined that the icon corresponds to that functional region, and that functional region is the target functional region.
Referring to Fig. 10, Fig. 10 is a schematic structural diagram of an apparatus for guiding an operating body to perform air operations provided by another exemplary embodiment of the present disclosure.
In some optional implementations, the apparatus further includes: a third detection module 905 configured to detect a menu pop-up operation performed by the operating body relative to the screen and, when the menu pop-up operation is detected, execute the step of displaying on the screen a graphical menu of the first preset shape and displaying an icon that has a mapping relationship with the operating body and has the second preset shape; or a fourth detection module 906 configured to detect speech uttered by the user and, if a preset wake word is detected in the speech, execute the step of displaying on the screen a graphical menu of the first preset shape and displaying an icon that has a mapping relationship with the operating body and has the second preset shape.
In some optional implementations, the third detection module 905 includes: a first detection unit 9051 configured to detect a preset action performed by the operating body relative to the screen; a first determination unit 9052 configured to determine the duration of the preset action; and a second determination unit 9053 configured to determine, if the duration is greater than or equal to a first preset duration, that a menu pop-up operation performed by the operating body relative to the screen has been detected.
In some optional implementations, the operating body is the user's hand, and the third detection module 905 includes: a recognition unit 9054 configured to perform gesture recognition on the user's hand to obtain a gesture recognition result; and a third determination unit 9055 configured to determine, if the gesture indicated by the gesture recognition result is a preset gesture and the user's hand is within a preset spatial range, that the user's hand has performed a menu pop-up operation relative to the screen.
In some optional implementations, the third detection module 905 includes: a fourth determination unit 9056 configured to determine a first movement trajectory of the operating body in space; and a fifth determination unit 9057 configured to determine, if the first movement trajectory is a first preset trajectory and lies within a preset spatial range, that the operating body has performed a menu pop-up operation relative to the screen.
In some optional implementations, the display module 901 includes: a sixth determination unit 9011 configured to determine the two-dimensional coordinate position on the screen to which the current three-dimensional coordinate position of the operating body in space is mapped; a first display unit 9012 configured to display the icon of the second preset shape at the two-dimensional coordinate position; a seventh determination unit 9013 configured to determine, based on the two-dimensional coordinate position, the target position at which the graphical menu of the first preset shape is displayed on the screen; and a second display unit 9014 configured to display the graphical menu at the target position.
In some optional implementations, the display module 901 includes: an eighth determination unit 9015 configured to determine a preset initial position mapped by the operating body on the screen; a third display unit 9016 configured to display the icon of the second preset shape at the preset initial position; a ninth determination unit 9017 configured to determine a menu display position based on the preset initial position; and a fourth display unit 9018 configured to display the graphical menu of the first preset shape at the menu display position.
In some optional implementations, the second detection module 903 includes: a tenth determination unit 9031 configured to determine the two-dimensional coordinates on the screen to which the current three-dimensional coordinates of the operating body in space are mapped; an eleventh determination unit 9032 configured to determine the change in the two-dimensional coordinates on the screen based on the change in the three-dimensional coordinates during the air operation; and a twelfth determination unit 9033 configured to determine the movement physical quantity of the icon on the screen based on the change in the two-dimensional coordinates.
In some optional implementations, the trigger module 904 includes: a first trigger unit 9041 configured to trigger the menu function corresponding to the target functional region if the icon moves into the target functional region of the graphical menu and the icon's continuous dwell time within the target functional region is greater than or equal to a second preset duration; or a second trigger unit 9042 configured to trigger the menu function corresponding to the target functional region if it is determined that the icon is located at a preset position within the target functional region; or a third trigger unit 9043 configured to trigger the menu function corresponding to the target functional region if it is determined that the icon's second movement trajectory within the target functional region matches a second preset trajectory.
In some optional implementations, the trigger module 904 includes: a fourth trigger unit 9044 configured to execute the following trigger steps based on the target functional region: determine whether the target functional region has a corresponding submenu; if it has no submenu, trigger execution of the function corresponding to the target functional region based on the movement physical quantity; a fifth trigger unit 9045 configured to trigger display of the submenu on the screen based on the movement physical quantity if there is a submenu; and a second detection unit 9046 configured to detect the movement physical quantity of the icon on the screen again, determine the target functional region in the submenu corresponding to the icon, and continue executing the trigger steps based on the newly detected movement physical quantity and the target functional region in the submenu.
In some optional implementations, the trigger module 904 includes: a fifth display unit 9047 configured to display on the screen, in response to determining that the function corresponding to the target functional region is a preset continuous adjustment function, the adjustment control interface corresponding to the continuous adjustment function; and an execution unit 9048 configured to execute the continuous adjustment function based on the movement trajectory of the two-dimensional coordinate position to which the operating body is currently mapped on the screen, and to control the movement of the control point on the adjustment control interface.
In some optional implementations, the trigger module is further configured to highlight the target functional region.
Through the apparatus for guiding an operating body to perform air operations provided by the above embodiments of the present disclosure, a graphical menu and an icon having a mapping relationship with the operating body are displayed on the screen; the air operation performed by the operating body in space relative to the screen is then detected, and the movement physical quantity of the icon on the screen is detected at the same time; finally, based on the movement physical quantity and the target functional region in the graphical menu corresponding to the icon, the menu function corresponding to the target functional region is triggered. Thus, when the user performs air operations, the graphical menu and icon displayed on the screen intuitively guide the user, making clear how to trigger various functions with the operating body, which helps the user perform air operations precisely; at the same time, with the aid of the graphical menu, the number of functions that can be triggered by air operations can be expanded, making air operations more convenient for the user.
Exemplary Electronic Device
Next, an electronic device according to an embodiment of the present disclosure is described with reference to Fig. 11. The electronic device may be either or both of the terminal device 101 and the server 103 shown in Fig. 1A, or a stand-alone device independent of them; the stand-alone device can communicate with the terminal device 101 and the server 103 to receive the acquired input signals from them.
Fig. 11 illustrates a block diagram of an electronic device according to an embodiment of the present disclosure.
As shown in Fig. 11, the electronic device 1100 includes one or more processors 1101 and a memory 1102.
The processor 1101 may be a central processing unit (CPU) or another form of processing unit with data processing capability and/or instruction execution capability, and may control other components in the electronic device 1100 to perform desired functions.
The memory 1102 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 1101 may run the program instructions to implement the method for guiding an operating body to perform air operations of the various embodiments of the present disclosure described above and/or other desired functionality. Various contents such as images and control instructions may also be stored in the computer-readable storage medium.
In one example, the electronic device 1100 may further include an input device 1103 and an output device 1104; these components are interconnected through a bus system and/or other forms of connection mechanisms (not shown).
For example, when the electronic device is the terminal device 101 or the server 103, the input device 1103 may be a camera, a lidar, a mouse, a keyboard, a microphone, or a similar device, used for inputting information such as images and instructions. When the electronic device is a stand-alone device, the input device 1103 may be a communication network connector for receiving input information such as images and instructions from the terminal device 101 and the server 103.
The output device 1104 can output various information to the outside, including information such as the graphical menu. The output device 1104 may include, for example, a display, a speaker, a printer, and a communication network and the remote output devices connected to it.
Of course, for simplicity, Fig. 11 shows only some of the components of the electronic device 1100 relevant to the present disclosure, omitting components such as buses and input/output interfaces. In addition, depending on the specific application, the electronic device 1100 may further include any other appropriate components.
Exemplary Computer Program Product and Computer-Readable Storage Medium
In addition to the above methods and devices, embodiments of the present disclosure may also be a computer program product comprising computer program instructions that, when run by a processor, cause the processor to perform the steps of the method for guiding an operating body to perform air operations according to the various embodiments of the present disclosure described in the "Exemplary Method" section of this specification.
The computer program product may have program code for performing the operations of the embodiments of the present disclosure written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.
In addition, embodiments of the present disclosure may also be a computer-readable storage medium having stored thereon computer program instructions that, when run by a processor, cause the processor to perform the steps of the method for guiding an operating body to perform air operations according to the various embodiments of the present disclosure described in the "Exemplary Method" section of this specification.
The computer-readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any combination thereof. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
The basic principles of the present disclosure have been described above in conjunction with specific embodiments. However, it should be pointed out that the merits, advantages, effects, and the like mentioned in the present disclosure are merely examples, not limitations, and cannot be regarded as indispensable to the various embodiments of the present disclosure. In addition, the specific details disclosed above are only for the purposes of example and ease of understanding, not limitation; the above details do not restrict the present disclosure to being implemented with those specific details.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and for the same or similar parts the embodiments may refer to one another. As the system embodiments basically correspond to the method embodiments, their description is relatively brief; for relevant parts, refer to the description of the method embodiments.
The block diagrams of devices, apparatuses, equipment, and systems involved in the present disclosure are merely illustrative examples and are not intended to require or imply that connections, arrangements, or configurations must be made in the manner shown in the block diagrams. As those skilled in the art will recognize, these devices, apparatuses, equipment, and systems can be connected, arranged, and configured in any manner. Words such as "include", "comprise", and "have" are open-ended terms that mean "including but not limited to" and can be used interchangeably with it. The words "or" and "and" as used here refer to "and/or" and can be used interchangeably with it, unless the context clearly indicates otherwise. The word "such as" used here refers to "such as but not limited to" and can be used interchangeably with it.
The methods and apparatuses of the present disclosure may be implemented in many ways, for example by software, hardware, firmware, or any combination of software, hardware, and firmware. The above order of the steps of the method is for illustration only; the steps of the method of the present disclosure are not limited to the order specifically described above unless otherwise specifically stated. Furthermore, in some embodiments, the present disclosure may also be implemented as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the method according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
It should also be pointed out that, in the apparatuses, devices, and methods of the present disclosure, each component or each step can be decomposed and/or recombined. These decompositions and/or recombinations should be regarded as equivalent solutions of the present disclosure.
The above description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the general principles defined here can be applied to other aspects without departing from the scope of the present disclosure. Therefore, the present disclosure is not intended to be limited to the aspects shown here, but is to be accorded the widest scope consistent with the principles and novel features disclosed here.
The above description has been given for the purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the present disclosure to the forms disclosed here. Although a number of example aspects and embodiments have been discussed above, those skilled in the art will recognize certain variations, modifications, changes, additions, and sub-combinations thereof.

Claims (14)

  1. A method for guiding an operating body to perform air operations, comprising:
    displaying on a screen a graphical menu of a first preset shape, and displaying an icon that has a mapping relationship with an operating body and has a second preset shape;
    detecting an air operation performed by the operating body in space relative to the screen;
    in response to the air operation, detecting a movement physical quantity of the icon on the screen;
    based on the movement physical quantity and a target functional region in the graphical menu corresponding to the icon, triggering a menu function corresponding to the target functional region.
  2. The method according to claim 1, wherein the method further comprises:
    detecting a menu pop-up operation performed by the operating body relative to the screen, and, when the menu pop-up operation is detected, executing the step of displaying on the screen the graphical menu of the first preset shape and displaying the icon that has a mapping relationship with the operating body and has the second preset shape; or,
    detecting speech uttered by a user; if a preset wake word is detected in the speech, executing the step of displaying on the screen the graphical menu of the first preset shape and displaying the icon that has a mapping relationship with the operating body and has the second preset shape.
  3. The method according to claim 2, wherein detecting the menu pop-up operation performed by the operating body relative to the screen comprises:
    detecting a preset action performed by the operating body relative to the screen;
    determining a duration of the preset action;
    if the duration is greater than or equal to a first preset duration, determining that the menu pop-up operation performed by the operating body relative to the screen has been detected.
  4. The method according to claim 2, wherein the operating body is a user's hand;
    detecting the menu pop-up operation performed by the operating body relative to the screen comprises:
    performing gesture recognition on the user's hand to obtain a gesture recognition result;
    if the gesture indicated by the gesture recognition result is a preset gesture and the user's hand is within a preset spatial range, determining that the user's hand has performed the menu pop-up operation relative to the screen.
  5. The method according to claim 2, wherein detecting the menu pop-up operation performed by the operating body relative to the screen comprises:
    determining a first movement trajectory of the operating body in the space;
    if the first movement trajectory is a first preset trajectory and the first movement trajectory lies within a preset spatial range, determining that the operating body has performed the menu pop-up operation relative to the screen.
  6. The method according to claim 1, wherein displaying on the screen the graphical menu of the first preset shape, and displaying the icon that has a mapping relationship with the operating body and has the second preset shape, comprises:
    determining a two-dimensional coordinate position on the screen to which a current three-dimensional coordinate position of the operating body in the space is mapped;
    displaying the icon with the second preset shape at the two-dimensional coordinate position;
    determining, based on the two-dimensional coordinate position, a target position at which the graphical menu of the first preset shape is displayed on the screen;
    displaying the graphical menu at the target position.
  7. The method according to claim 1, wherein displaying on the screen the graphical menu of the first preset shape, and displaying the icon that has a mapping relationship with the operating body and has the second preset shape, comprises:
    determining a preset initial position mapped by the operating body on the screen;
    displaying the icon of the second preset shape at the preset initial position;
    determining a menu display position based on the preset initial position;
    displaying the graphical menu of the first preset shape at the menu display position.
  8. The method according to claim 1, wherein detecting, in response to the air operation, the movement physical quantity of the icon on the screen comprises:
    determining two-dimensional coordinates on the screen to which current three-dimensional coordinates of the operating body in the space are mapped;
    determining a change in the two-dimensional coordinates on the screen based on a change in the three-dimensional coordinates during the air operation;
    determining the movement physical quantity of the icon on the screen based on the change in the two-dimensional coordinates.
  9. The method according to claim 1, wherein triggering, based on the movement physical quantity and the target functional region in the graphical menu corresponding to the icon, the menu function corresponding to the target functional region comprises:
    if the icon moves into the target functional region of the graphical menu and a continuous dwell time of the icon within the target functional region is greater than or equal to a second preset duration, triggering the menu function corresponding to the target functional region; or,
    if the icon is located at a preset position within the target functional region, triggering the menu function corresponding to the target functional region; or,
    if a second movement trajectory of the icon within the target functional region matches a second preset trajectory, triggering the menu function corresponding to the target functional region.
  10. The method according to claim 1, wherein triggering, based on the movement physical quantity and the target functional region in the graphical menu corresponding to the icon, the menu function corresponding to the target functional region comprises:
    executing the following trigger steps based on the target functional region: determining whether the target functional region has a corresponding submenu; if it has no submenu, triggering execution of the function corresponding to the target functional region based on the movement physical quantity;
    if it has a submenu, triggering display of the submenu on the screen based on the movement physical quantity;
    detecting the movement physical quantity of the icon on the screen again, and determining a target functional region in the submenu corresponding to the icon;
    continuing to execute the trigger steps based on the newly detected movement physical quantity and the target functional region in the submenu.
  11. The method according to claim 1, wherein triggering the menu function corresponding to the target functional region comprises:
    if the function corresponding to the target functional region is a preset continuous adjustment function, displaying on the screen an adjustment control interface corresponding to the continuous adjustment function;
    executing the continuous adjustment function based on a movement trajectory of the two-dimensional coordinate position to which the operating body is currently mapped on the screen, and controlling movement of a control point on the adjustment control interface.
  12. An apparatus for guiding an operating body to perform air operations, comprising:
    a display module configured to display on a screen a graphical menu of a first preset shape and to display an icon that has a mapping relationship with an operating body and has a second preset shape;
    a first detection module configured to detect an air operation performed by the operating body in space relative to the screen;
    a second detection module configured to detect, in response to the air operation, a movement physical quantity of the icon on the screen;
    a trigger module configured to trigger, based on the movement physical quantity and a target functional region in the graphical menu corresponding to the icon, a menu function corresponding to the target functional region.
  13. A computer-readable storage medium storing a computer program for executing the method according to any one of claims 1-11.
  14. An electronic device, comprising:
    a processor;
    a memory for storing instructions executable by the processor;
    the processor being configured to read the executable instructions from the memory and execute the instructions to implement the method according to any one of claims 1-11.
PCT/CN2022/075031 2021-06-15 2022-01-29 引导操作体进行隔空操作的方法和装置 WO2022262292A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2022567841A JP2023534589A (ja) 2021-06-15 2022-01-29 操作体をガイドして非接触操作を行う方法及び装置

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110661817.3 2021-06-15
CN202110661817.3A CN113325987A (zh) 2021-06-15 2021-06-15 引导操作体进行隔空操作的方法和装置

Publications (1)

Publication Number Publication Date
WO2022262292A1 true WO2022262292A1 (zh) 2022-12-22

Family

ID=77420791

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/075031 WO2022262292A1 (zh) 2021-06-15 2022-01-29 引导操作体进行隔空操作的方法和装置

Country Status (3)

Country Link
JP (1) JP2023534589A (zh)
CN (1) CN113325987A (zh)
WO (1) WO2022262292A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113325987A (zh) * 2021-06-15 2021-08-31 深圳地平线机器人科技有限公司 引导操作体进行隔空操作的方法和装置
CN114816145A (zh) * 2022-03-08 2022-07-29 联想(北京)有限公司 设备管控方法、装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090244023A1 (en) * 2008-03-31 2009-10-01 Lg Electronics Inc. Portable terminal capable of sensing proximity touch and method of providing graphic user interface using the same
CN108536273A (zh) * 2017-03-01 2018-09-14 天津锋时互动科技有限公司深圳分公司 基于手势的人机菜单交互方法与系统
CN108594998A (zh) * 2018-04-19 2018-09-28 深圳市瀚思通汽车电子有限公司 一种车载导航系统及其手势操作方法
CN109725724A (zh) * 2018-12-29 2019-05-07 百度在线网络技术(北京)有限公司 有屏设备的手势控制方法和装置
CN113325987A (zh) * 2021-06-15 2021-08-31 深圳地平线机器人科技有限公司 引导操作体进行隔空操作的方法和装置

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1803053A1 (en) * 2004-10-13 2007-07-04 Wacom Corporation Limited A hand-held electronic appliance and method of entering a selection of a menu item
CN106648330B (zh) * 2016-10-12 2019-12-17 广州视源电子科技股份有限公司 人机交互的方法及装置
WO2018082269A1 (zh) * 2016-11-04 2018-05-11 华为技术有限公司 菜单显示方法及终端

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090244023A1 (en) * 2008-03-31 2009-10-01 Lg Electronics Inc. Portable terminal capable of sensing proximity touch and method of providing graphic user interface using the same
CN108536273A (zh) * 2017-03-01 2018-09-14 天津锋时互动科技有限公司深圳分公司 基于手势的人机菜单交互方法与系统
CN108594998A (zh) * 2018-04-19 2018-09-28 深圳市瀚思通汽车电子有限公司 一种车载导航系统及其手势操作方法
CN109725724A (zh) * 2018-12-29 2019-05-07 百度在线网络技术(北京)有限公司 有屏设备的手势控制方法和装置
CN113325987A (zh) * 2021-06-15 2021-08-31 深圳地平线机器人科技有限公司 引导操作体进行隔空操作的方法和装置

Also Published As

Publication number Publication date
JP2023534589A (ja) 2023-08-10
CN113325987A (zh) 2021-08-31

Similar Documents

Publication Publication Date Title
US10120454B2 (en) Gesture recognition control device
WO2022262292A1 (zh) 引导操作体进行隔空操作的方法和装置
US20180024643A1 (en) Gesture Based Interface System and Method
TWI524210B (zh) 基於自然姿勢之使用者介面方法及系統
US20170155831A1 (en) Method and electronic apparatus for providing video call
US20140173440A1 (en) Systems and methods for natural interaction with operating systems and application graphical user interfaces using gestural and vocal input
US20200218356A1 (en) Systems and methods for providing dynamic haptic playback for an augmented or virtual reality environments
US20140157209A1 (en) System and method for detecting gestures
US20140282283A1 (en) Semantic Gesture Processing Device and Method Providing Novel User Interface Experience
CN109725724B (zh) 有屏设备的手势控制方法和装置
JP2015520471A (ja) ジェスチャー入力のための指先の場所特定
US20200142495A1 (en) Gesture recognition control device
US10488918B2 (en) Analysis of user interface interactions within a virtual reality environment
US20130285904A1 (en) Computer vision based control of an icon on a display
US11989352B2 (en) Method display device and medium with laser emission device and operations that meet rules of common touch
US20230229283A1 (en) Dynamic display method and apparatus based on operating body, storage medium and electronic device
KR20140089858A (ko) 전자 장치 및 그의 제어 방법
KR20160133305A (ko) 제스쳐 인식 방법, 컴퓨팅 장치 및 제어 장치
Wang et al. A gesture-based method for natural interaction in smart spaces
CN109582134A (zh) 信息显示的方法、装置及显示设备
CN109753154B (zh) 有屏设备的手势控制方法和装置
CN114282544A (zh) 显示设备和控件识别方法
KR20170045101A (ko) 콘텐트를 외부 장치와 공유하는 전자 장치 및 이의 콘텐트 공유 방법
EP2611196A2 (en) Electronic apparatus and method of controlling the same
CN109857314A (zh) 有屏设备的手势控制方法和装置

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2022567841

Country of ref document: JP

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 17998997

Country of ref document: US

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22823785

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE