WO2022012602A1 - Screen interaction method and apparatus for an electronic device - Google Patents
Screen interaction method and apparatus for an electronic device
- Publication number
- WO2022012602A1 (PCT/CN2021/106352)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- electronic device
- user
- target user
- users
- control authority
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
- H04N21/44213—Monitoring of end-user related data
- H04N21/44218—Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV programme
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/4223—Cameras
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/0304—Detection arrangements using opto-electronic means
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/0482—Interaction with lists of selectable items, e.g. menus
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/44—Event detection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44008—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/441—Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card
- H04N21/4415—Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card using biometric characteristics of the user, e.g. by voice recognition or fingerprint scanning
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/4508—Management of client data or end-user data
- H04N21/4516—Management of client data or end-user data involving client characteristics, e.g. Set-Top-Box type, software version or amount of memory available
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/4508—Management of client data or end-user data
- H04N21/4532—Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/458—Scheduling content for creating a personalised stream, e.g. by combining a locally stored advertisement with an incoming stream; Updating operations, e.g. for OS modules ; time-related management operations
- H04N21/4583—Automatically resolving scheduling conflicts, e.g. when a recording by reservation has been programmed for two programmes in the same time slot
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/475—End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
Definitions
- the present application relates to the field of smart large screens, and in particular, to a screen interaction method and device for electronic equipment.
- a smart TV can use human body recognition technology to recognize the user's body movements and control the picture displayed on the screen according to those movements, thereby realizing an interaction function between the screen and the user.
- identifying the target user is the key to realizing the interaction function between the screen and the user.
- large-screen electronic devices such as smart TVs are generally installed in public areas such as living rooms, where multiple users may make actions at the same time, each wanting to control the display screen. Deciding which user's body movements the smart TV should follow when controlling the screen is therefore an urgent problem to be solved.
- the present application provides a screen interaction method and device for an electronic device, which solves the problem of determining a target user who controls the screen from multiple users and controlling the screen display of the electronic device according to the action of the target user.
- the present application provides a screen interaction method of an electronic device.
- the method can be applied to an electronic device, or the method can be applied to a device that can support the electronic device to implement the method.
- the device includes a chip system, and the method includes:
- the electronic device obtains, through the camera, an image including N users, recognizes the respective actions of the N users according to that image, compares each user's action with the preset actions, and determines the user whose action matches a preset action as the target user, so as to control the screen display of the electronic device according to the target user's action.
- N is an integer greater than or equal to 2.
- the preset actions include swinging the forearm, left arm akimbo, right arm akimbo, both arms akimbo, nodding, and clenching a fist.
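The matching step described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the action labels and function name are hypothetical, and it assumes an upstream pose-recognition model has already summarized each detected user's movement as a label.

```python
# Hypothetical labels for the preset actions listed in the application.
PRESET_ACTIONS = {
    "swing_forearm", "left_arm_akimbo", "right_arm_akimbo",
    "both_arms_akimbo", "nod", "clench_fist",
}

def select_target_users(recognized_actions):
    """Return indices of users whose recognized action matches a preset action.

    recognized_actions: one action label per detected user (N >= 2),
    as produced by an assumed upstream body-recognition model.
    """
    return [i for i, action in enumerate(recognized_actions)
            if action in PRESET_ACTIONS]

# Among three detected users, only user 1 performs a preset action.
targets = select_target_users(["wave", "swing_forearm", "sit"])
```

When `targets` contains exactly one index, that user becomes the target user; the multi-match case is handled by the control-authority logic described later.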
- An embodiment of the present application provides a screen interaction method for an electronic device, and the method can be applied to an electronic device including a display screen and a camera.
- the electronic device does not need to pay attention to the user's control authority.
- the electronic device considers the user as the target user, so as to determine the target user as quickly as possible and respond to the user's action.
- that is, the electronic device controls its screen display according to the action of the target user, which effectively improves the user experience when the user selects a program on the electronic device.
- in some cases, the electronic device recognizes two or more target users, i.e. the actions of two or more users match the preset actions.
- the electronic device then controls its screen display according to the action of the target user with control authority among the M target users, where the target user with control authority is a target user who has interacted with the electronic device within a preset period of time. If none of the two or more target users has control authority, the electronic device can prompt the users how to proceed, and then controls its screen display according to the action of the target user indicated by the user.
- this solves the conflict caused by two or more users controlling the screen display of the electronic device at the same time, so that the electronic device can respond to the user's action in time, control its screen display according to the target user's action, and improve the user experience.
- the screen display of the electronic device is controlled according to the action of the target user.
- the electronic device transfers control authority from the user who currently holds it to the target user indicated by the user. If the electronic device has not yet assigned control authority to any user, then before controlling the screen display according to the target user's action, the electronic device assigns control authority to the target user indicated by the user. In either case, the indicated target user holds the control authority, and the electronic device controls its screen display according to that target user's action.
- controlling the screen display of the electronic device according to the action of the target user includes: controlling the direction of a pointer displayed on the screen according to the swing angle of the target user's forearm, so as to point the pointer at an option in the menu.
- the menu displayed on the screen of the electronic device is a ring menu or a roulette menu.
- the operation mode and roulette-style user interface (User Interface, UI) are based on a polar coordinate system: the elbow is the pole, the forearm is the polar axis, and the forearm pointing angle is used as the basic operation dimension.
- combined with the roulette UI and gesture recognition, this realizes human-computer interaction functions such as quick and accurate selection and confirmation. In an interactive mapping relationship based on the Cartesian coordinate system, it is difficult to define the coordinate origin and the coordinate value range, which makes the mapping feel unnatural; this application therefore uses polar coordinates as the basis for the mapping relationship.
- in polar coordinates, the coordinate origin (the elbow as the pole) and the coordinate range (0-360 degrees) can be defined naturally, which makes the mapping between human movements and the screen coordinate system more natural and reduces the range of interaction space required, reducing the fatigue of physical interaction.
- compared with interaction methods that use only hand information, the interaction method combining body recognition and gesture recognition improves the diversity and flexibility of interactive instructions, supports more complex operations, and improves instruction efficiency.
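The polar mapping above can be sketched in a few lines. This is an illustrative assumption of how elbow and wrist keypoints from any body keypoint detector might drive a roulette menu; the function names, coordinate convention, and equal-sector layout are not specified by the application.

```python
import math

def forearm_angle_deg(elbow, wrist):
    """Forearm pointing angle with the elbow as the pole.

    elbow, wrist: (x, y) pixel coordinates from a keypoint detector.
    Image y grows downward, so it is flipped to get a conventional
    counterclockwise angle in [0, 360).
    """
    dx = wrist[0] - elbow[0]
    dy = elbow[1] - wrist[1]  # flip y axis
    return math.degrees(math.atan2(dy, dx)) % 360.0

def roulette_option(angle_deg, num_options):
    """Map a forearm angle onto one of num_options equal wheel sectors."""
    sector = 360.0 / num_options
    return int(angle_deg // sector)
```

For example, a forearm pointing straight up (wrist directly above the elbow) yields 90 degrees, which with 8 equal options selects sector index 2. The natural 0-360 degree range is exactly the "coordinate value range" the polar formulation provides for free.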
- the present application provides a screen interaction method for an electronic device.
- the method can be applied to an electronic device, or the method can be applied to an apparatus that can support the electronic apparatus to implement the method.
- the apparatus includes a chip system, and the method includes:
- the electronic device obtains an image including N users, and judges, according to that image, whether the N users include a user with control authority.
- if the N users include a user with control authority, the action of that user is compared with the preset actions; if it matches a preset action, that user is determined as the target user with control authority, and the screen display of the electronic device is controlled according to the action of that target user.
- N is an integer greater than or equal to 2. In this way, after the user with control authority is determined, only that user's action needs to be identified, which reduces judgment time and helps the electronic device respond to the user's operation in a timely manner. If the N users do not include a user with control authority, the respective actions of the N users are compared with the preset actions to determine M target users among the N users, whose actions match the preset actions, and one of the M target users is determined as the target user with control authority. M is an integer, 1 ≤ M ≤ N.
- in a possible implementation, when M ≥ 2, determining one target user as the target user with control authority includes: assigning control authority to the target user indicated by the user, and determining the target user indicated by the user as the target user with control authority.
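The authority-based decision flow described above can be sketched as a small function. This is a hedged illustration, not the claimed method: the function signature, the use of string user ids, and the `indicated` parameter (standing in for the on-screen prompt) are all assumptions made for the example.

```python
def resolve_controller(users, authority_holder, matched, indicated=None):
    """Decide which user's action should control the screen.

    users: user ids detected in the current frame.
    authority_holder: id of the user currently holding control authority,
        or None if authority has not been assigned.
    matched: set of user ids whose action matched a preset action.
    indicated: user id chosen via a hypothetical on-screen prompt when
        several users match and none/many hold authority.
    """
    if authority_holder in users and authority_holder in matched:
        return authority_holder      # existing holder keeps control
    if len(matched) == 1:
        return next(iter(matched))   # single matching user becomes target
    if len(matched) >= 2:
        return indicated             # device prompts; the indicated user wins
    return None                      # no action to respond to
```

The point of checking the authority holder first, as the second aspect describes, is that only one user's action has to be recognized in the common case, reducing judgment time.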
- the present application also provides a screen interaction device for an electronic device, and the beneficial effects can be referred to the description of the first aspect and will not be repeated here.
- the screen interaction apparatus of the electronic device has the function of implementing the behavior in the method example of the first aspect above.
- the functions can be implemented by hardware, or can be implemented by hardware executing corresponding software.
- the hardware or software includes one or more modules corresponding to the above functions.
- the screen interaction apparatus of the electronic device includes a processing unit.
- a processing unit configured to acquire an image including N users, where N is an integer greater than or equal to 2; the processing unit is further configured to recognize the respective actions of the N users according to the image; the processing unit is further configured to determine the target user according to the respective actions of the N users, the target user's action matching a preset action; and the processing unit is further configured to control the screen display of the electronic device according to the target user's action.
- the present application further provides a screen interaction device for an electronic device, and the beneficial effects can be referred to the description of the second aspect and will not be repeated here.
- the screen interaction apparatus of the electronic device has the function of implementing the behavior in the method example of the second aspect above.
- the functions can be implemented by hardware, or can be implemented by hardware executing corresponding software.
- the hardware or software includes one or more modules corresponding to the above functions.
- the screen interaction apparatus of the electronic device includes a processing unit.
- a processing unit configured to acquire images including N users, where N is an integer greater than or equal to 2; the processing unit is also configured to determine whether the N users include users with control authority according to the images including N users; the processing unit is further configured to, if the N users include users with control authority, determine the user with control authority as a target user with control authority, and the action of the target user with control authority matches a preset action; the processing unit is further configured to determine a target user as a target user with control authority if the N users do not include users with control authority, and the N users include M target users, and the actions of the target users match the preset actions, M is an integer, 1 ≤ M ≤ N; the processing unit is further configured to control the screen display of the electronic device according to the action of the target user with control authority.
- These units may perform the corresponding functions in the method examples of the second aspect. For details, please refer to the detailed descriptions in the method examples, which will not be repeated here.
- an electronic device may include: a processor, a memory, a display screen, and a camera; the processor is coupled with the display screen, the camera, and the memory; the memory is used for storing computer program code, and the computer program code includes computer software instructions.
- when the computer software instructions are executed by the electronic device, the electronic device performs the following operations: obtaining, through the camera, an image including N users; recognizing the respective actions of the N users according to the image; determining the target user according to the respective actions of the N users, the target user's action matching a preset action; and controlling the screen display of the electronic device according to the target user's action.
- N is an integer greater than or equal to 2.
- a computer-readable storage medium comprising: computer software instructions; when the computer software instructions are executed in an electronic device, the electronic device is made to execute the first aspect or possible implementations of the first aspect, the second aspect Or the screen interaction method of an electronic device according to any one of the possible implementation manners of the second aspect.
- a further aspect provides a computer program product that, when run on a computer, causes the computer to execute the screen interaction method of an electronic device according to any one of the first aspect or its possible implementations, or the second aspect or its possible implementations.
- a chip system is provided, the chip system being applied to an electronic device; the chip system includes an interface circuit and a processor, interconnected through a line; the interface circuit is used for receiving a signal from the memory of the electronic device and sending the signal to the processor, the signal including computer instructions stored in the memory; when the processor executes the computer instructions, the chip system executes the screen interaction method of an electronic device according to any one of the first aspect or its possible implementations, or the second aspect or its possible implementations.
- FIG. 1 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
- FIG. 2 is a schematic diagram of a camera of a television set according to an embodiment of the present application
- FIG. 3 is a schematic flowchart of a screen interaction method of an electronic device provided by an embodiment of the present application.
- FIG. 4 is a schematic diagram of a smart TV collecting a frame through a camera according to an embodiment of the present application
- FIG. 5 is a schematic diagram of a human body joint point identification result provided by an embodiment of the present application.
- FIG. 6 is a schematic diagram of a preset action provided by an embodiment of the present application.
- FIG. 7 is a schematic flowchart of a screen interaction method of an electronic device provided by an embodiment of the present application.
- FIG. 8 is a schematic diagram of a process of manipulating a screen provided by an embodiment of the present application.
- FIG. 9 is a schematic diagram of control authority transfer provided by an embodiment of the present application.
- FIG. 10 is a schematic flowchart of a screen interaction method of an electronic device provided by an embodiment of the present application.
- FIG. 11 is a schematic diagram of a UI operation interface provided by an embodiment of the application.
- FIG. 12 is a schematic diagram of a process of manipulating a screen provided by an embodiment of the present application.
- FIG. 13 is a schematic diagram of a screen interaction apparatus of an electronic device according to an embodiment of the present application.
- words such as "exemplary" or "for example" are used to represent examples, illustrations, or descriptions. Any embodiment or design described in the embodiments of the present application as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of such words is intended to present the related concepts in a specific manner.
- the electronic device in the embodiments of the present application may be a television, a tablet computer, a projector, a mobile phone, a desktop computer, a laptop, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), an augmented reality (AR)/virtual reality (VR) device, or any other device including a display screen and a camera; the embodiments of the present application impose no special restrictions on the specific form of the electronic device.
- FIG. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
- the electronic device includes: a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a power management module 140, an antenna, a wireless communication module 160, an audio module 170, a speaker 170A, a speaker interface 170B, a microphone 170C, a sensor module 180, buttons 190, an indicator 191, a display screen 192, a camera 193, and so on.
- the aforementioned sensor module 180 may include sensors such as a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, and an ambient light sensor.
- the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device.
- the electronic device may include more or fewer components than shown, or some components may be combined, or some components may be split, or a different arrangement of components.
- the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
- the processor 110 may include one or more processing units; for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated in one or more processors.
- the processor 110 is configured to obtain, through the camera 193, an image including N users, recognize the respective actions of the N users according to the image, determine the user whose action matches a preset action as the target user, and control the picture displayed on the display screen 192 of the electronic device according to the action of the target user, where N is an integer greater than or equal to 2.
- a controller can be the nerve center and command center of an electronic device.
- the controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
- a memory may also be provided in the processor 110 for storing instructions and data.
- the memory in processor 110 is cache memory. This memory may hold instructions or data that have just been used or cycled by the processor 110. If the processor 110 needs to use the instructions or data again, it can call them directly from this memory. This avoids repeated accesses and reduces the latency of the processor 110, thereby increasing the efficiency of the system.
- the processor 110 may include one or more interfaces.
- the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, and/or a USB interface, etc.
- the power management module 140 is used to connect power.
- the power management module 140 may also be connected with the processor 110 , the internal memory 121 , the display screen 192 , the camera 193 , the wireless communication module 160 and the like.
- the power management module 140 receives power input, and supplies power to the processor 110 , the internal memory 121 , the display screen 192 , the camera 193 , the wireless communication module 160 , and the like.
- the power management module 140 may also be provided in the processor 110 .
- the wireless communication function of the electronic device can be implemented by the antenna and the wireless communication module 160 and the like.
- the wireless communication module 160 can provide solutions for wireless communication applied on the electronic device, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), the global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like.
- the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
- the wireless communication module 160 receives electromagnetic waves via the antenna, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
- the wireless communication module 160 can also receive the signal to be sent from the processor 110, perform frequency modulation on it, amplify it, and convert it into electromagnetic waves for radiation through the antenna.
- the antenna of the electronic device is coupled with the wireless communication module 160 so that the electronic device can communicate with the network and other devices through wireless communication technology.
- the electronic device realizes the display function through the GPU, the display screen 192, and the application processor.
- the GPU is a microprocessor for image processing, and is connected to the display screen 192 and the application processor.
- the GPU is used to perform mathematical and geometric calculations for graphics rendering.
- Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
- the display screen 192 is used to display images, videos, and the like.
- the display screen 192 includes a display panel.
- the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Miniled, a MicroLed, a Micro-oLed, a quantum dot light-emitting diode (QLED), or the like.
- the electronic device can realize the shooting function through the ISP, the camera 193, the video codec, the GPU, the display screen 192 and the application processor.
- the ISP is used to process the data fed back by the camera 193 .
- the ISP may be provided in the camera 193 .
- Camera 193 is used to capture still images or video.
- the object is projected through the lens to generate an optical image onto the photosensitive element.
- the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
- the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
- the ISP outputs the digital image signal to the DSP for processing.
- DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
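The YUV-to-RGB conversion mentioned above can be illustrated with the standard full-range BT.601 formula. This is a generic sketch of the color-space math, not the actual conversion performed by the DSP described in the application.

```python
def yuv_to_rgb(y, u, v):
    """Convert one full-range BT.601 YUV sample to RGB.

    y: luma in [0, 255]; u, v: chroma in [0, 255] (128 = neutral).
    Returns an (r, g, b) tuple clamped to [0, 255].
    """
    u -= 128  # center chroma around zero
    v -= 128
    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    clamp = lambda x: max(0, min(255, round(x)))
    return clamp(r), clamp(g), clamp(b)
```

For neutral chroma (u = v = 128), the formula reduces to a grayscale value, as expected.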
- the electronic device may include 1 or N cameras 193 , where N is a positive integer greater than 1.
- the camera 193 may be disposed at the upper edge of the display screen 192 of the TV set.
- the embodiment of the present application does not limit the position of the camera 193 on the electronic device.
- the electronic device may not include a camera, that is, the above-mentioned camera 193 is not provided in the electronic device (eg, a television).
- the electronic device can connect to the camera 193 through an interface (eg, the USB interface 130 ).
- the external camera 193 can be fixed on the electronic device by an external fixing member (such as a camera bracket with a clip).
- the external camera 193 can be fixed at the edge of the display screen 192 of the electronic device, such as the upper edge, by means of an external fixing member.
- a digital signal processor is used to process digital signals; in addition to digital image signals, it can also process other digital signals. For example, when the electronic device selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency point energy.
- Video codecs are used to compress or decompress digital video.
- An electronic device may support one or more video codecs. In this way, the electronic device can play or record videos in various encoding formats, such as: moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4 and so on.
- the NPU is a neural-network (NN) computing processor.
- through the NPU, applications such as intelligent cognition of the electronic device can be implemented, for example, image recognition, face recognition, speech recognition, and text understanding.
- the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device.
- the external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function, for example, to save files such as music and videos in the external memory card.
- Internal memory 121 may be used to store computer executable program code, which includes instructions.
- the processor 110 executes various functional applications and data processing of the electronic device by executing the instructions stored in the internal memory 121 .
- the internal memory 121 may include a storage program area and a storage data area.
- the storage program area can store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
- the storage data area can store data (such as audio data, etc.) created during the use of the electronic device.
- the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
- the electronic device may implement audio functions through an audio module 170, a speaker 170A, a microphone 170C, a speaker interface 170B, and an application processor. For example, music playback, recording, etc.
- the microphone 170C can be used to receive the user's voice command to the electronic device.
- the speaker 170A may be used to feed back the decision-making instructions of the electronic device to the user.
- the audio module 170 is used to convert digital audio information into an analog audio signal output, and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110, or some functional modules of the audio module 170 may be provided in the processor 110. The speaker 170A, also referred to as a "loudspeaker", is used to convert audio electrical signals into sound signals. The microphone 170C, also called a "mic", is used to convert sound signals into electrical signals.
- the speaker interface 170B is used to connect a wired speaker.
- the speaker interface 170B can be the USB interface 130, a 3.5mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
- the keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys.
- the electronic device may receive key input and generate key signal input related to user settings and function control of the electronic device.
- the indicator 191 may be an indicator light, which may be used to indicate that the electronic device is in a power-on state, a standby state, or a power-off state, or the like. For example, if the indicator light is off, it can indicate that the electronic device is in a shutdown state; if the indicator light is green or blue, it can indicate that the electronic device is in a power-on state; if the indicator light is red, it can indicate that the electronic device is in a standby state.
- the structures illustrated in the embodiments of the present application do not constitute a specific limitation on the electronic device. It may have more or fewer components than shown in FIG. 1 , may combine two or more components, or may have a different configuration of components.
- the electronic device may also include components such as speakers.
- the various components shown in Figure 1 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing or application specific integrated circuits.
- the methods in the following embodiments can all be implemented in an electronic device having the above-mentioned hardware structure.
- the following takes the above-mentioned electronic device being a smart TV as an example to describe the methods of the embodiments of the present application.
- FIG. 3 is a schematic flowchart of a screen interaction method of an electronic device provided by an embodiment of the present application. As shown in Figure 3, the method may include:
- the electronic device acquires images including N users.
- the user starts the electronic device, and the electronic device displays the main interface.
- the method for the user to start the electronic device is not limited.
- the user can activate the electronic device by voice. For example, when the user says "turn on the TV", the smart TV starts after receiving the voice command of "turn on the TV".
- the user can start the electronic device through a remote control. The user clicks the "ON/OFF” key, and the TV receives the "ON" instruction and starts the TV.
- the user can control the electronic device to activate the camera.
- the electronic device captures an image in front of the display screen of the electronic device through a camera.
- the method by which the user can control the electronic device to activate the camera is not limited.
- the user may control the electronic device to activate the camera through a voice command, or the user may use the remote control to control the electronic device to activate the camera.
- the image in front of the display screen includes the scene in part or all of the field of view (FOV) of the camera. If there is a person in the scene in part or all of the viewing angle of the camera, the image includes an image of at least one user.
- the smart TV collects images of all users in the frame (the sector-shaped range in FIG. 4 ) through the camera.
- the sector includes user 1 and user 2, user 1 is sitting on the sofa, and user 2 is the standing user.
- the smart TV collects the image of user 1 and the image of user 2 through the camera.
- the image of the user in the frame collected by the camera is a complete image of the user, and the complete image of the user includes the head, limbs and torso.
- the image of the user in the frame captured by the camera is the image of the upper body of the user, and the image of the upper body of the user includes the head, the upper limbs, and the torso.
- the image of the user captured by the camera is an incomplete image; an incomplete image may refer to a left-half image or a right-half image of the user.
- the left-half image includes the left part of the head, the left upper limb, the left lower limb, and the left part of the torso.
- the right-half image includes the right part of the head, the right upper limb, the right lower limb, and the right part of the torso. If the image of the user captured by the camera is an incomplete image, the electronic device considers that the image of the user is unavailable, and does not count the image of the user.
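The completeness check described above can be sketched as a simple rule over the set of detected joints. Joint numbering follows Table 1 below; the exact joint subsets required for each category are illustrative assumptions, not values from the application.

```python
# Joint numbers follow Table 1: 0 head, 1 neck, 2-4 right arm,
# 5-7 left arm, 8-10 right leg, 11-13 left leg, 14 body center.
LEFT_SIDE = {5, 6, 7, 11, 12, 13}
RIGHT_SIDE = {2, 3, 4, 8, 9, 10}
ALL_JOINTS = set(range(15))

def classify_user_image(detected_joints):
    """Classify a detected pose as 'complete', 'upper_body', or 'incomplete'.

    detected_joints: set of joint numbers the pose detector found.
    A pose missing one whole side (a left-only or right-only image)
    is treated as incomplete and discarded, as described in the text.
    """
    has_left = bool(detected_joints & LEFT_SIDE)
    has_right = bool(detected_joints & RIGHT_SIDE)
    if not (has_left and has_right):
        return "incomplete"            # left-half or right-half image
    if detected_joints >= ALL_JOINTS:
        return "complete"              # head, limbs, and torso all visible
    if {0, 1, 2, 5, 14} <= detected_joints:
        return "upper_body"            # head, upper limbs, and torso
    return "incomplete"
```

An incomplete result would cause the electronic device to skip the user when counting detected users.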
- the electronic device recognizes the respective actions of the N users according to the image including the N users.
- the electronic device can use a human pose estimation algorithm to identify the human pose of each of the N users in the frame.
- the human pose detection algorithm is an algorithm that detects human key points by training a neural network model, and describes the human pose according to the key points of the human body.
- the human pose detection algorithm can identify at least ten points of the human body, such as the head, shoulders, elbows, wrists, hips, knees, and ankles. As shown in Table 1, it is a description of identifiable human joint points. As shown in FIG. 5 , a schematic diagram of a human body joint point identification result provided in an embodiment of the present application is shown.
- Table 1. Identifiable human joint points:

| Joint No. | Joint description | Joint No. | Joint description |
|---|---|---|---|
| 0 | top of head | 8 | right hip |
| 1 | neck | 9 | right knee |
| 2 | right shoulder | 10 | right ankle |
| 3 | right elbow | 11 | left hip |
| 4 | right wrist | 12 | left knee |
| 5 | left shoulder | 13 | left ankle |
| 6 | left elbow | 14 | body center |
| 7 | left wrist | | |
- an action is a process of change: a change in the position of a person's facial features (expression change), a change in the position of the person's limbs (movement change), or a change in the person's position relative to the environment (movement-distance change).
- the electronic device can use a human body gesture detection algorithm to identify the human body postures of N users in consecutive multiple frames, and determine the actions of the respective users according to the respective human body postures of the N users in the consecutive multiple frames.
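Determining an action from per-frame poses can be sketched by comparing joint positions across consecutive frames. The example below detects a left-right forearm wave from successive wrist x-coordinates; the direction-change rule, swing count, and pixel threshold are illustrative assumptions, not the application's actual algorithm.

```python
def detect_forearm_wave(wrist_x_per_frame, min_swings=2, min_travel=20):
    """Return True if wrist x-coordinates over consecutive frames describe
    a left-right wave: at least `min_swings` completed swings, each
    travelling at least `min_travel` pixels before reversing direction."""
    swings = 0
    direction = 0   # +1 moving right, -1 moving left, 0 unknown
    travel = 0
    for prev, cur in zip(wrist_x_per_frame, wrist_x_per_frame[1:]):
        step = cur - prev
        new_dir = (step > 0) - (step < 0)
        if new_dir == 0:
            continue                      # no horizontal movement this frame
        if direction and new_dir != direction:
            if travel >= min_travel:      # a full swing just ended
                swings += 1
            travel = 0
        direction = new_dir
        travel += abs(step)
    if travel >= min_travel:              # count the final, unreversed swing
        swings += 1
    return swings >= min_swings
```

A monotonic drift of the wrist (the user simply walking past) produces at most one swing and is not counted as a wave.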
- the electronic device determines a target user according to the respective actions of the N users, and the action of the target user matches a preset action.
- electronic devices are generally installed in public areas such as living rooms. If any action of any user could control the picture displayed on the display screen of the electronic device, the result might go against the user's wishes, resulting in a poor user experience. For example, a user walking past the display screen of the electronic device does not want the displayed picture to pause, but the electronic device pauses the picture anyway. Therefore, the electronic device can pre-configure human body actions; when the user makes a preset action, the electronic device determines whether the user's action matches the preset action, and determines the user whose action matches the preset action as the target user. The preset action controls the picture displayed on the display screen of the electronic device.
- the so-called preset movements are human body movements pre-configured by the electronic device.
- target users refer to users who make preset actions.
- the so-called action matching the preset action refers to the same action as the preset action.
- the electronic device determines that the user's action is exactly the same as the preset action, it considers that the user's action matches the preset action, and the user is the target user.
- the so-called action matching the preset action refers to an action that is substantially the same as the preset action.
- the electronic device determines that the user's action is substantially the same as the preset action, it considers that the user's action matches the preset action, and the user is the target user.
- the preset motion is swinging the forearm, and the angle of swinging the forearm is 45 degrees. If the angle at which the user swings the forearm is 30 degrees, at this time, when the electronic device determines that the user's action is swinging the forearm, it considers that the user's action matches the preset action, and the user is the target user.
- the preset action is the left arm akimbo.
- akimbo refers to bending the elbow and placing the five fingers on the waist. If the user's left hand is placed on the waist as a fist rather than with five fingers, the electronic device can still determine that the user's action is the left arm akimbo; the user's action matches the preset action, and the user is the target user.
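The exact and "substantially the same" matching described above can be sketched as a comparison of the recognized action against the preset, with an angle tolerance. The dict shape and the 20-degree tolerance are illustrative assumptions; only the 45-degree preset and 30-degree user swing come from the example.

```python
def matches_preset(action, preset, angle_tolerance=20.0):
    """Decide whether a recognized action matches a preset action.

    action / preset: dicts with a 'name' and an optional 'angle' in degrees.
    Same name with no angle constraint, or same name with the angle within
    `angle_tolerance` degrees, counts as a match. The tolerance value is
    an illustrative assumption, not a value from the application.
    """
    if action["name"] != preset["name"]:
        return False
    if "angle" not in preset or "angle" not in action:
        return True                      # action kind alone is enough
    return abs(action["angle"] - preset["angle"]) <= angle_tolerance
```

With these assumptions, a 30-degree forearm swing matches a 45-degree preset, mirroring the example above.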
- the electronic device determines that the user's action does not match the preset action, the user is not the target user. If the target user is not included in the image captured by the camera of the electronic device, perform S301 again.
- the preset actions include, but are not limited to: swinging forearm, left arm on hips, right arm on hips, arms on hips, nodding, and making fists.
- Swinging the forearm may refer to the user shaking the forearm from side to side. As shown in FIG. 6( a ), it is a schematic diagram of swinging the forearm.
- Left arm akimbo refers to bending the elbow of the left arm and placing the five fingers on the waist. As shown in (b) of FIG. 6 , it is a schematic diagram of the left arm akimbo.
- Right arm akimbo refers to bending the elbow of the right arm and placing the five fingers on the waist. As shown in (c) of FIG. 6 , it is a schematic diagram of the right arm akimbo.
- Arms akimbo means bending the elbow of the right arm and placing the five fingers on the waist, and bending the elbow of the left arm and placing the five fingers on the waist.
- As shown in (d) of FIG. 6, it is a schematic diagram of arms akimbo.
- Nodding is a rapid forward bow. As shown in (e) of FIG. 6 , it is a schematic diagram of nodding downward.
- Making a fist refers to bending the fingers toward the palm of the hand.
- As shown in (f) of FIG. 6, it is a schematic diagram of making a fist.
- Different actions represent different operations on the content displayed on the display screen of the electronic device. For example, waving the forearm to select a menu. For another example, the left arm on hips means returning to the previous level. For another example, the right arm on hips means entering the next level. For another example, akimbo means returning to the main interface. For another example, nodding means confirming the action. For another example, clenching a fist means confirming the action. For another example, waving the forearm left and right means releasing the control authority. This application does not limit the operations corresponding to the preset actions.
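The action-to-operation correspondence described above can be sketched as a dispatch table. The mapping mirrors the examples in the text; the operation names themselves are illustrative, since the application does not limit the operations corresponding to the preset actions.

```python
# Mapping mirrors the examples in the text; names are illustrative.
ACTION_TO_OPERATION = {
    "swing_forearm": "select_menu",
    "left_arm_akimbo": "back_to_previous_level",
    "right_arm_akimbo": "enter_next_level",
    "arms_akimbo": "return_to_main_interface",
    "nod": "confirm",
    "fist": "confirm",
    "wave_forearm_left_right": "release_control_authority",
}

def operation_for(action):
    """Return the screen operation for a recognized action,
    or None if the action is not one of the preset actions."""
    return ACTION_TO_OPERATION.get(action)
```

An unrecognized action maps to None, in which case the electronic device would not change the displayed picture.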
- the electronic device controls the screen display of the electronic device according to the action of the target user.
- the electronic device controlling the screen display of the electronic device according to the action of the target user may include the steps shown in FIG. 7 . If the number of target users is 0, it means that the user in the image captured by the camera of the electronic device has not performed any preset action, and then perform S301 again. If the number of target users is not 0, the electronic device determines whether the number of target users is greater than 1 (ie, S701 is executed).
- the electronic device determines the target user with control authority (ie, executes S702 ), and controls the screen display of the electronic device according to the action of the target user with control authority (ie, executes S703 ). If the action of the target user having the control authority is an action of releasing the control authority, the electronic device releases the control authority, and executes S301 again.
- the target user with control authority is the target user who has interacted with the electronic device within a preset time period.
- the so-called interaction with the electronic device can be understood as the target user has made a preset action, and the electronic device responded to the preset action.
- the preset duration is 1 minute. This application does not limit the preset duration, and users can set the preset duration according to their own needs.
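The "interacted within a preset time period" rule can be sketched as a small authority tracker. The 1-minute default follows the example above; the class shape and the injectable clock are implementation conveniences, not part of the application.

```python
import time

class ControlAuthority:
    """Tracks which user currently holds control authority.

    A holder keeps authority only while their last interaction is within
    `timeout_s` seconds (1 minute in the example; users may configure it).
    `clock` is injectable so the timeout can be tested deterministically.
    """
    def __init__(self, timeout_s=60.0, clock=time.monotonic):
        self.timeout_s = timeout_s
        self.clock = clock
        self.holder = None
        self.last_interaction = None

    def grant(self, user_id):
        """Assign (or transfer) control authority to `user_id`."""
        self.holder = user_id
        self.last_interaction = self.clock()

    def touch(self, user_id):
        """Record an interaction; only the current holder can refresh."""
        if user_id == self.holder:
            self.last_interaction = self.clock()

    def current_holder(self):
        """Return the holder, releasing authority if it has expired."""
        if self.holder is not None and \
                self.clock() - self.last_interaction > self.timeout_s:
            self.holder = None   # automatic release after the preset duration
        return self.holder
```

Each responded-to preset action would call `touch`, restarting the preset duration from zero as described later in the text.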
- the electronic device determines one target user. In this case, regardless of the content displayed on the display screen of the electronic device and of whether the target user has control authority over the electronic device, the electronic device responds to the action of the target user to control the screen display of the electronic device. Therefore, the electronic device can respond to the user's operation in time.
- the display screen of the electronic device displays an animation picture, and the action of the target user (user 2) is to put his arms on his hips.
- arms akimbo means returning to the main interface.
- the electronic device responds with arms on hips, and the content displayed on the display screen of the electronic device switches from the animation screen to the main interface.
- the left arm on the hip means to return to the previous level
- the display screen of the electronic device displays the second-level menu interface
- the action of the target user is the left arm on the hip (as shown in (b) in Figure 6)
- the electronic device responds with the left arm on the hip.
- the content displayed on the display screen of the electronic device is switched from the secondary menu interface to the primary menu interface.
- the target user determined by the electronic device has control authority, and at this time, the electronic device determines the target user as the target user with control authority.
- the electronic device uses face recognition technology to determine whether the target user has control authority.
- the electronic device performs face recognition on the target user and determines whether the target user's face image matches a stored face image. If the target user's face image matches the stored face image, the target user is a target user with control authority; if the target user's face image does not match the stored face image, the target user is not a target user with control authority.
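Whether a face image "matches the stored face image" is commonly decided by comparing face feature vectors. The cosine-similarity check below is a generic sketch with an assumed 0.8 threshold; the application does not specify the recognizer or the threshold.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def face_matches(embedding, stored_embeddings, threshold=0.8):
    """Return True if `embedding` matches any stored face embedding.
    The 0.8 threshold is an illustrative assumption."""
    return any(cosine_similarity(embedding, s) >= threshold
               for s in stored_embeddings)
```

A match would mark the target user as having control authority; no match would leave the user without it.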
- the target user determined by the electronic device does not have control authority.
- the electronic device assigns the control authority to the target user (ie, S704 is executed), and the target user is determined as the target user having the control authority.
- the electronic device assigns the control right to the target user.
- the electronic device transfers the control authority from the user with the control authority to the target user.
- a user with control authority is a user who has interacted with the electronic device within a preset time period.
- the user with control authority may be in front of the screen of the electronic device, so that the image captured by the camera of the electronic device includes the image of the user with control authority, but the user with control authority does not perform any preset action, so the user with control authority is not the target user. If the target user wants to control the electronic device, the electronic device transfers the control authority from the user with control authority to the target user, and the target user becomes the target user with control authority.
- the user with control authority may not be in front of the screen of the electronic device, so that the image captured by the camera of the electronic device does not include the image of the user with control authority, and the user with control authority is not the target user. If the target user wants to control the electronic device, the electronic device transfers the control authority from the user with control authority to the target user, and the target user becomes the target user with control authority.
- other users may be included in the image; other users are users who do not have control authority. Other users may not perform any action, or their actions may not match the preset actions, and the electronic device does not respond to the actions of users other than the target user in the image.
- the electronic device determines whether there is a target user with control authority among the M target users (ie, execute S705).
- M is an integer, 2 ⁇ M ⁇ N.
- the electronic device determines the target user with control authority (ie, S702 is executed), that is, the target user continues to hold the control authority, and controls the screen display of the electronic device according to the action of the target user with control authority (ie, S703 is executed).
- the electronic device uses face recognition technology to determine whether the target user has control authority.
- the display screen of the electronic device displays an animation picture, and the action of the target user (user 1 ) is nodding.
- the action of the target user (User 2) is to put his arms on his hips.
- nodding means pausing the screen.
- Arms on hips means to return to the main interface.
- User 1 and User 2 are both target users.
- the electronic device performs face recognition on user 1 and user 2, and identifies whether the face image of user 1 and the face image of user 2 match the stored face image.
- if the electronic device can match the face image of user 2 with the stored face image, it is determined that user 2 is the target user with control authority. As shown in (b) of FIG. 8, the electronic device responds to the arms-akimbo action, and the content displayed on the display screen of the electronic device switches from the animation picture to the main interface.
- the electronic device uses the user's identifier to determine whether the target user has the control authority.
- the identifier stored by the electronic device for a user is a dedicated value. If a user manipulates the electronic device, the electronic device uses the human body posture detection algorithm to identify the user, assigns an identifier to the user, and stores the correspondence between the user's characteristics and the user's identifier. The validity period of the user's identifier may be a preset period of time. If the electronic device recognizes the user again within that period, the user is a user with control authority.
- the electronic device recognizes the actions of user 1 and user 2 by using the human body gesture detection algorithm, and queries the correspondence between the users' characteristics and the users' identifiers. Assume that the electronic device can match the characteristics of user 2 with a stored identifier.
- User 2 is the target user with control rights.
- the content displayed on the display screen of the electronic device is switched from the animation screen to the main interface.
- the characteristic may be a human biometric.
- Biometric features include but are not limited to: facial features and body features.
- if there is no target user with control authority among the M target users, that is, none of the M target users has control authority, the electronic device prompts the user to perform the action again, and executes S301 again.
- the electronic device determines the target user indicated by the user (ie, executes S706).
- the electronic device assigns the control authority to the target user (ie, executes S704).
- the electronic device controls the screen display of the electronic device according to the action of the target user with control authority (ie, executes S703).
- the target user indicated by the user is the user in the image captured by the camera of the electronic device this time.
- the electronic device may play or display prompt information, prompting the user to indicate which target user's action should be responded to.
- for example, the electronic device plays or displays prompt information, prompting the user to choose between responding to the action of user 1 and the action of user 2.
- it may be that the electronic device has assigned control authority to another user, and none of the M target users has control authority. In this case, since the other user is not controlling the electronic device while the M target users need to control it, the electronic device transfers the control authority from the user having the control authority to the target user indicated by the user (ie, executes S704). For example, as shown in FIG. 9, the electronic device has assigned control authority to user 3; user 1 and user 2 have not interacted with the electronic device within the preset period of time, and neither user 1 nor user 2 has control authority. Since user 3 is not controlling the electronic device while user 1 and user 2 need to control it, the electronic device transfers the control authority from user 3 to user 2, the target user indicated by the user.
- the electronic device may not assign the control authority to any user, and none of the M target users have the control authority. In this case, the electronic device assigns the control authority to the target user indicated by the user (ie, execute S704). For example, the electronic device assigns the control authority to User 2 indicated by the user.
- the preset duration is recalculated from an initial value of 0. If no user controls the electronic device within the preset time period, or the user leaves the field of view (FOV) of the camera of the electronic device, the electronic device automatically releases the control authority. Of course, the user can also perform a release action, and the electronic device releases the control authority in response to the release action. When a user waiting to control the electronic device appears, the control authority is assigned to that user. Therefore, the user can control the electronic device in time.
- the electronic device may first determine whether a user has control authority, and then determine the target user.
- the method includes the following steps.
- the electronic device acquires images including N users (ie, S1001 is performed).
- N is an integer greater than or equal to 2.
- the electronic device determines whether the N users include a user with control authority according to the image including the N users (ie, S1002 is executed).
- the electronic device may use the face recognition technology to determine the user who has the control authority among the N users.
- the electronic device uses a human body posture detection algorithm to identify the user's characteristics, and determines a user who has control authority among the N users according to the corresponding relationship between the user's characteristics and the user's identification.
- for the method of identifying a user with control authority, reference may be made to the descriptions in the foregoing embodiments.
- if the N users include a user with control authority, the electronic device recognizes that user's action from the image of the user with control authority (ie, executes S1003). If, when judging whether the N users include a user with control authority, the electronic device used face recognition technology to determine that user, it can then use the human pose detection algorithm to recognize the actions of the user with control authority.
- the electronic device determines whether the action of the user with control authority matches the preset action (ie, S1004 is executed). If it matches, the user with control authority is determined as the target user with control authority (ie, S1005 is executed), and the electronic device controls its screen display according to the action of the target user with control authority (ie, executes S1006). If the action of the target user with control authority is an action releasing the control authority, the electronic device releases the control authority and executes S1001 again. If the action does not match the preset action, the user with control authority is determined not to be the target user with control authority, and S1001 is performed again. If no user controls the electronic device within the preset duration, or the user leaves the field of view (FOV) of the camera of the electronic device, the electronic device automatically releases the control authority.
- in this way, only the action of the user with control authority needs to be identified, which reduces the time spent judging user actions and allows the electronic device to respond to the user's operation promptly.
- optionally, if the human pose detection algorithm was already used to identify the actions of the N users when checking whether they include a user with control authority, and such a user exists, there is no need to re-identify that user's action; the target user with control authority can be determined directly by checking whether the action of the user with control authority matches the preset action.
- the electronic device recognizes the respective actions of the N users according to the images of the N users (ie, execute S1007 ), that is, the electronic device uses the human posture detection algorithm to recognize the actions of the N users.
- the electronic device determines whether the actions of the N users match the preset actions (ie, S1008 is executed). If there is a user with an action matching the preset action among the N users, the user with the action matching the preset action among the N users is determined as the target user, and the electronic device determines whether the number of target users is greater than 1 (that is, executing S1009). If there is no user with an action matching the preset action among the N users, S1001 is re-executed.
- if the number of target users equals 1, the electronic device assigns the control authority to the target user (ie, S1010 is executed), and the target user is determined as the target user with control authority (ie, S1005 is executed).
- the electronic device controls its screen display according to the target user's action (ie, S1006 is executed). Understandably, at this time the electronic device has not assigned the control authority to any user, and it assigns the control authority to this target user.
- for details on assigning control authority, reference may be made to the explanations of the above-mentioned embodiments.
- if the number of target users is greater than 1 and none of them has control authority, the electronic device prompts the users to perform the action again, and executes S1001 again.
- the electronic device determines the target user indicated by the user (ie, executes S1011).
- the electronic device assigns the control authority to the target user (ie, executes S1010 ), that is, assigns the control authority to the target user indicated by the user.
- the target user indicated by the user is determined as the target user with control authority (ie, S1005 is executed), and the electronic device controls the screen display of the electronic device according to the action of the target user with control authority (ie, S1006 is executed). Understandably, the target user indicated by the user is the user in the image captured by the camera of the electronic device this time. For the explanation of the target user indicated by the user, reference may be made to the description of the above-mentioned embodiment, which will not be repeated.
- the menu displayed on the screen of the electronic device is a ring menu.
- the menu displayed on the screen of the electronic device is a carousel menu.
- the screen of an electronic device shows a pointer pointing to an option in a menu.
- the electronic device can control the direction of the pointer displayed on the screen of the electronic device according to the swinging angle of the target user's forearm, so that the pointer points to an option in the menu.
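Mapping the forearm's swing angle to a menu option can be sketched directly from the polar-coordinate scheme described above (elbow as pole, forearm as polar axis). The function and argument names are assumptions; image pixel coordinates are taken with x to the right and y downward.

```python
import math

def pointer_option(elbow, wrist, n_options):
    """Map the forearm direction to one of n_options equal sectors
    of a ring/roulette menu. `elbow` and `wrist` are (x, y) pixel
    coordinates; returns the sector index in [0, n_options)."""
    dx = wrist[0] - elbow[0]
    dy = elbow[1] - wrist[1]   # flip y so angles grow counter-clockwise
    angle = math.degrees(math.atan2(dy, dx)) % 360.0  # 0..360 degrees
    sector = 360.0 / n_options
    return int(angle // sector)
```

Because the elbow is the pole, the angle is naturally bounded to 0-360 degrees regardless of where the user stands in the frame, which is the advantage of the polar mapping over a Cartesian one.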
- the user is in a watching state.
- the user lifts the forearm and performs a preset action of evoking the limb manipulation function to activate the limb manipulation function.
- the user rotates the forearm to a new pointing angle; the electronic device maps the rotation angle between the forearm in (b) of FIG. 12 and the forearm in (c) of FIG. 12 to the pointer direction in the roulette user interface (User Interface, UI), that is, the pointer moves from icon 1 to icon 3.
- the user makes a preset gesture, for example a grasping action; in response to the preset gesture, the screen of the electronic device displays the picture of icon 3.
- while the screen of the electronic device displays the picture of icon 3, the user can repeat the operations in (a) to (d) of FIG. 12 above.
- the operation mode based on the polar coordinate system and the roulette-style UI operation interface
- the forearm is the polar axis
- the elbow is the pole
- the forearm pointing angle is used as the basic operation dimension, corresponding to the roulette-style UI interface; combined with gesture recognition, this achieves fast and accurate selection, confirmation, and other human-computer interaction functions. Because the coordinate origin and coordinate value range are difficult to define under a Cartesian (rectangular) coordinate system, this application uses polar coordinates as the basis of the mapping relationship, which differs from interactive mappings based mainly on the Cartesian coordinate system and solves the problem of the unnatural correspondence between human motion and screen coordinates under that system.
- the coordinate origin (the elbow as the pole) and the coordinate range (0-360 degrees) can be naturally defined, which makes the mapping between human motion and the screen coordinate system more natural and reduces the range of interaction space required for body interaction, thereby reducing the fatigue of body interaction.
- the interaction method combining body recognition and gesture recognition, unlike interaction methods that use only hand information, improves the diversity and flexibility of interactive instructions, supports more complex operations, and improves instruction efficiency.
- the electronic device includes corresponding hardware structures and/or software modules for performing each function.
- the units and method steps of each example described in conjunction with the embodiments disclosed in the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is performed by hardware or computer software-driven hardware depends on the specific application scenarios and design constraints of the technical solution.
- FIG. 13 is a schematic structural diagram of a possible screen interaction apparatus of an electronic device provided by an embodiment of the present application.
- the screen interaction apparatuses of these electronic devices can be used to implement the functions of the electronic devices in the above method embodiments, and thus can also achieve the beneficial effects of the above method embodiments.
- the screen interaction apparatus of the electronic device may be the electronic device as shown in FIG. 1 , or may be a module (eg, a chip) applied to the electronic device.
- the screen interaction apparatus 1300 of the electronic device includes a processing unit 1310 , a display unit 1320 and a storage unit 1330 .
- the screen interaction apparatus 1300 of the electronic device is used to implement the functions of the electronic device in the method embodiments shown in FIG. 3 , FIG. 7 and FIG. 10 above.
- the storage unit 1330 is used to store instructions executed by the processing unit 1310 or input data required by the processing unit 1310 to execute the instructions, or to store data generated after the processing unit 1310 executes the instructions.
- the processing unit 1310 is configured to acquire images including N users through a camera, so as to implement the functions of the electronic device in the method embodiments shown in FIG. 3 , FIG. 7 , and FIG. 10 .
- the display unit 1320 is configured to display the screen picture that the processing unit 1310 controls the electronic device to show according to the action of the target user.
- the processing unit 1310 may perform the functions of the processor 110 in the electronic device shown in FIG. 1 .
- the processor in the embodiments of the present application may be a central processing unit (Central Processing Unit, CPU), and may also be other general-purpose processors, digital signal processors (Digital Signal Processor, DSP), application-specific integrated circuits (Application Specific Integrated Circuit, ASIC), Field Programmable Gate Array (Field Programmable Gate Array, FPGA) or other programmable logic devices, transistor logic devices, hardware components or any combination thereof.
- a general-purpose processor may be a microprocessor or any conventional processor.
- the method steps in the embodiments of the present application may be implemented in a hardware manner, or may be implemented in a manner in which a processor executes software instructions.
- Software instructions may be composed of corresponding software modules, which can be stored in random access memory (Random Access Memory, RAM), flash memory, read-only memory (Read-Only Memory, ROM), programmable read-only memory (Programmable ROM, PROM), erasable programmable read-only memory (Erasable PROM, EPROM), electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), registers, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium known in the art.
- An exemplary storage medium is coupled to the processor, such that the processor can read information from, and write information to, the storage medium.
- the storage medium can also be an integral part of the processor.
- the processor and storage medium may reside in an ASIC.
- the ASIC may be located in a network device or in an end device.
- the processor and the storage medium may also exist in the network device or the terminal device as discrete components.
- in the above-mentioned embodiments, the functions may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
- when implemented using software, the functions can be implemented in whole or in part in the form of a computer program product.
- the computer program product includes one or more computer programs or instructions.
- when the computer program or instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are executed in whole or in part.
- the computer may be a general purpose computer, a special purpose computer, a computer network, network equipment, user equipment, or other programmable apparatus.
- the computer program or instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer program or instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wire or wirelessly.
- the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server, data center, or the like that integrates one or more available media.
- the usable medium may be a magnetic medium, such as a floppy disk, a hard disk, or a magnetic tape; an optical medium, such as a digital video disc (DVD); or a semiconductor medium, such as a solid state drive (SSD).
- “at least one” means one or more, and “plurality” means two or more.
- “And/or” describes the relationship between associated objects and indicates three possible relationships; for example, "A and/or B" can mean that A exists alone, A and B exist at the same time, or B exists alone, where A and B may be singular or plural.
- in the text of this application, the character “/” generally indicates an "or" relationship between the associated objects; in the formulas of this application, the character "/" indicates a "division" relationship.
Abstract
This application discloses a screen interaction method and apparatus for an electronic device, relating to the field of smart large screens, and solves the problem of determining, from multiple users, the target user who controls the screen and controlling the screen display of the electronic device according to the target user's action. The method includes: capturing, through a camera, an image including N users in front of the screen; recognizing the respective actions of the N users from the image; comparing those actions with preset actions; determining the user whose action matches a preset action as the target user; and controlling the screen display of the electronic device according to the target user's action, where N is an integer greater than or equal to 2.
Description
This application claims priority to Chinese patent application No. 202010696580.8, entitled "Screen interaction method and apparatus for an electronic device", filed with the China National Intellectual Property Administration on July 17, 2020, the entire contents of which are incorporated herein by reference.
This application relates to the field of smart large screens, and in particular to a screen interaction method and apparatus for an electronic device.
With the development of artificial intelligence (Artificial Intelligence, AI), smart devices are increasingly favored by users. For example, a smart TV can use human body recognition technology to recognize a user's body movements and control the picture displayed on the screen according to those movements, thereby implementing an interaction function between the screen and the user. Identifying the target user is key to implementing this interaction function. However, large-screen electronic devices such as smart TVs are usually installed in shared areas such as living rooms, where multiple users may make movements at the same time, each intending to control the screen display. Which user's body movement the smart TV should follow in controlling the screen in time is an urgent problem to be solved.
Summary of the Invention
This application provides a screen interaction method and apparatus for an electronic device, solving the problem of determining, from multiple users, the target user who controls the screen and controlling the screen display of the electronic device according to the target user's action.
To achieve the above objective, this application adopts the following technical solutions:
In a first aspect, this application provides a screen interaction method for an electronic device. The method may be applied to an electronic device, or to an apparatus that can support the electronic device in implementing the method, for example an apparatus including a chip system. The method includes: the electronic device captures, through a camera, an image including N users; recognizes the respective actions of the N users from the image; compares those actions with preset actions; determines the user whose action matches a preset action as the target user; and controls the screen display of the electronic device according to the target user's action. N is an integer greater than or equal to 2. Optionally, the preset actions include waving the forearm, left arm akimbo, right arm akimbo, both arms akimbo, nodding, and making a fist.
An embodiment of this application provides a screen interaction method applicable to an electronic device that includes a display and a camera. In this method, the electronic device need not consider the user's control authority: as long as a user makes an action matching a preset action, the electronic device treats that user as the target user, so it can determine the target user and respond to the user's action as quickly as possible, that is, control the screen display according to the target user's action, effectively improving the user experience when selecting programs on the electronic device.
In a possible implementation, the target users include M users, where M is an integer and 2 ≤ M ≤ N. Understandably, the electronic device recognizes two or more target users whose actions match preset actions. If the M target users include a target user with control authority, the electronic device controls the screen display according to the action of that target user; a target user with control authority is one who has interacted with the electronic device within a preset duration. If none of the two or more target users has control authority, the electronic device may prompt the user on how to proceed, and control the screen display according to the action of the target user indicated by the user. This resolves the conflict that arises when two or more users simultaneously try to control the screen display, enabling the electronic device to respond to user actions in time and improving the user experience when selecting programs on the electronic device.
In some embodiments, when none of the two or more target users has control authority: if the electronic device has already assigned control authority to another user, then before controlling the screen display according to the target user's action, the electronic device transfers the control authority from the user holding it to the target user indicated by the user; if the electronic device has not assigned control authority to any user, it assigns control authority to the target user indicated by the user. The target user indicated by the user thus holds control authority, and the electronic device controls the screen display according to that target user's action.
In a possible design, if the preset action is a menu-selection action, controlling the screen display according to the target user's action includes: controlling the direction of the pointer displayed on the screen according to the swinging angle of the target user's forearm, so that the pointer points to an option in the menu. The menu displayed on the screen is a ring menu or a roulette-style menu. In some embodiments, an operation mode based on the polar coordinate system is combined with a roulette-style user interface (User Interface, UI): the forearm is the polar axis, the elbow is the pole, and the forearm pointing angle is the basic operation dimension, mapped to the roulette UI and combined with gesture recognition to achieve fast and accurate selection, confirmation, and other human-computer interaction functions. Interactive mappings based on the Cartesian (rectangular) coordinate system suffer from an unnatural correspondence between human motion and screen coordinates because the coordinate origin and value range are hard to define. With polar coordinates as the basis of the mapping, the origin (the elbow as the pole) and the value range (0-360 degrees) are naturally defined, making the mapping between human motion and the screen coordinate system more natural, reducing the interaction space required for body interaction and thus the fatigue it causes. The interaction method combining body recognition and gesture recognition, unlike methods using only hand information, improves the diversity and flexibility of interactive instructions, supports more complex operations, and improves instruction efficiency.
In a second aspect, this application provides a screen interaction method for an electronic device. The method may be applied to an electronic device, or to an apparatus that can support the electronic device in implementing the method, for example an apparatus including a chip system. The method includes: the electronic device acquires an image including N users and judges from the image whether the N users include a user with control authority. If they do, the electronic device compares that user's action with the preset actions; if the action matches a preset action, the user with control authority is determined as the target user with control authority, and the electronic device controls the screen display according to that target user's action. N is an integer greater than or equal to 2. In this way, after the user with control authority is determined, only that one user's action needs to be recognized, reducing the judging time and allowing the electronic device to respond to the user's operation in time. If the N users do not include a user with control authority, the electronic device compares the respective actions of the N users with the preset actions, determines M target users among the N users, and determines one of the M target users as the target user with control authority, where a target user's action matches a preset action, M is an integer, and 1 ≤ M ≤ N.
In a possible implementation, when M ≥ 2, determining one target user as the target user with control authority includes: assigning control authority to the target user indicated by the user, and determining that target user as the target user with control authority. This resolves the conflict that arises when two or more users simultaneously try to control the screen display, enabling the electronic device to respond to user actions in time and improving the user experience when selecting programs on the electronic device.
In a third aspect, this application further provides a screen interaction apparatus for an electronic device; for its beneficial effects, refer to the description of the first aspect, which is not repeated here. The apparatus has the functions of implementing the behavior in the method examples of the first aspect. The functions may be implemented by hardware, or by hardware executing corresponding software; the hardware or software includes one or more modules corresponding to the above functions. In a possible design, the apparatus includes a processing unit configured to: acquire an image including N users, where N is an integer greater than or equal to 2; recognize the respective actions of the N users from the image; determine the target user, whose action matches a preset action, from those actions; and control the screen display of the electronic device according to the target user's action. These units can perform the corresponding functions in the method examples of the first aspect; refer to the detailed description in the method examples, which is not repeated here.
In a fourth aspect, this application further provides a screen interaction apparatus for an electronic device; for its beneficial effects, refer to the description of the second aspect, which is not repeated here. The apparatus has the functions of implementing the behavior in the method examples of the second aspect. The functions may be implemented by hardware, or by hardware executing corresponding software; the hardware or software includes one or more modules corresponding to the above functions. In a possible design, the apparatus includes a processing unit configured to: acquire an image including N users, where N is an integer greater than or equal to 2; judge from the image whether the N users include a user with control authority; if they do, determine that user as the target user with control authority, whose action matches a preset action; if they do not, and the N users include M target users, determine one target user as the target user with control authority, where the target user's action matches a preset action, M is an integer, and 1 ≤ M ≤ N; and control the screen display of the electronic device according to the action of the target user with control authority. These units can perform the corresponding functions in the method examples of the second aspect; refer to the detailed description in the method examples, which is not repeated here.
In a fifth aspect, an electronic device is provided, which may include a processor, a memory, a display, and a camera. The processor is coupled to the display, the camera, and the memory. The memory is configured to store computer program code comprising computer software instructions that, when executed by the electronic device, cause the electronic device to: capture, through the camera, an image including N users; recognize the respective actions of the N users from the image; determine the target user, whose action matches a preset action, from those actions; and control the screen display of the electronic device according to the target user's action, where N is an integer greater than or equal to 2.
In a sixth aspect, a computer-readable storage medium is provided, comprising computer software instructions that, when run on an electronic device, cause the electronic device to perform the screen interaction method of the first aspect or its possible implementations, or of the second aspect or its possible implementations.
In a seventh aspect, a computer program product is provided that, when run on a computer, causes the computer to perform the screen interaction method of the first aspect or its possible implementations, or of the second aspect or its possible implementations.
In an eighth aspect, a chip system is provided, applied to an electronic device. The chip system includes an interface circuit and a processor interconnected by a line. The interface circuit is configured to receive a signal from the memory of the electronic device and send it to the processor, the signal comprising computer instructions stored in the memory. When the processor executes the computer instructions, the chip system performs the screen interaction method of the first aspect or its possible implementations, or of the second aspect or its possible implementations.
It should be understood that descriptions of technical features, technical solutions, beneficial effects, or similar language in this application do not imply that all features and advantages can be realized in any single embodiment. Rather, a description of a feature or beneficial effect means that at least one embodiment includes that particular technical feature, technical solution, or beneficial effect; such descriptions in this specification do not necessarily refer to the same embodiment. Furthermore, the technical features, technical solutions, and beneficial effects described in the embodiments may be combined in any suitable manner. Those skilled in the art will understand that an embodiment can be implemented without one or more of the particular technical features, technical solutions, or beneficial effects of a specific embodiment; in other embodiments, additional technical features and beneficial effects may also be identified in specific embodiments that do not embody all of the embodiments.
FIG. 1 is a schematic structural diagram of an electronic device according to an embodiment of this application;
FIG. 2 is a schematic diagram of a camera of a television according to an embodiment of this application;
FIG. 3 is a schematic flowchart of a screen interaction method for an electronic device according to an embodiment of this application;
FIG. 4 is a schematic diagram of a smart TV capturing a frame through a camera according to an embodiment of this application;
FIG. 5 is a schematic diagram of human joint point recognition results according to an embodiment of this application;
FIG. 6 is a schematic diagram of preset actions according to an embodiment of this application;
FIG. 7 is a schematic flowchart of a screen interaction method for an electronic device according to an embodiment of this application;
FIG. 8 is a schematic diagram of a screen manipulation process according to an embodiment of this application;
FIG. 9 is a schematic diagram of control authority transfer according to an embodiment of this application;
FIG. 10 is a schematic flowchart of a screen interaction method for an electronic device according to an embodiment of this application;
FIG. 11 is a schematic diagram of a UI operation interface according to an embodiment of this application;
FIG. 12 is a schematic diagram of a screen manipulation process according to an embodiment of this application;
FIG. 13 is a schematic diagram of a screen interaction apparatus of an electronic device according to an embodiment of this application.
The terms "first", "second", and "third" in the specification, claims, and drawings of this application are used to distinguish different objects, not to define a particular order.
In the embodiments of this application, words such as "exemplary" or "for example" are used to indicate an example, illustration, or explanation. Any embodiment or design described as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs; rather, these words are intended to present related concepts in a concrete manner.
The electronic device in the embodiments of this application may be a television, tablet computer, projector, mobile phone, desktop computer, laptop, handheld computer, notebook, ultra-mobile personal computer (UMPC), netbook, personal digital assistant (PDA), augmented reality (AR)/virtual reality (VR) device, or other device that includes a display and a camera; the embodiments of this application place no particular limitation on the specific form of the electronic device.
Refer to FIG. 1, a schematic structural diagram of an electronic device according to an embodiment of this application. As shown in FIG. 1, the electronic device includes: a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a power management module 140, an antenna, a wireless communication module 160, an audio module 170, a speaker 170A, a speaker-box interface 170B, a microphone 170C, a sensor module 180, keys 190, an indicator 191, a display 192, a camera 193, and the like. The sensor module 180 may include sensors such as a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, and an ambient light sensor.
It can be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device. In other embodiments, the electronic device may include more or fewer components than shown, combine some components, split some components, or arrange components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). Different processing units may be independent devices or integrated into one or more processors.
In this application, the processor 110 is configured to capture, through the camera 193, an image including N users; recognize the respective actions of the N users from the image; determine the user whose action matches a preset action as the target user; and control the picture displayed on the display 192 according to the target user's action, where N is an integer greater than or equal to 2.
The controller may be the nerve center and command center of the electronic device. The controller can generate operation control signals according to instruction operation codes and timing signals, completing the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache, which can hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs the instruction or data again, it can be called directly from the memory, avoiding repeated access, reducing the waiting time of the processor 110, and thus improving system efficiency.
In some embodiments, the processor 110 may include one or more interfaces, such as an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, and/or a USB interface.
The power management module 140 is used to connect the power supply. It may also be connected to the processor 110, the internal memory 121, the display 192, the camera 193, the wireless communication module 160, and the like. The power management module 140 receives the input of the power supply and supplies power to the processor 110, the internal memory 121, the display 192, the camera 193, the wireless communication module 160, and so on. In some embodiments, the power management module 140 may also be provided in the processor 110.
The wireless communication function of the electronic device may be implemented through the antenna and the wireless communication module 160. The wireless communication module 160 can provide solutions for wireless communication applied to the electronic device, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR).
The wireless communication module 160 may be one or more devices integrating at least one communication processing module. It receives electromagnetic waves via the antenna, frequency-modulates and filters the electromagnetic wave signal, and sends the processed signal to the processor 110. It can also receive a signal to be sent from the processor 110, frequency-modulate and amplify it, and convert it into an electromagnetic wave radiated out via the antenna. In some embodiments, the antenna of the electronic device is coupled to the wireless communication module 160 so that the electronic device can communicate with networks and other devices through wireless communication technologies.
The electronic device implements the display function through the GPU, the display 192, the application processor, and the like. The GPU is a microprocessor for image processing, connecting the display 192 and the application processor; it performs mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display 192 is used to display images, videos, and the like. It includes a display panel, which may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flex light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light-emitting diodes (QLED), or the like.
The electronic device can implement the shooting function through the ISP, the camera 193, the video codec, the GPU, the display 192, the application processor, and the like. The ISP processes data fed back by the camera 193. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 captures still images or video. An object generates an optical image through the lens and projects it onto the photosensitive element, which may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is passed to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing, and the DSP converts it into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device may include 1 or N cameras 193, where N is a positive integer greater than 1. For example, taking a television as the electronic device, as shown in FIG. 2, the camera 193 may be arranged at the upper edge of the display 192 of the television. Of course, the embodiments of this application do not limit the position of the camera 193 on the electronic device.
Alternatively, the electronic device may not include a camera, that is, the camera 193 is not provided in the electronic device (such as a television). The electronic device may connect an external camera 193 through an interface (such as the USB interface 130). The external camera 193 may be fixed to the electronic device by an external fixing member (such as a camera bracket with a clip), for example at the edge, such as the upper edge, of the display 192.
The digital signal processor processes digital signals; in addition to digital image signals, it can process other digital signals. For example, when the electronic device selects a frequency point, the DSP performs a Fourier transform on the frequency point energy. The video codec compresses or decompresses digital video. The electronic device may support one or more video codecs, allowing it to play or record videos in multiple encoding formats, such as moving picture experts group (MPEG) 1, MPEG2, MPEG3, and MPEG4.
The NPU is a neural-network (NN) computing processor. Drawing on the structure of biological neural networks, such as the transfer mode between neurons in the human brain, it rapidly processes input information and can continuously self-learn. Applications such as intelligent cognition of the electronic device, for example image recognition, face recognition, speech recognition, and text understanding, can be implemented through the NPU.
The external memory interface 120 can connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device. The external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function, for example saving music, video, and other files in the external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The processor 110 executes various functional applications and data processing of the electronic device by running the instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store the operating system and applications required by at least one function (such as a sound playback function and an image playback function); the data storage area may store data created during use of the electronic device (such as audio data). In addition, the internal memory 121 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or universal flash storage (UFS).
The electronic device can implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the microphone 170C, the speaker-box interface 170B, the application processor, and the like. In this application, the microphone 170C may be used to receive voice instructions issued by the user to the electronic device, and the speaker 170A may be used to feed back the decision instructions of the electronic device to the user.
The audio module 170 converts digital audio information into an analog audio signal output and converts analog audio input into a digital audio signal; it can also encode and decode audio signals. In some embodiments, the audio module 170, or some of its functional modules, may be provided in the processor 110. The speaker 170A, also called a "horn", converts audio electrical signals into sound signals. The microphone 170C, also called a "mike" or "mic", converts sound signals into electrical signals.
The speaker-box interface 170B is used to connect a wired speaker box. It may be the USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The keys 190 include a power key, volume keys, and the like. They may be mechanical keys or touch keys. The electronic device can receive key input and generate key signal input related to its user settings and function control.
The indicator 191 may be an indicator light used to indicate that the electronic device is powered on, in standby, or powered off. For example, the light being off may indicate that the device is powered off; green or blue may indicate powered on; red may indicate standby.
It can be understood that the structure illustrated in the embodiments of this application does not constitute a specific limitation on the electronic device. It may have more or fewer components than shown in FIG. 1, combine two or more components, or have a different component configuration. For example, the electronic device may further include components such as speaker boxes. The various components shown in FIG. 1 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal-processing or application-specific integrated circuits.
The methods in the following embodiments can all be implemented in an electronic device with the above hardware structure. In the following embodiments, the method of the embodiments of this application is described taking a smart TV as the electronic device.
FIG. 3 is a schematic flowchart of a screen interaction method for an electronic device according to an embodiment of this application. As shown in FIG. 3, the method may include:
S301. The electronic device acquires an image including N users.
The user starts the electronic device, and the electronic device displays the home screen. This document does not limit how the user starts the electronic device. In one possible implementation, the user starts it by voice: for example, the user says "turn on the TV", and upon receiving the voice instruction "turn on the TV" the smart TV starts. In another possible implementation, the user starts it with a remote control: the user presses the "on/off" key, and upon receiving the "on" instruction the television starts.
After the electronic device starts, the user can make it start the camera, through which the electronic device captures images in front of its display. The method by which the user makes the electronic device start the camera is not limited here; for example, the user may use a voice instruction, or use the remote control. The image in front of the display covers part or all of the camera's field of view (Field of View, FOV). If there are people within that area, the image includes at least one user. In this embodiment, it is assumed that the image captured by the camera includes N users, where N is an integer greater than or equal to 2.
For example, as shown in FIG. 4, the smart TV captures images of all users within the frame (the fan-shaped area in FIG. 4) through the camera. The fan-shaped area includes user 1, who sits on a sofa, and user 2, who is standing. The smart TV captures images of user 1 and user 2 through the camera.
It should be noted that a user image within the captured frame may be a complete image of the user, including the head, four limbs, and trunk; or an upper-body image, including the head, upper limbs, and trunk. Alternatively, the captured user image may be incomplete, meaning only the user's left-side or right-side image: the left-side image includes the left part of the head, the left upper limb, the left lower limb, and the left trunk; the right-side image includes the right part of the head, the right upper limb, the right lower limb, and the right trunk. If the captured image of a user is incomplete, the electronic device considers that user's image unusable and does not count it.
S302. The electronic device recognizes the respective actions of the N users from the image including the N users.
The electronic device may use a human pose estimation algorithm to recognize the pose of each of the N users within the frame. A human pose estimation algorithm detects human key points by training a neural network model and describes a person's pose in terms of those key points.
For example, a human pose estimation algorithm can identify at least a dozen points of the human body, such as the head, shoulders, elbows, wrists, hips, knees, and ankles. Table 1 describes the identifiable human joint points, and FIG. 5 shows the human joint point recognition results according to an embodiment of this application.
Table 1
| Joint no. | Joint description | Joint no. | Joint description |
| 0 | Top of head | 8 | Right hip |
| 1 | Neck | 9 | Right knee |
| 2 | Right shoulder | 10 | Right ankle |
| 3 | Right elbow | 11 | Left hip |
| 4 | Right wrist | 12 | Left knee |
| 5 | Left shoulder | 13 | Left ankle |
| 6 | Left elbow | 14 | Body center |
| 7 | Left wrist |
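The 15-joint output of such a pose detector might be represented as below. This is purely an illustrative sketch: the names and the dict-based pose format are assumptions, not the patent's data structures.

```python
# Hypothetical mapping of the joint indices in Table 1 to names.
JOINTS = {
    0: "head_top", 1: "neck", 2: "r_shoulder", 3: "r_elbow", 4: "r_wrist",
    5: "l_shoulder", 6: "l_elbow", 7: "l_wrist", 8: "r_hip", 9: "r_knee",
    10: "r_ankle", 11: "l_hip", 12: "l_knee", 13: "l_ankle", 14: "body_center",
}

def forearm_joints(pose, side="right"):
    """Return the (elbow, wrist) keypoints for one forearm, assuming
    `pose` maps joint index -> (x, y) pixel coordinates. These are the
    two joints needed for the forearm-angle interaction described later."""
    if side == "right":
        return pose[3], pose[4]
    return pose[6], pose[7]
```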
An action is a process of change in the position of a person's facial features (expression change), in the position of the limbs (movement change), and in the position relative to the environment (change in movement distance). The electronic device can use the human pose estimation algorithm to recognize the poses of the N users over multiple consecutive frames and determine each user's action from those poses.
S303. The electronic device determines the target user from the respective actions of the N users; the target user's action matches a preset action.
Understandably, electronic devices are generally installed in shared areas such as living rooms. If any action by any user could control the picture on the display, the result might go against the user's wishes and degrade the user experience. For example, a user walking past the display does not intend to pause the picture, yet the electronic device pauses it. Therefore, the electronic device can preconfigure human body actions: the user performs a preset action, the electronic device judges whether the user's action matches a preset action, determines the user whose action matches as the target user, and controls the picture on its display according to the preset action. A preset action is a human body action preconfigured by the electronic device; herein, the target user is the user who performs a preset action. Understandably, an action matching a preset action may mean an action identical to it: when the electronic device determines that the user's action is exactly the same as the preset action, it considers them matched, and the user is the target user.
Alternatively, an action matching a preset action may mean an action roughly the same as it: when the electronic device determines that the user's action is roughly the same as the preset action, it considers them matched, and the user is the target user.
For example, suppose the preset action is waving the forearm at an angle of 45 degrees. If the user waves the forearm at 30 degrees, the electronic device, having determined the user's action to be waving the forearm, considers it to match the preset action, and the user is the target user.
As another example, suppose the preset action is left arm akimbo, where akimbo means the elbow is bent and the five fingers are placed at the waist. If the user's left hand is at the waist as a fist rather than with the fingers placed there, the electronic device may still determine the user's action to be left arm akimbo, consider it to match the preset action, and treat the user as the target user.
If the electronic device judges that a user's action does not match the preset actions, that user is not a target user. If the image captured by the camera of the electronic device includes no target user, S301 is executed again.
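The "roughly the same" matching described in the examples above can be sketched as a label match plus an angle tolerance. The function name, labels, and the 20-degree tolerance are assumptions for illustration only.

```python
def matches_preset(action, preset, angle=None, preset_angle=None, tol_deg=20.0):
    """Return True if `action` matches `preset`: the labels must agree,
    and when an angle is involved it only needs to fall within a
    tolerance of the preset angle rather than match it exactly."""
    if action != preset:
        return False
    if angle is None or preset_angle is None:
        return True
    return abs(angle - preset_angle) <= tol_deg
```

With a 45-degree preset and a 20-degree tolerance, a 30-degree wave matches, mirroring the forearm-waving example above.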
Herein, the preset actions include but are not limited to: waving the forearm, left arm akimbo, right arm akimbo, both arms akimbo, nodding, and making a fist.
Waving the forearm may mean the user swings the forearm from side to side, as shown in (a) of FIG. 6.
Left arm akimbo means the left elbow is bent with the five fingers placed at the waist, as shown in (b) of FIG. 6.
Right arm akimbo means the right elbow is bent with the five fingers placed at the waist, as shown in (c) of FIG. 6.
Both arms akimbo means the right elbow is bent with the five fingers placed at the waist and the left elbow is bent with the five fingers placed at the waist, as shown in (d) of FIG. 6.
Nodding means quickly lowering the head forward, as shown in (e) of FIG. 6.
Making a fist means bending the fingers toward the palm into a fist, as shown in (f) of FIG. 6.
Different actions represent different operations on the content displayed on the screen of the electronic device. For example, waving the forearm represents selecting a menu; left arm akimbo represents returning to the previous level; right arm akimbo represents entering the next level; both arms akimbo represents returning to the home screen; nodding represents a confirm action; making a fist represents a confirm action; and waving the forearm from side to side represents releasing the control authority. This application does not limit the operations corresponding to the preset actions.
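The action-to-operation correspondence just listed amounts to a lookup table. The following sketch uses assumed label and command names; the patent does not fix these strings.

```python
# Illustrative action-to-command table for the preset actions above.
ACTION_COMMANDS = {
    "wave_forearm": "select_menu",
    "left_arm_akimbo": "back",
    "right_arm_akimbo": "enter",
    "both_arms_akimbo": "home",
    "nod": "confirm",
    "fist": "confirm",
    "wave_side_to_side": "release_authority",
}

def command_for(action):
    """Return the screen-control command for a recognized action,
    or None if the action maps to no operation."""
    return ACTION_COMMANDS.get(action)
```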
S304. The electronic device controls its screen display according to the target user's action.
Controlling the screen display of the electronic device according to the target user's action may include the steps shown in FIG. 7. If the number of target users is 0, meaning no user in the image captured by the camera performed any preset action, S301 is executed again. If the number of target users is not 0, the electronic device judges whether it is greater than 1 (ie, executes S701).
If the number of target users equals 1, the electronic device determines the target user with control authority (ie, executes S702) and controls its screen display according to the action of the target user with control authority (ie, executes S703). If that target user's action is an action releasing the control authority, the electronic device releases the control authority and executes S301 again. The target user with control authority is a target user who has interacted with the electronic device within a preset duration; having interacted with the electronic device can be understood as the target user having performed a preset action to which the electronic device responded. The preset duration is, for example, 1 minute; this application does not limit the preset duration, and users can set it according to their own needs.
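The S701/S705 branching on the number of matched target users can be sketched as below. All names and the returned strings are assumptions used only to illustrate the decision logic.

```python
def dispatch_targets(targets, authority_holder):
    """Decide how to proceed from the list of matched target users and
    the current authority holder (or None if authority is unassigned)."""
    if not targets:
        return "recapture"              # no preset action seen -> S301 again
    if len(targets) == 1:
        return "control"                # S702/S703 (assign authority first if needed)
    if authority_holder in targets:
        return "control_by_holder"      # holder among targets keeps authority
    return "ask_user_to_choose"         # none holds authority -> S706
```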
In the first possible scenario, the electronic device determines a single target user. In this case, regardless of what the display shows and whether the target user has control authority to control the electronic device, the electronic device responds to the target user's action and controls the screen display accordingly, so that it responds to the user's operation of the electronic device in time.
For example, as shown in (a) of FIG. 8, the display of the electronic device shows an animation, and the action of the target user (user 2) is both arms akimbo. Suppose both arms akimbo represents returning to the home screen. As shown in (b) of FIG. 8, the electronic device responds to both arms akimbo, and the content on the display switches from the animation to the home screen.
As another example, left arm akimbo represents returning to the previous level. The display of the electronic device shows a second-level menu; the target user's action is left arm akimbo (as shown in (b) of FIG. 6); the electronic device responds to left arm akimbo, and the content on the display switches from the second-level menu to the first-level menu.
In some embodiments, the target user determined by the electronic device has control authority, and the electronic device determines that target user as the target user with control authority. For example, the electronic device uses face recognition technology to judge whether the target user has control authority: it performs face recognition on the target user and judges whether the target user's face image matches a stored face image. If it matches, the target user is the target user with control authority; if it does not match, the target user is not the target user with control authority.
In other embodiments, the target user determined by the electronic device does not have control authority. In this case, the electronic device assigns control authority to that target user (ie, executes S704) and determines the target user as the target user with control authority.
If the electronic device has not assigned control authority to any user, it assigns the control authority to this target user.
If the electronic device has already assigned control authority to another user, the electronic device transfers the control authority from the user holding it to the target user. A user with control authority is one who has interacted with the electronic device within the preset duration. Optionally, the user with control authority may be in front of the screen, and the image captured by the camera includes that user, but the user performed no preset action and is therefore not a target user; since the target user wants to control the electronic device, the electronic device transfers the control authority from the user holding it to the target user, who becomes the target user with control authority. Alternatively, the user with control authority is not in front of the screen and does not appear in the captured image, so is not a target user; again, the electronic device transfers the control authority to the target user, who becomes the target user with control authority.
In addition, the image may include other users besides the target user. These other users have no control authority and may have performed no action, or actions that do not match the preset actions; the electronic device does not respond to the actions of users in the image other than the target user.
If the number of target users is greater than 1, the electronic device judges whether the M target users include a target user with control authority (ie, executes S705), where M is an integer and 2 ≤ M ≤ N.
If the M target users include a target user with control authority, the electronic device determines the target user with control authority (ie, executes S702), that is, that target user continues to hold the control authority, and controls its screen display according to the action of the target user with control authority (ie, executes S703).
Understandably, in the second possible scenario, the target users include M users, one of whom has control authority. In some embodiments, the electronic device uses face recognition technology to judge whether a target user has control authority. For example, as shown in (a) of FIG. 8, the display shows an animation; the action of the target user (user 1) is nodding, and the action of the target user (user 2) is both arms akimbo. Suppose nodding represents pausing the picture and both arms akimbo represents returning to the home screen. User 1 and user 2 are both target users. The electronic device performs face recognition on user 1 and user 2, checking whether their face images match stored face images. Suppose user 2's face image matches a stored one; the electronic device then determines that user 2 is the target user with control authority. As shown in (b) of FIG. 8, the electronic device responds to both arms akimbo, and the content on the display switches from the animation to the home screen.
In other embodiments, the electronic device uses a user identifier to judge whether a target user has control authority. When no user is controlling the electronic device, the user identifier stored by the electronic device is a special value. When a user controls the electronic device, after recognizing the user with the human pose estimation algorithm, the electronic device assigns the user an identifier and stores the correspondence between the user's features and the identifier; the validity period of the identifier may be the preset duration. If the electronic device recognizes the user again, that user is a user with control authority. For example, after recognizing the actions of user 1 and user 2 with the human pose estimation algorithm, the electronic device queries the correspondence between user features and user identifiers; suppose user 2's features match a stored identifier, so user 2 is determined to be the target user with control authority. As shown in (b) of FIG. 8, the content on the display switches from the animation to the home screen. A feature may be a person's biometric feature, including but not limited to facial features and body-shape features.
If the M target users include no target user with control authority, that is, none of the M target users has control authority, the electronic device prompts the users to perform the action again and executes S301 again. Optionally, the electronic device determines the target user indicated by the user (ie, executes S706), assigns the control authority to that target user (ie, executes S704), determines the target user with control authority (ie, executes S702), and controls its screen display according to the action of the target user with control authority (ie, executes S703). Understandably, the target user indicated by the user is a user in the image currently captured by the camera of the electronic device.
In the third possible scenario, if none of the M target users has control authority, it means the M target users have not interacted with the electronic device within the preset duration. In one possible implementation, the electronic device may play or display a prompt asking which target user's action it should respond to. For example, user 1 and user 2 are both target users and neither has control authority; the electronic device plays or displays a prompt asking whether to respond to user 1's action or user 2's action.
Suppose the user instruction received by the electronic device is to respond to user 2's action; the electronic device then determines that user 2 is the target user indicated by the user. As shown in (b) of FIG. 8, assuming user 2's action is both arms akimbo, the content on the display switches from the animation to the home screen.
In some embodiments, the electronic device may have assigned the control authority to another user while none of the M target users holds it. In this case, since that other user is not controlling the electronic device but the M target users need to, the electronic device transfers the control authority from the user holding it to the target user indicated by the user (ie, executes S704). For example, as shown in FIG. 9, the electronic device has assigned control authority to user 3; user 1 and user 2 have not interacted with the electronic device within the preset duration, and neither holds control authority. Since user 3 is not controlling the electronic device while user 1 and user 2 need to, the electronic device transfers the control authority from user 3, who holds it, to user 2, the target user indicated by the user.
In other embodiments, the electronic device may not have assigned the control authority to any user, and none of the M target users holds it. In this case, the electronic device assigns the control authority to the target user indicated by the user (ie, executes S704). For example, the electronic device assigns the control authority to user 2, the target user indicated by the user.
It should be noted that whenever the electronic device reassigns the control authority, the preset duration is recounted from its initial value of 0. If no user controls the electronic device within the preset duration, or the user leaves the field of view (FOV) of the camera of the electronic device, the electronic device automatically releases the control authority. The user can also perform a release action, in response to which the electronic device releases the control authority. When a user later attempts to control the electronic device, the control authority is assigned to that user, so the user can control the electronic device promptly.
在另一种可能的设计中,电子设备可以先判断用户是否具有控制权限来确定目标用户。示例的,如图10所示,该方法包括以下步骤。电子设备获取包括N个用户的图像(即执行S1001)。N为大于或等于2的整数。电子设备根据包括N个用户的图像判断N个用户是否包括具有控制权限的用户(即执行S1002)。电子设备可以采用人脸识别技术确定N个用户中具有控制权限的用户。或者,电子设备采用人体姿态检 测算法识别用户的特征,根据用户的特征与用户的标识的对应关系确定N个用户中具有控制权限的用户。识别具有控制权限的用户的方法可以参考上述实施例的阐述。
如果N个用户中存在具有控制权限的用户,电子设备根据具有控制权限的用户的图像识别用户的动作(即执行S1003)。如果在电子设备判断N个用户中是否存在具有控制权限的用户时,采用人脸识别技术确定N个用户中具有控制权限的用户,此时电子设备可以采用人体姿态检测算法识别具有控制权限的用户的动作。
电子设备判断具有控制权限的用户的动作与预设动作是否匹配(即执行S1004)。如果具有控制权限的用户的动作与预设动作相匹配,将具有控制权限的用户确定为具有控制权限的目标用户(即执行S1005)。电子设备根据具有控制权限的目标用户的动作控制电子设备的屏幕显示画面(即执行S1006)。如果该具有控制权限的目标用户的动作为释放控制权限的动作,电子设备释放控制权限,重新执行S1001。如果具有控制权限的用户的动作与预设动作不匹配,确定具有控制权限的用户不是具有控制权限的目标用户,重新执行S1001。如果在预设时长内,没有任何用户操控电子设备,或者,用户离开电子设备的摄像头的视角(FOV)的范围,电子设备自动释放控制权限。
在确定具有控制权限的用户后,再识别该一个具有控制权限的用户的动作,减少了判断用户的动作的时长,从而,以便于电子设备及时响应用户的操作。
可选的,如果在电子设备判断N个用户中是否存在具有控制权限的用户时,采用人体姿态检测算法识别N个用户的动作,如果N个用户中存在具有控制权限的用户,此时,无需再识别具有控制权限的用户的动作,可以根据具有控制权限的用户的动作与预设动作是否相匹配,确定具有控制权限的目标用户。
如果N个用户中不存在具有控制权限的用户,电子设备根据N个用户的图像识别N个用户各自的动作(即执行S1007),即电子设备采用人体姿态检测算法识别N个用户的动作。电子设备判断N个用户的动作与预设动作是否匹配(即执行S1008)。如果N个用户中存在与预设动作相匹配的动作的用户,将N个用户中与预设动作相匹配的动作的用户确定为目标用户,电子设备判断目标用户的人数是否大于1(即执行S1009)。如果N个用户中不存在与预设动作相匹配的动作的用户,则重新执行S1001。
If the number of target users is equal to 1, the electronic device assigns the control authority to that target user (that is, performs S1010), determines that target user as the target user with control authority (that is, performs S1005), and controls the screen display of the electronic device according to the action of the target user with control authority (that is, performs S1006). It can be understood that in this case the electronic device has not assigned control authority to any user, so it assigns the control authority to that target user. For a detailed description of assigning control authority, refer to the explanations in the foregoing embodiments.
If the number of target users is greater than 1, for example, there are M target users and none of them has control authority, the electronic device prompts the users to repeat the action and performs S1001 again. Optionally, the electronic device determines the user-indicated target user (that is, performs S1011). The electronic device assigns the control authority to that target user (that is, performs S1010), that is, assigns the control authority to the user-indicated target user, determines the user-indicated target user as the target user with control authority (that is, performs S1005), and controls the screen display of the electronic device according to the action of the target user with control authority (that is, performs S1006). It can be understood that the user-indicated target user is a user in the image currently captured by the camera of the electronic device. For an explanation of the user-indicated target user, refer to the descriptions in the foregoing embodiments; details are not repeated here.
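One pass of the decision flow of FIG. 10 (S1001 to S1011) can be sketched as a single function. This is a hedged illustration under assumed interfaces: `actions`, `preset_actions`, and `user_indicated` are placeholder inputs standing in for the pose-recognition and prompt steps, not APIs defined by this application.

```python
def interaction_step(users_in_image, authorized, actions, preset_actions,
                     user_indicated=None):
    """One pass of the FIG. 10 flow; returns (controlling_user, authorized).

    users_in_image: user ids detected in the captured image (S1001)
    authorized:     set of ids currently holding control authority (S1002)
    actions:        mapping id -> recognized action (S1003 / S1007)
    preset_actions: set of actions the device responds to (S1004 / S1008)
    user_indicated: id chosen by a user instruction when several match (S1011)
    """
    holders = [u for u in users_in_image if u in authorized]
    if holders:
        u = holders[0]
        # S1004/S1005: the holder is the target only if the action matches.
        if actions.get(u) in preset_actions:
            return u, authorized
        return None, authorized                      # back to S1001

    # S1007-S1009: no holder; collect users whose action matches a preset one.
    targets = [u for u in users_in_image if actions.get(u) in preset_actions]
    if not targets:
        return None, authorized                      # back to S1001
    if len(targets) == 1:
        chosen = targets[0]                          # S1010: assign authority
    elif user_indicated in targets:
        chosen = user_indicated                      # S1011: user instruction
    else:
        return None, authorized                      # prompt and retry (S1001)
    return chosen, authorized | {chosen}             # S1005/S1006
```

The caller would loop this function over captured frames, releasing entries from `authorized` on timeout or release actions as described above.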
In a possible design, as shown in (a) of FIG. 11, the menu displayed on the screen of the electronic device is a ring menu. As shown in (b) of FIG. 11, the menu displayed on the screen of the electronic device is a wheel menu. The screen of the electronic device displays a pointer, and the pointer points to an option in the menu. The electronic device can control the direction of the pointer displayed on the screen according to the swing angle of the target user's forearm, so that the pointer points to an option in the menu.
As shown in (a) of FIG. 12, the user is in a viewing state. As shown in (b) of FIG. 12, the user raises a forearm and makes a preset action for invoking the body-control function, to start the body-control function. As shown in (c) of FIG. 12, the user rotates the pointing angle of the forearm; the electronic device can map the rotation angle between the forearm position in (b) of FIG. 12 and that in (c) of FIG. 12 to the pointer direction in the wheel-style user interface (User Interface, UI), that is, the pointer moves from icon 1 to icon 3.
As shown in (d) of FIG. 12, the user makes a preset gesture, for example a grabbing action, and in response to the preset gesture the screen of the electronic device displays the picture of icon 3. As shown in (e) of FIG. 12, with the picture of icon 3 displayed, the user can repeat the operations of (a) to (d) of FIG. 12 to continue operating the device.
In some embodiments, an operation mode that uses the polar coordinate system as its basic operation dimension is combined with a wheel-style UI: the forearm serves as the polar axis, the elbow as the pole, and the pointing angle of the forearm as the basic operation dimension. This angle is mapped onto the wheel-style UI and combined with gesture recognition to implement fast and accurate human-computer interaction functions such as selection and confirmation. Unlike interaction mappings based mainly on the rectangular (Cartesian) coordinate system, in which the coordinate origin and coordinate range are difficult to define, this application uses polar coordinates as the basis of the mapping relationship. This solves the problem that, under a rectangular coordinate system, human body movement does not correspond naturally to screen coordinates: the coordinate origin (the elbow as the pole) and the coordinate range (0 to 360 degrees) can be defined naturally, so the mapping between body movement and the screen coordinate system is more natural, the spatial range required for body interaction is reduced, and the fatigue of body interaction is lowered. Moreover, the interaction mode combining body recognition and gesture recognition, as distinct from interaction using only hand information, increases the diversity and flexibility of interaction commands, supports more complex operations, and improves command efficiency.
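The polar mapping described above (elbow as the pole, forearm as the polar axis, the 0 to 360 degree range divided among wheel-menu sectors) can be illustrated with a small sketch. The keypoint format and the equal-sector layout are assumptions made for this example, not details specified by the application.

```python
import math

def forearm_angle(elbow, wrist):
    """Polar angle of the forearm in degrees [0, 360), with the elbow as the pole.

    Points are (x, y) pairs; in image coordinates where y points down,
    the caller may negate dy to keep angles counterclockwise.
    """
    dx = wrist[0] - elbow[0]
    dy = wrist[1] - elbow[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

def menu_index(angle_deg, n_options):
    """Map a forearm angle onto one of n_options equal wheel-menu sectors."""
    sector = 360.0 / n_options
    return int(angle_deg // sector) % n_options
```

With elbow and wrist keypoints from any pose detector, the angle between two frames (as in (b) and (c) of FIG. 12) directly selects a sector of the wheel menu, and a separate gesture (such as a grab) confirms the selection.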
It can be understood that, to implement the functions in the foregoing embodiments, the electronic device includes corresponding hardware structures and/or software modules for performing each function. A person skilled in the art should readily appreciate that, in combination with the units and method steps of the examples described in the embodiments disclosed in this application, this application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application scenario and design constraints of the technical solution.
FIG. 13 is a schematic structural diagram of a possible screen interaction apparatus of an electronic device according to an embodiment of this application. These screen interaction apparatuses can be used to implement the functions of the electronic device in the foregoing method embodiments, and can therefore also achieve the beneficial effects of the foregoing method embodiments. In the embodiments of this application, the screen interaction apparatus may be the electronic device shown in FIG. 1, or may be a module (such as a chip) applied to the electronic device.
As shown in FIG. 13, the screen interaction apparatus 1300 of an electronic device includes a processing unit 1310, a display unit 1320, and a storage unit 1330. The screen interaction apparatus 1300 is configured to implement the functions of the electronic device in the method embodiments shown in FIG. 3, FIG. 7, and FIG. 10. The storage unit 1330 is configured to store instructions executed by the processing unit 1310, input data required by the processing unit 1310 to run instructions, or data generated after the processing unit 1310 runs instructions. The processing unit 1310 is configured to obtain, through a camera, an image including N users, to implement the functions of the electronic device in the method embodiments shown in FIG. 3, FIG. 7, and FIG. 10. The display unit 1320 is configured to display the screen picture that the processing unit 1310 controls according to the action of the target user.
A more detailed description of the processing unit 1310 can be obtained directly from the related descriptions in the method embodiments shown in FIG. 3, FIG. 7, and FIG. 10; details are not repeated here. The processing unit 1310 can perform the functions of the processor 110 in the electronic device shown in FIG. 1.
It can be understood that the processor in the embodiments of this application may be a central processing unit (Central Processing Unit, CPU), or may be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The general-purpose processor may be a microprocessor or any conventional processor.
The method steps in the embodiments of this application may be implemented by hardware, or by a processor executing software instructions. The software instructions may consist of corresponding software modules, and the software modules may be stored in a random access memory (Random Access Memory, RAM), a flash memory, a read-only memory (Read-Only Memory, ROM), a programmable read-only memory (Programmable ROM, PROM), an erasable programmable read-only memory (Erasable PROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), a register, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium well known in the art. An exemplary storage medium is coupled to the processor, so that the processor can read information from and write information to the storage medium. Of course, the storage medium may also be a component of the processor. The processor and the storage medium may be located in an ASIC. In addition, the ASIC may be located in a network device or a terminal device. Of course, the processor and the storage medium may also exist as discrete components in a network device or a terminal device.
The foregoing embodiments may be implemented entirely or partially by software, hardware, firmware, or any combination thereof. When software is used, implementation may be entirely or partially in the form of a computer program product. The computer program product includes one or more computer programs or instructions. When the computer programs or instructions are loaded and executed on a computer, the procedures or functions described in the embodiments of this application are performed entirely or partially. The computer may be a general-purpose computer, a special-purpose computer, a computer network, a network device, user equipment, or another programmable apparatus. The computer programs or instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired or wireless manner. The computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device such as a server or a data center integrating one or more usable media. The usable medium may be a magnetic medium, for example, a floppy disk, a hard disk, or a magnetic tape; an optical medium, for example, a digital video disc (digital video disc, DVD); or a semiconductor medium, for example, a solid state drive (solid state drive, SSD).
In the various embodiments of this application, unless otherwise specified or logically conflicting, the terms and/or descriptions in different embodiments are consistent and may be referenced by one another, and the technical features in different embodiments may be combined to form new embodiments according to their inherent logical relationships.
In this application, "at least one" means one or more, and "a plurality of" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist. For example, "A and/or B" may indicate the following cases: A alone, both A and B, or B alone, where A and B may be singular or plural. In the text descriptions of this application, the character "/" generally indicates an "or" relationship between the associated objects; in the formulas of this application, the character "/" indicates a "division" relationship between the associated objects.
It can be understood that the various numerical designations in the embodiments of this application are used merely for ease of description and are not intended to limit the scope of the embodiments of this application. The sequence numbers of the foregoing processes do not imply an order of execution; the execution order of the processes should be determined by their functions and inherent logic.
Claims (23)
- A screen interaction method for an electronic device, characterized by comprising: obtaining an image comprising N users, N being an integer greater than or equal to 2; recognizing respective actions of the N users based on the image comprising the N users; determining a target user based on the respective actions of the N users, an action of the target user matching a preset action; and controlling a screen display of the electronic device according to the action of the target user.
- The method according to claim 1, characterized in that, if the target user comprises M users, controlling the screen display of the electronic device according to the action of the target user comprises: controlling the screen display of the electronic device according to the action of a target user with control authority among the M target users, the target user with control authority being a target user that has interacted with the electronic device within a preset duration, M being an integer, and 2≤M≤N.
- The method according to claim 1, characterized in that, if the target user comprises M users and none of the M users has control authority, controlling the screen display of the electronic device according to the action of the target user comprises: controlling the screen display of the electronic device according to the action of a target user indicated by a user instruction, M being an integer, and 2≤M≤N.
- The method according to claim 3, characterized in that, before controlling the screen display of the electronic device according to the action of the target user, the method further comprises: transferring control authority from a user with control authority to the target user indicated by the user instruction.
- The method according to claim 3, characterized in that, before controlling the screen display of the electronic device according to the action of the target user, the method further comprises: assigning control authority to the target user indicated by the user instruction.
- The method according to any one of claims 1 to 5, characterized in that the preset action comprises swinging a forearm, placing the left hand on the hip, placing the right hand on the hip, placing both hands on the hips, nodding, and making a fist.
- The method according to claim 6, characterized in that, if the preset action is a menu selection action, controlling the screen display of the electronic device according to the action of the target user comprises: controlling a direction of a pointer displayed on the screen of the electronic device according to a swing angle of the forearm of the target user, so that the pointer points to an option in the menu.
- The method according to claim 7, characterized in that the menu displayed on the screen of the electronic device is a ring menu or a wheel menu.
- A screen interaction method for an electronic device, characterized by comprising: obtaining an image comprising N users, N being an integer greater than or equal to 2; determining, based on the image comprising the N users, whether the N users comprise a user with control authority; if the N users comprise a user with control authority, determining the user with control authority as a target user with control authority, an action of the target user with control authority matching a preset action; if the N users do not comprise a user with control authority and the N users comprise M target users, determining one target user as the target user with control authority, an action of the target user matching the preset action, M being an integer, and 1≤M≤N; and controlling a screen display of the electronic device according to the action of the target user with control authority.
- The method according to claim 9, characterized in that M≥2, and determining one target user as the target user with control authority comprises: assigning control authority to the one target user indicated by a user instruction, and determining the one user-indicated target user as the target user with control authority.
- A screen interaction apparatus for an electronic device, characterized by comprising: a processing unit configured to obtain an image comprising N users, N being an integer greater than or equal to 2; the processing unit being further configured to recognize respective actions of the N users based on the image comprising the N users; the processing unit being further configured to determine a target user based on the respective actions of the N users, an action of the target user matching a preset action; and the processing unit being further configured to control a screen display of the electronic device according to the action of the target user.
- The apparatus according to claim 11, characterized in that, if the target user comprises M users, the processing unit is specifically configured to: control the screen display of the electronic device according to the action of a target user with control authority among the M target users, the target user with control authority being a target user that has interacted with the electronic device within a preset duration, M being an integer, and 2≤M≤N.
- The apparatus according to claim 11, characterized in that, if the target user comprises M users and none of the M users has control authority, the processing unit is specifically configured to: control the screen display of the electronic device according to the action of a target user indicated by a user instruction, M being an integer, and 2≤M≤N.
- The apparatus according to claim 13, characterized in that the processing unit is further configured to: transfer control authority from a user with control authority to the target user indicated by the user instruction.
- The apparatus according to claim 13, characterized in that the processing unit is further configured to: assign control authority to the target user indicated by the user instruction.
- The apparatus according to any one of claims 11 to 15, characterized in that the preset action comprises swinging a forearm, placing the left hand on the hip, placing the right hand on the hip, placing both hands on the hips, nodding, and making a fist.
- The apparatus according to claim 16, characterized in that, if the preset action is a menu selection action, the processing unit is specifically configured to: control a direction of a pointer displayed on the screen of the electronic device according to a swing angle of the forearm of the target user, so that the pointer points to an option in the menu.
- The apparatus according to claim 17, characterized in that the menu displayed on the screen of the electronic device is a ring menu or a wheel menu.
- A screen interaction apparatus for an electronic device, characterized by comprising: a processing unit configured to obtain an image comprising N users, N being an integer greater than or equal to 2; the processing unit being further configured to determine, based on the image comprising the N users, whether the N users comprise a user with control authority; the processing unit being further configured to, if the N users comprise a user with control authority, determine the user with control authority as a target user with control authority, an action of the target user with control authority matching a preset action; the processing unit being further configured to, if the N users do not comprise a user with control authority and the N users comprise M target users, determine one target user as the target user with control authority, an action of the target user matching the preset action, M being an integer, and 1≤M≤N; and the processing unit being further configured to control a screen display of the electronic device according to the action of the target user with control authority.
- The apparatus according to claim 19, characterized in that M≥2, and the processing unit is specifically configured to: assign control authority to the one target user indicated by a user instruction, and determine the one user-indicated target user as the target user with control authority.
- An electronic device, characterized in that the electronic device comprises: a processor, a memory, a display, and a camera; the processor is coupled to the display, the camera, and the memory; the memory is configured to store computer program code, the computer program code comprising computer software instructions; and when the computer software instructions are executed by the electronic device, the electronic device is caused to perform the screen interaction method for an electronic device according to any one of claims 1 to 8, or the screen interaction method for an electronic device according to any one of claims 9 to 10.
- A computer-readable storage medium, characterized by comprising computer software instructions, wherein when the computer software instructions run on an electronic device, the electronic device is caused to perform the screen interaction method for an electronic device according to any one of claims 1 to 8, or the screen interaction method for an electronic device according to any one of claims 9 to 10.
- A computer program product, characterized in that, when the computer program product runs on a computer, the computer is caused to perform the screen interaction method for an electronic device according to any one of claims 1 to 8, or the screen interaction method for an electronic device according to any one of claims 9 to 10.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP21841917.4A EP4170467A4 (en) | 2020-07-17 | 2021-07-14 | SCREEN INTERACTION METHOD AND APPARATUS FOR ELECTRONIC DEVICE |
| US18/154,090 US12126864B2 (en) | 2020-07-17 | 2023-01-13 | Screen interaction method and apparatus for electronic device |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010696580.8A CN113949936A (zh) | 2020-07-17 | 2020-07-17 | Screen interaction method and apparatus for electronic device |
| CN202010696580.8 | 2020-07-17 |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US18/154,090 Continuation US12126864B2 (en) | 2020-07-17 | 2023-01-13 | Screen interaction method and apparatus for electronic device |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2022012602A1 (zh) | 2022-01-20 |
Family
ID=79327193
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2021/106352 Ceased WO2022012602A1 (zh) | 2020-07-17 | 2021-07-14 | Screen interaction method and apparatus for electronic device |
Country Status (4)
| Country | Link |
|---|---|
| US (1) | US12126864B2 (zh) |
| EP (1) | EP4170467A4 (zh) |
| CN (1) | CN113949936A (zh) |
| WO (1) | WO2022012602A1 (zh) |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN117435058B (zh) * | 2023-12-21 | 2024-03-29 | 北京赛凡策划有限公司 | Interaction control method and system for a smart exhibition hall |
| US20250272165A1 (en) * | 2024-02-28 | 2025-08-28 | Zapier, Inc. | Action presets for intent driving action mapping |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102375542A (zh) * | 2011-10-27 | 2012-03-14 | TCL Corporation | Method for controlling a television with body movements, and television remote control apparatus |
| US20130254700A1 (en) * | 2012-03-21 | 2013-09-26 | International Business Machines Corporation | Force-based contextualizing of multiple pages for electronic book reader |
| CN104333793A (zh) * | 2014-10-17 | 2015-02-04 | Baoji University of Arts and Sciences | Gesture remote control system |
| CN104750252A (zh) * | 2015-03-09 | 2015-07-01 | Lenovo (Beijing) Co., Ltd. | Information processing method and electronic device |
| CN106682482A (zh) * | 2016-12-30 | 2017-05-17 | Nanjing University of Aeronautics and Astronautics | User identity verification and synchronized application wake-up method and system for touchscreen terminals |
| CN108415574A (zh) * | 2018-03-29 | 2018-08-17 | Beijing Microlive Vision Technology Co., Ltd. | Object data acquisition method and apparatus, readable storage medium, and human-computer interaction apparatus |
Family Cites Families (27)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6982780B2 (en) * | 2001-03-02 | 2006-01-03 | Technicolor Digital Cinema, Llc | Apparatus and method for building a playlist |
| US20040088737A1 (en) * | 2002-11-04 | 2004-05-06 | Donlan Brian Joseph | Method and apparatus for removing client from an interactive TV network |
| BRPI0607459A2 (pt) * | 2005-02-15 | 2009-09-08 | Thomson Licensing | sistema de gerenciamento de chaves para cinema digital |
| BRPI0618019A2 (pt) * | 2005-10-28 | 2011-08-16 | Directv Group Inc | método para seletivamente exibir sinal de televisão, infra-estrutura, sistema para seletivamente exibir programa de vìdeo |
| US20070294710A1 (en) * | 2006-06-19 | 2007-12-20 | Alps Automotive Inc. | Simple bluetooth software development kit |
| US20080114880A1 (en) * | 2006-11-14 | 2008-05-15 | Fabrice Jogand-Coulomb | System for connecting to a network location associated with content |
| US7751807B2 (en) * | 2007-02-12 | 2010-07-06 | Oomble, Inc. | Method and system for a hosted mobile management service architecture |
| US20080270462A1 (en) * | 2007-04-24 | 2008-10-30 | Interse A/S | System and Method of Uniformly Classifying Information Objects with Metadata Across Heterogeneous Data Stores |
| CN101855648B (zh) * | 2007-09-12 | 2017-11-17 | 索尼公司 | 开放市场内容分发 |
| US20090133090A1 (en) * | 2007-11-19 | 2009-05-21 | Verizon Data Services Inc. | Method and system of providing content management for a set-top box |
| US20100138298A1 (en) * | 2008-04-02 | 2010-06-03 | William Fitzgerald | System for advertising integration with auxiliary interface |
| US9258326B2 (en) * | 2008-04-02 | 2016-02-09 | Yougetitback Limited | API for auxiliary interface |
| US9244533B2 (en) * | 2009-12-17 | 2016-01-26 | Microsoft Technology Licensing, Llc | Camera navigation for presentations |
| US9213890B2 (en) | 2010-09-17 | 2015-12-15 | Sony Corporation | Gesture recognition system for TV control |
| US20130154913A1 (en) * | 2010-12-16 | 2013-06-20 | Siemens Corporation | Systems and methods for a gaze and gesture interface |
| JP6243112B2 (ja) * | 2011-12-09 | 2017-12-06 | Sony Corporation | Information processing apparatus, information processing method, and recording medium |
| KR101885295B1 (ko) * | 2011-12-26 | 2018-09-11 | LG Electronics Inc. | Electronic device and control method thereof |
| US20140373041A1 (en) * | 2012-01-05 | 2014-12-18 | Thomson Licensing | Method for media content delivery using video and/or audio on demand assets |
| US8544724B2 (en) * | 2012-01-06 | 2013-10-01 | Seachange International, Inc. | Systems and methods for associating a mobile electronic device with a preexisting subscriber account |
| AU2013205613B2 (en) * | 2012-05-04 | 2017-12-21 | Samsung Electronics Co., Ltd. | Terminal and method for controlling the same based on spatial interaction |
| JP5966596B2 (ja) * | 2012-05-16 | 2016-08-10 | Ricoh Co., Ltd. | Information processing apparatus, projection system, and information processing program |
| US9606647B1 (en) * | 2012-07-24 | 2017-03-28 | Palantir Technologies, Inc. | Gesture management system |
| US9195368B2 (en) * | 2012-09-13 | 2015-11-24 | Google Inc. | Providing radial menus with touchscreens |
| JP6325659B2 (ja) * | 2014-05-08 | 2018-05-16 | NEC Solution Innovators, Ltd. | Operation screen display apparatus, operation screen display method, and program |
| CN104281265B (zh) * | 2014-10-14 | 2017-06-16 | BOE Technology Group Co., Ltd. | Application control method and apparatus, and electronic device |
| JP6791994B2 (ja) * | 2017-02-02 | 2020-11-25 | Maxell, Ltd. | Display device |
| CN109542219B (zh) * | 2018-10-22 | 2021-07-30 | 广东精标科技股份有限公司 | Gesture interaction system and method for smart classrooms |
- 2020-07-17: CN application CN202010696580.8A (published as CN113949936A, pending)
- 2021-07-14: EP application EP21841917.4A (published as EP4170467A4, pending)
- 2021-07-14: WO application PCT/CN2021/106352 (published as WO2022012602A1, ceased)
- 2023-01-13: US application US18/154,090 (published as US12126864B2, active)
Non-Patent Citations (1)
| Title |
|---|
| See also references of EP4170467A4 |
Also Published As
| Publication number | Publication date |
|---|---|
| EP4170467A1 (en) | 2023-04-26 |
| CN113949936A (zh) | 2022-01-18 |
| US12126864B2 (en) | 2024-10-22 |
| EP4170467A4 (en) | 2023-12-20 |
| US20230171467A1 (en) | 2023-06-01 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2022179376A1 (zh) | Gesture control method and apparatus, electronic device, and storage medium | |
| KR102849352B1 (ko) | Text input method and apparatus based on a virtual keyboard | |
| EP3991814A1 (en) | Intelligent speech playing method and device | |
| CN105654952A (zh) | Electronic device, server, and method for outputting speech | |
| JP2019532543A (ja) | Control system, and control processing method and apparatus | |
| WO2021036714A1 (zh) | Voice-controlled split-screen display method and electronic device | |
| CN113325948B (zh) | Air gesture adjustment method and terminal | |
| WO2022188551A1 (zh) | Information processing method and apparatus, master control device, and controlled device | |
| WO2022193989A1 (zh) | Operation method and apparatus of electronic device, and electronic device | |
| US12126864B2 (en) | Screen interaction method and apparatus for electronic device | |
| WO2022095983A1 (zh) | Method for preventing gesture misrecognition and electronic device | |
| CN108881721B (zh) | Display method and terminal | |
| JP7341324B2 (ja) | Target user locking method and electronic device | |
| WO2021036562A1 (zh) | Prompting method for fitness training and electronic device | |
| CN112204943B (zh) | Photographing method, device, and system, and computer-readable storage medium | |
| WO2023280020A1 (zh) | System mode switching method, electronic device, and computer-readable storage medium | |
| WO2021227525A1 (zh) | Method for gamepad button mapping and electronic device | |
| CN103376838A (zh) | Automatic control device | |
| CN116052235A (zh) | Gaze point estimation method and electronic device | |
| CN115847419A (zh) | Apparatus, method, and computing device for controlling a robot | |
| WO2023174214A1 (zh) | Camera-assembly-based universal device control method, device, and system | |
| CN109618097A (zh) | Assisted photographing method and terminal device | |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21841917 Country of ref document: EP Kind code of ref document: A1 |
|
| ENP | Entry into the national phase |
Ref document number: 2021841917 Country of ref document: EP Effective date: 20230118 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |