WO2020237617A1 - Screen control method, device and apparatus, and storage medium - Google Patents
- Publication number: WO2020237617A1
- Application: PCT/CN2019/089489 (CN2019089489W)
- Authority: WIPO (PCT)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
Description
- This application relates to the field of screen control technology, and in particular to a screen control method, apparatus, device, and storage medium.
- In the prior art, the screen is usually controlled through the gestures of a single user: the user interacts with the large screen through gestures, and the large screen responds to them correspondingly, so as to achieve the effect of controlling the screen through gestures.
- Because the large screen can only be controlled by a single user's gestures, screen utilization is low.
- The embodiments of the present application provide a screen control method, apparatus, device, and storage medium, which implement split-screen control of a screen by multiple screen control objects, not only improving screen utilization but also enhancing user experience.
- In a first aspect, this application provides a screen control method.
- The display sub-screen corresponding to each control object is determined according to the characteristic information of that control object, and each display sub-screen is controlled according to the action information of its corresponding control object, thereby realizing split-screen control, which not only improves screen utilization but also enhances user experience.
- In an optional implementation, the screen control method provided in this embodiment of the present application further includes:
- activating the multi-gesture control mode based on a preset activation action. Only a control object that presents the preset activation action can become one of the controllers of the multiple display sub-screens, which improves the interference resistance of the split screen.
- For example, the control object is the user's hand. If there are four people in front of the screen but only three want to participate in split-screen control, only the hands that present the preset activation action can participate, preventing the fourth person's hand from interfering with the split-screen control.
- The screen control method provided in the embodiment of the present application further includes:
- establishing a first corresponding relationship, where the first corresponding relationship includes a one-to-one correspondence between the feature information of the control objects and the display sub-screens. Determining the display sub-screens corresponding to the N screen control objects according to their respective feature information then includes: determining the display sub-screens corresponding to the N screen control objects according to the first correspondence and the respective feature information of the N screen control objects.
- A corresponding relationship between the feature information of the multiple screen control objects satisfying the preset activation action and the multiple display sub-screens is thus established, and the display sub-screen corresponding to each control object is determined according to this correspondence and the object's feature information. This realizes the corresponding control of each display sub-screen by its control object and improves the accuracy of that control.
- In an optional implementation, controlling the respective display sub-screens corresponding to the N screen control objects according to the respective action information of the N screen control objects includes:
- determining, from a second correspondence, the target control operation that matches the action information of a target control object, where the target control object is any one of the N control objects and the second correspondence includes a one-to-one correspondence between multiple pieces of action information and multiple control operations; and controlling the display sub-screen corresponding to the target control object according to the target control operation.
- In a second aspect, an embodiment of the present application provides a screen control device, including:
- a first acquisition module, configured to acquire the respective feature information of the N screen control objects and the respective action information of the N screen control objects in the first image, where each screen control object is a body part of the user in the first image and N is a positive integer greater than or equal to 2;
- a first determining module, configured to determine the display sub-screen corresponding to each of the N screen control objects according to the respective characteristic information of the N screen control objects, where the display sub-screen is a partial area of the display screen; and
- a control module, configured to control the display sub-screens corresponding to the N screen control objects according to the respective action information of the N screen control objects.
- The screen control device provided in the embodiment of the present application further includes:
- a second acquisition module, configured to acquire a second image of the user; a second determining module, configured to determine the number N of control objects in the second image that satisfy the preset activation action, where the preset activation action is used to activate the multi-gesture control mode; and a segmentation module, configured to obtain the N display sub-screens presented on the display screen.
- The screen control device provided in the embodiment of the present application further includes:
- an establishment module, configured to establish a first corresponding relationship between the feature information of the N screen control objects that satisfy the preset activation action and the N display sub-screens, where the first corresponding relationship includes a one-to-one correspondence between the feature information of the control objects and the display sub-screens.
- The first determining module is specifically configured to: determine the display sub-screens corresponding to each of the N screen control objects according to the first correspondence and the respective characteristic information of the N screen control objects.
- The control module is specifically configured to: determine, from the second correspondence, the target control operation that matches the action information of the target control object, where the target control object is any one of the N control objects and the second correspondence includes a one-to-one correspondence between multiple pieces of action information and multiple control operations; and control the display sub-screen corresponding to the target control object according to the target control operation.
- In a third aspect, an embodiment of the present application provides a device, including:
- a processor and a transmission interface, where the transmission interface is configured to receive the first image of the user obtained by a camera, and the processor is configured to call software instructions stored in a memory to perform the steps of the method of the first aspect.
- The transmission interface is also configured to receive the second image of the user obtained by the camera; the processor is further configured to: determine the number N of control objects in the second image that satisfy the preset activation action, where the preset activation action is used to activate the multi-gesture control mode; and obtain the N display sub-screens presented on the display screen.
- The processor is further configured to establish the first corresponding relationship, which includes a one-to-one correspondence between the feature information of the control objects and the display sub-screens; the processor is specifically configured to: determine the display sub-screens corresponding to each of the N screen control objects according to the first correspondence and the respective characteristic information of the N screen control objects.
- The processor is further configured to: determine, from the second correspondence, the target control operation that matches the action information of the target control object, where the target control object is any one of the N control objects and the second correspondence includes a one-to-one correspondence between multiple pieces of action information and multiple control operations; and control the display sub-screen corresponding to the target control object according to the target control operation.
- In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium that stores instructions.
- When the instructions run on a computer or a processor, the computer or the processor executes the screen control method provided in the first aspect or in the optional implementations of the first aspect.
- In a fifth aspect, the present application provides a computer program product containing instructions which, when run on a computer or processor, cause the computer or processor to execute the screen control method provided in the first aspect or in the optional implementations of the first aspect.
- With the screen control method, apparatus, device, and storage medium provided by the embodiments of the present application, the respective characteristic information of the N screen control objects and the respective action information of the N screen control objects in the first image are obtained, where each screen control object is a body part of the user in the first image and N is a positive integer greater than or equal to 2; then, according to the respective characteristic information of the N screen control objects, the display sub-screen corresponding to each of the N screen control objects is determined, where the display sub-screen is a partial area of the display screen; finally, according to the respective action information of the N screen control objects, the display sub-screens corresponding to the N screen control objects are controlled.
- Split-screen control of the screen is thereby realized, which not only improves screen utilization but also enhances user experience.
- FIG. 1 is a schematic diagram of an exemplary application scenario provided by an embodiment of the present application;
- FIG. 2 is a schematic diagram of another exemplary application scenario provided by an embodiment of the present application;
- FIG. 3 is a flowchart of a screen control method provided by an embodiment of the present application;
- FIG. 4 is an exemplary neural network application architecture diagram provided by an embodiment of the present application;
- FIG. 5 is a flowchart of a screen control method provided by another embodiment of the present application;
- FIG. 6 is a flowchart of a screen control method provided by another embodiment of the present application;
- FIG. 7 is a schematic structural diagram of a screen control device provided by an embodiment of the present application;
- FIG. 8A is a schematic structural diagram of a terminal device provided by an embodiment of the present application;
- FIG. 8B is a schematic structural diagram of a terminal device provided by another embodiment of the present application;
- FIG. 9 is a schematic diagram of the hardware architecture of an exemplary screen control device provided by an embodiment of the present application;
- FIG. 10 is a schematic structural diagram of a terminal device provided by another embodiment of the present application.
- Although the terms first, second, third, fourth, etc. may be used to describe user images in the embodiments of the present invention, the user images should not be limited by these terms. These terms are only used to distinguish user images from each other.
- For example, a first user image may also be referred to as a second user image, and similarly, a second user image may also be referred to as a first user image.
- FIG. 1 is a schematic diagram of an exemplary application scenario provided by an embodiment of the present application.
- As shown in FIG. 1, a television 10 is connected to a camera 11 through a universal serial bus or other high-speed bus 12.
- When multiple users watch the television, each user may want to watch a different TV program, or multiple users may participate in a video game together; it is then necessary to split the TV screen according to the characteristic information of each user, so that each user can independently control the display sub-screen corresponding to his or her characteristic information.
- For example, as shown in FIG. 1, an image or video of the users opposite the television 10 is captured by the camera 11, and after processing and judgment, the display screen of the television 10 is divided into a display sub-screen 1 and a display sub-screen 2.
- Display sub-screen 1 and display sub-screen 2 can display different playback content; user images can be continuously obtained through the camera 11 and then processed so as to control display sub-screen 1 and display sub-screen 2 respectively. The embodiment of this application is not limited to this.
- The television 10 may further include a video signal source interface 13, a wired or wireless network interface module 14, or a peripheral device interface 15, which is not limited in the embodiment of the present application.
- FIG. 2 is a schematic diagram of another exemplary application scenario provided by an embodiment of this application.
- As shown in FIG. 2, the display device may include a central processing unit, a system memory, and an edge artificial intelligence processor.
- The central processing unit is connected to the system memory and can be used to execute the screen control method provided in the embodiments of this application; the central processing unit is also connected to the edge artificial intelligence processor core, which can be used to implement the image processing part of the screen control method. The edge artificial intelligence processor core is connected to an image memory, which can be used to store the images acquired by the camera, and the camera is connected to the display device through a universal serial bus or other high-speed bus.
- The embodiments of the present application provide a screen control method, apparatus, device, and storage medium.
- FIG. 3 is a flowchart of a method for controlling a screen provided by an embodiment of the present application.
- The method can be executed by the screen control device provided by an embodiment of the present application.
- The screen control device may be part or all of a terminal device, for example a processor in a terminal device.
- The following takes the terminal device as the execution body as an example to introduce the screen control method provided in the embodiment of the present application.
- As shown in FIG. 3, the screen control method provided by the embodiment of the present application may include:
- Step S101: Obtain the respective feature information of the N screen control objects and the respective action information of the N screen control objects in the first image.
- The screen control objects are body parts of the user in the first image, and N is a positive integer greater than or equal to 2.
- The first image of the user can be obtained through a camera or an image sensor.
- The camera can be built into the terminal device or set independently of it and connected to the terminal device through a wired or wireless connection; there are no restrictions on the installation location, as long as the user's first image can be obtained.
- The camera collects the first image of the user by means of video capture or image capture.
- The embodiment of the present application does not limit the specific manner of obtaining the first image of the user through the camera.
- Receiving, through the transmission interface of the processor chip, the user's image obtained by the camera or the image sensor can also be regarded as obtaining the user's first image; that is, the processor chip obtains the user's first image through the transmission interface.
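- As a minimal sketch of this acquisition step, assuming OpenCV and a camera exposed as video device 0 (the patent leaves the capture mechanism open, so the library and device index here are illustrative assumptions):

```python
import cv2

def acquire_first_image(device_index: int = 0):
    """Grab one frame from the camera to serve as the user's first image."""
    cap = cv2.VideoCapture(device_index)
    if not cap.isOpened():
        raise RuntimeError("camera not available")
    try:
        ok, frame = cap.read()  # one frame of the video stream is the first image
        if not ok:
            raise RuntimeError("failed to read a frame")
        return frame  # BGR image as a (H, W, 3) numpy array
    finally:
        cap.release()
```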
- The embodiment of the present application does not limit which specific body part of the user serves as the screen control object.
- The judgment of the user's body part in the first image can be realized by setting a preset body part.
- In a possible implementation, the preset body part may be a human hand.
- In this case, the N screen control objects are N human hands in the first image, and the feature information of the N screen control objects is the hand feature information of the N human hands in the first image.
- The hand feature information of a human hand includes, but is not limited to, handprints, hand shape, hand size, or hand skin color.
- The motion information of each of the N screen control objects is the hand motion information of the N human hands in the first image.
- In another possible implementation, the preset body part may be a human face.
- In this case, the N screen control objects are the N faces in the first image, and the feature information of the N screen control objects is the facial feature information of the N faces in the first image.
- The respective action information of the N screen control objects is the facial action information of the N faces in the first image, such as facial expressions; the embodiment of the present application is not limited thereto.
- The feature information of each of the N screen control objects is used to distinguish the N screen control objects from one another.
- If the screen control object is a human hand, the hand feature information is used to distinguish different human hands; if the screen control object is a face, the facial feature information is used to distinguish different faces.
- The embodiment of this application does not limit the specific implementation of how to obtain the respective characteristic information of the at least one screen control object and the respective action information of the at least one screen control object according to the first image.
- In a possible implementation, this can be done by machine learning, for example with a Convolutional Neural Network (CNN) model.
- FIG. 4 is an exemplary neural network application architecture diagram provided by an embodiment of the present application.
- As shown in FIG. 4, the exemplary neural network application architecture may include an application program entry 41, a model external interface 42, a deep learning structure 43, a device driver 44, a central processing unit 45, a graphics processor 46, a network processor 47, and a digital processor 48.
- The application program entry 41 is used to select the neural network model, the model external interface 42 is used to access the selected model, and the deep learning structure 43 is used to process the input first user image through the neural network model.
- The deep learning structure 43 includes an environment manager 431, a model manager 432, a task scheduler 433, a task executor 434, and an event manager 435.
- The environment manager 431 is used to control the startup and shutdown of the device-related environment, the model manager 432 is used to load and unload the neural network model, the task scheduler 433 determines the sequence in which neural network model tasks are scheduled, the task executor 434 is responsible for executing the tasks of the neural network model, and the event manager 435 is responsible for notifications of various events.
- The neural network application architecture provided by the embodiments of the present application is not limited to this.
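- The following is a hypothetical sketch of how step S101 could look in code: a CNN-based detector returns, for each control object (here, each human hand) in the first image, its feature information, its action information, and its coordinates. The `detector.predict` call and the field names are illustrative stand-ins, not an API defined by the patent:

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class ControlObject:
    features: np.ndarray  # embedding that distinguishes this hand (shape, size, skin color, ...)
    action: str           # recognized gesture label, e.g. "OK" or "single_finger_down"
    bbox: tuple           # (x, y, w, h) coordinates of the hand in the first image

def detect_control_objects(first_image: np.ndarray, detector) -> List[ControlObject]:
    """Run the CNN once over the first image and collect the N control objects."""
    detections = detector.predict(first_image)  # assumed to yield one dict per hand
    objects = [ControlObject(d["embedding"], d["gesture"], d["bbox"]) for d in detections]
    if len(objects) < 2:
        raise ValueError("multi-gesture split-screen control expects N >= 2 control objects")
    return objects
```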
- Step S102: Determine a display sub-screen corresponding to each of the N screen control objects according to the respective characteristic information of the N screen control objects, where the display sub-screen is a partial area of the display screen.
- After the respective characteristic information of the N screen control objects is acquired, the display sub-screen corresponding to each of the N screen control objects is determined.
- For example, 4 control objects and the characteristic information of the 4 control objects are obtained.
- The display screen is divided into 4 display sub-screens, and each display sub-screen is bound to the characteristic information of one control object, so that each control object can only control the display sub-screen bound to its own characteristic information.
- In a possible implementation, preset feature information can be set for each display sub-screen, and the display sub-screen corresponding to each of the N control objects can then be determined according to the respective feature information of the N control objects and the preset feature information.
- For example, the screen is divided into 4 display sub-screens, each display sub-screen has one-to-one corresponding preset feature information, the feature information of each control object is matched against the preset feature information, and the display sub-screen corresponding to each control object is determined according to the matching result.
- The embodiment of the present application does not limit the specific implementation of how to determine the display sub-screen corresponding to each of the N screen control objects according to their respective characteristic information.
- In a possible implementation, the display sub-screen is a partial area of the display screen, for example when the display screen is divided into different display sub-screens. In another possible implementation, the display sub-screen is the whole area of the display screen, for example when the display screen works in a multi-channel display mode, which can output multiple different pictures on the same display screen at the same time and has multi-channel audio output, so that users wearing different glasses and headphones can separately watch two different programs. The embodiment of the present application does not limit the area of the display sub-screen or the splitting method.
- Determining the display sub-screens corresponding to the N screen control objects can be implemented through the identifications of the display sub-screens and of the screen control objects.
- Each display sub-screen is identified according to its preset feature information.
- The embodiment of the present application does not limit the specific identification method of the display sub-screens; codes, numbers, symbols, text, and so on may be used. For example, display sub-screen 1 corresponds to preset feature information 1, display sub-screen 2 corresponds to preset feature information 2, and so on.
- The feature information of the N screen control objects in the first image is detected, and the N screen control objects are identified according to their feature information.
- The embodiment of this application does not limit the specific identification method of the screen control objects; for example, if the feature information of a screen control object matches preset feature information 1, that screen control object is identified as screen control object 1, and so on.
- The embodiment of this application does not limit the specific implementation of how to identify the screen control objects.
- In a possible implementation, the feature information of the N screen control objects in the first image can be detected through the CNN model, and the N screen control objects can be identified according to the detected feature information.
- The coordinate information of each control object in the first image can also be detected through the CNN; the image region of each control object can then be cropped from the original image according to its coordinate information and processed as a separate image, the characteristic information of the control object in each separate image is detected, and each separate image is identified.
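- A minimal sketch of this cropping step, assuming each detection carries (x, y, w, h) pixel coordinates in the first image (the coordinate convention is an assumption; the patent only requires that per-object coordinates are available):

```python
import numpy as np

def crop_control_objects(first_image: np.ndarray, bboxes):
    """Cut each control object's region out of the first image as a separate image."""
    crops = []
    h_img, w_img = first_image.shape[:2]
    for (x, y, w, h) in bboxes:
        # clamp the box to the image bounds before slicing
        x0, y0 = max(0, x), max(0, y)
        x1, y1 = min(w_img, x + w), min(h_img, y + h)
        crops.append(first_image[y0:y1, x0:x1].copy())
    return crops
```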
- The display sub-screen corresponding to each screen control object is then determined according to the correspondence between the identification of the screen control object and the identification of the display sub-screen.
- For example, the first image includes 3 screen control objects, identified as screen control object 1, screen control object 2, and screen control object 3.
- The screen is divided into 3 display sub-screens, identified as display sub-screen 1, display sub-screen 2, and display sub-screen 3.
- The display sub-screen corresponding to screen control object 1 is display sub-screen 1, the display sub-screen corresponding to screen control object 2 is display sub-screen 2, and the display sub-screen corresponding to screen control object 3 is display sub-screen 3; the embodiment of this application is not limited to this.
- Step S103: Control the display sub-screens corresponding to the N screen control objects according to the respective action information of the N screen control objects.
- The display sub-screen corresponding to each of the N screen control objects is controlled according to that object's action information.
- Continuing the example above, display sub-screen 1 is controlled according to the action information of screen control object 1, display sub-screen 2 is controlled according to the action information of screen control object 2, and display sub-screen 3 is controlled according to the action information of screen control object 3.
- The embodiment of the present application does not limit how the display sub-screen corresponding to a screen control object is controlled according to that object's action information.
- In a possible implementation, controlling the display sub-screens corresponding to the N screen control objects according to their respective motion information includes:
- determining, from the second correspondence, the target control operation that matches the action information of the target control object, where the target control object is any one of the N control objects and the second correspondence includes a one-to-one correspondence between multiple pieces of action information and multiple control operations; and controlling the display sub-screen corresponding to the target control object according to the target control operation.
- The second correspondence includes a one-to-one correspondence between multiple pieces of action information and multiple screen control operations, where a piece of action information serves as a control instruction to the screen and a screen control operation specifies the concrete way the display sub-screen is controlled. A second correspondence between preset action information and screen control operations is established in advance.
- The embodiment of this application does not limit the specific relationship between the multiple pieces of action information and the multiple screen control operations, as long as the screen control operation corresponding to a piece of action information can be performed according to that action information. For example, when the action information is the gesture "OK", the corresponding screen control operation is "OK" (confirm); when the action information is the gesture "single finger down", the corresponding screen control operation is "move the selection box down"; when the action information is the gesture "thumbs up", the corresponding screen control operation is "return"; and so on.
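- A minimal sketch of such a second correspondence, using the example gestures from the text; the label strings and the lookup rule are illustrative assumptions:

```python
# One-to-one mapping from action information (gesture labels) to screen control operations.
SECOND_CORRESPONDENCE = {
    "OK": "confirm",                         # gesture "OK" -> operation "OK"
    "single_finger_down": "selection_down",  # gesture "single finger down" -> move selection box down
    "thumbs_up": "return",                   # gesture "thumbs up" -> operation "return"
}

def match_control_operation(action: str):
    """Return the matching screen control operation, or None for invalid action information."""
    return SECOND_CORRESPONDENCE.get(action)
```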
- A target screen control operation matching the action information of the target screen control object is determined from the multiple pieces of action information in the second correspondence, and the target screen control object is any one of the N screen control objects.
- In a possible implementation, a neural network model can be used to match the action information of the target screen control object against the multiple pieces of action information in the second correspondence. If the action information of the target screen control object matches none of them, it is invalid action information; if it matches any piece of action information in the second correspondence, the screen control operation corresponding to that matching action information is determined to be the target screen control operation.
- The display sub-screen corresponding to the target screen control object is controlled according to the target screen control operation.
- That is, after the target screen control operation of the target screen control object is determined according to the second correspondence, the display sub-screen corresponding to the target screen control object is controlled according to that operation.
- In this way, the display sub-screen corresponding to each control object is controlled according to that object's action information and characteristic information.
- With the screen control method provided by this embodiment, the first image of the user is obtained, and the respective characteristic information and action information of the N screen control objects are obtained from the first image.
- The feature information is used to distinguish the N screen control objects; the display sub-screen corresponding to each of the N screen control objects is then determined according to the respective feature information, where the display sub-screen is part or all of the area of the display screen; finally, the display sub-screens corresponding to the N screen control objects are controlled according to the objects' respective action information.
- Flexible split-screen control is thereby realized, which improves screen utilization and enhances user experience.
- FIG. 5 is a flowchart of a screen control method provided by another embodiment of this application.
- The method can be executed by the screen control device provided in this embodiment of the application.
- The screen control device can be part or all of a terminal device.
- The following takes the terminal device as the execution subject as an example to introduce the screen control method provided in the embodiment of the present application.
- As shown in FIG. 5, the screen control method provided in the embodiment of the present application may further include:
- Step S201: Acquire a second image of the user.
- For the manner of obtaining the second image of the user, refer to the description of obtaining the first image of the user in step S101; it is not repeated in this embodiment of the present application.
- The second image includes the screen control objects that satisfy the preset activation action.
- In a possible implementation, the camera may be switched on when preparing to obtain the second image of the user; this is not limited in the embodiment of the present application.
- Step S202: Determine the number N of control objects in the second image that satisfy the preset activation action, where the preset activation action is used to activate the multi-gesture control mode.
- Invalid control objects can be filtered out by means of the preset activation action, so that the number of display sub-screens is judged accurately.
- The embodiment of the application does not limit the specific form of the preset activation action.
- In a possible implementation, if the screen control object is a human hand, the preset activation action may be a preset gesture, and the number of human hands in the second image that present the preset gesture is determined to be N; in another possible implementation, if the screen control object is a face, the preset activation action may be a preset facial expression, and the number of faces in the second image that present the preset facial expression is determined to be N.
- The embodiment of the present application does not limit the manner of determining the number N of screen control objects that satisfy the preset activation action in the second image.
- In a possible implementation, multiple screen control objects and their action information are obtained from the second image, and it is then determined whether the action information of each object satisfies the preset activation action, so as to determine the number N of control objects in the second image that satisfy the preset activation action; alternatively, the second image may be detected directly to determine the number N of screen control objects that satisfy the preset activation action.
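- A minimal sketch of step S202, reusing the illustrative detector from the earlier sketch; the activation gesture label "palm_open" is an assumption, since the patent does not fix a particular activation action:

```python
def count_activated_objects(second_image, detector, activation_gesture: str = "palm_open") -> int:
    """Count the control objects in the second image presenting the preset activation action."""
    detections = detector.predict(second_image)  # assumed to yield one dict per detected hand
    return sum(1 for d in detections if d["gesture"] == activation_gesture)
```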
- Step S203: Present N display sub-screens on the display screen.
- After the number N is determined, N display sub-screens are presented on the display screen.
- The embodiment of the present application does not limit the specific implementation of how the N display sub-screens are presented on the display screen.
- In a possible implementation, the display screen is divided into N display sub-screens.
- The embodiment of the present application does not limit the specific implementation of dividing the display screen into N display sub-screens.
- When the display screen is divided into N display sub-screens, it can be divided equally, or the sizes and positional relationships of the N display sub-screens can be set according to user needs.
- The embodiment of the present application does not limit the size and position of each display sub-screen.
- In another possible implementation, the display screen can be divided into N channels, and different images can be displayed through the multiple channels.
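- A minimal sketch of the equal-division case, assuming a horizontal side-by-side split; the screen resolution and split direction are illustrative choices:

```python
def split_display(width: int, height: int, n: int):
    """Divide a display of the given size into N equal side-by-side sub-screens.

    Returns a list of (x, y, w, h) rectangles, one per display sub-screen.
    """
    sub_w = width // n
    return [(i * sub_w, 0, sub_w, height) for i in range(n)]

# e.g. a 3840x2160 display split for N = 4 control objects
print(split_display(3840, 2160, 4))
# [(0, 0, 960, 2160), (960, 0, 960, 2160), (1920, 0, 960, 2160), (2880, 0, 960, 2160)]
```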
- In this way, multiple display sub-screens are presented on the display screen, realizing screen segmentation, and the multi-gesture screen control mode is turned on.
- Before the first image of the user is acquired, whether the display screen can be split-screen controlled can be determined by detecting whether the multi-gesture screen control mode is turned on. If the multi-gesture screen control mode is not enabled, the user is required to start it with the preset activation action before performing split-screen control of the display screen, which improves the efficiency of the user's split-screen control.
- FIG. 6 is a flowchart of a screen control method provided by another embodiment of the present application.
- The method can be executed by the screen control device provided in the embodiment of the present application; the screen control device may be part or all of a terminal device, for example a processor in a terminal device.
- The following takes the terminal device as the execution subject as an example to introduce the screen control method provided in the embodiment of the present application.
- As shown in FIG. 6, the screen control method provided in the embodiment of the present application may further include:
- Step S301: Establish a first corresponding relationship between the feature information of the N screen control objects that satisfy the preset activation action and the N display sub-screens, where the first corresponding relationship includes a one-to-one correspondence between the feature information of the control objects and the display sub-screens.
- That is, a first correspondence between the feature information of the screen control objects and the display sub-screens is established, and the first correspondence includes a one-to-one correspondence between the feature information of the screen control objects and the display sub-screens.
- For example, the number of screen control objects that satisfy the preset activation action is 4, namely human hand 1, human hand 2, human hand 3, and human hand 4, and the display screen is divided into 4 display sub-screens, namely display sub-screen 1, display sub-screen 2, display sub-screen 3, and display sub-screen 4.
- The characteristic information of the 4 control objects is obtained, namely the feature information of human hand 1, the feature information of human hand 2, the feature information of human hand 3, and the feature information of human hand 4, and a one-to-one correspondence between the feature information of each human hand and a display sub-screen is established: the feature information of human hand 1 corresponds to display sub-screen 1, the feature information of human hand 2 corresponds to display sub-screen 2, the feature information of human hand 3 corresponds to display sub-screen 3, and the feature information of human hand 4 corresponds to display sub-screen 4.
- The embodiment of the present application is not limited to this.
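- A minimal sketch of step S301, binding the activated objects to the sub-screens in detection order; the string identifiers stand in for real feature information and sub-screen handles:

```python
def build_first_correspondence(activated_features, sub_screens):
    """Bind each activated control object's feature information one-to-one to a display sub-screen."""
    assert len(activated_features) == len(sub_screens), "N objects require N sub-screens"
    return dict(zip(activated_features, sub_screens))

# e.g. the feature information of hands 1..4 bound to display sub-screens 1..4
first_correspondence = build_first_correspondence(
    ["hand_1_features", "hand_2_features", "hand_3_features", "hand_4_features"],
    ["sub_screen_1", "sub_screen_2", "sub_screen_3", "sub_screen_4"],
)
```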
- On this basis, step S102 may be:
- Step S302: Determine the display sub-screens corresponding to the N screen control objects according to the first correspondence and the respective characteristic information of the N screen control objects.
- The embodiment of this application does not limit how to determine the display sub-screens corresponding to the N screen control objects according to the first correspondence and their respective characteristic information.
- In a possible implementation, the respective characteristic information of the N screen control objects in the first image is acquired and matched against the characteristic information of the N screen control objects that satisfied the preset activation action, and the display sub-screens corresponding to the N screen control objects are finally determined according to the first correspondence and the matching result.
- For example, the first image includes 4 human hands, namely human hand A, human hand B, human hand C, and human hand D. The feature information of the four hands is obtained separately and matched with the feature information of human hand 1, human hand 2, human hand 3, and human hand 4 from the second image.
- If the feature information of human hand A is consistent with the feature information of human hand 1, display sub-screen 1 corresponding to the feature information of human hand 1 is determined to be the display sub-screen corresponding to human hand A, and that display sub-screen is controlled through the motion information of human hand A; the rest can be deduced by analogy and is not repeated.
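- A minimal sketch of this matching step (S302), assuming feature information is a numeric embedding and using cosine similarity with a threshold as the matching rule; both the representation and the rule are illustrative assumptions:

```python
import numpy as np

def match_to_sub_screen(features: np.ndarray, first_correspondence: dict, threshold: float = 0.9):
    """Map a hand's feature embedding to its bound sub-screen via the first correspondence.

    `first_correspondence` maps stored feature vectors (as tuples) to sub-screen ids.
    Returns None when no stored feature information matches closely enough.
    """
    best_screen, best_score = None, threshold
    for stored, screen in first_correspondence.items():
        stored_vec = np.asarray(stored, dtype=float)
        score = float(np.dot(features, stored_vec)
                      / (np.linalg.norm(features) * np.linalg.norm(stored_vec)))
        if score > best_score:
            best_screen, best_score = screen, score
    return best_screen
```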
- In this way, the correspondence between the feature information of the multiple screen control objects satisfying the preset activation action and the multiple display sub-screens is established, and the display sub-screen corresponding to each control object is determined according to this correspondence and the object's feature information; this realizes separate control of the display sub-screens by the control objects and improves the accuracy of that control.
- FIG. 7 is a schematic structural diagram of the screen control device provided by an embodiment of the present application.
- The screen control device may be part or all of a terminal device. The following takes the terminal device as the execution body as an example.
- As shown in FIG. 7, the screen control device provided by the embodiment of the present application may include:
- a first acquisition module 71, configured to acquire the respective feature information of the N screen control objects and the respective action information of the N screen control objects in the first image, where each screen control object is a body part of the user in the first image and N is a positive integer greater than or equal to 2;
- a first determining module 72, configured to determine a display sub-screen corresponding to each of the N screen control objects according to the respective characteristic information of the N screen control objects, where the display sub-screen is a partial area of the display screen; and
- a control module 73, configured to control the display sub-screens corresponding to the N screen control objects according to the respective action information of the N screen control objects.
- The functions of the first determining module and the control module may also be performed by a processing module.
- The processing module may be, for example, a processor, and the first acquisition module may be a transmission interface of the processor; it can also be said that the first acquisition module is the receiving interface of the processor.
- In this case, the functions of the first determining module and the control module may be performed by the processor.
- The screen control device provided in the embodiment of the present application may further include:
- a second acquisition module 74, configured to acquire a second image of the user;
- a second determining module 75, configured to determine the number N of control objects in the second image that satisfy the preset activation action, where the preset activation action is used to activate the multi-gesture control mode; and
- a segmentation module 76, configured to obtain the N display sub-screens presented on the display screen.
- The segmentation module divides the screen into a corresponding number of display sub-screens according to the number of control objects, determined by the second determining module, that satisfy the preset activation action.
- The second acquisition module and the first acquisition module may both be the transmission interface or the receiving interface of the processor, and the functions of the second determining module and the segmentation module may both be completed by the processing module.
- The processing module may be, for example, a processor; in this case, the functions of the second determining module and the segmentation module may both be completed by the processor.
- The screen control device provided in the embodiment of the present application may further include:
- an establishment module 77, configured to establish a first correspondence between the characteristic information of the N screen control objects that satisfy the preset activation action and the N display sub-screens, where the first correspondence includes a one-to-one correspondence between the characteristic information of the control objects and the display sub-screens.
- The first determining module 72 is specifically configured to: determine the display sub-screens corresponding to the N screen control objects according to the first correspondence and the respective characteristic information of the N screen control objects.
- The control module 73 is specifically configured to: determine, from the second correspondence, the target control operation that matches the action information of the target control object, where the target control object is any one of the N control objects and the second correspondence includes a one-to-one correspondence between multiple pieces of action information and multiple control operations; and control the display sub-screen corresponding to the target control object according to the target control operation.
- The device embodiments provided in this application are merely illustrative, and the module division in FIG. 7 is only a logical function division; there may be other division methods in actual implementation.
- For example, multiple modules can be combined or integrated into another system.
- The mutual coupling between the various modules can be realized through some interfaces. These interfaces are usually electrical communication interfaces, but mechanical interfaces or other forms of interfaces are not excluded. Therefore, the modules described as separate components may or may not be physically separated, and may be located in one place or distributed to different locations on the same or different devices.
- FIG. 8A is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
- As shown in FIG. 8A, the terminal device provided by the present application includes a processor 81, a memory 82, and a transceiver 83.
- The memory stores software instructions or computer programs; the processor may be a chip; the transceiver 83 implements the sending and receiving of communication data by the terminal device; and the processor 81 is configured to call the software instructions in the memory to implement the above-mentioned screen control method. For its content and effect, refer to the method embodiment.
- FIG. 8B is a schematic structural diagram of a terminal device provided by another embodiment of the present application. As shown in FIG. 8B, the terminal device provided by the present application includes a processor 84 and a transmission interface 85.
- The transmission interface 85 is configured to receive the first image of the user obtained by the camera; the processor 84 is configured to call the software instructions stored in the memory to perform the following steps: obtain the respective feature information of the N screen control objects and the respective action information of the N screen control objects in the first image, where each screen control object is a body part of the user in the first image and N is a positive integer greater than or equal to 2; determine, according to the respective characteristic information of the N screen control objects, the display sub-screen corresponding to each of them, where the display sub-screen is a partial area of the display screen; and control, according to the respective action information of the N screen control objects, the display sub-screens corresponding to the N screen control objects.
- The transmission interface 85 is further configured to receive the second image of the user acquired by the camera; the processor 84 is further configured to: determine the number N of control objects in the second image that satisfy the preset activation action, where the preset activation action is used to activate the multi-gesture control mode; and obtain the N display sub-screens presented on the display screen.
- The processor 84 is further configured to establish the first corresponding relationship, which includes a one-to-one correspondence between the feature information of the control objects and the display sub-screens; the processor 84 is specifically configured to: determine the display sub-screens corresponding to each of the N screen control objects according to the first correspondence and the respective characteristic information of the N screen control objects.
- The processor 84 is further configured to: determine, from the second correspondence, the target control operation that matches the action information of the target control object, where the target control object is any one of the N control objects and the second correspondence includes a one-to-one correspondence between multiple pieces of action information and multiple control operations; and control the display sub-screen corresponding to the target control object according to the target control operation.
- FIG. 9 is a schematic diagram of the hardware architecture of an exemplary screen control device provided by an embodiment of the present application. As shown in FIG. 9, the hardware architecture of the screen control device 900 may be applicable to an SOC or an application processor (AP).
- The screen control device 900 includes at least one central processing unit (CPU), at least one memory, a graphics processing unit (GPU), a decoder, a dedicated video or graphics processor, a receiving interface, a sending interface, and so on.
- Optionally, the screen control device 900 may also include a microprocessor, a microcontroller (MCU), and the like.
- The above-mentioned parts of the screen control device 900 are coupled through connectors. It should be understood that, in the various embodiments of the present application, coupling refers to mutual connection in a specific manner, including direct connection or indirect connection through other devices, for example through various interfaces, transmission lines, or buses.
- These interfaces are usually electrical communication interfaces, but mechanical interfaces or other forms of interfaces are not excluded; this is not limited in this embodiment.
- In an optional case, the above-mentioned parts are integrated on the same chip; in another optional case, the CPU, the GPU, the decoder, the receiving interface, and the sending interface are integrated on one chip, and the various parts of the chip access an external memory through a bus.
- The dedicated video/graphics processor may be integrated with the CPU on the same chip, or may exist as a separate processor chip.
- The dedicated video/graphics processor may be a dedicated image signal processor (ISP).
- The chip involved in the embodiments of this application is a system manufactured on the same semiconductor substrate by an integrated circuit process, also called a semiconductor chip; it is a collection of integrated circuits formed on a substrate (usually a semiconductor material such as silicon) using an integrated circuit process, and its outer layer is usually encapsulated by a semiconductor packaging material.
- The integrated circuit may include various types of functional devices. Each type of functional device includes transistors such as logic gate circuits, Metal-Oxide-Semiconductor (MOS) transistors, bipolar transistors, or diodes, and may also include capacitors, resistors, inductors, and other components.
- Each functional device can work independently or under the action of necessary driver software, and can realize various functions such as communication, calculation, or storage.
- The CPU may be a single-CPU processor or a multi-CPU processor; optionally, the CPU may be a processor group composed of multiple processors that are coupled to each other through one or more buses.
- Part of the processing of the image or video signal is done by the GPU, part by the dedicated video/graphics processor, and part may also be done by software code running on a general-purpose CPU or GPU.
- The device may also include a memory, which can be used to store computer program instructions, including the operating system (OS), various user application programs, and program code used to execute the solutions of the present application; the memory can also be used to store video data, image data, and the like; the CPU can be used to execute the computer program code stored in the memory to implement the methods in the embodiments of the present application.
- The memory may be a non-volatile memory that retains data on power-down, such as an Embedded MultiMedia Card (EMMC), a Universal Flash Storage (UFS), or a Read-Only Memory (ROM), or another type of static storage device that can store static information and instructions; or a volatile memory, such as a Random Access Memory (RAM), or another type of dynamic storage device that can store information and instructions; or an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other CD storage, a magnetic disk storage medium, or another medium accessible to a computer.
- The receiving interface may be a data input interface of the processor chip.
- In an optional case, the receiving interface may be a Mobile Industry Processor Interface (MIPI), a High Definition Multimedia Interface (HDMI), a Display Port (DP), or the like.
- FIG. 10 is a schematic structural diagram of a terminal device provided by another embodiment of the present application.
- As shown in FIG. 10, the terminal device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset interface 170D, a sensor 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and so on.
- The structure illustrated in this embodiment does not constitute a specific limitation on the terminal device 100.
- The terminal device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange the components differently.
- The illustrated components can be implemented by hardware, software, or a combination of software and hardware.
- The processor 110 may include one or more processing units.
- For example, the processor 110 may include an AP, a modem processor, a GPU, an ISP, a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
- The different processing units may be independent devices or may be integrated in one or more processors.
- The terminal device 100 may also include one or more processors 110.
- The controller may be the nerve center and command center of the terminal device 100. The controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of instruction fetching and execution.
- A memory may also be provided in the processor 110 to store instructions and data.
- In some embodiments, the memory in the processor 110 is a cache memory.
- The memory can store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs the instructions or data again, it can call them directly from the memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and improves the efficiency of the terminal device 100 system.
- The processor 110 may include one or more interfaces.
- The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, and so on.
- The USB interface 130 is an interface that complies with the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like.
- The USB interface 130 can be used to connect a charger to charge the terminal device 100, and can also be used to transfer data between the terminal device 100 and peripheral devices. It can also be used to connect headphones and play audio through them.
- The interface connection relationships between the modules illustrated in the embodiment of the present application are merely schematic and do not constitute a structural limitation of the terminal device 100.
- The terminal device 100 may also adopt interface connection modes different from those in the foregoing embodiments, or a combination of multiple interface connection modes.
- the charging management module 140 is used to receive charging input from the charger.
- the charger can be a wireless charger or a wired charger.
- the charging management module 140 may receive the charging input of the wired charger through the USB interface 130.
- the charging management module 140 may receive wireless charging input through the wireless charging coil of the terminal device 100. While the charging management module 140 charges the battery 142, it can also supply power to the terminal device 100 through the power management module 141.
- the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
- the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the display screen 194, the camera 193, and the wireless communication module 160.
- the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
- the power management module 141 may also be provided in the processor 110.
- the power management module 141 and the charging management module 140 may also be provided in the same device.
- the wireless communication function of the terminal device 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
- the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
- Each antenna in the terminal device 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
- antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
- the antenna can be used in combination with a tuning switch.
- the mobile communication module 150 may provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the terminal device 100.
- the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier, etc.
- the mobile communication module 150 can receive electromagnetic waves through the antenna 1, perform processing such as filtering and amplification on the received electromagnetic waves, and transmit the result to the modem processor for demodulation.
- the mobile communication module 150 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic waves for radiation via the antenna 1.
- at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
- at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
- the modem processor may include a modulator and a demodulator.
- the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
- the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
- the low-frequency baseband signal is processed by the baseband processor and then passed to the application processor.
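- As an assumed mathematical sketch (standard amplitude modulation, not necessarily the scheme the modem processor uses), the modulator/demodulator pair described above can be written as:

```latex
s(t) = m(t)\cos(2\pi f_c t), \qquad
\hat{m}(t) = \operatorname{LPF}\!\left[\, 2\, s(t)\cos(2\pi f_c t) \,\right]
```

- here m(t) is the low-frequency baseband signal to be sent, f_c is the medium/high carrier frequency, and LPF is the low-pass filter that recovers the baseband signal on the receive side.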
- the application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays an image or video through the display screen 194.
- the modem processor may be an independent device.
- the modem processor may be independent of the processor 110 and be provided in the same device as the mobile communication module 150 or other functional modules.
- the wireless communication module 160 can provide wireless communication solutions applied to the terminal device 100, including wireless local area network (WLAN), Bluetooth, global navigation satellite system (GNSS), frequency modulation (FM), NFC, infrared (IR), and so on.
- the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
- the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
- the wireless communication module 160 can also receive the signal to be sent from the processor 110, perform frequency modulation, amplify it, and convert it into electromagnetic wave radiation via the antenna 2.
- the antenna 1 of the terminal device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the terminal device 100 can communicate with the network and other devices through wireless communication technology.
- the wireless communication technology may include GSM, GPRS, CDMA, WCDMA, TD-SCDMA, LTE, GNSS, WLAN, NFC, FM, and/or IR technology.
- the aforementioned GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite-based augmentation system (SBAS).
- the terminal device 100 can implement a display function through a GPU, a display screen 194, and an application processor.
- the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor.
- the GPU is used to perform mathematical and geometric calculations for graphics rendering.
- the processor 110 may include one or more GPUs, which execute instructions to generate or change display information.
- the display screen 194 is used to display images, videos, etc.
- the display screen 194 includes a display panel.
- the display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like.
- the terminal device 100 may include one or N display screens 194, and N is a positive integer greater than one.
- the terminal device 100 may implement a shooting function through an ISP, one or more cameras 193, a video codec, a GPU, one or more display screens 194, and an application processor.
- NPU is a neural-network (NN) computing processor.
- the NPU can implement intelligent cognition applications of the terminal device 100, such as image recognition, face recognition, speech recognition, and text understanding.
- the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the terminal device 100.
- the external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function, for example, saving music, photos, videos, and other data files in the external memory card.
- the internal memory 121 may be used to store one or more computer programs, and the one or more computer programs include instructions.
- the processor 110 can run the above-mentioned instructions stored in the internal memory 121 to enable the terminal device 100 to execute the screen control methods provided in some embodiments of the present application, as well as various functional applications and data processing.
- the internal memory 121 may include a program storage area and a data storage area. The program storage area can store the operating system, and can also store one or more application programs (such as a gallery or contacts).
- the data storage area can store data (such as photos, contacts, etc.) created during the use of the terminal device 100.
- the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), etc.
- the processor 110 may execute instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor 110, to cause the terminal device 100 to execute the screen control methods provided in the embodiments of the present application, as well as various functional applications and data processing.
- the terminal device 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. For example, music playback, recording, etc.
- the audio module 170 is used to convert digital audio information into an analog audio signal for output, and also used to convert an analog audio input into a digital audio signal.
- the audio module 170 can also be used to encode and decode audio signals.
- the audio module 170 may be provided in the processor 110, or part of the functional modules of the audio module 170 may be provided in the processor 110.
- the speaker 170A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals.
- the terminal device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
- the receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
- the microphone 170C, also called a "mic" or "mouthpiece", is used to convert sound signals into electrical signals.
- the user can make a sound by moving the mouth close to the microphone 170C, inputting the sound signal into the microphone 170C.
- the terminal device 100 may be provided with at least one microphone 170C.
- the terminal device 100 may be provided with two microphones 170C, which can implement noise reduction functions in addition to collecting sound signals. In other embodiments, the terminal device 100 may also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions.
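- One classical multi-microphone technique consistent with this description is delay-and-sum beamforming; the sketch below is an assumption for illustration, not the noise-reduction or directional-recording method of the disclosure:

```python
import numpy as np

def delay_and_sum(mics: np.ndarray, delays_samples: list[int]) -> np.ndarray:
    """Align each microphone channel by its steering delay and average.

    mics has shape (n_mics, n_samples). Sound arriving from the steered
    direction adds coherently, while uncorrelated noise is attenuated by
    roughly 1/sqrt(n_mics). Delays are illustrative integer sample shifts.
    """
    out = np.zeros(mics.shape[1])
    for channel, d in zip(mics, delays_samples):
        out += np.roll(channel, -d)  # advance so wavefronts line up
    return out / mics.shape[0]
```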
- the earphone interface 170D is used to connect wired earphones.
- the earphone interface 170D can be the USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
- the sensor 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and so on.
- the pressure sensor 180A is used to sense pressure signals, and can convert the pressure signals into electrical signals.
- the pressure sensor 180A may be provided on the display screen 194.
- the capacitive pressure sensor may be composed of at least two parallel plates with conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes. The terminal device 100 determines the intensity of the pressure according to the change in capacitance. When a touch operation acts on the display screen 194, the terminal device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
- the terminal device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
- touch operations that act on the same touch location but have different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
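- A minimal sketch of that threshold dispatch, assuming a single pressure threshold; the function name and the returned instruction names are hypothetical:

```python
def dispatch_sms_icon_touch(pressure: float, first_pressure_threshold: float) -> str:
    """Map touch force on the short message icon to an operation instruction."""
    if pressure < first_pressure_threshold:
        return "view_short_message"        # light press: view
    return "create_new_short_message"      # firm press: create new
```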
- the gyro sensor 180B may be used to determine the movement posture of the terminal device 100.
- the angular velocity of the terminal device 100 around three axes (i.e., the x, y, and z axes) can be determined by the gyro sensor 180B.
- the gyro sensor 180B can be used for image stabilization.
- the gyroscope sensor 180B detects the shake angle of the terminal device 100, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to counteract the shake of the terminal device 100 through a reverse movement to achieve anti-shake.
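- Under a small-angle, thin-lens assumption (an illustration, not the exact compensation algorithm of the disclosure), the lens shift that cancels a measured shake angle is approximately:

```python
import math

def ois_lens_shift_mm(shake_angle_rad: float, focal_length_mm: float) -> float:
    """Shift (mm) to move the lens opposite the shake so the image stays put.

    Image displacement for a small shake angle is about f * tan(angle);
    the negative sign makes the movement a reverse, compensating motion.
    """
    return -focal_length_mm * math.tan(shake_angle_rad)
```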
- the gyro sensor 180B can also be used for navigation, somatosensory game scenes and so on.
- the acceleration sensor 180E can detect the magnitude of the acceleration of the terminal device 100 in various directions (generally three-axis). When the terminal device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of the terminal device, and is used in applications such as horizontal and vertical screen switching, and pedometer.
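- A hedged sketch of the landscape/portrait decision from the gravity components; the axis convention and thresholding are assumptions, and a real implementation would add hysteresis and a z-axis "flat" case:

```python
def detect_orientation(ax: float, ay: float) -> str:
    """Classify device posture from the x/y gravity components (m/s^2).

    Upright device: gravity dominates the y axis -> portrait.
    Device on its side: gravity dominates the x axis -> landscape.
    """
    return "portrait" if abs(ay) >= abs(ax) else "landscape"
```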
- the distance sensor 180F is used to measure distance; the terminal device 100 can measure the distance by infrared or laser. In some embodiments, when shooting a scene, the terminal device 100 may use the distance sensor 180F to measure the distance to achieve fast focusing.
- the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector such as a photodiode.
- the light emitting diode may be an infrared light emitting diode.
- the terminal device 100 emits infrared light to the outside through the light emitting diode.
- the terminal device 100 uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the terminal device 100. When insufficient reflected light is detected, the terminal device 100 can determine that there is no object near the terminal device 100.
- the terminal device 100 can use the proximity light sensor 180G to detect that the user holds the terminal device 100 close to the ear to talk, so as to automatically turn off the screen to save power.
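- A minimal sketch of that ear-proximity screen-off decision, assuming a single reflected-light threshold; the names and the threshold are illustrative assumptions:

```python
def screen_should_blank(reflected_ir: float, ir_threshold: float, in_call: bool) -> bool:
    """True when the display should turn off to save power: the user is in
    a call and enough reflected infrared is seen to infer the ear is near."""
    return in_call and reflected_ir >= ir_threshold
```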
- the proximity light sensor 180G can also be used in a leather case mode or a pocket mode to automatically unlock and lock the screen.
- the ambient light sensor 180L is used to sense the brightness of the ambient light.
- the terminal device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived brightness of the ambient light.
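- One common mapping consistent with this behavior (an assumption for illustration, not the disclosed algorithm) is a logarithmic lux-to-backlight curve:

```python
import math

def adapt_backlight(ambient_lux: float, min_level: int = 10, max_level: int = 255) -> int:
    """Map ambient illuminance to a backlight level with a log response,
    roughly matching human brightness perception. All constants are
    illustrative; real devices tune such curves per display panel."""
    frac = min(math.log10(max(ambient_lux, 1.0)) / 4.0, 1.0)  # saturate at ~10 klux
    return round(min_level + (max_level - min_level) * frac)
```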
- the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
- the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the terminal device 100 is in a pocket to prevent accidental touch.
- the fingerprint sensor 180H (also called a fingerprint reader) is used to collect fingerprints.
- the terminal device 100 can use the collected fingerprint characteristics to realize fingerprint unlocking, application lock access, fingerprint-based photographing, fingerprint-based call answering, and so on.
- other descriptions of the fingerprint sensor can be found in the international patent application PCT/CN2017/082773 entitled “Method and Terminal Device for Processing Notification", the entire content of which is incorporated in this application by reference.
- the touch sensor 180K can also be called a touch panel or a touch-sensitive surface.
- the touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 together form a touchscreen.
- the touch sensor 180K is used to detect touch operations acting on or near it.
- the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
- the display screen 194 may provide visual output related to touch operations.
- the touch sensor 180K may also be disposed on the surface of the terminal device 100, which is different from the position of the display screen 194.
- the bone conduction sensor 180M can acquire vibration signals.
- the bone conduction sensor 180M can obtain the vibration signal of the vibrating bone of the human vocal part.
- the bone conduction sensor 180M can also contact the human pulse and receive the blood pressure pulse signal.
- the bone conduction sensor 180M may also be provided in the earphone, combined with the bone conduction earphone.
- the audio module 170 can parse out a voice signal based on the vibration signal of the vocal-part vibrating bone obtained by the bone conduction sensor 180M, to realize a voice function.
- the application processor may analyze the heart rate information based on the blood pressure beat signal obtained by the bone conduction sensor 180M, and realize the heart rate detection function.
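- A hedged sketch of extracting a heart rate from the blood-pressure beat signal by counting threshold crossings; the sampling rate, threshold, and names are assumptions, and a real detector would band-pass filter and de-bounce peaks first:

```python
def heart_rate_bpm(pulse: list[float], fs_hz: float, thresh: float) -> float:
    """Count upward crossings of `thresh` in the pulse waveform and convert
    the beat count over the record length into beats per minute."""
    beats = sum(1 for a, b in zip(pulse, pulse[1:]) if a < thresh <= b)
    minutes = len(pulse) / fs_hz / 60.0
    return beats / minutes if minutes > 0 else 0.0
```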
- the button 190 includes a power button, a volume button, and so on.
- the button 190 may be a mechanical button or a touch button.
- the terminal device 100 may receive key input, and generate key signal input related to user settings and function control of the terminal device 100.
- the SIM card interface 195 is used to connect to the SIM card.
- the SIM card can be inserted into the SIM card interface 195 or pulled out from the SIM card interface 195 to achieve contact and separation with the terminal device 100.
- the terminal device 100 may support 1 or N SIM card interfaces, and N is a positive integer greater than 1.
- the SIM card interface 195 can support Nano SIM cards, Micro SIM cards, SIM cards, etc.
- multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards can be the same or different.
- the SIM card interface 195 can also be compatible with different types of SIM cards.
- the SIM card interface 195 may also be compatible with external memory cards.
- the terminal device 100 interacts with the network through the SIM card to realize functions such as call and data communication.
- the terminal device 100 uses an eSIM, that is, an embedded SIM card.
- the eSIM card can be embedded in the terminal device 100 and cannot be separated from the terminal device 100.
- the embodiments of the present application also provide a computer-readable storage medium.
- the computer-readable storage medium stores computer-executable instructions.
- when the computer-executable instructions are executed, the user equipment is caused to perform the various possible methods described above.
- the computer-readable medium includes a computer storage medium and a communication medium.
- the communication medium includes any medium that facilitates the transfer of a computer program from one place to another.
- the storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
- An exemplary storage medium is coupled to the processor, so that the processor can read information from the storage medium and can write information to the storage medium.
- the storage medium may also be an integral part of the processor.
- the processor and the storage medium may be located in an application-specific integrated circuit (ASIC).
- the ASIC may be located in the user equipment.
- the processor and the storage medium may also exist as discrete components in the communication device.
- a person of ordinary skill in the art can understand that all or part of the steps in the foregoing method embodiments can be implemented by a program instructing relevant hardware.
- the aforementioned program can be stored in a computer readable storage medium.
- when the program is executed, the steps of the foregoing method embodiments are performed. The foregoing storage medium includes media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present application provides a screen control method, device, and apparatus, and a storage medium. The method comprises: acquiring respective feature information of N screen control objects in a first image and respective action information of the N screen control objects, where the screen control objects are body parts of a user in the first image, and N is a positive integer greater than or equal to 2; determining, according to the respective feature information of the N screen control objects, the display sub-screens respectively corresponding to the N screen control objects, where the display sub-screens are partial areas of a display screen; and controlling, according to the respective action information of the N screen control objects, the display sub-screens respectively corresponding to the N screen control objects. In this way, split-screen control of a screen is implemented, improving both screen utilization and user experience.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201980095746.6A CN113728295B (zh) | 2019-05-31 | 2019-05-31 | 控屏方法、装置、设备及存储介质 |
PCT/CN2019/089489 WO2020237617A1 (fr) | 2019-05-31 | 2019-05-31 | Procédé, dispositif et appareil de commande d'écran, et support de stockage |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/089489 WO2020237617A1 (fr) | 2019-05-31 | 2019-05-31 | Procédé, dispositif et appareil de commande d'écran, et support de stockage |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020237617A1 (fr) | 2020-12-03 |
Family
ID=73552477
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/089489 WO2020237617A1 (fr) | 2019-05-31 | 2019-05-31 | Procédé, dispositif et appareil de commande d'écran, et support de stockage |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113728295B (fr) |
WO (1) | WO2020237617A1 (fr) |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104572004B (zh) * | 2015-02-02 | 2019-01-15 | 联想(北京)有限公司 | 一种信息处理方法及电子设备 |
CN107479815A (zh) * | 2017-06-29 | 2017-12-15 | 努比亚技术有限公司 | 实现分屏屏幕控制的方法、终端和计算机可读存储介质 |
- 2019
  - 2019-05-31 WO PCT/CN2019/089489 patent/WO2020237617A1/fr active Application Filing
  - 2019-05-31 CN CN201980095746.6A patent/CN113728295B/zh active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103207741A (zh) * | 2012-01-12 | 2013-07-17 | 飞宏科技股份有限公司 | 多人触控计算机虚拟对象的控制方法及其系统 |
US20150200985A1 (en) * | 2013-11-13 | 2015-07-16 | T1visions, Inc. | Simultaneous input system for web browsers and other applications |
CN105138122A (zh) * | 2015-08-12 | 2015-12-09 | 深圳市卡迪尔通讯技术有限公司 | 一种通过识别手势遥控屏幕设备的方法 |
CN105653024A (zh) * | 2015-12-22 | 2016-06-08 | 深圳市金立通信设备有限公司 | 一种终端控制方法及终端 |
CN106569596A (zh) * | 2016-10-20 | 2017-04-19 | 努比亚技术有限公司 | 一种手势控制方法和设备 |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114915721A (zh) * | 2021-02-09 | 2022-08-16 | 华为技术有限公司 | 建立连接的方法与电子设备 |
CN112860367A (zh) * | 2021-03-04 | 2021-05-28 | 康佳集团股份有限公司 | 设备界面可视化方法、智能终端及计算机可读存储介质 |
CN112860367B (zh) * | 2021-03-04 | 2023-12-12 | 康佳集团股份有限公司 | 设备界面可视化方法、智能终端及计算机可读存储介质 |
CN114527922A (zh) * | 2022-01-13 | 2022-05-24 | 珠海视熙科技有限公司 | 一种基于屏幕识别实现触控的方法及屏幕控制设备 |
CN115113797A (zh) * | 2022-08-29 | 2022-09-27 | 深圳市优奕视界有限公司 | 一种控制面板的智能分区显示方法及相关产品 |
Also Published As
Publication number | Publication date |
---|---|
CN113728295A (zh) | 2021-11-30 |
CN113728295B (zh) | 2024-05-14 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19930974; Country of ref document: EP; Kind code of ref document: A1 |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 19930974; Country of ref document: EP; Kind code of ref document: A1 |