WO2020237617A1 - Screen control method, device and apparatus, and storage medium - Google Patents

Screen control method, device and apparatus, and storage medium

Info

Publication number
WO2020237617A1
Authority
WO
WIPO (PCT)
Prior art keywords
screen
control
control objects
display sub-screen
screen control
Application number
PCT/CN2019/089489
Other languages
French (fr)
Chinese (zh)
Inventor
荀振
Original Assignee
华为技术有限公司
Application filed by 华为技术有限公司
Priority to CN201980095746.6A (CN113728295A)
Priority to PCT/CN2019/089489 (WO2020237617A1)
Publication of WO2020237617A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer

Definitions

  • This application relates to the field of screen control technology, and in particular to a screen control method, device, apparatus, and storage medium.
  • In the prior art, a screen is usually controlled through gestures by a single person interacting with a large screen: the large screen responds to that user's gestures, thereby achieving gesture-based control of the screen.
  • Because the large screen can only be controlled by a single user's gestures at a time, screen utilization is low.
  • The embodiments of the present application provide a screen control method, device, apparatus, and storage medium, which are used to implement split-screen control of a screen by multiple screen control objects; this not only improves screen utilization but also enhances user experience.
  • In a first aspect, this application provides a screen control method, including:
  • acquiring the respective feature information and respective action information of N screen control objects in a first image; determining, according to the feature information of each screen control object, the display sub-screen corresponding to that screen control object; and controlling, according to the action information of each screen control object, the display sub-screen corresponding to that screen control object. This realizes split-screen control, which not only improves screen utilization but also enhances user experience.
  • the screen control method provided in this embodiment of the present application further includes:
  • In this way, the multi-gesture control mode is activated based on a preset activation action, and only a screen control object that performs the preset activation action can become one of the controllers of the multiple display sub-screens, which improves the interference resistance of split-screen control.
  • For example, the screen control object is the user's hand. If there are four people in front of the screen but only three of them want to participate in split-screen control, only the hands that perform the preset activation action can participate in split-screen control, preventing the fourth person's hand from interfering with it.
  • the screen control method provided in the embodiment of the present application further includes:
  • The first correspondence includes a one-to-one correspondence between the feature information of the screen control objects and the display sub-screens. Determining the display sub-screens corresponding to the N screen control objects according to their respective feature information includes: determining the display sub-screens corresponding to the N screen control objects according to the first correspondence and the respective feature information of the N screen control objects.
  • In this way, a correspondence between the feature information of the multiple screen control objects that satisfy the preset activation action and the multiple display sub-screens is established, and the display sub-screen corresponding to each screen control object is determined according to this correspondence and that object's feature information. This realizes dedicated control of each display sub-screen by its screen control object and improves the accuracy with which the screen control objects control the display sub-screens.
  • controlling the respective display sub-screens corresponding to the N screen control objects according to the respective action information of the N screen control objects includes:
  • determining, from a second correspondence, a target screen control operation that matches the action information of a target screen control object, where the target screen control object is any one of the N screen control objects and the second correspondence includes a one-to-one correspondence between multiple pieces of action information and multiple screen control operations; and controlling, according to the target screen control operation, the display sub-screen corresponding to the target screen control object.
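  • As an illustrative sketch only (the dictionaries and the `apply` method below are hypothetical names, not part of this application), the two correspondences described above can be pictured as two lookup tables: the first maps feature information to a display sub-screen, and the second maps action information to a screen control operation.

```python
# Hypothetical sketch of the first and second correspondences; all identifiers are placeholders.

first_correspondence = {           # feature information -> display sub-screen identifier
    "hand_feature_1": "sub_screen_1",
    "hand_feature_2": "sub_screen_2",
}

second_correspondence = {          # action information -> screen control operation
    "gesture_ok": "confirm",
    "gesture_single_finger_down": "selection_box_down",
    "gesture_thumbs_up": "return",
}

def control(feature_id: str, action_id: str, sub_screens: dict) -> None:
    """Route one screen control object's action to the sub-screen bound to its feature."""
    screen_id = first_correspondence.get(feature_id)
    operation = second_correspondence.get(action_id)
    if screen_id is not None and operation is not None:
        sub_screens[screen_id].apply(operation)    # `apply` stands in for real rendering logic
```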
  • In a second aspect, an embodiment of the present application provides a screen control device, including:
  • the first acquisition module is used to acquire the respective feature information of the N screen control objects and the respective action information of the N screen control objects in the first image.
  • The screen control objects are body parts of the user in the first image, and N is a positive integer greater than or equal to 2.
  • The first determining module is used to determine the display sub-screen corresponding to each of the N screen control objects according to the respective characteristic information of the N screen control objects, where a display sub-screen is a partial area of the display screen; the control module is used to control the display sub-screens corresponding to the N screen control objects according to the respective action information of the N screen control objects.
  • the screen control device provided in the embodiment of the present application further includes:
  • The second acquisition module is used to acquire a second image of the user; the second determination module is used to determine the number N of screen control objects in the second image that satisfy the preset activation action, where the preset activation action is used to activate the multi-gesture control mode; and the segmentation module is used to obtain N display sub-screens presented on the display screen.
  • the screen control device provided in the embodiment of the present application further includes:
  • The establishment module is used to establish a first correspondence between the feature information of the N screen control objects that satisfy the preset activation action and the N display sub-screens.
  • The first correspondence includes a one-to-one correspondence between the feature information of the screen control objects and the display sub-screens.
  • the first determining module is specifically configured to: determine the display sub-screens corresponding to each of the N screen control objects according to the first correspondence and the respective characteristic information of the N screen control objects.
  • The control module is specifically configured to:
  • determine, from the second correspondence, the target screen control operation that matches the action information of the target screen control object, where the target screen control object is any one of the N screen control objects and the second correspondence includes a one-to-one correspondence between multiple pieces of action information and multiple screen control operations; and control, according to the target screen control operation, the display sub-screen corresponding to the target screen control object.
  • In a third aspect, an embodiment of the present application provides a device, including:
  • a processor and a transmission interface, where the transmission interface is used to receive a first image of the user obtained by a camera, and the processor is used to call software instructions stored in a memory to perform the following steps: acquiring the respective feature information and respective action information of N screen control objects in the first image; determining, according to the respective feature information of the N screen control objects, the display sub-screens corresponding to the N screen control objects; and controlling, according to the respective action information of the N screen control objects, the display sub-screens corresponding to the N screen control objects.
  • the transmission interface is also used to receive the second image of the user obtained by the camera; the processor is also used to:
  • the number N of control objects satisfying the preset activation action in the second image is determined, and the preset activation action is used to activate the multi-gesture control mode; N display sub-screens presented on the display screen are obtained.
  • the processor is also used to:
  • establish a first correspondence between the feature information of the N screen control objects that satisfy the preset activation action and the N display sub-screens, where the first correspondence includes a one-to-one correspondence between the feature information of the screen control objects and the display sub-screens. The processor is specifically configured to: determine the display sub-screens corresponding to the N screen control objects according to the first correspondence and the respective characteristic information of the N screen control objects.
  • the processor is also used to:
  • determine, from the second correspondence, the target screen control operation that matches the action information of the target screen control object, where the target screen control object is any one of the N screen control objects and the second correspondence includes a one-to-one correspondence between multiple pieces of action information and multiple screen control operations; and control, according to the target screen control operation, the display sub-screen corresponding to the target screen control object.
  • In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium that stores instructions.
  • When the instructions run on a computer or a processor, the computer or the processor executes the screen control method provided in the first aspect or any optional implementation of the first aspect.
  • In a fifth aspect, the present application provides a computer program product containing instructions which, when run on a computer or processor, cause the computer or processor to execute the screen control method provided in the first aspect or any optional implementation of the first aspect.
  • According to the screen control method, device, apparatus, and storage medium provided by the embodiments of the present application, the respective characteristic information and respective action information of the N screen control objects in the first image are obtained, where the screen control objects are body parts of the user in the first image and N is a positive integer greater than or equal to 2; then, according to the respective characteristic information of the N screen control objects, the display sub-screens corresponding to the N screen control objects are determined, where a display sub-screen is a partial area of the display screen; finally, according to the respective action information of the N screen control objects, the display sub-screens corresponding to the N screen control objects are controlled.
  • In this way, split-screen control of the screen is realized, which not only improves screen utilization but also enhances user experience.
  • FIG. 1 is a schematic diagram of an exemplary application scenario provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of another exemplary application scenario provided by an embodiment of the present application.
  • FIG. 3 is a flowchart of a screen control method provided by an embodiment of the present application.
  • FIG. 4 is an exemplary neural network application architecture diagram provided by an embodiment of the present application.
  • FIG. 5 is a flowchart of a screen control method provided by another embodiment of the present application.
  • FIG. 6 is a flowchart of a screen control method provided by another embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a screen control device provided by an embodiment of the present application.
  • FIG. 8A is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
  • FIG. 8B is a schematic structural diagram of a terminal device provided by another embodiment of the present application.
  • FIG. 9 is a schematic diagram of the hardware architecture of an exemplary screen control device provided by an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of a terminal device provided by another embodiment of the present application.
  • Although the terms "first", "second", "third", "fourth", and so on may be used to describe user images in the embodiments of the present application, the user images should not be limited by these terms; these terms are only used to distinguish user images from one another.
  • For example, the first user image may also be referred to as the second user image, and similarly, the second user image may also be referred to as the first user image.
  • Fig. 1 is a schematic diagram of an exemplary application scenario provided by an embodiment of the present application.
  • a television 10 is connected to a camera 11 through a universal serial bus or other high-speed bus 12.
  • When multiple users watch the television, each user may want to watch a different TV program, or multiple users may participate in a video game; in such cases it is necessary to split the TV screen according to the characteristic information of each user, so that each user can independently control the display sub-screen corresponding to his or her own characteristic information.
  • For example, as shown in FIG. 1, an image or video of the users facing the television 10 is captured by the camera 11, and after processing and judgment, the display screen of the television 10 is divided into a display sub-screen 1 and a display sub-screen 2.
  • Display sub-screen 1 and display sub-screen 2 can display different playback content; user images can be continuously obtained through the camera 11 and then processed to control display sub-screen 1 and display sub-screen 2 separately. The embodiment of this application is not limited to this.
  • the television 10 may further include a video signal source interface 13, a wired network interface or a wireless network interface module 14, or a peripheral device interface 15, which is not limited in the embodiment of the present application.
  • FIG. 2 is a schematic diagram of another exemplary application scenario provided by an embodiment of this application.
  • the display device may include a central processing unit, a system memory, and an edge artificial intelligence processor.
  • The central processing unit is connected to the system memory and can be used to execute the screen control method provided in the embodiments of this application. The central processing unit can also be connected to the edge artificial intelligence processor core, which can be used to implement the image-processing part of the screen control method provided by the embodiments of this application. The edge artificial intelligence processor core is connected to an image memory, which can be used to store the images acquired by the camera; the camera is connected to the display device through a universal serial bus or another high-speed bus.
  • the embodiments of the present application provide a method, device, device, and storage medium for controlling a screen.
  • FIG. 3 is a flowchart of a method for controlling a screen provided by an embodiment of the present application.
  • the method can be executed by the device for controlling a screen provided by an embodiment of the present application.
  • The screen control device may be part or all of a terminal device, for example, a processor in a terminal device.
  • the following uses the terminal device as the execution body as an example to introduce the screen control method provided in the embodiment of the present application.
  • the screen control method provided by the embodiment of the present application may include:
  • Step S101 Obtain the respective feature information of the N screen control objects and the respective action information of the N screen control objects in the first image.
  • the screen control objects are the body parts of the user in the first image, and N is a positive integer greater than or equal to 2.
  • the first image of the user can be obtained through a camera or an image sensor.
  • the camera can be set in the terminal device, or set independently of the terminal device, and be connected to the terminal device by wired or wireless connection. There are no restrictions on the installation location, as long as the user's first image can be obtained.
  • the camera collects the first image of the user by means of video collection or image collection.
  • the embodiment of the present application does not limit the specific method of how to obtain the first image of the user through the camera.
  • In addition, a transmission interface of a processor chip receiving the user's image obtained by the camera or the image sensor can also be regarded as obtaining the user's first image; that is, the processor chip obtains the user's first image through the transmission interface.
  • the embodiment of the present application does not limit the specific part of the user's body part.
  • the judgment of the user's body part in the first image can be realized by setting a preset body part.
  • the preset body part may be a human hand.
  • For example, the N screen control objects are N human hands in the first image, and the feature information of the N screen control objects is the hand feature information of the N human hands in the first image.
  • The hand feature information of a human hand includes, but is not limited to, handprints, hand shape, hand size, or hand skin color.
  • the motion information of each of the N screen control objects is the hand motion information of the N human hands in the first image.
  • the preset body part may be a human face.
  • In this case, the N screen control objects are the N faces in the first image, the feature information of the N screen control objects is the facial feature information of the N faces in the first image, and the respective action information of the N screen control objects is the facial action information of the N faces in the first image, such as facial expressions. The embodiment of the present application is not limited thereto.
  • the feature information of each of the N screen control objects is used to distinguish the N screen control objects.
  • For example, if the screen control object is a human hand, the hand feature information of the human hand is used to distinguish different human hands; if the screen control object is a human face, the facial feature information of the face is used to distinguish different faces.
  • the embodiment of this application does not limit the specific implementation manners of how to obtain the respective characteristic information of the at least one screen control object and the respective action information of the at least one screen control object according to the first image.
  • For example, this can be implemented by means of machine learning, such as a convolutional neural network (CNN) model.
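  • As a minimal sketch of this idea (not the application's implementation: the detector object, its `run` method, and the `embedding`/`gesture` fields are hypothetical), a CNN-based detector could return, for each detected hand, a feature embedding used to tell hands apart and a gesture label used as action information.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ScreenControlObject:
    feature: List[float]   # feature information, e.g. an embedding of hand shape/size/skin color
    action: str            # action information, e.g. a recognized gesture label

def acquire_objects(first_image, detector) -> List[ScreenControlObject]:
    """Step S101 sketch: run a (hypothetical) CNN detector on the first image and collect
    the feature information and action information of every detected screen control object."""
    objects = []
    for det in detector.run(first_image):          # assumed detector API, not from this application
        objects.append(ScreenControlObject(feature=det.embedding, action=det.gesture))
    return objects
```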
  • FIG. 4 is an exemplary neural network application architecture diagram provided by an embodiment of the present application.
  • As shown in FIG. 4, the exemplary neural network application architecture provided by an embodiment of the present application may include an application program entry 41, a model external interface 42, a deep learning structure 43, a device driver 44, a central processing unit 45, a graphics processor 46, a network processor 47, and a digital processor 48.
  • The application program entry 41 is used to select the neural network model; the model external interface 42 is used to interface with the neural network model; and the deep learning structure 43 is used to process the input first user image through the neural network model.
  • The deep learning structure 43 includes an environment manager 431, a model manager 432, a task scheduler 433, a task executor 434, and an event manager 435.
  • the environment manager 431 is used to control the startup and shutdown of the device-related environment
  • the model manager 432 is used to load and unload the neural network model
  • The task scheduler 433 is used to manage the order in which tasks of the neural network model are scheduled, the task executor 434 is responsible for executing the tasks of the neural network model, and the event manager 435 is responsible for notifications of various events.
  • the neural network application architecture provided by the embodiments of the present application is not limited to this.
  • Step S102 Determine a display sub-screen corresponding to each of the N screen control objects according to the respective characteristic information of the N screen control objects, and the display sub-screen is a partial area of the display screen.
  • After the respective characteristic information of the N screen control objects is acquired, the display sub-screens corresponding to the N screen control objects are determined.
  • For example, the characteristic information of 4 screen control objects is obtained and the display screen is divided into 4 display sub-screens; each display sub-screen is bound to the characteristic information of one screen control object, so that each screen control object can only control the display sub-screen bound to its own characteristic information.
  • In a possible implementation, preset feature information can be set for each display sub-screen, and the display sub-screen corresponding to each of the N screen control objects can then be determined according to the respective feature information of the N screen control objects and the preset feature information.
  • For example, the screen is divided into 4 display sub-screens, each display sub-screen corresponds one-to-one to a piece of preset feature information, the feature information of a screen control object is matched against the preset feature information, and the display sub-screen corresponding to that screen control object is then determined according to the matching result.
  • The embodiment of this application does not limit the specific implementation of how to determine the display sub-screens corresponding to the N screen control objects according to their respective characteristic information.
  • In a possible implementation manner, the display sub-screen is a partial area of the display screen, for example, the display screen is divided into different display sub-screens. In another possible implementation manner, the display sub-screen is the entire area of the display screen, for example, the display screen works in a multi-channel display mode, which can output multiple different pictures at the same time on the same display screen and has multi-channel audio output, so that users wearing different glasses and headphones can separately watch two different programs. The embodiment of the present application does not limit the area of a display sub-screen or the splitting method.
  • Determining the display sub-screens corresponding to the N screen control objects can be implemented according to the identifiers of the display sub-screens and the identifiers of the screen control objects.
  • each display sub-screen is identified according to the preset feature information.
  • The embodiment of the present application does not limit the specific identification method of the display sub-screen, for example, by encoding, numbers, symbols, text, etc.; for example, display sub-screen 1 corresponds to preset feature information 1, display sub-screen 2 corresponds to preset feature information 2, and so on.
  • the feature information of the N screen control objects in the first image is detected, and the N screen control objects are identified according to the feature information of the screen control object.
  • The embodiment of this application does not limit the specific identification method of the screen control object; for example, if the feature information of the screen control object matches the preset feature information 1, the screen control object is identified as screen control object 1, and so on.
  • the embodiment of this application does not limit the specific implementation of how to identify the screen control object.
  • For example, the feature information of the N screen control objects in the first image can be detected through the CNN model, and the N screen control objects can be identified according to the feature information of the screen control objects.
  • Alternatively, the coordinate information of each screen control object in the first image can be detected through the CNN, each screen control object can be cropped out of the original image according to its coordinate information and processed as a separate image, the characteristic information of the screen control object in each separate image can be detected, and each separate image can be identified.
  • The display sub-screen corresponding to each screen control object is determined according to the correspondence between the identifiers of the screen control objects and the identifiers of the display sub-screens.
  • For example, the first image includes 3 screen control objects, identified as screen control object 1, screen control object 2, and screen control object 3, and the screen is divided into 3 display sub-screens, identified as display sub-screen 1, display sub-screen 2, and display sub-screen 3.
  • The display sub-screen corresponding to screen control object 1 is display sub-screen 1, the display sub-screen corresponding to screen control object 2 is display sub-screen 2, and the display sub-screen corresponding to screen control object 3 is display sub-screen 3. The embodiment of this application is not limited thereto.
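  • A minimal sketch of step S102 under stated assumptions: each display sub-screen is identified by its preset feature information, a detected object's feature information is compared against each preset entry with a simple cosine similarity, and the best match above a threshold selects the sub-screen; the threshold, the similarity measure, and all identifiers are illustrative choices rather than requirements of this application.

```python
from typing import Dict, List, Optional

def cosine_similarity(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def match_sub_screen(feature: List[float],
                     preset_features: Dict[str, List[float]],
                     threshold: float = 0.8) -> Optional[str]:
    """Return the identifier of the display sub-screen whose preset feature information
    best matches the detected feature information, or None if nothing matches."""
    best_id, best_score = None, threshold
    for screen_id, preset in preset_features.items():
        score = cosine_similarity(feature, preset)
        if score > best_score:
            best_id, best_score = screen_id, score
    return best_id

# Illustrative values: display sub-screen 1 is bound to preset feature information 1, and so on.
preset_features = {"sub_screen_1": [0.9, 0.1], "sub_screen_2": [0.1, 0.9]}
print(match_sub_screen([0.85, 0.15], preset_features))   # -> "sub_screen_1"
```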
  • Step S103 According to the respective action information of the N screen control objects, control the display sub-screens corresponding to each of the N screen control objects.
  • the respective display sub-screens corresponding to the N screen control objects are controlled according to the respective action information of the N screen control objects.
  • For example, display sub-screen 1 is controlled according to the action information of screen control object 1, display sub-screen 2 is controlled according to the action information of screen control object 2, and display sub-screen 3 is controlled according to the action information of screen control object 3.
  • the embodiment of the present application does not limit how to control the display sub-screen corresponding to the screen control object according to the action information of the screen control object.
  • In a possible implementation manner, controlling the display sub-screens corresponding to the N screen control objects according to the respective action information of the N screen control objects includes the following.
  • A target screen control operation that matches the action information of the target screen control object is determined from the second correspondence, where the target screen control object is any one of the N screen control objects and the second correspondence includes a one-to-one correspondence between multiple pieces of action information and multiple screen control operations; and the display sub-screen corresponding to the target screen control object is controlled according to the target screen control operation.
  • The second correspondence includes a one-to-one correspondence between multiple pieces of action information and multiple screen control operations, where a piece of action information may serve as a control instruction to the screen, and a screen control operation specifies how the display sub-screen is controlled. A second correspondence between the preset action information and the screen control operations is established in advance.
  • The embodiment of this application does not limit the specific relationship between the multiple pieces of action information and the multiple screen control operations, as long as the screen control operation corresponding to a piece of action information can be performed according to that action information.
  • For example, when the action information is the gesture "OK", the corresponding screen control operation is "OK"; when the action information is the gesture "single finger down", the corresponding screen control operation is "move the selection box down"; when the action information is the gesture "thumbs up", the corresponding screen control operation is "return"; and so on.
  • a target screen control operation matching the action information of the target screen control object is determined from the plurality of action information in the second corresponding relationship, and the target screen control object is any one of the N screen control objects.
  • For example, a neural network model can be used to match the action information of the target screen control object against the multiple pieces of action information in the second correspondence. If the action information of the target screen control object matches none of the action information in the second correspondence, it is invalid action information. If it matches any piece of action information in the second correspondence, the screen control operation corresponding to the matched action information in the second correspondence is determined as the target screen control operation.
  • Then, the display sub-screen corresponding to the target screen control object is controlled according to the target screen control operation.
  • That is, after the target screen control operation of the target screen control object is determined according to the second correspondence, the display sub-screen corresponding to the target screen control object is controlled according to that operation.
  • In this way, the display sub-screen corresponding to each screen control object is controlled according to that object's action information and characteristic information.
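  • A minimal sketch of step S103 under stated assumptions: the second correspondence is a table from gesture labels to screen control operations, unmatched action information is treated as invalid, and `SubScreen` is a stand-in for whatever actually renders a display sub-screen; none of these names come from this application.

```python
from typing import Optional

# Second correspondence (illustrative entries): action information -> screen control operation.
SECOND_CORRESPONDENCE = {
    "gesture_ok": "OK",
    "gesture_single_finger_down": "selection_box_down",
    "gesture_thumbs_up": "return",
}

class SubScreen:
    def __init__(self, screen_id: str):
        self.screen_id = screen_id

    def apply(self, operation: str) -> None:
        # Stand-in for the real rendering/UI logic of this display sub-screen.
        print(f"{self.screen_id}: {operation}")

def control_sub_screen(action: str, sub_screen: SubScreen) -> Optional[str]:
    """Look up the target screen control operation for the action information and, if the
    action is valid, apply it to the display sub-screen of the target screen control object."""
    operation = SECOND_CORRESPONDENCE.get(action)
    if operation is None:
        return None                 # invalid action information: ignore it
    sub_screen.apply(operation)
    return operation

control_sub_screen("gesture_thumbs_up", SubScreen("sub_screen_2"))   # prints "sub_screen_2: return"
```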
  • In summary, the screen control method obtains the first image of the user and, according to the first image, obtains the respective characteristic information and respective action information of the N screen control objects, where the characteristic information is used to distinguish the N screen control objects.
  • Then, according to the respective characteristic information of the N screen control objects, the display sub-screen corresponding to each of the N screen control objects is determined, where a display sub-screen is part or all of the area of the display screen.
  • Finally, according to the respective action information of the N screen control objects, the display sub-screens corresponding to the N screen control objects are controlled.
  • In this way, flexible split-screen control of the screen is realized, which improves screen utilization and enhances user experience.
  • Fig. 5 is a flowchart of a screen control method provided by another embodiment of this application.
  • the method can be executed by the screen control device provided in this embodiment of the application.
  • the screen control device can be part or all of a terminal device.
  • The following takes the terminal device as the execution subject as an example to introduce the screen control method provided in the embodiment of the present application.
  • the screen control method provided in the embodiment of the present application may further include:
  • Step S201 Acquire a second image of the user.
  • For the method of obtaining the second image of the user, refer to the description of obtaining the first image of the user in step S101, which is not repeated in this embodiment of the present application.
  • The second image includes screen control objects that satisfy the preset activation action.
  • For example, the camera may be turned on when preparing to obtain the second image of the user, which is not limited in the embodiment of the present application.
  • Step S202 Determine the number N of control objects in the second image that satisfy the preset activation action, and the preset activation action is used to activate the multi-gesture control mode.
  • Invalid screen control objects can be filtered out by means of the preset activation action, so that the number of display sub-screens can be accurately determined.
  • the embodiment of the application does not limit the specific actions of the preset activation action.
  • In a possible implementation, if the screen control object is a human hand, the preset activation action may be a preset gesture, and the number of human hands in the second image that perform the preset gesture is determined to be N; in another possible implementation, if the screen control object is a face, the preset activation action may be a preset facial expression, and the number of faces in the second image that present the preset facial expression is determined to be N.
  • the embodiment of the present application does not limit the manner of how to determine the number N of screen control objects that satisfy the preset activation action in the second image.
  • For example, a plurality of screen control objects and their action information are obtained from the second image, and it is then determined whether the action information of each screen control object satisfies the preset activation action, so as to determine the number N of screen control objects in the second image that satisfy the preset activation action.
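  • A minimal sketch of step S202 under stated assumptions: detections carry the same hypothetical `gesture` label used in the earlier sketch, and the preset activation action is represented by a single gesture label; counting the detections whose label matches it gives N.

```python
from collections import namedtuple
from typing import Iterable

PRESET_ACTIVATION_ACTION = "gesture_open_palm"   # illustrative label, not from this application

Detection = namedtuple("Detection", ["embedding", "gesture"])

def count_activated_objects(detections: Iterable[Detection]) -> int:
    """Step S202 sketch: N = number of detected screen control objects whose action
    information satisfies the preset activation action."""
    return sum(1 for d in detections if d.gesture == PRESET_ACTIVATION_ACTION)

# Four hands detected, three of them performing the activation gesture -> N = 3.
hands = [Detection([0.1], "gesture_open_palm"),
         Detection([0.2], "gesture_open_palm"),
         Detection([0.3], "gesture_open_palm"),
         Detection([0.4], "gesture_fist")]
print(count_activated_objects(hands))   # -> 3
```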
  • Step S203 Present N display sub-screens on the display screen.
  • N display sub-screens are presented on the display screen.
  • The embodiment of the present application does not limit the specific implementation of how to present N display sub-screens on the display screen.
  • the display screen is divided into N display sub-screens.
  • the embodiment of the present application does not limit the specific implementation manner of dividing the display screen into N display sub-screens.
  • When the display screen is divided into N display sub-screens, the display screen can be equally divided into N display sub-screens, or the size and position relationship of the N display sub-screens can be set according to user needs.
  • The embodiment of the present application does not limit the size or position relationship of each display sub-screen.
  • Alternatively, the display screen can be divided into N channels, and different images can be displayed through these channels.
  • multiple display sub-screens are presented on the display screen, which realizes screen segmentation and multi-gesture control.
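  • As an illustrative sketch of step S203 (equal vertical division is only one of the layouts the text allows, and the `Rect` tuple is a hypothetical representation), the display area can be split into N side-by-side display sub-screens as follows.

```python
from typing import List, NamedTuple

class Rect(NamedTuple):
    x: int
    y: int
    width: int
    height: int

def split_display(width: int, height: int, n: int) -> List[Rect]:
    """Step S203 sketch: divide the display area equally into n vertical display sub-screens.
    Other sizes, positions, or a multi-channel display are equally possible per the text."""
    base, remainder = divmod(width, n)
    rects, x = [], 0
    for i in range(n):
        w = base + (1 if i < remainder else 0)   # distribute any leftover pixels
        rects.append(Rect(x, 0, w, height))
        x += w
    return rects

print(split_display(1920, 1080, 3))
# -> [Rect(x=0, y=0, width=640, height=1080), Rect(x=640, ...), Rect(x=1280, ...)]
```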
  • In addition, before the first image of the user is acquired, whether the display screen can be split-screen controlled can be determined by detecting whether the multi-gesture control mode is turned on. If the multi-gesture control mode is not enabled, the user is required to start it by performing the preset activation action before split-screen control of the display screen is performed, which improves the efficiency of the user's split-screen control.
  • FIG. 6 is a flowchart of a screen control method provided by another embodiment of the present application.
  • The method can be executed by the screen control device provided in the embodiment of the present application, and the screen control device may be part or all of a terminal device; for example, it may be a processor in a terminal device.
  • the following takes the terminal device as an execution subject as an example to introduce the screen control method provided in the embodiment of the present application.
  • the screen control method provided in the embodiment of the present application may further include:
  • Step S301 Establish a first corresponding relationship between the feature information of the N screen control objects that satisfy the preset activation action and the N display sub-screens, the first corresponding relationship includes a one-to-one correspondence between the feature information of the control object and the display sub-screens .
  • That is, a first correspondence between the feature information of the N screen control objects that satisfy the preset activation action and the N display sub-screens is established, where the first correspondence includes a one-to-one correspondence between the feature information of the screen control objects and the display sub-screens.
  • For example, the number of screen control objects that satisfy the preset activation action is 4, the screen control objects are human hand 1, human hand 2, human hand 3, and human hand 4, and the display screen is divided into 4 display sub-screens, namely display sub-screen 1, display sub-screen 2, display sub-screen 3, and display sub-screen 4.
  • The characteristic information of the 4 screen control objects is obtained, namely the characteristic information of human hand 1, human hand 2, human hand 3, and human hand 4, and a one-to-one correspondence is established between the characteristic information of the human hands and the display sub-screens: the characteristic information of human hand 1 corresponds to display sub-screen 1, the characteristic information of human hand 2 corresponds to display sub-screen 2, the characteristic information of human hand 3 corresponds to display sub-screen 3, and the characteristic information of human hand 4 corresponds to display sub-screen 4.
  • the embodiment of the present application is not limited to this.
  • step S102 may be:
  • Step S302 According to the first correspondence and the respective characteristic information of the N screen control objects, determine the display sub-screens corresponding to each of the N screen control objects.
  • the embodiment of this application does not limit how to determine the manner of displaying the sub-screen corresponding to each of the N screen control objects according to the first correspondence and the respective characteristic information of the N screen control objects.
  • For example, the respective characteristic information of the N screen control objects in the first image is acquired and matched against the characteristic information of the N screen control objects that satisfied the preset activation action, and the display sub-screens corresponding to the N screen control objects are finally determined according to the first correspondence and the matching result.
  • For example, the first image includes 4 human hands, namely human hand A, human hand B, human hand C, and human hand D, and the feature information of the four human hands is obtained separately.
  • The feature information of the four human hands is matched against the feature information of human hand 1, human hand 2, human hand 3, and human hand 4 in the second image.
  • If the feature information of human hand A is consistent with the feature information of human hand 1, it is determined that display sub-screen 1, which corresponds to the characteristic information of human hand 1, is the display sub-screen corresponding to human hand A, and that display sub-screen is then controlled by the motion information of human hand A; the rest can be deduced by analogy and will not be repeated.
  • In this way, a correspondence between the feature information of the multiple screen control objects that satisfy the preset activation action and the multiple display sub-screens is established, and the display sub-screen corresponding to each screen control object is determined according to this correspondence and that object's feature information.
  • This realizes separate control of each display sub-screen by its screen control object and improves the accuracy with which the screen control objects control the display sub-screens.
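  • A minimal sketch of steps S301 and S302 under stated assumptions: when the mode is activated, the feature information of each screen control object that performed the preset activation action is stored against a display sub-screen identifier, and objects detected later in the first image are resolved by nearest-feature matching; the identifiers, the Euclidean distance, and the threshold are illustrative choices, not requirements of this application.

```python
from typing import Dict, List, Optional

def establish_first_correspondence(activated_features: List[List[float]]) -> Dict[str, List[float]]:
    """Step S301 sketch: bind the i-th screen control object that performed the preset
    activation action to display sub-screen i, one-to-one."""
    return {f"sub_screen_{i + 1}": feature for i, feature in enumerate(activated_features)}

def resolve_sub_screen(feature: List[float],
                       first_correspondence: Dict[str, List[float]],
                       max_distance: float = 0.5) -> Optional[str]:
    """Step S302 sketch: pick the display sub-screen whose stored feature information is
    closest (Euclidean distance, an illustrative choice) to the newly detected feature."""
    best_id, best_dist = None, max_distance
    for screen_id, stored in first_correspondence.items():
        dist = sum((a - b) ** 2 for a, b in zip(feature, stored)) ** 0.5
        if dist < best_dist:
            best_id, best_dist = screen_id, dist
    return best_id

# Hands 1 and 2 activated the mode; a hand detected later matches hand 1's feature information.
corr = establish_first_correspondence([[0.9, 0.1], [0.1, 0.9]])
print(resolve_sub_screen([0.88, 0.12], corr))   # -> "sub_screen_1"
```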
  • FIG. 7 is a schematic structural diagram of the screen control device provided by an embodiment of the present application.
  • the screen control device may be part or all of a terminal device. The following takes the terminal device as the execution body as an example.
  • the screen control device provided by the embodiment of the present application may include:
  • the first acquisition module 71 is configured to acquire the respective feature information of the N screen control objects and the respective action information of the N screen control objects in the first image.
  • The screen control objects are body parts of the user in the first image, and N is a positive integer greater than or equal to 2;
  • the first determining module 72 is configured to determine a display sub-screen corresponding to each of the N screen control objects according to the respective characteristic information of the N screen control objects, and the display sub-screen is a partial area of the display screen;
  • the control module 73 is configured to control the display sub-screens corresponding to the N screen control objects according to the respective action information of the N screen control objects.
  • the functions of the first determining module and the control module may also be performed by a processing module.
  • The processing module may be, for example, a processor, and the first acquisition module may be a transmission interface of the processor; it can also be said that the first acquisition module is a receiving interface of the processor.
  • the functions of the first determining module and the control module may also be performed by the processor.
  • the screen control device provided in the embodiment of the present application may further include:
  • the second acquisition module 74 is configured to acquire a second image of the user
  • the second determining module 75 is configured to determine the number N of control objects in the second image that meet the preset activation action, and the preset activation action is used to activate the multi-gesture control mode;
  • the segmentation module 76 is used to obtain N display sub-screens presented on the display screen.
  • The segmentation module divides the screen into a corresponding number of display sub-screens according to the number of screen control objects, determined by the second determination module, that satisfy the preset activation action.
  • The second acquisition module and the first acquisition module may both be the transmission interface or the receiving interface of the processor, and the functions of the second determination module and the segmentation module may both be performed by the processing module.
  • The processing module may be, for example, a processor; in this case, the functions of the second determination module and the segmentation module may both be performed by the processor.
  • the screen control device provided in the embodiment of the present application may further include:
  • The establishment module 77 is used to establish a first correspondence between the characteristic information of the N screen control objects that satisfy the preset activation action and the N display sub-screens, where the first correspondence includes a one-to-one correspondence between the characteristic information of the screen control objects and the display sub-screens.
  • the first determining module 72 is specifically configured to:
  • determine the display sub-screens corresponding to the N screen control objects according to the first correspondence and the respective characteristic information of the N screen control objects.
  • The control module 73 is specifically configured to:
  • determine, from the second correspondence, the target screen control operation that matches the action information of the target screen control object, where the target screen control object is any one of the N screen control objects and the second correspondence includes a one-to-one correspondence between multiple pieces of action information and multiple screen control operations; and control, according to the target screen control operation, the display sub-screen corresponding to the target screen control object.
  • the device embodiments provided in this application are merely illustrative, and the module division in FIG. 7 is only a logical function division, and there may be other division methods in actual implementation.
  • multiple modules can be combined or integrated into another system.
  • the mutual coupling between the various modules can be realized through some interfaces. These interfaces are usually electrical communication interfaces, but it is not excluded that they may be mechanical interfaces or other forms of interfaces. Therefore, the modules described as separate components may or may not be physically separated, and may be located in one place or distributed to different locations on the same or different devices.
  • FIG. 8A is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
  • the terminal device provided by the present application includes a processor 81, a memory 82, and a transceiver 83.
  • The memory stores software instructions or computer programs; the processor may be a chip; the transceiver 83 implements the sending and receiving of communication data by the terminal device; and the processor 81 is configured to call the software instructions in the memory to implement the above-mentioned screen control method. For the content and effect, refer to the method embodiments.
  • FIG. 8B is a schematic structural diagram of a terminal device provided by another embodiment of the present application. As shown in FIG. 8B, the terminal device provided by the present application includes a processor 84 and a transmission interface 85.
  • The transmission interface 85 is used to receive the first image of the user obtained by the camera; the processor 84 is used to call the software instructions stored in the memory to perform the following steps: obtaining the respective feature information and respective action information of the N screen control objects in the first image, where the screen control objects are body parts of the user in the first image and N is a positive integer greater than or equal to 2;
  • determining, according to the respective characteristic information of the N screen control objects, the display sub-screens corresponding to the N screen control objects, where a display sub-screen is a partial area of the display screen;
  • and controlling, according to the respective action information of the N screen control objects, the display sub-screens corresponding to the N screen control objects.
  • The transmission interface 85 is further configured to receive a second image of the user acquired by the camera; the processor 84 is further configured to: determine the number N of screen control objects in the second image that satisfy the preset activation action, where the preset activation action is used to activate the multi-gesture control mode; and obtain the N display sub-screens presented on the display screen.
  • The processor 84 is further configured to:
  • establish a first correspondence between the feature information of the N screen control objects that satisfy the preset activation action and the N display sub-screens, where the first correspondence includes a one-to-one correspondence between the feature information of the screen control objects and the display sub-screens. The processor 84 is specifically configured to: determine the display sub-screens corresponding to the N screen control objects according to the first correspondence and the respective characteristic information of the N screen control objects.
  • The processor 84 is further configured to:
  • determine, from the second correspondence, the target screen control operation that matches the action information of the target screen control object, where the target screen control object is any one of the N screen control objects and the second correspondence includes a one-to-one correspondence between multiple pieces of action information and multiple screen control operations; and control, according to the target screen control operation, the display sub-screen corresponding to the target screen control object.
  • FIG. 9 is a schematic diagram of the hardware architecture of an exemplary screen control device provided by an embodiment of the present application. As shown in FIG. 9, the hardware architecture of the screen control device 900 may be applicable to SOC and application processor (AP).
  • The screen control device 900 includes at least one central processing unit (CPU), at least one memory, a graphics processing unit (GPU), a decoder, a dedicated video or graphics processor, a receiving interface, a sending interface, and the like.
  • The screen control device 900 may also include a microprocessor, a microcontroller (MCU), and the like.
  • The above-mentioned parts of the screen control device 900 are coupled through a connector. It should be understood that, in the various embodiments of the present application, coupling refers to mutual connection in a specific manner, including direct connection or indirect connection through other devices, for example through various interfaces, transmission lines, or buses.
  • interfaces are usually electrical communication interfaces, but it is not excluded that they may be mechanical interfaces or other forms of interfaces, which are not limited in this embodiment.
  • In an optional case, the above-mentioned parts are integrated on the same chip; in another optional case, the CPU, GPU, decoder, receiving interface, and sending interface are integrated on one chip, and the various parts inside the chip access an external memory through a bus.
  • the dedicated video/graphics processor may be integrated with the CPU on the same chip, or may exist as a separate processor chip.
  • the dedicated video/graphics processor may be a dedicated image signal processor (ISP).
  • The chip involved in the embodiments of this application is a system manufactured on the same semiconductor substrate by an integrated circuit process, also called a semiconductor chip; it is a collection of integrated circuits formed, using an integrated circuit process, on a substrate (usually a semiconductor material such as silicon), and its outer layer is usually encapsulated by a semiconductor packaging material.
  • The integrated circuit may include various types of functional devices, and each type of functional device includes transistors such as logic gate circuits, metal-oxide-semiconductor (MOS) transistors, bipolar transistors, or diodes, and may also include capacitors, resistors, inductors, and other components.
  • Each functional device can work independently or under the action of necessary driver software, and can realize various functions such as communication, calculation, or storage.
  • the CPU may be a single-CPU processor or a multi-CPU processor; optionally, the CPU may be a processor group composed of multiple processors, between multiple processors Coupled to each other through one or more buses.
  • part of the processing of the image signal or video signal is done by the GPU, part is done by a dedicated video/graphics processor, and it may also be done by software code running on a general-purpose CPU or GPU.
  • The device may also include a memory, which can be used to store computer program instructions, including various computer program code such as an operating system (OS), various user application programs, and program code used to execute the solutions of the present application. The memory can also be used to store video data, image data, and the like. The CPU can be used to execute the computer program code stored in the memory to implement the methods in the embodiments of the present application.
  • The memory may be a non-volatile memory that does not lose data on power-down, such as an embedded multimedia card (EMMC), universal flash storage (UFS), or read-only memory (ROM), or another type of static storage device that can store static information and instructions; it may also be a volatile memory, such as a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, or an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, or the like.
  • the receiving interface may be a data input interface of the processor chip.
  • For example, the receiving interface may be a mobile industry processor interface (MIPI), a high-definition multimedia interface (HDMI), a Display Port (DP), or the like.
  • FIG. 10 is a schematic structural diagram of a terminal device provided by another embodiment of the present application.
  • The terminal device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset interface 170D, a sensor 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
  • the structure illustrated in this embodiment does not constitute a specific limitation on the terminal device 100.
  • the terminal device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented by hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • The processor 110 may include an AP, a modem processor, a GPU, an ISP, a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), and the like.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the terminal device 100 may also include one or more processors 110.
  • the controller may be the nerve center and command center of the terminal device 100. The controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 to store instructions and data.
  • the memory in the processor 110 is a cache memory.
  • the memory can store instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to use the instruction or data again, it can be directly called from the memory. This avoids repeated accesses, reduces the waiting time of the processor 110, and improves the efficiency of the terminal device 100 system.
  • the processor 110 may include one or more interfaces.
  • The interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, and the like.
  • I2C inter-integrated circuit
  • I2S inter-integrated circuit sound
  • PCM pulse code modulation
  • UART universal asynchronous receiver/transmitter
  • the USB interface 130 is an interface that complies with the USB standard specification, and specifically may be a Mini USB interface, a Micro USB interface, a USB Type C interface, and so on.
  • the USB interface 130 can be used to connect a charger to charge the terminal device 100, and can also be used to transfer data between the terminal device 100 and peripheral devices. It can also be used to connect headphones and play audio through the headphones.
  • the interface connection relationship between the modules illustrated in the embodiment of the present application is merely a schematic description, and does not constitute a structural limitation of the terminal device 100.
  • the terminal device 100 may also adopt different interface connection modes in the foregoing embodiments, or a combination of multiple interface connection modes.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 140 may receive the charging input of the wired charger through the USB interface 130.
  • the charging management module 140 may receive wireless charging input through the wireless charging coil of the terminal device 100. While the charging management module 140 charges the battery 142, it can also supply power to the terminal device 100 through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the display screen 194, the camera 193, and the wireless communication module 160.
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
  • the power management module 141 may also be provided in the processor 110.
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the terminal device 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
  • the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the terminal device 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 150 may provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the terminal device 100.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier, etc.
  • the mobile communication module 150 can receive electromagnetic waves by the antenna 1, and perform processing such as filtering, amplifying and transmitting the received electromagnetic waves to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic waves for radiation via the antenna 1.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is processed by the baseband processor and then passed to the application processor.
  • the application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays an image or video through the display screen 194.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 110 and be provided in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied to the terminal device 100, including wireless local area networks (WLAN), Bluetooth, global navigation satellite system (GNSS), frequency modulation (FM), NFC, infrared (IR) technology, etc.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110, perform frequency modulation, amplify it, and convert it into electromagnetic wave radiation via the antenna 2.
  • the antenna 1 of the terminal device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the terminal device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include GSM, GPRS, CDMA, WCDMA, TD-SCDMA, LTE, GNSS, WLAN, NFC, FM, and/or IR technology.
  • the aforementioned GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the Beidou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
  • the terminal device 100 can implement a display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing, connected to the display 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 110 may include one or more GPUs, which execute instructions to generate or change display information.
  • the display screen 194 is used to display images, videos, etc.
  • the display screen 194 includes a display panel.
  • the display panel can adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Miniled, a MicroLed, a Micro-oLed, a quantum dot light-emitting diode (QLED), etc.
  • LCD liquid crystal display
  • OLED organic light-emitting diode
  • AMOLED active-matrix organic light-emitting diode
  • the terminal device 100 may include one or N display screens 194, and N is a positive integer greater than one.
  • the terminal device 100 may implement a shooting function through an ISP, one or more cameras 193, a video codec, a GPU, one or more display screens 194, and an application processor.
  • NPU is a neural-network (NN) computing processor.
  • NN neural-network
  • the NPU can realize applications of intelligent cognition of the terminal device 100, such as image recognition, face recognition, voice recognition, text understanding, and so on.
  • the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the terminal device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save music, photos, videos and other data files in an external memory card.
  • the internal memory 121 may be used to store one or more computer programs, and the one or more computer programs include instructions.
  • the processor 110 can run the above-mentioned instructions stored in the internal memory 121 to enable the terminal device 100 to execute the screen control methods provided in some embodiments of the present application, as well as various functional applications and data processing.
  • the internal memory 121 may include a storage program area and a storage data area. Among them, the storage program area can store the operating system; the storage program area can also store one or more application programs (such as a gallery, contacts, etc.) and so on.
  • the data storage area can store data (such as photos, contacts, etc.) created during the use of the terminal device 100.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), etc.
  • the processor 110 may execute instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor 110 to cause the terminal device 100 to execute the screen control methods provided in the embodiments of the present application, as well as various functional applications and data processing.
  • the terminal device 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. For example, music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into an analog audio signal for output, and also used to convert an analog audio input into a digital audio signal.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 may be provided in the processor 110, or part of the functional modules of the audio module 170 may be provided in the processor 110.
  • the speaker 170A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the terminal device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
  • the receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
  • the microphone 170C, also called a "mic", is used to convert sound signals into electrical signals.
  • the user can make a sound by bringing the mouth close to the microphone 170C, so that the sound signal is input into the microphone 170C.
  • the terminal device 100 may be provided with at least one microphone 170C.
  • the terminal device 100 may be provided with two microphones 170C, which can implement noise reduction functions in addition to collecting sound signals. In other embodiments, the terminal device 100 may also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions.
  • the earphone interface 170D is used to connect wired earphones.
  • the earphone interface 170D can be a USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
  • the sensor 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, and an ambient light sensor 180L , Bone conduction sensor 180M and so on.
  • the pressure sensor 180A is used to sense pressure signals, and can convert the pressure signals into electrical signals.
  • the pressure sensor 180A may be provided on the display screen 194.
  • the capacitive pressure sensor may be composed of at least two parallel plates with conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes. The terminal device 100 determines the intensity of the pressure according to the change in capacitance. When a touch operation acts on the display screen 194, the terminal device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the terminal device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
  • touch operations that act on the same touch location but have different touch operation strengths may correspond to different operation instructions. For example: when a touch operation whose intensity of the touch operation is less than the first pressure threshold is applied to the short message application icon, an instruction to view the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
  • the gyro sensor 180B may be used to determine the movement posture of the terminal device 100.
  • the angular velocity of the terminal device 100 around three axes (i.e., the x, y, and z axes) can be determined by the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization.
  • the gyroscope sensor 180B detects the shake angle of the terminal device 100, calculates the distance that the lens module needs to compensate according to the angle, and allows the lens to counteract the shake of the terminal device 100 through a reverse movement to achieve anti-shake.
  • the gyro sensor 180B can also be used for navigation, somatosensory game scenes and so on.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the terminal device 100 in various directions (generally three-axis). When the terminal device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of the terminal device, and is used in applications such as horizontal and vertical screen switching, and pedometer.
  • the terminal device 100 can measure the distance by infrared or laser. In some embodiments, when shooting a scene, the terminal device 100 may use the distance sensor 180F to measure the distance to achieve fast focusing.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the terminal device 100 emits infrared light to the outside through the light emitting diode.
  • the terminal device 100 uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the terminal device 100. When insufficient reflected light is detected, the terminal device 100 can determine that there is no object near the terminal device 100.
  • the terminal device 100 can use the proximity light sensor 180G to detect that the user holds the terminal device 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in leather case mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used to sense the brightness of the ambient light.
  • the terminal device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived brightness of the ambient light.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the terminal device 100 is in a pocket to prevent accidental touch.
  • the fingerprint sensor 180H (also called a fingerprint reader) is used to collect fingerprints.
  • the terminal device 100 can use the collected fingerprint characteristics to realize fingerprint unlocking, access application locks, fingerprint photographs, fingerprint answering calls, etc.
  • other descriptions of the fingerprint sensor can be found in the international patent application PCT/CN2017/082773 entitled “Method and Terminal Device for Processing Notification", the entire content of which is incorporated in this application by reference.
  • the touch sensor 180K can also be called a touch panel or a touch-sensitive surface.
  • the touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 together form what is called a touch screen.
  • the touch sensor 180K is used to detect touch operations acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • the display screen 194 may provide visual output related to touch operations.
  • the touch sensor 180K may also be disposed on the surface of the terminal device 100, which is different from the position of the display screen 194.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor 180M can obtain the vibration signal of the vibrating bone mass of the human voice.
  • the bone conduction sensor 180M can also contact the human pulse and receive the blood pressure pulse signal.
  • the bone conduction sensor 180M may also be provided in the earphone, combined with the bone conduction earphone.
  • the audio module 170 can parse the voice signal based on the vibration signal of the vibrating bone block of the voice obtained by the bone conduction sensor 180M, and realize the voice function.
  • the application processor may analyze the heart rate information based on the blood pressure beat signal obtained by the bone conduction sensor 180M, and realize the heart rate detection function.
  • the button 190 includes a power button, a volume button, and so on.
  • the button 190 may be a mechanical button or a touch button.
  • the terminal device 100 may receive key input, and generate key signal input related to user settings and function control of the terminal device 100.
  • the SIM card interface 195 is used to connect to the SIM card.
  • the SIM card can be inserted into the SIM card interface 195 or pulled out from the SIM card interface 195 to achieve contact and separation with the terminal device 100.
  • the terminal device 100 may support 1 or N SIM card interfaces, and N is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM cards, Micro SIM cards, SIM cards, etc.
  • the same SIM card interface 195 can insert multiple cards at the same time. The types of the multiple cards can be the same or different.
  • the SIM card interface 195 can also be compatible with different types of SIM cards.
  • the SIM card interface 195 may also be compatible with external memory cards.
  • the terminal device 100 interacts with the network through the SIM card to realize functions such as call and data communication.
  • the terminal device 100 uses an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the terminal device 100 and cannot be separated from the terminal device 100.
  • the embodiments of the present application also provide a computer-readable storage medium.
  • the computer-readable storage medium stores computer-executable instructions.
  • when the computer-executable instructions are executed, the user equipment executes the aforementioned various possible methods.
  • the computer-readable medium includes a computer storage medium and a communication medium.
  • the communication medium includes any medium that facilitates the transfer of a computer program from one place to another.
  • the storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
  • An exemplary storage medium is coupled to the processor, so that the processor can read information from the storage medium and can write information to the storage medium.
  • the storage medium may also be an integral part of the processor.
  • the processor and the storage medium may be located in the ASIC.
  • the ASIC may be located in the user equipment.
  • the processor and the storage medium may also exist as discrete components in the communication device.
  • a person of ordinary skill in the art can understand that all or part of the steps in the foregoing method embodiments can be implemented by a program instructing relevant hardware.
  • the aforementioned program can be stored in a computer readable storage medium.
  • when the program is executed, the steps of the foregoing method embodiments are performed; and the foregoing storage medium includes: ROM, RAM, a magnetic disk, an optical disk, or other media that can store program code.

Abstract

The present application provides a screen control method, device and apparatus, and a storage medium. Said method comprises: acquiring respective feature information of N screen control objects in a first image and respective action information of the N screen control objects, the screen control objects being body parts of a user in the first image, and N being a positive integer greater than or equal to 2; according to the respective feature information of the N screen control objects, determining display sub-screens respectively corresponding to the N screen control objects, the display sub-screens being partial areas of a display screen; and according to the respective action information of the N screen control objects, controlling the display sub-screens respectively corresponding to the N screen control objects. In this way, split-screen control of a screen is implemented, improving the utilization rate of the screen, and enhancing the user experience.

Description

Screen control method, device, equipment and storage medium
Technical Field
This application relates to the field of screen control technology, and in particular, to a screen control method, device, equipment, and storage medium.
Background
With the development of science and technology and terminal devices, devices and systems with large screens are used in more and more fields because of their intuitiveness and ease of use. At the same time, the manipulation of large screens also brings corresponding problems to people. Therefore, how to better use and control large-screen devices becomes particularly important. Normally, a large screen can be controlled by means of a remote control, buttons, gestures, voice control, and the like.
In the prior art, when a screen is controlled through gestures, a single person's gesture usually interacts with the large screen, and the large screen is controlled correspondingly through the gesture actions, so as to achieve the effect of controlling the screen through gestures.
However, in the prior art, when the screen is controlled by gestures, the large screen can only be controlled by a single person's gestures, resulting in low screen utilization.
Summary
The embodiments of the present application provide a screen control method, device, equipment, and storage medium, which are used to implement split-screen control of a screen by multiple screen control objects, which not only improves screen utilization but also enhances user experience.
In a first aspect, this application provides a screen control method, including:
acquiring respective feature information of N screen control objects in a first image and respective action information of the N screen control objects, where the screen control objects are body parts of users in the first image, and N is a positive integer greater than or equal to 2; determining, according to the respective feature information of the N screen control objects, display sub-screens respectively corresponding to the N screen control objects, where a display sub-screen is a partial area of the display screen; and controlling, according to the respective action information of the N screen control objects, the display sub-screens respectively corresponding to the N screen control objects. In the embodiments of this application, the display sub-screen corresponding to a screen control object is determined according to the feature information of the screen control object, and the display sub-screen corresponding to the screen control object is controlled according to the action information of the screen control object, thereby realizing split-screen control of the screen, which not only improves screen utilization but also enhances user experience.
Optionally, before acquiring the respective feature information of the N screen control objects in the first image and the respective action information of the N screen control objects, the screen control method provided in the embodiments of the present application further includes:
acquiring a second image of the users; determining the number N of screen control objects in the second image that satisfy a preset activation action, where the preset activation action is used to activate a multi-gesture screen control mode; and presenting N display sub-screens on the display screen.
In the embodiments of the present application, the number of display sub-screens into which the display screen is presented is determined according to the number of screen control objects in the second image that satisfy the preset activation action, the multi-gesture screen control mode is activated based on the preset activation action, and only a screen control object that satisfies the preset activation action can become one of the controllers of the multiple display sub-screens, which improves the anti-interference capability of split-screen control. Illustratively, the screen control object is a user's hand. If there are four people in front of the screen but only three of them want to participate in split-screen control, only the hands that present the preset activation action can participate in split-screen control, which prevents the fourth person's hand from interfering with the split-screen control.
Optionally, before acquiring the respective feature information of the N screen control objects in the first image and the respective action information of the N screen control objects, the screen control method provided in the embodiments of the present application further includes:
establishing a first correspondence between the feature information of the N screen control objects that satisfy the preset activation action and the N display sub-screens, where the first correspondence includes a one-to-one correspondence between the feature information of a screen control object and a display sub-screen; and determining, according to the respective feature information of the N screen control objects, the display sub-screens respectively corresponding to the N screen control objects includes: determining, according to the first correspondence and the respective feature information of the N screen control objects, the display sub-screens respectively corresponding to the N screen control objects.
In the embodiments of the present application, by establishing the correspondence between the feature information of multiple screen control objects that satisfy the preset activation action and multiple display sub-screens, and determining the display sub-screens respectively corresponding to the screen control objects according to the correspondence and the feature information of the screen control objects, separate corresponding control of the display sub-screens by the screen control objects is realized, and the accuracy of the control of the display sub-screens by the screen control objects is improved.
Optionally, controlling, according to the respective action information of the N screen control objects, the display sub-screens respectively corresponding to the N screen control objects includes:
determining, in a second correspondence, a target screen control operation that matches the action information of a target screen control object, where the target screen control object is any one of the N screen control objects, and the second correspondence includes a one-to-one correspondence between multiple pieces of action information and multiple screen control operations; and controlling, according to the target screen control operation, the display sub-screen corresponding to the target screen control object.
In the embodiments of the present application, by determining the target screen control operation of the target screen control object and controlling the display sub-screen corresponding to the target screen control object according to the target screen control operation, corresponding control of the display sub-screen corresponding to a screen control object through the action information and feature information of the screen control object is realized.
The following describes the screen control device, equipment, storage medium, and computer program product provided by the embodiments of the present application. For their content and effects, reference may be made to the screen control method provided in the first aspect and the optional manners of the first aspect of the embodiments of the present application, and details are not repeated here.
In a second aspect, an embodiment of the present application provides a screen control device, including:
a first acquisition module, configured to acquire respective feature information of N screen control objects in a first image and respective action information of the N screen control objects, where the screen control objects are body parts of users in the first image, and N is a positive integer greater than or equal to 2; a first determining module, configured to determine, according to the respective feature information of the N screen control objects, display sub-screens respectively corresponding to the N screen control objects, where a display sub-screen is a partial area of the display screen; and a control module, configured to control, according to the respective action information of the N screen control objects, the display sub-screens respectively corresponding to the N screen control objects.
Optionally, the screen control device provided in the embodiment of the present application further includes:
a second acquisition module, configured to acquire a second image of the users; a second determining module, configured to determine the number N of screen control objects in the second image that satisfy a preset activation action, where the preset activation action is used to activate a multi-gesture screen control mode; and a splitting module, configured to obtain N display sub-screens presented on the display screen.
Optionally, the screen control device provided in the embodiment of the present application further includes:
an establishing module, configured to establish a first correspondence between the feature information of the N screen control objects that satisfy the preset activation action and the N display sub-screens, where the first correspondence includes a one-to-one correspondence between the feature information of a screen control object and a display sub-screen; and the first determining module is specifically configured to determine, according to the first correspondence and the respective feature information of the N screen control objects, the display sub-screens respectively corresponding to the N screen control objects.
Optionally, the control module is specifically configured to:
determine, in a second correspondence, a target screen control operation that matches the action information of a target screen control object, where the target screen control object is any one of the N screen control objects, and the second correspondence includes a one-to-one correspondence between multiple pieces of action information and multiple screen control operations; and control, according to the target screen control operation, the display sub-screen corresponding to the target screen control object.
In a third aspect, an embodiment of the present application provides a device, including:
a processor and a transmission interface, where the transmission interface is configured to receive a first image of users acquired by a camera, and the processor is configured to invoke software instructions stored in a memory to perform the following steps:
acquiring respective feature information of N screen control objects in the first image and respective action information of the N screen control objects, where the screen control objects are body parts of the users in the first image, and N is a positive integer greater than or equal to 2; determining, according to the respective feature information of the N screen control objects, display sub-screens respectively corresponding to the N screen control objects, where a display sub-screen is a partial area of the display screen; and controlling, according to the respective action information of the N screen control objects, the display sub-screens respectively corresponding to the N screen control objects.
Optionally, the transmission interface is further configured to receive a second image of the users acquired by the camera, and the processor is further configured to:
determine the number N of screen control objects in the second image that satisfy a preset activation action, where the preset activation action is used to activate a multi-gesture screen control mode; and obtain N display sub-screens presented on the display screen.
Optionally, the processor is further configured to:
establish a first correspondence between the feature information of the N screen control objects that satisfy the preset activation action and the N display sub-screens, where the first correspondence includes a one-to-one correspondence between the feature information of a screen control object and a display sub-screen; and the processor is specifically configured to determine, according to the first correspondence and the respective feature information of the N screen control objects, the display sub-screens respectively corresponding to the N screen control objects.
Optionally, the processor is further configured to:
determine, in a second correspondence, a target screen control operation that matches the action information of a target screen control object, where the target screen control object is any one of the N screen control objects, and the second correspondence includes a one-to-one correspondence between multiple pieces of action information and multiple screen control operations; and control, according to the target screen control operation, the display sub-screen corresponding to the target screen control object.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing instructions that, when run on a computer or a processor, cause the computer or the processor to execute the screen control method provided in the first aspect and the optional manners of the first aspect of the embodiments of the present application.
A fifth aspect of the present application provides a computer program product containing instructions that, when run on a computer or a processor, cause the computer or the processor to execute the screen control method provided in the first aspect or the optional manners of the first aspect.
According to the screen control method, device, equipment, and storage medium provided by the embodiments of the present application, respective feature information of N screen control objects in a first image and respective action information of the N screen control objects are acquired, where the screen control objects are body parts of users in the first image, and N is a positive integer greater than or equal to 2; then, according to the respective feature information of the N screen control objects, the display sub-screens respectively corresponding to the N screen control objects are determined, where a display sub-screen is a partial area of the display screen; and finally, according to the respective action information of the N screen control objects, the display sub-screens respectively corresponding to the N screen control objects are controlled. Since the display sub-screen corresponding to a screen control object is determined according to the feature information of the screen control object, and the display sub-screen corresponding to the screen control object is controlled according to the action information of the screen control object, split-screen control of the screen is realized, which not only improves screen utilization but also enhances user experience.
Description of the drawings
Fig. 1 is a schematic diagram of an exemplary application scenario provided by an embodiment of the present application;
Fig. 2 is a schematic diagram of another exemplary application scenario provided by an embodiment of the present application;
Fig. 3 is a flowchart of a screen control method provided by an embodiment of the present application;
Fig. 4 is an exemplary neural network application architecture diagram provided by an embodiment of the present application;
Fig. 5 is a flowchart of a screen control method provided by another embodiment of the present application;
Fig. 6 is a flowchart of a screen control method provided by yet another embodiment of the present application;
Fig. 7 is a schematic structural diagram of a screen control device provided by an embodiment of the present application;
Fig. 8A is a schematic structural diagram of a terminal device provided by an embodiment of the present application;
Fig. 8B is a schematic structural diagram of a terminal device provided by another embodiment of the present application;
Fig. 9 is a schematic diagram of the hardware architecture of an exemplary screen control device provided by an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a terminal device provided by yet another embodiment of the present application.
Detailed Description of the Embodiments
It should be understood that although the terms first, second, third, fourth, etc. may be used in the embodiments of the present invention to describe user images, these user images should not be limited to these terms. These terms are only used to distinguish user images from each other. For example, without departing from the scope of the embodiments of the present application, a first user image may also be referred to as a second user image, and similarly, a second user image may also be referred to as a first user image.
With the development of science and technology and terminal devices, devices and systems with large screens are used in more and more fields because of their intuitiveness and ease of use. At the same time, the manipulation of large screens also brings corresponding problems to people. Therefore, how to better use and control large-screen devices becomes particularly important. Normally, a large screen can be controlled by means of a remote control, buttons, gestures, voice control, and the like. However, in the prior art, when the screen is controlled by gestures, a single person's gesture usually interacts with the large screen and the large screen is controlled correspondingly through the gesture actions, resulting in low screen utilization. In order to solve the foregoing problems, the embodiments of the present application provide a screen control method, device, equipment, and storage medium.
An exemplary application scenario of the embodiments of the present application is introduced below.
The embodiments of this application can be applied to a terminal device having a display screen. The terminal device may be, for example, a television, a computer, a screen projection device, etc., which is not limited in the embodiments of this application. Taking the terminal device being a television as an example, the application scenario in the embodiments of this application is introduced. Fig. 1 is a schematic diagram of an exemplary application scenario provided by an embodiment of the present application. As shown in Fig. 1, a television 10 is connected to a camera 11 through a universal serial bus or another high-speed bus 12. When multiple users watch the television, each user may want to watch a different TV program, or there may be a video game in which multiple users participate, so the television screen needs to be split according to the feature information of each user, and each user can separately control the display sub-screen corresponding to that user's feature information. For example, as shown in Fig. 1, an image or video of the users facing the television 10 is captured by the camera 11, and after processing and judgment, the display screen of the television 10 is divided into a display sub-screen 1 and a display sub-screen 2, where the display sub-screen 1 and the display sub-screen 2 can display different playback content; user images can be continuously acquired through the camera 11, and then, by processing the user images, the display sub-screen 1 and the display sub-screen 2 are controlled respectively. The embodiments of this application are not limited thereto. In addition, the television 10 may further include a video signal source interface 13, a wired network interface or wireless network interface module 14, a peripheral device interface 15, etc., which is not limited in the embodiments of this application.
Another exemplary application scenario of the embodiments of the present application is introduced below.
The embodiments of this application can be applied to a display device. Fig. 2 is a schematic diagram of another exemplary application scenario provided by an embodiment of this application. As shown in Fig. 2, the display device may include a central processing unit, a system memory, an edge artificial intelligence processor core, and an image memory. The central processing unit is connected to the system memory and can be used to execute the screen control method provided in the embodiments of this application. The central processing unit may also be connected to the edge artificial intelligence processor core, which can be used to implement the image processing part of the screen control method provided in the embodiments of this application. The edge artificial intelligence processor core is connected to the image memory, which can be used to store the images acquired by the camera, and the camera is connected to the display device through a universal serial bus or another high-speed bus.
Based on this, the embodiments of the present application provide a screen control method, device, equipment, and storage medium.
Fig. 3 is a flowchart of a screen control method provided by an embodiment of the present application. The method can be executed by the screen control device provided by the embodiments of the present application, and the screen control device may be part or all of a terminal device, for example, a processor in the terminal device. The following takes the terminal device as the execution body as an example to introduce the screen control method provided in the embodiments of the present application. As shown in Fig. 3, the screen control method provided by the embodiments of the present application may include:
Step S101: acquiring respective feature information of N screen control objects in a first image and respective action information of the N screen control objects, where the screen control objects are body parts of users in the first image, and N is a positive integer greater than or equal to 2.
The first image of the users can be acquired through a camera or an image sensor. The camera may be provided in the terminal device, or may be provided independently of the terminal device and connected to the terminal device in a wired or wireless manner. The embodiments of this application do not limit the model, installation position, etc. of the camera, as long as the first image of the users can be acquired. The camera may collect the first image of the users through video capture, image capture, or other means, and the embodiments of this application do not limit the specific manner of acquiring the first image through the camera. In an optional case, for the processor chip in the terminal device, the transmission interface of the processor chip receiving the image of the users acquired by the camera or the image sensor can also be regarded as acquiring the first image of the users, that is, the processor chip acquires the first image of the users through the transmission interface.
After the first image of the users is acquired, the respective feature information of the N screen control objects and the respective action information of the N screen control objects are acquired according to the first image, where the N screen control objects are body parts of the users in the first image. The embodiments of this application do not limit the specific body part, and the judgment of a user's body part in the first image can be realized by setting a preset body part. In a possible implementation, the preset body part may be a human hand. Correspondingly, the N screen control objects are human hands in the first image, and the feature information of the N screen control objects is the hand feature information of the N human hands in the first image. Exemplarily, the hand feature information includes but is not limited to handprints, hand shape, hand size, or hand skin color, and the respective action information of the N screen control objects is the hand action information of the N human hands in the first image. In another possible implementation, the preset body part may be a human face. Correspondingly, the N screen control objects are N human faces in the first image, the feature information of the N screen control objects is the facial feature information of the N human faces in the first image, and the respective action information of the N screen control objects is the facial action information of the N human faces in the first image, such as facial expressions. The embodiments of this application are not limited thereto.
The respective feature information of the N screen control objects is used to distinguish the N screen control objects. Exemplarily, if the screen control object is a human hand, the hand feature information of the human hand is used to distinguish different human hands; if the screen control object is a human face, the feature information of the face is used to distinguish different faces.
The embodiments of this application do not limit the specific implementation of how to acquire the respective feature information of the screen control objects and the respective action information of the screen control objects according to the first image. In a possible implementation, this can be done by means of machine learning, for example, a convolutional neural network (CNN) model. Exemplarily, taking the preset body part being a human hand as an example, the first image is input into the CNN model, and through the processing of the CNN model, the hand feature information of each human hand in the first image and the hand action information of each human hand are acquired.
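By way of illustration only, the following Python sketch shows one possible shape of such a CNN-based pipeline for step S101. The `HandNet` class, its outputs, and the data structures are assumptions made for the example and are not part of the disclosed embodiments; a real implementation would plug in an actual detection and gesture-recognition model.

```python
from dataclasses import dataclass
from typing import List, Sequence

@dataclass
class ScreenControlObject:
    """One detected screen control object (here: a human hand) in the first image."""
    feature: Sequence[float]   # embedding used to distinguish different hands
    action: str                # recognized hand action, e.g. "swipe_left", "fist"
    bbox: tuple                # (x, y, w, h) of the hand in the image

class HandNet:
    """Stand-in for a CNN that detects hands and outputs features and gestures."""
    def infer(self, image) -> List[ScreenControlObject]:
        # A real model would run detection, feature extraction, and gesture
        # classification here; this placeholder returns a fixed result.
        return [ScreenControlObject(feature=[0.12, 0.87, 0.44],
                                    action="swipe_left",
                                    bbox=(40, 60, 120, 150))]

def acquire_objects(first_image, model: HandNet) -> List[ScreenControlObject]:
    """Step S101: obtain feature and action information of each screen control object."""
    return model.infer(first_image)
```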
Fig. 4 is an exemplary neural network application architecture diagram provided by an embodiment of the present application. As shown in Fig. 4, the exemplary neural network application architecture provided by the embodiment of the present application may include an application program entry 41, a model external interface 42, a deep learning framework 43, a device driver 44, a central processing unit 45, a graphics processor 46, a network processor 47, and a digital processor 48. The application program entry 41 is used to select a neural network model; the model external interface 42 is used to call the selected neural network model; and the deep learning framework 43 is used to process the input first user image through the neural network model. Exemplarily, the deep learning framework 43 includes an environment manager 431, a model manager 432, a task scheduler 433, a task executor 434, and an event manager 435. The environment manager 431 is used to control the startup and shutdown of the device-related environment, the model manager 432 is responsible for operations such as loading and unloading neural network models, the task scheduler 433 is responsible for managing the sequence in which neural network model tasks are scheduled, the task executor 434 is responsible for executing the tasks of the neural network models, and the event manager 435 is responsible for the notification of various events. The neural network application architecture provided by the embodiments of the present application is not limited to this.
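The roles named above can be pictured with a minimal, hypothetical sketch such as the following; the class and method names are illustrative assumptions, since the embodiment only describes the responsibilities of each role.

```python
# Hypothetical sketch of the manager roles in the deep learning framework of Fig. 4.

class ModelManager:
    def __init__(self):
        self.models = {}
    def load(self, name, model):      # responsible for loading a neural network model
        self.models[name] = model
    def unload(self, name):           # responsible for unloading a model
        self.models.pop(name, None)

class TaskScheduler:
    def __init__(self):
        self.queue = []
    def submit(self, task):           # decides the sequence in which model tasks run
        self.queue.append(task)
    def next_task(self):
        return self.queue.pop(0) if self.queue else None

class TaskExecutor:
    def run(self, task):              # executes a scheduled neural network task
        return task()

class EventManager:
    def notify(self, event):          # notifies listeners of framework events
        print(f"event: {event}")
```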
步骤S102:根据N个控屏对象各自的特征信息,确定N个控屏对象各自对应的显示子屏幕,显示子屏幕为显示屏幕的部分区域。Step S102: Determine a display sub-screen corresponding to each of the N screen control objects according to the respective characteristic information of the N screen control objects, and the display sub-screen is a partial area of the display screen.
在获取到N个控屏对象各自的特征信息之后,确定N个控屏对象各自对应的显示子屏幕,在一种可能的实施方式中,获取到4个控屛对象及4个控屛对象各自的特征信息,对应的,将显示屏幕切分成4个显示子屏幕,每个显示子屏幕绑定一种控屛对象的特征信息,从而控屛对象只能控制与该控屛对象的特征信息相绑定的显示子屏幕;在一种可能的实施方式中,可以通过对每个显示子屏幕分别设置预设特征信息,然后根据N个控屏对象各自的特征信息与预设特征信息,确定N个控屏对象各自对应的显示子屏幕。例如:屏幕被切分为4个显示子屏幕,且每个显示子屏幕分别存在一一对应的预设特征信息,对控屏对象的特征信息与预设特征信息进行匹配,然后根据匹配结果确定控屏对象对应的显示子屏幕。本申请实施例对如何根据N个控屏对象各自的特征信息,确定N个控屏对象各自对应的显示子屏幕的具体实施方式不做限制。另外,在一种可能的实施方式中,显示子屏幕可以为显示屏幕的部分区域,例如:显示屏幕切分为不同的显示子屏幕;在另一种可能的实施方式中,显示子屏幕为显示屏幕的全部区域,例如:显示屏幕为多通道的显示方式,可以实现同一显示屏幕同时输出多个不同画面的功能,并具有多通道音频输出,用户只要通过佩戴不同的眼镜及耳机就可以分别收看到两个不同节目等,本申请实施例对显示子屏幕的区域范围以及切分方式不做限制。After acquiring the respective characteristic information of the N screen control objects, determine the corresponding display sub-screens of the N screen control objects. In a possible implementation manner, 4 control objects and 4 control objects are obtained. Correspondingly, the display screen is divided into 4 display sub-screens, and each display sub-screen is bound to the characteristic information of a control object, so that the control object can only control the characteristic information of the control object. Bound display sub-screens; in a possible implementation, preset feature information can be set for each display sub-screen, and then N can be determined according to the respective feature information and preset feature information of the N control screen objects Each control screen object corresponds to the display sub-screen. For example: the screen is divided into 4 display sub-screens, and each display sub-screen has a one-to-one corresponding preset feature information, and the feature information of the control screen object is matched with the preset feature information, and then determined according to the matching result The display sub-screen corresponding to the control screen object. The embodiment of the present application does not limit the specific implementation of how to determine the display sub-screen corresponding to each of the N screen control objects according to the respective characteristic information of the N screen control objects. In addition, in a possible implementation manner, the display sub-screen may be a partial area of the display screen, for example: the display screen is divided into different display sub-screens; in another possible implementation manner, the display sub-screen is a display All areas of the screen, for example: the display screen is a multi-channel display mode, which can realize the function of outputting multiple different pictures at the same time on the same display screen, and has multi-channel audio output. Users can watch separately by wearing different glasses and headphones For two different programs, etc., the embodiment of the present application does not limit the area of the display sub-screen and the splitting method.
Determining the display sub-screen corresponding to each of the N screen control objects can be implemented according to identifiers of the display sub-screens and identifiers of the screen control objects. Exemplarily, each display sub-screen is first identified according to its preset feature information. This embodiment of the application does not limit the specific identification method of the display sub-screens, which may use codes, numbers, symbols, text, or the like; for example, display sub-screen 1 corresponds to preset feature information 1, display sub-screen 2 corresponds to preset feature information 2, and so on. The feature information of the N screen control objects in the first image is then detected, and the N screen control objects are identified according to their feature information. This embodiment of the application does not limit the specific identification method of the screen control objects either; for example, if the feature information of a screen control object matches preset feature information 1, that screen control object is identified as screen control object 1, and so on.
This embodiment of the application does not limit the specific way in which the screen control objects are identified. In one possible implementation, the feature information of the N screen control objects in the first image can be detected through a CNN model, and the N screen control objects are identified according to that feature information. In another possible implementation, the coordinate information of each screen control object in the first image can be detected through the CNN, each screen control object is cropped out of the original image according to its coordinate information and processed as a separate image, and the feature information of the screen control object in each separate image is detected and used to identify that image.
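The cropping step in the second implementation might look like the following sketch, assuming the detector returns axis-aligned bounding boxes in (x1, y1, x2, y2) pixel coordinates; the box format and array layout are assumptions.

```python
# Sketch: given per-object bounding boxes (as a detection model might return),
# crop each screen control object out of the original image so it can be
# processed and identified separately.
import numpy as np


def crop_control_objects(image, boxes):
    """image: H x W x C array; boxes: list of (x1, y1, x2, y2) in pixel coordinates."""
    crops = []
    h, w = image.shape[:2]
    for (x1, y1, x2, y2) in boxes:
        # clamp the box to the image, then slice the region out
        x1, y1 = max(0, int(x1)), max(0, int(y1))
        x2, y2 = min(w, int(x2)), min(h, int(y2))
        if x2 > x1 and y2 > y1:
            crops.append(image[y1:y2, x1:x2].copy())
    return crops


# usage: crops = crop_control_objects(np.zeros((480, 640, 3), dtype=np.uint8),
#                                     [(10, 20, 110, 160)])
```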
After each screen control object has been identified, the display sub-screen corresponding to each screen control object is determined according to the correspondence between the identifiers of the screen control objects and the identifiers of the display sub-screens. For example, the first image includes three screen control objects, identified as screen control object 1, screen control object 2, and screen control object 3, and the screen is divided into three display sub-screens, identified as display sub-screen 1, display sub-screen 2, and display sub-screen 3; the display sub-screen corresponding to screen control object 1 is display sub-screen 1, the display sub-screen corresponding to screen control object 2 is display sub-screen 2, and the display sub-screen corresponding to screen control object 3 is display sub-screen 3. This embodiment of the application is not limited to this.
Step S103: Control, according to the respective action information of the N screen control objects, the display sub-screen corresponding to each of the N screen control objects.

After the display sub-screens corresponding to the N screen control objects are determined, the display sub-screen corresponding to each screen control object is controlled according to that object's action information. Taking the exemplary correspondence between screen control objects and display sub-screens described above as an example, display sub-screen 1 is controlled according to the action information of screen control object 1, display sub-screen 2 is controlled according to the action information of screen control object 2, and display sub-screen 3 is controlled according to the action information of screen control object 3. This embodiment of the application does not limit how the display sub-screen corresponding to a screen control object is controlled according to that object's action information.

To control the display sub-screen corresponding to a screen control object according to its action information, in one possible implementation, controlling the display sub-screens corresponding to the N screen control objects according to their respective action information includes:
determining, in a second correspondence, a target screen control operation that matches the action information of a target screen control object, where the target screen control object is any one of the N screen control objects and the second correspondence includes a one-to-one correspondence between multiple pieces of action information and multiple screen control operations; and controlling, according to the target screen control operation, the display sub-screen corresponding to the target screen control object. In the second correspondence, a piece of action information can be understood as a control instruction for the screen, and a screen control operation specifies how the display sub-screen is actually controlled. The second correspondence between preset action information and screen control operations is established in advance. This embodiment of the application does not limit the specific mapping between the multiple pieces of action information and the multiple screen control operations, as long as the screen control operation corresponding to a piece of action information can be performed according to that action information. For example, when the action information is the gesture "OK", the corresponding screen control operation is "confirm"; when the action information is the gesture "single finger pointing down", the corresponding screen control operation is "move the selection box down"; and when the action information is the gesture "thumbs up", the corresponding screen control operation is "return".

The target screen control operation matching the action information of the target screen control object is determined among the multiple pieces of action information in the second correspondence, where the target screen control object is any one of the N screen control objects.

The action information of the N screen control objects may include invalid action information. Therefore, the target screen control operation matching the action information of the target screen control object needs to be determined among the multiple pieces of action information in the second correspondence, so as to decide whether the display sub-screen needs to be controlled and, if so, how. Specifically, a neural network model can be used to match the action information of the target screen control object against the multiple pieces of action information in the second correspondence. If none of them match, the action information of the target screen control object is invalid action information; if the action information of the target screen control object matches any piece of action information in the second correspondence, the screen control operation corresponding to the matched action information is determined as the target screen control operation.

Finally, the display sub-screen corresponding to the target screen control object is controlled according to the target screen control operation.

After the target screen control operation of the target screen control object is determined, the display sub-screen corresponding to the target screen control object is controlled according to that operation. By determining the target screen control operation of the target screen control object and controlling the corresponding display sub-screen accordingly, the display sub-screen corresponding to each screen control object is controlled through the action information and feature information of that object.
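Putting the pieces above together, a minimal sketch of the second correspondence and its use could look like this. The gesture names, the operation names, and the sub-screen apply() interface are hypothetical, and the embodiment itself matches action information with a neural network model rather than by string equality.

```python
# Sketch: action information is looked up in a one-to-one table of screen
# control operations, unmatched actions are treated as invalid, and the matched
# operation is applied to the sub-screen bound to that screen control object.
SECOND_CORRESPONDENCE = {
    "ok":          "confirm",
    "finger_down": "move_selection_down",
    "thumb_up":    "back",
}


def control_subscreen(action, obj_id, assignment, subscreens):
    """assignment: obj_id -> sub-screen id; subscreens: sub-screen id -> object with apply()."""
    operation = SECOND_CORRESPONDENCE.get(action)
    if operation is None:
        return None                          # invalid action information: ignore it
    screen_id = assignment.get(obj_id)
    if screen_id is None:
        return None                          # object is not bound to any sub-screen
    subscreens[screen_id].apply(operation)   # hypothetical sub-screen API
    return operation
```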
In the screen control method provided in this embodiment of the application, a first image of the user is obtained, and the respective feature information and action information of N screen control objects are obtained from the first image, where the feature information is used to distinguish the N screen control objects. The display sub-screen corresponding to each of the N screen control objects is then determined according to that object's feature information, where a display sub-screen is part or all of the display screen, and finally the display sub-screen corresponding to each screen control object is controlled according to that object's action information. Because the display sub-screen corresponding to a screen control object is determined from its feature information and controlled according to its action information, flexible split-screen control of the screen is achieved, which not only improves screen utilization but also enhances user experience.
Optionally, to implement split-screen control of the screen, a multi-gesture screen control mode also needs to be activated so that the screen can be divided according to the user's needs. FIG. 5 is a flowchart of a screen control method provided by another embodiment of this application. The method may be executed by the screen control apparatus provided in the embodiments of this application, and the screen control apparatus may be part or all of a terminal device. The following takes the terminal device as the execution body as an example to introduce the screen control method provided in this embodiment. As shown in FIG. 5, before step S101, the screen control method provided in this embodiment of the application may further include:
Step S201: Obtain a second image of the user.

For the way of obtaining the second image of the user, reference may be made to the description of obtaining the first image in step S101, and details are not repeated in this embodiment. The second image includes the screen control objects that satisfy the preset start action.

To save energy, the camera switch may be turned on only when the second image of the user is about to be captured; this embodiment of the application does not limit this.
Step S202: Determine the number N of screen control objects in the second image that satisfy the preset start action, where the preset start action is used to activate the multi-gesture screen control mode.

There may be multiple screen control objects in the second image. Taking human hands as the screen control objects as an example, the second image may contain several hands, some of which may be invalid screen control objects. When the multi-gesture screen control mode is activated, the invalid screen control objects can be filtered out by means of the preset start action, so that the number of display sub-screens can be determined accurately. This embodiment of the application does not limit the specific form of the preset start action. In one possible implementation, if the screen control objects are human hands, the preset start action may be a preset gesture, and the number N of hands in the second image that present the preset gesture is determined; in another possible implementation, if the screen control objects are human faces, the preset start action may be a preset facial expression, and the number N of faces in the second image that present the preset expression is determined.

In addition, this embodiment of the application does not limit how the number N of screen control objects in the second image that satisfy the preset start action is determined. In one possible implementation, the multiple screen control objects in the second image and their action information are obtained first, and it is then judged whether the action information of each screen control object satisfies the preset start action, so as to determine the number N of screen control objects in the second image that satisfy the preset start action. In another possible implementation, the screen control objects in the second image that satisfy the preset start action can be detected directly to determine their number N.
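A minimal sketch of the counting step is shown below, assuming a detector has already produced a list of candidate screen control objects with their recognized actions; the detection format and the example start action name are assumptions.

```python
# Sketch of step S202: count how many detected screen control objects in the
# second image present the preset start action.
def count_start_gestures(detections, preset_action="open_palm"):
    """detections: list of dicts like {"action": ..., "features": ...}."""
    qualified = [d for d in detections if d.get("action") == preset_action]
    return len(qualified), qualified    # N, plus the objects that will own sub-screens
```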
Step S203: Present N display sub-screens on the display screen.

After the number N of screen control objects in the second image that satisfy the preset start action is determined, N display sub-screens are presented on the display screen. This embodiment of the application does not limit how the N display sub-screens are presented; for example, the display screen may be divided into N display sub-screens, and the specific way of dividing the display screen into N display sub-screens is not limited either. In one possible implementation, the display screen is divided into N equal display sub-screens, or the size and position of the N display sub-screens may be set according to user requirements; this embodiment of the application does not limit the size or positional relationship of each display sub-screen. In another possible implementation, the display screen may be divided into N channels, and different images are displayed through the multiple channels.
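One simple way to realize the equal division mentioned above is sketched below; the rectangle representation and the vertical-strip layout are assumptions, and real embodiments may size and place the sub-screens differently (or use multi-channel output).

```python
# Sketch of step S203: split the screen into N equal vertical regions.
def split_screen(width, height, n):
    """Return a list of (x, y, w, h) rectangles, one per display sub-screen."""
    if n <= 0:
        return []
    base = width // n
    rects = []
    for i in range(n):
        w = base if i < n - 1 else width - base * (n - 1)  # last strip absorbs the remainder
        rects.append((i * base, 0, w, height))
    return rects


# usage: split_screen(1920, 1080, 3)
# -> [(0, 0, 640, 1080), (640, 0, 640, 1080), (1280, 0, 640, 1080)]
```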
In this embodiment of the application, by determining the number of screen control objects in the second image of the user that satisfy the preset start action and presenting multiple display sub-screens on the display screen, the screen is divided and the multi-gesture screen control mode is activated. Before the first image of the user is obtained, whether the display screen can be controlled in split-screen mode can be judged by detecting whether the multi-gesture screen control mode has been activated. If the multi-gesture screen control mode is not activated, the user needs to activate it with the preset start action before split-screen control of the display screen is performed, which improves the efficiency of the user's split-screen control.
Optionally, FIG. 6 is a flowchart of a screen control method provided by yet another embodiment of this application. The method may be executed by the screen control apparatus provided in the embodiments of this application, and the screen control apparatus may be part or all of a terminal device, for example a processor in the terminal device. The following takes the terminal device as the execution body as an example to introduce the screen control method provided in this embodiment. As shown in FIG. 6, before step S101, the screen control method provided in this embodiment of the application may further include:
Step S301: Establish a first correspondence between the feature information of the N screen control objects that satisfy the preset start action and the N display sub-screens, where the first correspondence includes a one-to-one correspondence between the feature information of the screen control objects and the display sub-screens.

After the display screen is divided into N display sub-screens according to the number of screen control objects in the second image that satisfy the preset start action, a first correspondence between the N screen control objects that satisfy the preset start action and the N display sub-screens also needs to be established, where the first correspondence includes a one-to-one correspondence between the feature information of the screen control objects and the display sub-screens. By establishing this one-to-one correspondence, the display sub-screen corresponding to a screen control object can later be determined from that object's feature information.

Exemplarily, the number of screen control objects in the second image that satisfy the preset start action is four, namely hand 1, hand 2, hand 3, and hand 4, and the display screen is divided into four display sub-screens, namely display sub-screen 1, display sub-screen 2, display sub-screen 3, and display sub-screen 4. The feature information of the four screen control objects, that is, the feature information of hand 1, hand 2, hand 3, and hand 4, is obtained, and a one-to-one correspondence between the feature information of the hands and the display sub-screens is established; for example, the feature information of hand 1 corresponds to display sub-screen 1, the feature information of hand 2 corresponds to display sub-screen 2, the feature information of hand 3 corresponds to display sub-screen 3, and the feature information of hand 4 corresponds to display sub-screen 4. The embodiments of this application are not limited to this.
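The registration of the first correspondence can be sketched as follows, assuming each qualifying screen control object is represented by a feature descriptor; pairing objects with sub-screens in order is just one possible policy.

```python
# Sketch of step S301: bind the feature information of each screen control
# object that performed the preset start action to one display sub-screen,
# giving the one-to-one first correspondence used later in step S302.
def build_first_correspondence(qualified_features, subscreen_ids):
    """qualified_features: one feature descriptor per qualifying control object."""
    assert len(qualified_features) == len(subscreen_ids)
    # pair them in order: the i-th qualifying object is bound to the i-th sub-screen
    return list(zip(qualified_features, subscreen_ids))
```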
Correspondingly, step S102 may be:

Step S302: Determine, according to the first correspondence and the respective feature information of the N screen control objects, the display sub-screen corresponding to each of the N screen control objects.
This embodiment of the application does not limit how the display sub-screen corresponding to each of the N screen control objects is determined according to the first correspondence and their respective feature information. In one possible implementation, the respective feature information of the N screen control objects is obtained and matched against the feature information of the N screen control objects that satisfy the preset start action, and the display sub-screen corresponding to each of the N screen control objects is then determined according to the first correspondence and the matching results.

Exemplarily, taking the case in step S301 where the number of screen control objects that satisfy the preset start action is four, the first image includes four hands, namely hand A, hand B, hand C, and hand D. The feature information of these four hands is obtained and matched against the feature information of hand 1, hand 2, hand 3, and hand 4 from the second image. If, after matching, the feature information of hand A is consistent with the feature information of hand 1, display sub-screen 1 corresponding to the feature information of hand 1 is determined to be the display sub-screen corresponding to hand A, and that display sub-screen is controlled through the action information of hand A; the rest can be deduced by analogy and is not repeated here.
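A sketch of the matching in step S302 is given below, reusing the same assumed similarity measure as the earlier matching sketch; the threshold and the feature-vector representation are assumptions.

```python
# Sketch of step S302: compare the features of a control object in the first
# image against the features registered in the first correspondence and hand
# control of the bound sub-screen to the best match.
import numpy as np


def resolve_subscreen(feature, first_correspondence, threshold=0.8):
    """first_correspondence: list of (registered_feature, subscreen_id) pairs."""
    best_screen, best_score = None, threshold
    for registered, screen_id in first_correspondence:
        a, b = np.asarray(feature, float), np.asarray(registered, float)
        score = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
        if score > best_score:
            best_screen, best_score = screen_id, score
    return best_screen   # None means the object does not control any sub-screen
```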
In this embodiment of the application, by establishing the correspondence between the feature information of the multiple screen control objects that satisfy the preset start action and the multiple display sub-screens, and determining the display sub-screen corresponding to each screen control object according to this correspondence and the object's feature information, separate control of the display sub-screens by the screen control objects is achieved, and the accuracy of that control is improved.

The following describes the screen control apparatus, device, storage medium, and computer program product provided by the embodiments of this application. For their content and effects, reference may be made to the screen control method provided in the foregoing embodiments of this application, and details are not repeated.
An embodiment of this application provides a screen control apparatus. FIG. 7 is a schematic structural diagram of the screen control apparatus provided by an embodiment of this application. The screen control apparatus may be part or all of a terminal device. Taking the terminal device as the execution body as an example, as shown in FIG. 7, the screen control apparatus provided by this embodiment of the application may include:

a first acquisition module 71, configured to acquire the respective feature information and the respective action information of N screen control objects in a first image, where a screen control object is a body part of a user in the first image and N is a positive integer greater than or equal to 2;

a first determining module 72, configured to determine, according to the respective feature information of the N screen control objects, the display sub-screen corresponding to each of the N screen control objects, where a display sub-screen is a partial area of the display screen; and

a control module 73, configured to control, according to the respective action information of the N screen control objects, the display sub-screen corresponding to each of the N screen control objects.
In an optional case, the functions of the first determining module and the control module may both be performed by a processing module, which may be, for example, a processor, and the first acquisition module may be a transmission interface of the processor, or in other words a receiving interface of the processor; in this case, the functions of the first determining module and the control module are performed by that processor.

Optionally, as shown in FIG. 7, the screen control apparatus provided by this embodiment of the application may further include:

a second acquisition module 74, configured to acquire a second image of the user;

a second determining module 75, configured to determine the number N of screen control objects in the second image that satisfy the preset start action, where the preset start action is used to activate the multi-gesture screen control mode; and

a splitting module 76, configured to obtain the N display sub-screens presented on the display screen. Exemplarily, the splitting module divides the screen into a corresponding number of display sub-screens according to the number of screen control objects that satisfy the preset start action as determined by the second determining module.

In an optional case, the second acquisition module and the first acquisition module may both be transmission interfaces or receiving interfaces of the processor, and the functions of the second determining module and the splitting module may both be performed by the processing module, which may be, for example, a processor; in this case, the functions of the second determining module and the splitting module are performed by the processor.
Optionally, as shown in FIG. 7, the screen control apparatus provided by this embodiment of the application may further include:

an establishment module 77, configured to establish a first correspondence between the feature information of the N screen control objects that satisfy the preset start action and the N display sub-screens, where the first correspondence includes a one-to-one correspondence between the feature information of the screen control objects and the display sub-screens.

The first determining module 72 is specifically configured to:

determine, according to the first correspondence and the respective feature information of the N screen control objects, the display sub-screen corresponding to each of the N screen control objects.

Optionally, the control module 73 is specifically configured to:

determine, in a second correspondence, a target screen control operation that matches the action information of a target screen control object, where the target screen control object is any one of the N screen control objects and the second correspondence includes a one-to-one correspondence between multiple pieces of action information and multiple screen control operations; and control, according to the target screen control operation, the display sub-screen corresponding to the target screen control object.
The apparatus embodiments provided in this application are merely illustrative, and the module division in FIG. 7 is only a division of logical functions; other division manners may be used in actual implementation. For example, multiple modules may be combined or integrated into another system. The coupling between the modules may be implemented through interfaces, which are usually electrical communication interfaces, although mechanical interfaces or interfaces in other forms are not excluded. Therefore, modules described as separate components may or may not be physically separate, and may be located in one place or distributed to different positions on the same or different devices.
An embodiment of this application provides a device. FIG. 8A is a schematic structural diagram of a terminal device provided by an embodiment of this application. As shown in FIG. 8A, the terminal device provided by this application includes a processor 81, a memory 82, and a transceiver 83. The memory stores software instructions, that is, a computer program; the processor may be a chip; the transceiver 83 implements the sending and receiving of communication data by the terminal device; and the processor 81 is configured to call the software instructions in the memory to implement the above screen control method. For its content and effects, reference may be made to the method embodiments.

An embodiment of this application provides a device. FIG. 8B is a schematic structural diagram of a terminal device provided by another embodiment of this application. As shown in FIG. 8B, the terminal device provided by this application includes a processor 84 and a transmission interface 85. The transmission interface 85 is configured to receive a first image of the user captured by a camera. The processor 84 is configured to call software instructions stored in a memory to perform the following steps: acquiring the respective feature information and the respective action information of N screen control objects in the first image, where a screen control object is a body part of a user in the first image and N is a positive integer greater than or equal to 2; determining, according to the respective feature information of the N screen control objects, the display sub-screen corresponding to each of the N screen control objects, where a display sub-screen is a partial area of the display screen; and controlling, according to the respective action information of the N screen control objects, the display sub-screen corresponding to each of the N screen control objects.
Optionally, the transmission interface 85 is further configured to receive a second image of the user captured by the camera, and the processor 84 is further configured to: determine the number N of screen control objects in the second image that satisfy the preset start action, where the preset start action is used to activate the multi-gesture screen control mode; and obtain the N display sub-screens presented on the display screen.

Optionally, the processor 84 is further configured to:

establish a first correspondence between the feature information of the N screen control objects that satisfy the preset start action and the N display sub-screens, where the first correspondence includes a one-to-one correspondence between the feature information of the screen control objects and the display sub-screens. The processor 84 is then specifically configured to determine, according to the first correspondence and the respective feature information of the N screen control objects, the display sub-screen corresponding to each of the N screen control objects.

Optionally, the processor 84 is further configured to:

determine, in a second correspondence, a target screen control operation that matches the action information of a target screen control object, where the target screen control object is any one of the N screen control objects and the second correspondence includes a one-to-one correspondence between multiple pieces of action information and multiple screen control operations; and control, according to the target screen control operation, the display sub-screen corresponding to the target screen control object.
FIG. 9 is a schematic diagram of the hardware architecture of an exemplary screen control apparatus provided by an embodiment of this application. As shown in FIG. 9, the hardware architecture of the screen control apparatus 900 may be applicable to an SOC and an application processor (AP).

Exemplarily, the screen control apparatus 900 includes at least one central processing unit (CPU), at least one memory, a graphics processing unit (GPU), a decoder, a dedicated video or graphics processor, a receiving interface, a sending interface, and the like. Optionally, the screen control apparatus 900 may also include a microprocessor, a microcontroller (MCU), and the like. In an optional case, the above parts of the screen control apparatus 900 are coupled through connectors. It should be understood that, in the embodiments of this application, coupling refers to interconnection in a specific manner, including direct connection or indirect connection through other devices, for example through various interfaces, transmission lines, or buses; these interfaces are usually electrical communication interfaces, although mechanical interfaces or interfaces in other forms are not excluded, which is not limited in this embodiment. In an optional case, the above parts are integrated on the same chip; in another optional case, the CPU, the GPU, the decoder, the receiving interface, and the sending interface are integrated on one chip, and the parts inside the chip access an external memory through a bus. The dedicated video/graphics processor may be integrated with the CPU on the same chip or may exist as a separate processor chip; for example, it may be a dedicated image signal processor (ISP). The chip involved in the embodiments of this application is a system manufactured on the same semiconductor substrate by an integrated circuit process, also called a semiconductor chip: a set of integrated circuits formed on a substrate (usually a semiconductor material such as silicon) by an integrated circuit process, whose outer layer is usually encapsulated by a semiconductor packaging material. The integrated circuits may include various types of functional devices, each of which includes transistors such as logic gate circuits, metal-oxide-semiconductor (MOS) transistors, bipolar transistors, or diodes, and may also include other components such as capacitors, resistors, or inductors. Each functional device can work independently or under the action of the necessary driver software, and can implement various functions such as communication, computation, or storage.

Optionally, the CPU may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor; optionally, the CPU may be a processor group composed of multiple processors that are coupled to one another through one or more buses. In an optional case, part of the processing of an image or video signal is completed by the GPU, part by the dedicated video/graphics processor, and part possibly by software code running on a general-purpose CPU or GPU.

The apparatus may also include a memory, which can be used to store computer program instructions, including various computer program code such as an operating system (OS), various user application programs, and program code used to execute the solutions of this application; the memory can also be used to store video data, image data, and the like; and the CPU can be used to execute the computer program code stored in the memory to implement the methods in the embodiments of this application. Optionally, the memory may be a non-volatile memory, such as an embedded multimedia card (EMMC), a universal flash storage (UFS), or a read-only memory (ROM), or another type of static storage device that can store static information and instructions; it may also be a volatile memory, such as a random access memory (RAM) or another type of dynamic storage device that can store information and instructions; or it may be an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, and the like), a magnetic disk storage medium or other magnetic storage device, or any other computer-readable storage medium that can be used to carry or store program code in the form of instructions or data structures and can be accessed by a computer, but is not limited thereto.

The receiving interface may be a data input interface of the processor chip. In an optional case, the receiving interface may be a mobile industry processor interface (MIPI), a high definition multimedia interface (HDMI), a DisplayPort (DP), or the like.
Exemplarily, FIG. 10 is a schematic structural diagram of a terminal device provided by yet another embodiment of this application. As shown in FIG. 10, the terminal device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset interface 170D, a sensor 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like. It can be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the terminal device 100. In other embodiments of this application, the terminal device 100 may include more or fewer components than shown, combine certain components, split certain components, or arrange the components differently. The illustrated components may be implemented by hardware, software, or a combination of software and hardware.

The processor 110 may include one or more processing units. For example, the processor 110 may include an AP, a modem processor, a GPU, an ISP, a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). Different processing units may be independent devices or may be integrated in one or more processors. In some embodiments, the terminal device 100 may include one or more processors 110. The controller may be the nerve center and command center of the terminal device 100; it can generate operation control signals according to instruction operation codes and timing signals to complete the control of fetching and executing instructions. A memory may also be provided in the processor 110 to store instructions and data. In some embodiments, the memory in the processor 110 is a cache, which can store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use those instructions or data again, they can be called directly from this memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and therefore improves the efficiency of the terminal device 100.

In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a MIPI, a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a USB interface, HDMI, a V-By-One interface, DP, and the like, where V-By-One is a digital interface standard developed for image transmission. The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 can be used to connect a charger to charge the terminal device 100, to transfer data between the terminal device 100 and peripheral devices, or to connect headphones and play audio through them.

It can be understood that the interface connection relationship between the modules illustrated in this embodiment of the application is merely a schematic description and does not constitute a structural limitation on the terminal device 100. In other embodiments of this application, the terminal device 100 may also adopt interface connection manners different from those in the foregoing embodiments, or a combination of multiple interface connection manners.
The charging management module 140 is configured to receive charging input from a charger, which may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive the charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive wireless charging input through a wireless charging coil of the terminal device 100. While charging the battery 142, the charging management module 140 may also supply power to the terminal device 100 through the power management module 141.

The power management module 141 is configured to connect the battery 142, the charging management module 140, and the processor 110. The power management module 141 receives input from the battery 142 and/or the charging management module 140 and supplies power to the processor 110, the internal memory 121, the display screen 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle count, and battery health status (leakage, impedance). In some other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be provided in the same device.

The wireless communication function of the terminal device 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like. The antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in the terminal device 100 can be used to cover one or more communication frequency bands, and different antennas can also be multiplexed to improve antenna utilization; for example, the antenna 1 can be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, an antenna can be used in combination with a tuning switch.

The mobile communication module 150 may provide wireless communication solutions applied to the terminal device 100, including 2G/3G/4G/5G. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier, and the like. The mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 150 can also amplify the signal modulated by the modem processor and convert it into electromagnetic waves radiated through the antenna 1. In some embodiments, at least some functional modules of the mobile communication module 150 may be provided in the processor 110. In some embodiments, at least some functional modules of the mobile communication module 150 and at least some modules of the processor 110 may be provided in the same device.

The modem processor may include a modulator and a demodulator. The modulator is used to modulate the low-frequency baseband signal to be sent into a medium- or high-frequency signal. The demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal and then transmit the demodulated low-frequency baseband signal to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, and the like), or displays an image or video through the display screen 194. In some embodiments, the modem processor may be an independent device. In other embodiments, the modem processor may be independent of the processor 110 and be provided in the same device as the mobile communication module 150 or other functional modules.

The wireless communication module 160 may provide wireless communication solutions applied to the terminal device 100, including wireless local area networks (WLAN), Bluetooth, global navigation satellite systems (GNSS), frequency modulation (FM), NFC, and infrared (IR) technology. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves through the antenna 2, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 can also receive signals to be sent from the processor 110, frequency-modulate and amplify them, and convert them into electromagnetic waves radiated through the antenna 2.

In some embodiments, the antenna 1 of the terminal device 100 is coupled with the mobile communication module 150 and the antenna 2 is coupled with the wireless communication module 160, so that the terminal device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include GSM, GPRS, CDMA, WCDMA, TD-SCDMA, LTE, GNSS, WLAN, NFC, FM, and/or IR technology. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The terminal device 100 implements a display function through the GPU, the display screen 194, the application processor, and the like. The GPU is a microprocessor for image processing and is connected to the display screen 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the terminal device 100 may include one or N display screens 194, where N is a positive integer greater than 1.
The terminal device 100 may implement a shooting function through the ISP, one or more cameras 193, the video codec, the GPU, one or more display screens 194, the application processor, and the like.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example the transfer mode between neurons in the human brain, it processes input information quickly and can also continuously learn by itself. Applications such as intelligent cognition of the terminal device 100, for example image recognition, face recognition, speech recognition, and text understanding, can be implemented through the NPU.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the terminal device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function, for example, saving data files such as music, photos, and videos in the external memory card.
The internal memory 121 may be used to store one or more computer programs, and the one or more computer programs include instructions. The processor 110 may run the instructions stored in the internal memory 121, so that the terminal device 100 executes the screen control method provided in some embodiments of this application, as well as various functional applications, data processing, and the like. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, and may further store one or more application programs (such as a gallery or contacts). The data storage area may store data (such as photos and contacts) created during use of the terminal device 100. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS). In some embodiments, the processor 110 may run the instructions stored in the internal memory 121 and/or instructions stored in a memory disposed in the processor 110, so that the terminal device 100 executes the screen control method provided in the embodiments of this application, as well as various functional applications and data processing.
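For illustration only, the following is a minimal Python sketch of the split-screen control flow that such stored instructions may implement (binding each detected screen control object to a display sub-screen through a first correspondence, and dispatching its action through a second correspondence). The class names, the feature identifiers, the gesture-to-operation table, and the apply_operation helper are assumptions made for this sketch and are not a definitive implementation of this application.

```python
from dataclasses import dataclass

@dataclass
class ControlObject:
    feature_id: str   # feature information of the screen control object (e.g. a hand descriptor)
    action: str       # action information recognized for this object (e.g. "swipe_left")

def apply_operation(sub_screen_index: int, operation: str) -> None:
    # Stand-in for the real display control path; here we only log the dispatch.
    print(f"sub-screen {sub_screen_index}: {operation}")

class SplitScreenController:
    def __init__(self, num_sub_screens: int):
        self.num_sub_screens = num_sub_screens
        self.first_correspondence = {}      # feature_id -> sub-screen index
        self.second_correspondence = {      # action -> screen control operation (assumed table)
            "swipe_left": "previous_page",
            "swipe_right": "next_page",
            "fist": "pause",
        }

    def register(self, start_objects):
        # Bind each control object that performed the preset start action
        # to one display sub-screen (the "first correspondence").
        for index, obj in enumerate(start_objects[: self.num_sub_screens]):
            self.first_correspondence[obj.feature_id] = index

    def handle_frame(self, control_objects):
        # Dispatch each object's action to its own sub-screen only.
        for obj in control_objects:
            sub_screen = self.first_correspondence.get(obj.feature_id)
            operation = self.second_correspondence.get(obj.action)
            if sub_screen is not None and operation is not None:
                apply_operation(sub_screen, operation)

# Example: two users control two sub-screens independently.
controller = SplitScreenController(num_sub_screens=2)
controller.register([ControlObject("hand_A", "start"), ControlObject("hand_B", "start")])
controller.handle_frame([ControlObject("hand_A", "swipe_left"), ControlObject("hand_B", "fist")])
```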
The terminal device 100 may implement an audio function, for example music playback or recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, the application processor, and the like. The audio module 170 is used to convert digital audio information into an analog audio signal for output, and is also used to convert an analog audio input into a digital audio signal. The audio module 170 may further be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110. The speaker 170A, also referred to as a "loudspeaker", is used to convert an audio electrical signal into a sound signal. The terminal device 100 can play music or a hands-free call through the speaker 170A. The receiver 170B, also referred to as an "earpiece", is used to convert an audio electrical signal into a sound signal. When the terminal device 100 answers a call or a voice message, the voice can be heard by placing the receiver 170B close to the ear. The microphone 170C, also referred to as a "mic", is used to convert a sound signal into an electrical signal. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 170C to input a sound signal into the microphone 170C. The terminal device 100 may be provided with at least one microphone 170C. In other embodiments, the terminal device 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In still other embodiments, the terminal device 100 may be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, implement a directional recording function, and the like. The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The sensor 180 may include a pressure sensor 180A, a gyroscope sensor 180B, a barometric pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
The pressure sensor 180A is used to sense a pressure signal and can convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates made of conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the terminal device 100 determines the pressure intensity based on the change in capacitance. When a touch operation acts on the display screen 194, the terminal device 100 detects the intensity of the touch operation through the pressure sensor 180A. The terminal device 100 may also calculate the touch position based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch position but have different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
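The threshold logic in the example above can be summarized by the following minimal sketch; the normalized threshold value and the function name are assumptions used for illustration and are not values defined by this application.

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # assumed normalized pressure value, for illustration only

def on_touch_message_icon(pressure: float) -> str:
    # Touches at the same position are disambiguated by their pressure intensity.
    if pressure < FIRST_PRESSURE_THRESHOLD:
        return "view_short_message"
    return "create_new_short_message"
```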
The gyroscope sensor 180B may be used to determine the motion posture of the terminal device 100. In some embodiments, the angular velocities of the terminal device 100 around three axes (that is, the x, y, and z axes) can be determined through the gyroscope sensor 180B. The gyroscope sensor 180B can be used for image stabilization during shooting. For example, when the shutter is pressed, the gyroscope sensor 180B detects the shake angle of the terminal device 100, calculates the distance that the lens module needs to compensate based on the angle, and drives the lens in the reverse direction to offset the shake of the terminal device 100, thereby implementing image stabilization. The gyroscope sensor 180B can also be used for navigation, somatosensory game scenarios, and the like.
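As a rough sketch of the compensation step, one common optical image stabilization approximation treats the required lens shift as focal_length * tan(shake_angle); this formula is an assumption made for illustration here, not one stated by this application.

```python
import math

def compensation_distance(shake_angle_rad: float, focal_length_mm: float) -> float:
    # Approximate lens shift (in mm) needed to cancel the detected shake angle.
    return focal_length_mm * math.tan(shake_angle_rad)

# The lens module would then be driven by roughly this distance in the opposite direction.
```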
The acceleration sensor 180E can detect the magnitude of the acceleration of the terminal device 100 in various directions (generally along three axes). When the terminal device 100 is stationary, the magnitude and direction of gravity can be detected. The acceleration sensor 180E can also be used to identify the posture of the terminal device, and is applied to landscape/portrait switching, pedometers, and the like.
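One possible way to use the sensed gravity components for landscape/portrait switching is sketched below; the axis convention and the simple comparison rule are assumptions for illustration only.

```python
def screen_orientation(ax: float, ay: float) -> str:
    # When the device is roughly stationary, (ax, ay) mainly reflect gravity.
    # A larger gravity component along the y axis suggests the device is held upright.
    return "portrait" if abs(ay) >= abs(ax) else "landscape"
```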
The distance sensor 180F is used to measure distance. The terminal device 100 can measure distance by infrared or laser. In some embodiments, in a shooting scenario, the terminal device 100 may use the distance sensor 180F to measure distance so as to implement fast focusing.
The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector such as a photodiode. The light-emitting diode may be an infrared light-emitting diode. The terminal device 100 emits infrared light outward through the light-emitting diode and uses the photodiode to detect infrared light reflected from a nearby object. When sufficient reflected light is detected, it can be determined that there is an object near the terminal device 100; when insufficient reflected light is detected, the terminal device 100 can determine that there is no object near it. The terminal device 100 can use the proximity light sensor 180G to detect that the user is holding the terminal device 100 close to the ear during a call, so as to automatically turn off the screen to save power. The proximity light sensor 180G can also be used for automatic unlocking and screen locking in leather case mode and pocket mode.
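The screen-off decision described above can be summarized by a sketch such as the following; the reflectance threshold is an assumed illustrative value standing in for "sufficient reflected light".

```python
REFLECTION_THRESHOLD = 0.6  # assumed normalized value, for illustration only

def object_nearby(reflected_light: float) -> bool:
    return reflected_light >= REFLECTION_THRESHOLD

def update_screen_for_call(reflected_light: float, in_call: bool) -> str:
    # Turn the screen off while the device is held against the ear during a call.
    return "screen_off" if in_call and object_nearby(reflected_light) else "screen_on"
```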
The ambient light sensor 180L is used to sense the ambient light brightness. The terminal device 100 can adaptively adjust the brightness of the display screen 194 according to the sensed ambient light brightness. The ambient light sensor 180L can also be used to automatically adjust the white balance when taking photos, and can cooperate with the proximity light sensor 180G to detect whether the terminal device 100 is in a pocket, so as to prevent accidental touches.
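A minimal sketch of the adaptive brightness adjustment follows; the lux range, the backlight scale, and the minimum level are assumptions chosen only to illustrate the mapping.

```python
def adapt_brightness(ambient_lux: float, max_lux: float = 1000.0) -> int:
    # Map sensed ambient brightness to a 0-255 backlight level, clamped to the sensor range.
    level = int(255 * min(ambient_lux, max_lux) / max_lux)
    return max(10, level)  # keep a minimum readable brightness
```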
The fingerprint sensor 180H (also referred to as a fingerprint reader) is used to collect fingerprints. The terminal device 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access to application locks, fingerprint photographing, fingerprint-based call answering, and the like. In addition, for other descriptions of the fingerprint sensor, refer to the international patent application PCT/CN2017/082773 entitled "Method and Terminal Device for Processing Notification", which is incorporated in this application by reference in its entirety.
The touch sensor 180K may also be referred to as a touch panel or a touch-sensitive surface. The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touchscreen, also referred to as a touch screen. The touch sensor 180K is used to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the terminal device 100 at a position different from that of the display screen 194.
The bone conduction sensor 180M can acquire a vibration signal. In some embodiments, the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone of the human vocal part. The bone conduction sensor 180M can also contact the human pulse and receive a blood pressure beat signal. In some embodiments, the bone conduction sensor 180M may also be disposed in an earphone to form a bone conduction earphone. The audio module 170 can parse out a voice signal based on the vibration signal, acquired by the bone conduction sensor 180M, of the vibrating bone of the vocal part, to implement a voice function. The application processor can parse heart rate information based on the blood pressure beat signal acquired by the bone conduction sensor 180M, to implement a heart rate detection function.
The buttons 190 include a power button, volume buttons, and the like. The buttons 190 may be mechanical buttons or touch buttons. The terminal device 100 can receive button input and generate button signal input related to user settings and function control of the terminal device 100.
The SIM card interface 195 is used to connect a SIM card. A SIM card can be inserted into the SIM card interface 195 or pulled out of the SIM card interface 195 to achieve contact with and separation from the terminal device 100. The terminal device 100 can support one or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 195 can support a Nano SIM card, a Micro SIM card, a SIM card, and the like. Multiple cards can be inserted into the same SIM card interface 195 at the same time, and the types of the multiple cards may be the same or different. The SIM card interface 195 is also compatible with different types of SIM cards and with external memory cards. The terminal device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the terminal device 100 uses an eSIM, that is, an embedded SIM card. The eSIM card can be embedded in the terminal device 100 and cannot be separated from the terminal device 100.
In addition, the embodiments of this application further provide a computer-readable storage medium. The computer-readable storage medium stores computer-executable instructions. When at least one processor of a user device executes the computer-executable instructions, the user device performs the foregoing various possible methods.
The computer-readable medium includes a computer storage medium and a communication medium, where the communication medium includes any medium that facilitates the transfer of a computer program from one place to another. The storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer. An exemplary storage medium is coupled to the processor, so that the processor can read information from the storage medium and write information to the storage medium. Of course, the storage medium may also be an integral part of the processor. The processor and the storage medium may be located in an ASIC, and the ASIC may be located in the user device. Of course, the processor and the storage medium may also exist as discrete components in a communication device.
A person of ordinary skill in the art can understand that all or some of the steps of the foregoing method embodiments can be implemented by hardware related to program instructions. The foregoing program may be stored in a computer-readable storage medium. When the program is executed, the steps of the foregoing method embodiments are performed. The foregoing storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
Finally, it should be noted that the foregoing embodiments are only used to illustrate the technical solutions of this application, and are not intended to limit them. Although this application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of this application.

Claims (13)

  1. A screen control method, characterized in that it comprises:
    acquiring respective feature information of N screen control objects in a first image and respective action information of the N screen control objects, where a screen control object is a body part of a user in the first image, and N is a positive integer greater than or equal to 2;
    determining, according to the respective feature information of the N screen control objects, a display sub-screen corresponding to each of the N screen control objects, where the display sub-screen is a partial area of a display screen; and
    controlling, according to the respective action information of the N screen control objects, the display sub-screen corresponding to each of the N screen control objects.
  2. The method according to claim 1, characterized in that before the acquiring of the respective feature information of the N screen control objects in the first image and the respective action information of the N screen control objects, the method further comprises:
    acquiring a second image of the user;
    determining the number N of screen control objects in the second image that satisfy a preset start action, where the preset start action is used to start a multi-gesture screen control mode; and
    presenting N display sub-screens on the display screen.
  3. The method according to claim 2, characterized in that before the acquiring of the respective feature information of the N screen control objects in the first image and the respective action information of the N screen control objects, the method further comprises:
    establishing a first correspondence between the feature information of the N screen control objects that satisfy the preset start action and the N display sub-screens, where the first correspondence comprises a one-to-one correspondence between the feature information of the screen control objects and the display sub-screens; and
    the determining, according to the respective feature information of the N screen control objects, of the display sub-screen corresponding to each of the N screen control objects comprises:
    determining, according to the first correspondence and the respective feature information of the N screen control objects, the display sub-screen corresponding to each of the N screen control objects.
  4. The method according to any one of claims 1 to 3, characterized in that the controlling, according to the respective action information of the N screen control objects, of the display sub-screen corresponding to each of the N screen control objects comprises:
    determining, in a second correspondence, a target screen control operation that matches the action information of a target screen control object, where the target screen control object is any one of the N screen control objects, and the second correspondence comprises a one-to-one correspondence between multiple pieces of action information and multiple screen control operations; and
    controlling, according to the target screen control operation, the display sub-screen corresponding to the target screen control object.
  5. A screen control device, characterized by comprising:
    a first acquisition module, configured to acquire respective feature information of N screen control objects in a first image and respective action information of the N screen control objects, where a screen control object is a body part of a user in the first image, and N is a positive integer greater than or equal to 2;
    a first determining module, configured to determine, according to the respective feature information of the N screen control objects, a display sub-screen corresponding to each of the N screen control objects, where the display sub-screen is a partial area of a display screen; and
    a control module, configured to control, according to the respective action information of the N screen control objects, the display sub-screen corresponding to each of the N screen control objects.
  6. The device according to claim 5, characterized by further comprising:
    a second acquisition module, configured to acquire a second image of the user;
    a second determining module, configured to determine the number N of screen control objects in the second image that satisfy a preset start action, where the preset start action is used to start a multi-gesture screen control mode; and
    a splitting module, configured to obtain N display sub-screens presented on the display screen.
  7. The device according to claim 6, characterized by further comprising:
    an establishing module, configured to establish a first correspondence between the feature information of the N screen control objects that satisfy the preset start action and the N display sub-screens, where the first correspondence comprises a one-to-one correspondence between the feature information of the screen control objects and the display sub-screens;
    wherein the first determining module is specifically configured to:
    determine, according to the first correspondence and the respective feature information of the N screen control objects, the display sub-screen corresponding to each of the N screen control objects.
  8. The device according to any one of claims 5 to 7, characterized in that the control module is specifically configured to:
    determine, in a second correspondence, a target screen control operation that matches the action information of a target screen control object, where the target screen control object is any one of the N screen control objects, and the second correspondence comprises a one-to-one correspondence between multiple pieces of action information and multiple screen control operations; and
    control, according to the target screen control operation, the display sub-screen corresponding to the target screen control object.
  9. An apparatus, characterized by comprising a processor and a transmission interface, wherein
    the transmission interface is configured to receive a first image of a user acquired by a camera; and
    the processor is configured to invoke software instructions stored in a memory to perform the following steps:
    acquiring respective feature information of N screen control objects in the first image and respective action information of the N screen control objects, where a screen control object is a body part of the user in the first image, and N is a positive integer greater than or equal to 2;
    determining, according to the respective feature information of the N screen control objects, a display sub-screen corresponding to each of the N screen control objects, where the display sub-screen is a partial area of a display screen; and
    controlling, according to the respective action information of the N screen control objects, the display sub-screen corresponding to each of the N screen control objects.
  10. The apparatus according to claim 9, characterized in that
    the transmission interface is further configured to receive a second image of the user acquired by the camera; and
    the processor is further configured to:
    determine the number N of screen control objects in the second image that satisfy a preset start action, where the preset start action is used to start a multi-gesture screen control mode; and
    obtain N display sub-screens presented on the display screen.
  11. The apparatus according to claim 10, characterized in that the processor is further configured to:
    establish a first correspondence between the feature information of the N screen control objects that satisfy the preset start action and the N display sub-screens, where the first correspondence comprises a one-to-one correspondence between the feature information of the screen control objects and the display sub-screens;
    wherein the processor is specifically configured to:
    determine, according to the first correspondence and the respective feature information of the N screen control objects, the display sub-screen corresponding to each of the N screen control objects.
  12. The apparatus according to any one of claims 9 to 11, characterized in that the processor is further configured to:
    determine, in a second correspondence, a target screen control operation that matches the action information of a target screen control object, where the target screen control object is any one of the N screen control objects, and the second correspondence comprises a one-to-one correspondence between multiple pieces of action information and multiple screen control operations; and
    control, according to the target screen control operation, the display sub-screen corresponding to the target screen control object.
  13. A computer-readable storage medium, characterized in that the computer-readable storage medium stores instructions, and when the instructions are run on a computer or a processor, the computer or the processor is caused to execute the method according to any one of claims 1 to 4.

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980095746.6A CN113728295A (en) 2019-05-31 2019-05-31 Screen control method, device, equipment and storage medium
PCT/CN2019/089489 WO2020237617A1 (en) 2019-05-31 2019-05-31 Screen control method, device and apparatus, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/089489 WO2020237617A1 (en) 2019-05-31 2019-05-31 Screen control method, device and apparatus, and storage medium

Publications (1)

Publication Number Publication Date
WO2020237617A1 true WO2020237617A1 (en) 2020-12-03

Family

ID=73552477

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/089489 WO2020237617A1 (en) 2019-05-31 2019-05-31 Screen control method, device and apparatus, and storage medium

Country Status (2)

Country Link
CN (1) CN113728295A (en)
WO (1) WO2020237617A1 (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104572004B (en) * 2015-02-02 2019-01-15 联想(北京)有限公司 A kind of information processing method and electronic equipment
CN107479815A (en) * 2017-06-29 2017-12-15 努比亚技术有限公司 Realize the method, terminal and computer-readable recording medium of split screen screen control

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103207741A (en) * 2012-01-12 2013-07-17 飞宏科技股份有限公司 Multi-user touch control method and system of computer virtual object
US20150200985A1 (en) * 2013-11-13 2015-07-16 T1visions, Inc. Simultaneous input system for web browsers and other applications
CN105138122A (en) * 2015-08-12 2015-12-09 深圳市卡迪尔通讯技术有限公司 Method for remotely controlling screen equipment through gesture identification
CN105653024A (en) * 2015-12-22 2016-06-08 深圳市金立通信设备有限公司 Terminal control method and terminal
CN106569596A (en) * 2016-10-20 2017-04-19 努比亚技术有限公司 Gesture control method and equipment

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114915721A (en) * 2021-02-09 2022-08-16 华为技术有限公司 Method for establishing connection and electronic equipment
CN112860367A (en) * 2021-03-04 2021-05-28 康佳集团股份有限公司 Equipment interface visualization method, intelligent terminal and computer readable storage medium
CN112860367B (en) * 2021-03-04 2023-12-12 康佳集团股份有限公司 Equipment interface visualization method, intelligent terminal and computer readable storage medium
CN114527922A (en) * 2022-01-13 2022-05-24 珠海视熙科技有限公司 Method for realizing touch control based on screen identification and screen control equipment
CN115113797A (en) * 2022-08-29 2022-09-27 深圳市优奕视界有限公司 Intelligent partition display method of control panel and related product

Also Published As

Publication number Publication date
CN113728295A (en) 2021-11-30

Similar Documents

Publication Publication Date Title
WO2020168965A1 (en) Method for controlling electronic device having folding screen, and electronic device
EP3974970A1 (en) Full-screen display method for mobile terminal, and apparatus
WO2020177619A1 (en) Method, device and apparatus for providing reminder to charge terminal, and storage medium
WO2021052214A1 (en) Hand gesture interaction method and apparatus, and terminal device
WO2021036770A1 (en) Split-screen processing method and terminal device
US11930130B2 (en) Screenshot generating method, control method, and electronic device
WO2021213164A1 (en) Application interface interaction method, electronic device, and computer readable storage medium
WO2020237617A1 (en) Screen control method, device and apparatus, and storage medium
US20230117194A1 (en) Communication Service Status Control Method, Terminal Device, and Readable Storage Medium
WO2021073448A1 (en) Picture rendering method and device, electronic equipment and storage medium
WO2020062310A1 (en) Stylus detection method, system, and related device
WO2020019355A1 (en) Touch control method for wearable device, and wearable device and system
CN114090102B (en) Method, device, electronic equipment and medium for starting application program
WO2022095744A1 (en) Vr display control method, electronic device, and computer readable storage medium
JP2023503281A (en) Energy efficient display processing method and device
CN115589051B (en) Charging method and terminal equipment
WO2021052407A1 (en) Electronic device control method and electronic device
WO2021052170A1 (en) Motor vibration control method and electronic device
WO2020221062A1 (en) Navigation operation method and electronic device
WO2023216930A1 (en) Wearable-device based vibration feedback method, system, wearable device and electronic device
WO2022170854A1 (en) Video call method and related device
WO2022170856A1 (en) Method for establishing connection, and electronic device
CN113610943B (en) Icon rounded angle processing method and device
CN112527220B (en) Electronic equipment display method and electronic equipment
CN114637392A (en) Display method and electronic equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19930974

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19930974

Country of ref document: EP

Kind code of ref document: A1