CN107450830B - Method and device for simultaneously supporting multi-point input in single-screen and multi-window mode - Google Patents


Info

Publication number
CN107450830B
CN107450830B (Application CN201710590592.0A)
Authority
CN
China
Prior art keywords
window
operation window
instruction
processing
receiving
Prior art date
Legal status
Active
Application number
CN201710590592.0A
Other languages
Chinese (zh)
Other versions
CN107450830A (en)
Inventor
吕金华
董正勇
郑炎坤
Current Assignee
Rockchip Electronics Co Ltd
Original Assignee
Rockchip Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Rockchip Electronics Co Ltd filed Critical Rockchip Electronics Co Ltd
Priority to CN201710590592.0A
Publication of CN107450830A
Application granted
Publication of CN107450830B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04886Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048Indexing scheme relating to G06F3/048
    • G06F2203/04808Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously; e.g. using several fingers or a combination of fingers and pen

Abstract

The invention provides a method and a device for supporting multi-point input simultaneously in a single-screen, multi-window mode. The method comprises: receiving a split-screen instruction, displaying a plurality of operation windows on a display unit, and setting the current operation states of a first operation window and a second operation window to the running state. By adjusting the operation states of the first and second operation windows, both windows can receive touch commands in split-screen mode: two windows that originally interfered with each other become mutually independent, so a user can input different operation instructions in the two windows at the same time, and the processing unit executes the two instructions in turn. This meets the user's need to operate different applications synchronously and greatly enhances the user experience.

Description

Method and device for simultaneously supporting multi-point input in single-screen and multi-window mode
Technical Field
The invention relates to the field of computer technology, and in particular to a method and a device for supporting multi-point input simultaneously in a single-screen, multi-window mode.
Background
With the development of science and technology and the progress of society, mobile terminals are widely used and increasingly feature-rich. Taking Android devices as an example, as they have become widespread and their hardware has steadily improved, the customization requirements placed on Android by various industries have grown more and more complex, and input control of the display screen is an important part of them.
At present, a single-screen Android phone usually supports touch input in only one window, which greatly limits the user's experience. Consider a practical scenario in which a user wants to chat while watching a movie. While a conventional Android system executes the touch command of one window, all other windows are suspended and cannot receive the user's operation instructions. That is, while the user types chat messages in the chat window, the user cannot simultaneously operate the video-playback window; conversely, while operating the video window, the user cannot type in the chat window.
Disclosure of Invention
Therefore, a technical scheme is needed in which a single screen with multiple windows supports multi-point input simultaneously, solving the problem that, because the windows of an Android system affect one another when receiving touch events, different windows cannot be operated independently, which degrades the user experience.
To achieve the above object, the inventors provide a method for simultaneously supporting multi-point input in a single-screen multi-window, the method comprising the steps of:
receiving a screen splitting instruction, displaying a plurality of operation windows on a display unit, wherein the operation windows comprise a first operation window and a second operation window, and setting the current operation states of the first operation window and the second operation window to be running states; the running state is a state capable of receiving touch events;
receiving a first touch instruction for a first operation window and a second touch instruction for a second operation window;
performing first processing on a first picture displayed in a first operation window to obtain a first processing picture, and displaying the first processing picture in the first operation window; and performing second processing on a second picture displayed in the second operation window to obtain a second processed picture, and displaying the second processed picture in the second operation window.
Further, the method comprises:
caching the received first touch event and the second touch event;
according to the processing priorities of the first touch event and the second touch event, performing first processing on a first picture displayed in a first operation window to obtain a first processing picture, and displaying the first processing picture in the first operation window; and performing second processing on a second picture displayed in the second operation window to obtain a second processed picture, and displaying the second processed picture in the second operation window.
Further, the method comprises:
receiving a state setting instruction, and adjusting the operation state of the current operation window to an operation state corresponding to the state setting instruction, wherein the operation state of the operation window comprises one of a creation state, a running state, a pause state and a stop state.
Further, the number of the operation windows is more than two, and the method comprises the following steps:
receiving a screen splitting instruction, and adjusting the operation states of all operation windows to be running states; or receiving a screen splitting instruction, setting the current operation states of the first operation window and the second operation window to be running states, and enabling other operation windows to be in a pause state.
Further, the step of receiving a split-screen instruction further comprises:
identifying characteristic information and a motion track of a user;
and receiving a screen splitting instruction when the motion trail of the identified characteristic information accords with preset information.
Further, the method further comprises:
and receiving a conversion instruction for the operation window, and executing adjustment corresponding to the conversion instruction for the operation window.
Further, the conversion instruction comprises a rotation instruction, and each operation window corresponds to one operation button; the method comprises the following steps:
and receiving a touch control command of the operation button, triggering the rotation instruction, and performing corresponding rotation operation on the operation window.
The inventor also provides a device for simultaneously supporting multi-point input by a single-screen and multi-window, which comprises an instruction receiving unit, a display unit, an operation state setting unit and a processing unit;
the instruction receiving unit is used for receiving a split-screen instruction, the processing unit is used for controlling the display unit to display a plurality of operation windows, and the operation state setting unit is used for setting the current operation states of the first operation window and the second operation window to be running states; the operation window comprises a first operation window and a second operation window, and the operation state is a state capable of receiving a touch event;
the instruction receiving unit is further used for receiving a first touch instruction for the first operation window and receiving a second touch instruction for the second operation window;
the processing unit is configured to perform first processing on a first picture displayed in the first operation window to obtain a first processed picture and display it in the first operation window, and to perform second processing on a second picture displayed in the second operation window to obtain a second processed picture and display it in the second operation window.
Further, the processing unit comprises a touch event cache module and a touch event execution module;
the touch event caching module is used for caching the received first touch event and the second touch event;
the touch event execution module is configured to perform, according to the processing priorities of the first touch event and the second touch event, first processing on the first picture displayed in the first operation window to obtain a first processed picture and display it in the first operation window, and to perform second processing on the second picture displayed in the second operation window to obtain a second processed picture and display it in the second operation window.
Further, the instruction receiving unit is configured to receive a state setting instruction, and the operation state setting unit is configured to adjust an operation state of a current operation window to an operation state corresponding to the state setting instruction, where the operation state of the operation window includes one of a creation state, a running state, a pause state, and a stop state.
Further, the number of the operation windows is more than two;
the instruction receiving unit is configured to receive a split-screen instruction, and the operation state setting unit is configured to adjust the operation states of all operation windows to the running state; or, the instruction receiving unit is configured to receive a split-screen instruction, and the operation state setting unit is configured to set the current operation states of the first operation window and the second operation window to the running state while placing the other operation windows in a pause state.
Further, the apparatus comprises a feature recognition unit; the instruction receiving unit is used for receiving the split screen instruction and comprises the following steps: the characteristic identification unit is used for identifying characteristic information and a motion track of the user; and when the motion trail of the identified characteristic information accords with preset information, the instruction receiving unit is used for receiving a screen splitting instruction.
Further, the instruction receiving unit is further configured to receive a transformation instruction for the operation window, and the processing unit is configured to perform an adjustment corresponding to the transformation instruction on the operation window.
Further, the conversion instruction comprises a rotation instruction, and each operation window corresponds to one operation button; the instruction receiving unit is used for receiving a touch control command of an operation button and triggering the rotation instruction, and the processing unit is used for performing corresponding rotation operation on the operation window.
The method and device of the above technical scheme for supporting multi-point input simultaneously in a single-screen, multi-window mode comprise: receiving a split-screen instruction, displaying a plurality of operation windows (including a first operation window and a second operation window) on a display unit, and setting the current operation states of the first and second operation windows to the running state, the running state being a state in which touch events can be received; receiving a first touch instruction for the first operation window and a second touch instruction for the second operation window; performing first processing on the first picture displayed in the first operation window to obtain a first processed picture and displaying it in the first operation window; and performing second processing on the second picture displayed in the second operation window to obtain a second processed picture and displaying it in the second operation window. By adjusting the operation states of the first and second operation windows, both windows can receive touch commands in split-screen mode: two windows that originally interfered with each other become mutually independent, the user can input different operation instructions in the two windows at the same time, and the processing unit executes the two instructions in turn, greatly enhancing the user experience.
Drawings
FIG. 1 is a flowchart illustrating a method for supporting multi-point input simultaneously in a single-screen and multi-window manner according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating an apparatus for supporting multi-point input simultaneously in a single-screen and multi-window manner according to an embodiment of the present invention;
description of reference numerals:
101. an instruction receiving unit;
102. a display unit;
103. an operation state setting unit;
104. a processing unit; 114. a touch event cache module; 124. a touch event execution module;
105. and a feature identification unit.
Detailed Description
To explain in detail the technical content, structural features, objects and effects of the technical solution, a detailed description is given below with reference to the accompanying drawings and the embodiments.
Fig. 1 is a flowchart illustrating a method for supporting multi-point input simultaneously in a single-screen multi-window according to an embodiment of the present invention. The method is applied to terminal equipment, and the terminal equipment is electronic equipment with a display function and a data processing function, such as a mobile phone. The method comprises the following steps:
the method firstly proceeds to step S101 to receive a split screen instruction, display a plurality of operation windows on a display unit, and set the current operation states of the first operation window and the second operation window to be running states. The operation window comprises a first operation window and a second operation window, and the operation state is a state capable of receiving touch events. The display unit is an electronic element with a display function, and can be an AMOLED display screen, an LCD liquid crystal display screen, a micro light emitting diode display screen, a quantum dot display screen or an electronic ink display screen. In other embodiments, the display unit may further include a touch unit, such that the user may trigger the touch command by clicking on the screen, and the touch unit may be a touch screen. Of course, the user may also trigger the touch event by other means, such as pressing a button, remote controlling, and the like. The display unit may be a foldable display screen or a non-foldable display screen. The operating window is a window for running an application program, and is usually rectangular, and can also be in other shapes. The user can perform touch operation on the operation window to realize interaction with the terminal.
The split-screen instruction is an operation instruction that puts the terminal into split-screen mode. It can be triggered by the user clicking a split-screen button on the display unit or pressing a mechanical key on the terminal. To further improve the user experience, in this embodiment the step of receiving a split-screen instruction further comprises: identifying feature information and a motion trajectory of the user; and receiving a split-screen instruction when the motion trajectory of the identified feature information matches preset information. The feature information of the user includes, but is not limited to, hand information, head information, eyeball information and leg information, and both the feature information and its motion trajectory can be collected by a camera on the terminal. Taking hand information as an example, the user may trigger the split-screen instruction with the motion trajectory of one or more fingers. For example, if the preset information configured on the terminal is a multi-finger left-to-right sliding track, then when the terminal processor receives a multi-finger left-to-right slide as a touch event, the terminal receives a split-screen instruction and enters the split-screen state; conversely, when the terminal processor receives a multi-finger right-to-left slide, the terminal receives a split-screen exit instruction and leaves split-screen mode. The user can also use sliding tracks of the feature information to switch between the operation window running in the foreground and the one running in the background on the display unit. Other kinds of feature information are handled in a similar manner and are not described again here.
By corresponding the switching of whether the device is in the split screen mode or not to the characteristic information and the motion trail of the user, the operation experience of the user can be effectively improved.
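The trajectory check described above can be sketched as a small classifier. This is an illustrative sketch, not the patent's implementation: the class name, finger-count and distance thresholds are all assumptions, and each entry in the input list stands for one finger's horizontal travel in pixels.

```java
import java.util.List;

// Hypothetical sketch: a multi-finger left-to-right swipe enters split-screen
// mode, a right-to-left swipe exits it. Thresholds are assumed values.
class SplitScreenGestureDetector {
    enum Command { ENTER_SPLIT_SCREEN, EXIT_SPLIT_SCREEN, NONE }

    private static final int MIN_FINGERS = 2;       // "multi-finger"
    private static final float MIN_DISTANCE = 200f; // assumed pixel threshold

    /** Each list entry is one finger's horizontal travel: endX - startX. */
    Command classify(List<Float> horizontalTravels) {
        if (horizontalTravels.size() < MIN_FINGERS) return Command.NONE;
        boolean allRight = horizontalTravels.stream().allMatch(d -> d > MIN_DISTANCE);
        boolean allLeft  = horizontalTravels.stream().allMatch(d -> d < -MIN_DISTANCE);
        if (allRight) return Command.ENTER_SPLIT_SCREEN;
        if (allLeft)  return Command.EXIT_SPLIT_SCREEN;
        return Command.NONE;
    }
}
```

A single finger or a mixed-direction gesture yields `NONE`, so ordinary scrolling does not toggle the mode.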
After step S101, the method proceeds to step S102: receiving a first touch instruction for the first operation window, performing first processing on the first picture originally displayed in that window to obtain a first processed picture, and displaying it in the first operation window; or receiving a second touch instruction for the second operation window, performing second processing on the second picture originally displayed in that window to obtain a second processed picture, and displaying it in the second operation window. Because both the first and second operation windows can receive touch events in split-screen mode, the two windows are independent and do not interfere with each other when receiving touch events. A user can therefore input different operation instructions in the two windows at the same time; the processing unit executes the touch events received in the two windows in turn and displays the processed pictures in the corresponding windows, greatly enhancing the user experience.
In certain embodiments, the method comprises: caching the received first touch event and second touch event; then, according to the processing priorities of the two events, performing first processing on the first picture displayed in the first operation window to obtain a first processed picture and displaying it in the first operation window, and performing second processing on the second picture displayed in the second operation window to obtain a second processed picture and displaying it in the second operation window. The terminal processor executes the corresponding touch event on the display picture of each operation window according to the event's processing priority. The priority may be determined by buffering time, i.e. the touch event cached first in the buffer unit is executed first, or by the priority of the operation window. Preferably, the user pre-configures different control levels for the different operation windows in split-screen mode, and the terminal stores the correspondence between operation window IDs and control levels. When a touch event is received on an operation window, the window ID and the touch event are sent to the processing unit (i.e. the terminal processor), which looks up the control level for each window ID, executes the touch events of the windows in order of control-level priority, and then pushes the resulting pictures to the corresponding operation windows for display.
Of course, the processing priority of the touch event may also be determined according to the user level, which is similar to the method of determining the processing priority of the touch event according to the operation window ID, and is not described herein again.
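The caching-and-priority scheme above, a window-ID-to-control-level lookup with buffering order as the tiebreaker, can be sketched as follows. All names and the level encoding (lower value means higher priority) are assumptions for illustration, not the patent's code.

```java
import java.util.*;

// Hypothetical sketch of the touch-event buffer: events are cached with the
// ID of the window they arrived on, then drained in control-level order.
class TouchEventQueue {
    static class TouchEvent {
        final int windowId; final String action; final long seq;
        TouchEvent(int windowId, String action, long seq) {
            this.windowId = windowId; this.action = action; this.seq = seq;
        }
    }

    private final Map<Integer, Integer> controlLevel; // windowId -> level; lower = higher priority
    private final List<TouchEvent> buffer = new ArrayList<>();
    private long nextSeq = 0;

    TouchEventQueue(Map<Integer, Integer> controlLevel) { this.controlLevel = controlLevel; }

    /** Cache a touch event together with the ID of the window it arrived on. */
    void cache(int windowId, String action) {
        buffer.add(new TouchEvent(windowId, action, nextSeq++));
    }

    /** Drain buffered events in control-level order; equal levels keep arrival order. */
    List<TouchEvent> drainInPriorityOrder() {
        List<TouchEvent> out = new ArrayList<>(buffer);
        out.sort(Comparator
                .comparingInt((TouchEvent e) -> controlLevel.getOrDefault(e.windowId, Integer.MAX_VALUE))
                .thenComparingLong(e -> e.seq));
        buffer.clear();
        return out;
    }
}
```

Windows without a configured level fall back to lowest priority, which matches the default-ordering fallback described above.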
In certain embodiments, the method comprises: receiving a state setting instruction and adjusting the operation state of the current operation window to the state indicated by the instruction, wherein the operation state of an operation window is one of a creation state, a running state, a pause state and a stop state. When the number of operation windows is two or more, the method comprises: receiving a split-screen instruction and adjusting the operation states of all operation windows to the running state. This can be realized as follows: the processor adjusts the lifecycle of each operation window and the chain of Android mechanism changes this entails, for example modifying the "activityPausedLocked" function in the framework and judging whether the terminal is in split-screen mode through the marker "isshwalScreen()". If it is, the original lifecycle flow of the operation window is altered: when the first operation window receives a first touch event and the second operation window is already in the running state, the second window keeps that state instead of being moved to the pause state. The second operation window can therefore receive a second touch event from the user while the first operation window receives the first touch event, meeting the user's need to operate different windows synchronously.
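A minimal sketch of the lifecycle change just described, assuming a simple state machine rather than the actual Android framework code: in normal mode, focusing one window pauses the other; in split-screen mode the pause step is skipped, so both windows keep receiving touch events. All names here are illustrative.

```java
// Hypothetical state machine for one operation window's lifecycle.
class WindowLifecycle {
    enum State { CREATED, RUNNING, PAUSED, STOPPED }

    private final boolean splitScreenMode;

    WindowLifecycle(boolean splitScreenMode) { this.splitScreenMode = splitScreenMode; }

    /** State of this window after another window gains touch focus. */
    State onOtherWindowFocused(State current) {
        // Split-screen mode skips the usual pause step: a RUNNING window
        // stays RUNNING and can continue to receive its own touch events.
        if (splitScreenMode) return current;
        return current == State.RUNNING ? State.PAUSED : current;
    }
}
```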
When there are more than two operation windows, all operation windows currently displayed on the display unit can be set to the running state by default in split-screen mode, so that every window can receive touch events. Alternatively, the user can configure some operation windows to be in the running state and the others in a non-running state (such as the pause state). For example, when the terminal receives a split-screen instruction and enters split-screen mode, the current operation states of the first and second operation windows may be set to the running state while the other operation windows are paused. This diversified design effectively improves the user experience.
In certain embodiments, the method further comprises: receiving a transformation instruction for an operation window and applying the corresponding adjustment to that window. Transformation instructions include rotating, translating, scaling and closing the operation window, so the user can adjust the window's position and size to whatever state best suits subsequent operation.
In this embodiment, the transformation instruction is a rotation instruction, and each operation window corresponds to one operation button. The method comprises: receiving a touch command on the operation button, triggering the rotation instruction, and rotating the corresponding operation window. Preferably, each time the user clicks the button, the operation window bound to it rotates 90° clockwise or counterclockwise. For example, when two operation windows are displayed, the first window can be rotated 180°, so that two users facing the screen from opposite sides can each operate their own window. Because the windows are independent, the two users do not affect each other: one can play a game while the other watches a video. By binding each operation window to its own button, a user can rotate any window until it best matches his or her position and operating habits, giving a better human-computer interaction experience.
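The per-window rotation button can be sketched as a simple accumulator: each click adds 90° and wraps at 360°, so two clicks give the 180° flip that lets a second user on the opposite side of the screen operate the window. Class and method names are illustrative assumptions.

```java
// Hypothetical sketch of one operation window's rotation state.
class RotatableWindow {
    private int rotationDegrees = 0;

    /** One click of the window's rotation button: rotate 90°, wrapping at 360°. */
    void onRotateButtonClicked() {
        rotationDegrees = (rotationDegrees + 90) % 360;
    }

    int getRotationDegrees() { return rotationDegrees; }
}
```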
As shown in FIG. 2, the present invention further provides a device for supporting multi-point input simultaneously in a single-screen, multi-window mode. The device comprises an instruction receiving unit 101, a display unit 102, an operation state setting unit 103 and a processing unit 104. The processing unit is an electronic component with data-processing capability, such as a central processing unit (CPU) or a microcontroller (MCU).
The instruction receiving unit 101 is configured to receive a split-screen instruction, the processing unit 104 is configured to control the display unit 102 to display a plurality of operation windows, and the operation state setting unit 103 is configured to set current operation states of the first operation window and the second operation window to an operating state; the operation window comprises a first operation window and a second operation window, and the operation state is a state capable of receiving touch events.
The instruction receiving unit 101 is further configured to receive a first touch instruction for the first operation window and a second touch instruction for the second operation window.
The processing unit 104 is configured to perform first processing on a first picture displayed in the first operation window to obtain a first processed picture and display it in the first operation window, and is further configured to perform second processing on a second picture displayed in the second operation window to obtain a second processed picture and display it in the second operation window.
Because both the first operation window and the second operation window can receive touch events in split-screen mode, the two windows are independent and do not interfere with each other when receiving touch events. A user can therefore input different operation instructions in the two windows simultaneously; the processing unit executes the processing for the touch events received in the two windows in turn and displays the processed pictures in the corresponding windows, greatly improving the user experience.
In some embodiments, the processing unit includes a touch event cache module 114 and a touch event execution module 124. The touch event cache module 114 is configured to cache the received first touch event and second touch event. The touch event execution module 124 is configured to perform, according to the processing priorities of the first touch event and the second touch event, first processing on the first picture displayed in the first operation window to obtain a first processed picture and display it in the first operation window, and to perform second processing on the second picture displayed in the second operation window to obtain a second processed picture and display it in the second operation window. The touch event execution module can be implemented through the Android Framework: when different operation windows receive different touch events at the same time, the system reports the touch events to the Android Framework through EventHub; because the Framework keeps the different operation windows in the running state, it can receive and parse the states of the reported touch points (i.e., the touch events) simultaneously, and after processing it sends the resulting display pictures to the corresponding operation windows for display.
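A minimal sketch of the cache-then-execute flow just described is given below. The class names stand in for the touch event cache module (114) and touch event execution module (124), and the numeric-priority rule is an assumption; the real path goes through the Android Framework's input pipeline (EventHub and its dispatchers), which is not reproduced here.

```python
import heapq

# Hypothetical stand-ins for modules 114 and 124: touch events arriving
# simultaneously in different windows are buffered, then executed in
# priority order, and each processed picture is shown in its own window.

class TouchEventCache:
    def __init__(self):
        self._heap = []
        self._seq = 0            # tie-breaker keeps arrival order stable

    def cache(self, priority, window_id, event):
        heapq.heappush(self._heap, (priority, self._seq, window_id, event))
        self._seq += 1

    def drain(self):
        # Yield buffered events, lowest priority value first.
        while self._heap:
            _, _, window_id, event = heapq.heappop(self._heap)
            yield window_id, event

class TouchEventExecutor:
    def __init__(self):
        self.displayed = {}      # window_id -> processed picture

    def execute(self, cache):
        # Process each buffered event in turn and display the processed
        # picture in the window the event belongs to.
        for window_id, event in cache.drain():
            self.displayed[window_id] = f"processed({event})"

cache = TouchEventCache()
cache.cache(priority=1, window_id="first", event="tap")
cache.cache(priority=2, window_id="second", event="swipe")
executor = TouchEventExecutor()
executor.execute(cache)
assert executor.displayed == {"first": "processed(tap)",
                              "second": "processed(swipe)"}
```

Buffering before execution is what makes "simultaneous" input in two windows safe: events are never lost, only ordered.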
In some embodiments, the instruction receiving unit is configured to receive a state setting instruction, and the operation state setting unit is configured to adjust the operation state of the current operation window to the state corresponding to that instruction, where the operation state of an operation window is one of a created state, a running state, a paused state, and a stopped state. The number of operation windows may be more than two. In that case, either the instruction receiving unit receives a split-screen instruction and the operation state setting unit adjusts the operation states of all operation windows to the running state; or the instruction receiving unit receives a split-screen instruction and the operation state setting unit sets the current operation states of the first operation window and the second operation window to the running state while placing the other operation windows in the paused state.
In certain embodiments, the device comprises a feature identification unit, and receiving the split-screen instruction comprises: the feature identification unit identifies feature information and a motion track of the user, and when the motion track of the identified feature information matches preset information, the instruction receiving unit receives the split-screen instruction. By tying the switch into or out of split-screen mode to the user's feature information and motion track, the operating experience of the user can be effectively improved.
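One way to realize this trigger is to reduce the recognized motion track to coarse directions and compare it against a preset template. The matching rule below (exact sequence match on up/down/left/right steps) and all names are assumptions for illustration; the patent does not specify the matching algorithm.

```python
# Hypothetical feature-identification step: reduce the tracked motion of
# a recognized feature (e.g. a fingertip) to coarse directions, and fire
# the split-screen instruction only when it matches the preset pattern.

PRESET_SPLIT_SCREEN_TRACK = ["down", "down", "right"]

def to_directions(points):
    """Convert a list of (x, y) sample points into coarse move directions.
    Screen coordinates are assumed: y grows downward."""
    dirs = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        if abs(dx) >= abs(dy):
            dirs.append("right" if dx > 0 else "left")
        else:
            dirs.append("down" if dy > 0 else "up")
    return dirs

def matches_preset(points):
    return to_directions(points) == PRESET_SPLIT_SCREEN_TRACK

# A track moving down twice then right matches the preset and would
# trigger the split-screen instruction; a single rightward move does not.
track = [(0, 0), (0, 10), (0, 20), (15, 22)]
assert matches_preset(track)
assert not matches_preset([(0, 0), (10, 0)])
```

A production recognizer would tolerate noise (e.g. via dynamic time warping or a gesture library) rather than require an exact direction sequence.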
In some embodiments, the instruction receiving unit is further configured to receive a transformation instruction for an operation window, and the processing unit is configured to perform the adjustment corresponding to the transformation instruction on that window. The transformation instruction comprises a rotation instruction, and each operation window corresponds to one operation button: the instruction receiving unit receives a touch command on the operation button and triggers the rotation instruction, and the processing unit performs the corresponding rotation operation on the operation window. Because each operation window corresponds to its own operation button, a user can freely rotate any window according to his or her position until the window best matches the user's operating habits, providing a better human-computer interaction experience.
In summary, the method and device for simultaneously supporting multi-point input in a single-screen, multi-window mode comprise: receiving a split-screen instruction, displaying a plurality of operation windows including a first operation window and a second operation window on a display unit, and setting the current operation states of the first and second operation windows to the running state, the running state being a state in which touch events can be received; receiving a first touch instruction for the first operation window and a second touch instruction for the second operation window; performing first processing on a first picture displayed in the first operation window to obtain a first processed picture and displaying it in the first operation window; and performing second processing on a second picture displayed in the second operation window to obtain a second processed picture and displaying it in the second operation window. By adjusting the operation states of the first and second operation windows, both windows can receive touch commands in split-screen mode: two windows that would otherwise interfere with each other are made mutually independent, a user can input different operation instructions in the two windows at the same time, and the processing unit executes the two operation instructions in turn, greatly improving the user experience.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between those entities or actions. The terms "comprises," "comprising," and any variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or terminal that comprises it. Further, herein, "greater than," "less than," "more than," and the like are understood to exclude the stated number, while "above," "below," "within," and the like are understood to include it.
As will be appreciated by one skilled in the art, the above-described embodiments may be provided as a method, a device, or a computer program product, and may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. All or part of the steps in the methods of the embodiments may be implemented by a program instructing the associated hardware; the program may be stored in a storage medium readable by a computer device and used to execute all or part of those steps. The computer devices include, but are not limited to: personal computers, servers, general-purpose computers, special-purpose computers, network devices, embedded devices, programmable devices, intelligent mobile terminals, smart home devices, wearable smart devices, and vehicle-mounted smart devices. The storage media include, but are not limited to: RAM, ROM, magnetic disks, magnetic tape, optical disks, flash memory, USB flash drives, removable hard disks, memory cards, memory sticks, network server storage, and network cloud storage.
The various embodiments described above are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a computer apparatus to produce a machine, such that the instructions, which execute via the processor of the computer apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer device to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer apparatus to cause a series of operational steps to be performed on the computer apparatus to produce a computer implemented process such that the instructions which execute on the computer apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Although the embodiments have been described, those skilled in the art, once grasping the basic inventive concept, can make other variations and modifications to them. The above embodiments are therefore only examples of the present invention and are not intended to limit its scope; all equivalent structures or equivalent processes derived from the contents of the present specification and drawings, whether applied directly or indirectly in other related technical fields, fall within the scope of the present invention.

Claims (10)

1. A method for supporting multi-point input in a single-screen multi-window simultaneously, the method comprising the steps of:
receiving a screen splitting instruction, displaying a plurality of operation windows on a display unit, wherein the operation windows comprise a first operation window and a second operation window, and setting the current operation states of the first operation window and the second operation window to be running states; the running state is a state capable of receiving touch events;
receiving a first touch instruction for a first operation window and a second touch instruction for a second operation window;
performing first processing on a first picture displayed in a first operation window to obtain a first processing picture, and displaying the first processing picture in the first operation window; performing second processing on a second picture displayed in the second operation window to obtain a second processed picture, and displaying the second processed picture in the second operation window;
the method comprises the following steps:
receiving a state setting instruction, and adjusting the operation state of the current operation window to an operation state corresponding to the state setting instruction, wherein the operation state of the operation window comprises one of a creation state, a running state, a pause state and a stop state;
the number of the operation windows is more than two, and the method comprises the following steps:
receiving a screen splitting instruction, setting the current operation states of the first operation window and the second operation window to be running states, and enabling other operation windows to be in a pause state.
2. The method of claim 1, wherein the method comprises:
caching the received first touch event and the second touch event;
according to the processing priorities of the first touch event and the second touch event, performing first processing on a first picture displayed in a first operation window to obtain a first processing picture, and displaying the first processing picture in the first operation window; and performing second processing on a second picture displayed in the second operation window to obtain a second processed picture, and displaying the second processed picture in the second operation window.
3. The method for supporting multi-point input in a single-screen and multi-window mode as claimed in claim 1, wherein the step of receiving a split-screen command further comprises:
identifying characteristic information and a motion track of a user;
and receiving a screen splitting instruction when the motion trail of the identified characteristic information accords with preset information.
4. The method for supporting multi-point input in a single-screen multi-window simultaneously as claimed in claim 1, wherein said method further comprises:
and receiving a conversion instruction for the operation window, and executing adjustment corresponding to the conversion instruction for the operation window.
5. The method for supporting multi-point input in a single-screen and multi-window manner as claimed in claim 4, wherein the conversion instruction comprises a rotation instruction, and each operation window corresponds to one operation button; the method comprises the following steps:
and receiving a touch control command of the operation button, triggering the rotation instruction, and performing corresponding rotation operation on the operation window.
6. A device for simultaneously supporting multi-point input by a single screen and multiple windows is characterized by comprising an instruction receiving unit, a display unit, an operation state setting unit and a processing unit;
the instruction receiving unit is used for receiving a split-screen instruction, the processing unit is used for controlling the display unit to display a plurality of operation windows, and the operation state setting unit is used for setting the current operation states of the first operation window and the second operation window to be running states; the operation window comprises a first operation window and a second operation window, and the operation state is a state capable of receiving a touch event;
the instruction receiving unit is further used for receiving a first touch instruction for the first operation window and receiving a second touch instruction for the second operation window;
the processing unit is configured to perform first processing on a first picture displayed in the first operation window to obtain a first processed picture and display it in the first operation window, and is further configured to perform second processing on a second picture displayed in the second operation window to obtain a second processed picture and display it in the second operation window;
the instruction receiving unit is used for receiving a state setting instruction, the operation state setting unit is used for adjusting the operation state of the current operation window to an operation state corresponding to the state setting instruction, and the operation state of the operation window comprises one of a creation state, a running state, a pause state and a stop state;
the number of the operation windows is more than two;
the instruction receiving unit is configured to receive the split-screen instruction, and the operation state setting unit is configured to set the current operation states of the first operation window and the second operation window to running states and place the other operation windows in a paused state.
7. The apparatus of claim 6, wherein the processing unit comprises a touch event buffer module and a touch event execution module;
the touch event caching module is used for caching the received first touch event and the second touch event;
the touch event execution module is configured to perform, according to the processing priorities of the first touch event and the second touch event, first processing on the first picture displayed in the first operation window to obtain a first processed picture and display it in the first operation window, and to perform second processing on the second picture displayed in the second operation window to obtain a second processed picture and display it in the second operation window.
8. The apparatus for supporting multi-point input in a single-screen and multi-window simultaneously as claimed in claim 6, wherein said apparatus comprises a feature recognition unit; the instruction receiving unit is used for receiving the split screen instruction and comprises the following steps: the characteristic identification unit is used for identifying characteristic information and a motion track of the user; and when the motion trail of the identified characteristic information accords with preset information, the instruction receiving unit is used for receiving a screen splitting instruction.
9. The apparatus for supporting multi-point input simultaneously for single-screen and multi-window as claimed in claim 6, wherein the instruction receiving unit is further configured to receive a transformation instruction for the operation window, and the processing unit is configured to perform an adjustment corresponding to the transformation instruction on the operation window.
10. The apparatus for supporting multi-point input simultaneously in single-screen and multi-window as claimed in claim 9, wherein the transformation instruction comprises a rotation instruction, and each operation window corresponds to one operation button; the instruction receiving unit is configured to receive a touch command on the operation button and trigger the rotation instruction, and the processing unit is configured to perform the corresponding rotation operation on the operation window.
CN201710590592.0A 2017-07-19 2017-07-19 Method and device for simultaneously supporting multi-point input in single-screen and multi-window mode Active CN107450830B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710590592.0A CN107450830B (en) 2017-07-19 2017-07-19 Method and device for simultaneously supporting multi-point input in single-screen and multi-window mode

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710590592.0A CN107450830B (en) 2017-07-19 2017-07-19 Method and device for simultaneously supporting multi-point input in single-screen and multi-window mode

Publications (2)

Publication Number Publication Date
CN107450830A CN107450830A (en) 2017-12-08
CN107450830B 2020-08-14

Family

ID=60487307

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710590592.0A Active CN107450830B (en) 2017-07-19 2017-07-19 Method and device for simultaneously supporting multi-point input in single-screen and multi-window mode

Country Status (1)

Country Link
CN (1) CN107450830B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109739427B (en) * 2018-12-03 2020-12-29 北京梧桐车联科技有限责任公司 Screen operation method and device, display equipment and storage medium
CN110457109B (en) * 2019-08-15 2021-07-23 北京字节跳动网络技术有限公司 Multi-window parallel method and device, terminal and storage medium
CN110851066B (en) * 2019-10-24 2021-12-10 瑞芯微电子股份有限公司 Method and device for supporting touch control of multiple display screens
CN110780969B (en) * 2019-10-31 2022-12-30 抖音视界有限公司 Method and device for operating electronic equipment, electronic equipment and storage medium
CN112639714A (en) * 2020-03-20 2021-04-09 华为技术有限公司 Method, device and system for executing gesture instruction and storage medium

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
CN102760078B (en) * 2011-11-02 2015-03-25 联想(北京)有限公司 Method and device for switching working mode of data processing terminal, and data processing terminal
US9588674B2 (en) * 2012-11-30 2017-03-07 Qualcomm Incorporated Methods and systems for providing an automated split-screen user interface on a device
KR102153366B1 (en) * 2013-08-30 2020-10-15 삼성전자 주식회사 Method and apparatus for switching screen in electronic device
KR20150049469A (en) * 2013-10-30 2015-05-08 삼성전자주식회사 Device and method for providing user interface on multi windows
CN105760102B (en) * 2014-09-22 2020-01-07 努比亚技术有限公司 Terminal interaction control method and device and application program interaction control method
CN106155554A (en) * 2016-07-06 2016-11-23 深圳市金立通信设备有限公司 A kind of multi-screen display method and terminal

Also Published As

Publication number Publication date
CN107450830A (en) 2017-12-08


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 350003 building, No. 89, software Avenue, Gulou District, Fujian, Fuzhou 18, China

Applicant after: Ruixin Microelectronics Co., Ltd

Address before: 350003 building, No. 89, software Avenue, Gulou District, Fujian, Fuzhou 18, China

Applicant before: Fuzhou Rockchips Electronics Co.,Ltd.

GR01 Patent grant