CN111135557B - Interaction method and device for multiple screens

Info

Publication number
CN111135557B (application CN201911366530.7A)
Authority
CN
China
Prior art keywords
option
user
screen
display interface
interaction
Legal status
Active
Application number
CN201911366530.7A
Other languages
Chinese (zh)
Other versions
CN111135557A (en)
Inventor
王斌
王莹
何志刚
赵文芳
苏威
Current Assignee
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Application filed by Samsung Electronics China R&D Center, Samsung Electronics Co Ltd filed Critical Samsung Electronics China R&D Center
Priority to CN201911366530.7A
Publication of CN111135557A
Application granted
Publication of CN111135557B
Status: Active

Classifications

    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 - Input arrangements for video game devices
    • A63F13/24 - Constructional details thereof, e.g. game controllers with detachable joystick handles
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482 - Interaction with lists of selectable items, e.g. menus
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 - End-user applications
    • H04N21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 - End-user applications
    • H04N21/478 - Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4781 - Games

Abstract

The embodiment of the invention discloses an interaction method and device for multiple screens. The method comprises the following steps: in response to receiving a user's selection of a multi-screen interaction option, projecting the current display interface to a target screen; presenting options for selecting an input device; receiving a selection of one of those options; and, when the selection indicates a virtual input device, generating an operation interface comprising at least one virtual input option based on the current display interface projected to the target screen, and displaying the operation interface on the local screen without displaying the projected display interface there. The method achieves information sharing, reduces the heating and energy consumption of the electronic device housing the local screen, and improves the accuracy of information input on the local operation interface.

Description

Interaction method and device for multiple screens
Technical Field
The present disclosure relates to the field of computer technology, in particular to terminal sharing, and more particularly to an interaction method and device for multiple screens.
Background
In the prior art, when a user plays a game, plays audio or video, or otherwise interacts with a terminal, the screen picture of the intelligent terminal device can be transmitted to a large-screen device through the Wi-Fi P2P technology, and the large-screen device decodes and then displays the picture or the audio and video.
However, in the prior art, while the man-machine interaction takes place, the interaction picture on the mobile phone is transmitted to the large-screen device through the multi-screen sharing technology, so the picture is played simultaneously on both the mobile phone and the large-screen device. The virtual input options on the mobile phone produce no physical touch sensation, so if the user keeps his or her eyes on the large-screen device, operating through the phone's virtual input options causes frequent misoperation; in addition, the mobile phone terminal heats up easily and consumes considerable power.
Disclosure of Invention
Embodiments of the present disclosure provide an interaction method and device for multiple screens.
In a first aspect, an embodiment of the present disclosure provides an interaction method for multiple screens, including: in response to receiving a user's selection of a multi-screen interaction option, projecting the current display interface to a target screen; presenting options for selecting an input device; receiving a selection of one of those options; and, when the selection indicates a virtual input device, generating an operation interface including at least one virtual input option based on the current display interface projected to the target screen, and displaying the operation interface on the local screen without displaying the projected display interface there.
In some embodiments, projecting the current display interface to the target screen in response to receiving the user's selection of the multi-screen interaction option includes: in response to receiving a local user's selection of the multi-screen interaction option, projecting to the target screen a current display interface that includes both the local user's interactions and a remote user's interactions.
In some embodiments, generating an operation interface including at least one virtual input option based on the current display interface projected to the target screen includes: predicting, with AI big data, a user portrait of the user of the current display interface projected to the target screen; and generating an operation interface including at least one virtual input option based on the predicted user portrait.
In some embodiments, generating an operation interface including at least one virtual input option based on the current display interface projected to the target screen includes: generating, based on that display interface, an operation interface comprising a preset background and at least one virtual input option, wherein the preset background is a default background or a background set by the user through a background setting option.
In some embodiments, the virtual input options include a virtual interactive handle.
In some embodiments, the interaction method further comprises: in response to a user's selection of a virtual input option, presenting a vibration whose frequency corresponds to the selected option.
In some embodiments, the interaction method further comprises: in response to a user's selection of a generated virtual input option, presenting a movable identifier on the option; in response to the user moving an option that carries the movable identifier, moving that option to the target position indicated by the movement; and, in response to the user entering an exit operation, deselecting the option.
In some embodiments, the interaction method further comprises: when the selection indicates a physical input device, starting the local physical input device based on a preset correspondence between the local physical input device and the current display interface, and blacking out the local screen.
In some embodiments, starting the local physical input device comprises: popping up a physical interaction handle.
In some embodiments, the target screen is larger than the local screen.
In some embodiments, projecting the current display interface to the target screen in response to receiving the user's selection of the multi-screen interaction option includes: projecting the current display interface to a television screen through the television's external-device adapter.
In a second aspect, embodiments of the present disclosure provide an interaction device for multiple screens, including: a display interface projection unit configured to project the current display interface to a target screen in response to receiving a user's selection of a multi-screen interaction option; a selection option presenting unit configured to present options for selecting an input device; a selection operation receiving unit configured to receive a selection of one of those options; and an operation interface display unit configured, when the selection indicates a virtual input device, to generate an operation interface including at least one virtual input option based on the current display interface projected to the target screen, and to display the operation interface on the local screen without displaying the projected display interface there.
In some embodiments, the display interface projection unit is further configured to: in response to receiving a local user's selection of the multi-screen interaction option, project to the target screen a current display interface that includes both the local user's interactions and a remote user's interactions.
In some embodiments, the operation interface display unit is further configured to: predict, with AI big data, a user portrait of the user of the current display interface projected to the target screen; and generate an operation interface including at least one virtual input option based on the predicted user portrait.
In some embodiments, the operation interface display unit is further configured to: generate, based on the current display interface projected to the target screen, an operation interface comprising a preset background and at least one virtual input option, wherein the preset background is a default background or a background set by the user through a background setting option.
In some embodiments, the virtual input options in the operation interface display unit include a virtual interactive handle.
In some embodiments, the interaction device further comprises: an option vibration presenting unit configured to present, in response to a user's selection of a virtual input option, a vibration whose frequency corresponds to the selected option.
In some embodiments, the interaction device further comprises: a movable identifier presenting unit configured to present a movable identifier on a generated virtual input option in response to the user's selection of that option; an input option moving unit configured to move a virtual input option that carries the movable identifier to the target position indicated by the user's movement operation; and an option deselecting unit configured to deselect the virtual input option in response to the user entering an exit operation.
In some embodiments, the interaction device further comprises: an input device starting unit configured, when the selection indicates a physical input device, to start the local physical input device based on a preset correspondence between the local physical input device and the current display interface, and to black out the local screen.
In some embodiments, the input device starting unit is further configured to: pop up a physical interaction handle.
In some embodiments, the target screen in the display interface projection unit is larger than the local screen.
In some embodiments, the display interface projection unit is further configured to: project the current display interface to a television screen through the television's external-device adapter.
In a third aspect, an embodiment of the present disclosure provides an electronic device/terminal/server, including: one or more processors; and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the interaction method for multiple screens described in any of the above.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable medium storing a computer program which, when executed by a processor, implements the interaction method for multiple screens described in any of the above.
The interaction method and device for multiple screens provided by the embodiments of the present disclosure first project the current display interface to a target screen in response to receiving a user's selection of a multi-screen interaction option; then present options for selecting an input device; then receive a selection of one of those options; and, when the selection indicates a virtual input device, generate an operation interface including at least one virtual input option based on the current display interface projected to the target screen, displaying that operation interface on the local screen without displaying the projected display interface there. In this process, the local display interface is projected to the target screen, achieving information sharing, while the local screen shows only the operation interface with its virtual input options rather than the projected display interface, which reduces the heating and energy consumption of the electronic device housing the local screen and improves the accuracy of information input on the local operation interface.
In some embodiments, the electronic device may emit vibrations of different frequencies according to which virtual button is selected, reminding the user and enhancing the accuracy of operation.
In some embodiments, AI big data is used to predict the user portrait of the user of the current display interface, and the operation interface is generated from that portrait, improving the pertinence of the operation interface so that it better matches the user's operating habits.
In some embodiments, operating the current display interface projected to the target screen through a local physical interaction handle improves the accuracy and efficiency of information input.
Drawings
Other features, objects and advantages of the present disclosure will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the following drawings in which:
FIG. 1 is an exemplary system architecture diagram to which the present disclosure may be applied;
FIG. 2a is a flow diagram of one embodiment of an interaction method for multiple screens according to an embodiment of the present disclosure;
FIG. 2b schematically illustrates receiving a single user selection of a multi-screen interactive option and projecting a current display interface onto a television screen;
FIG. 2c schematically illustrates receiving a single user selection of a multi-screen interaction option and projecting a current display interface for interaction with a plurality of users onto a television screen;
FIG. 2d schematically illustrates an operation interface generated when the application to which the current display interface belongs is a first application;
FIG. 2e schematically illustrates an operation interface with a background generated when the application to which the current display interface belongs is a second application;
FIG. 2f schematically illustrates an operation interface generated when the application to which the current display interface belongs is a third application;
FIG. 2g schematically illustrates different vibration frequencies presented for the operation interface of FIG. 2d;
FIG. 2h schematically illustrates different vibration frequencies presented for the operation interface of FIG. 2f;
FIG. 3 is one exemplary application scenario of an interaction method for multiple screens according to an embodiment of the present disclosure;
FIG. 4a is a flow diagram of yet another embodiment of an interaction method for multiple screens according to an embodiment of the present disclosure;
FIG. 4b schematically illustrates a physical input option on the physical interaction handle in the convex mode;
FIG. 4c schematically illustrates a physical input option on the physical interaction handle in the concave mode;
FIG. 4d schematically illustrates a physical input option on the physical interaction handle in the flat sensing mode;
FIG. 5 is an exemplary block diagram of one embodiment of an interactive device for multiple screens of the present disclosure;
FIG. 6 is a schematic diagram of a computer system suitable for use in implementing embodiments of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings.
It should be noted that, without conflict, the embodiments of the present disclosure and features of the embodiments may be combined with each other. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
FIG. 1 illustrates an exemplary system architecture 100 to which embodiments of the disclosed interaction methods for multiple screens or interaction devices for multiple screens may be applied.
As shown in fig. 1, a system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may interact with the server 105 via the network 104 using the terminal devices 101, 102, 103 to receive or send messages or the like. Various communication client applications, such as game-type applications, browser applications, shopping-type applications, search-type applications, instant messaging tools, mailbox clients, social platform software, etc., may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices supporting browser applications, including but not limited to tablet computers, laptop and desktop computers, and the like. When the terminal devices 101, 102, 103 are software, they can be installed in the above-listed electronic devices and may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
The server 105 may be a server providing various services, such as a background server supporting the applications running on the terminal devices 101, 102, 103. The background server may analyze and otherwise process received data such as requests, and feed the processing results back to the terminal devices.
The server may be hardware or software. When the server is hardware, the server may be implemented as a distributed server cluster formed by a plurality of servers, or may be implemented as a single server. When the server is software, it may be implemented as a plurality of software or software modules, for example, for providing distributed services, or as a single software or software module. The present invention is not particularly limited herein.
In practice, the interaction method for multiple screens provided by the embodiments of the present disclosure may be performed by the terminal devices 101, 102, 103, and the interaction apparatus for multiple screens may also be provided in the terminal devices 101, 102, 103.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to fig. 2a, fig. 2a illustrates a flow 200 of one embodiment of an interaction method for multiple screens according to the present disclosure. The interaction method for the multi-screen comprises the following steps:
in step 201, in response to receiving a user selection of a multi-screen interactive option, a current display interface is projected to a target screen.
In this embodiment, the execution body of the multi-screen interaction method (for example, a terminal device shown in fig. 1) may receive, through an input device, the user's selection of the multi-screen interaction option.
The multi-screen interaction option is an option for projecting the current display interface of software running on the local screen so that it is displayed on other screens.
The selection operation may be any preset operation that can be recognized as an act of selection.
For example, the terminal device may capture a click by the user; if the click position corresponds to the multi-screen interaction option, the click is recognized as a selection of that option.
As another example, the terminal device may capture the user's operation track and check whether it matches a preset track; if so, the captured track is treated as a selection of the multi-screen interaction option. The preset track is a predefined selection operation bound to the multi-screen interaction option.
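The two recognition strategies above can be sketched in a few lines of Kotlin. This is an illustrative model only; the class names, fields, and tolerance value are assumptions, not taken from the patent.

```kotlin
// Hypothetical sketch of the two selection-recognition strategies described above.
data class Point(val x: Float, val y: Float)
data class Bounds(val left: Float, val top: Float, val right: Float, val bottom: Float) {
    fun contains(p: Point) = p.x in left..right && p.y in top..bottom
}

class MultiScreenOptionRecognizer(
    private val optionBounds: Bounds,       // on-screen bounds of the multi-screen option
    private val presetTrack: List<Point>,   // preset track bound to the option
    private val tolerance: Float = 24f      // matching tolerance in pixels (assumed)
) {
    // Strategy 1: a click whose position falls inside the option bounds.
    fun isClickSelection(click: Point): Boolean = optionBounds.contains(click)

    // Strategy 2: a captured track that stays within tolerance of the preset track,
    // assuming both tracks have been resampled to the same number of points.
    fun isTrackSelection(track: List<Point>): Boolean =
        track.size == presetTrack.size && track.zip(presetTrack).all { (a, b) ->
            val dx = a.x - b.x
            val dy = a.y - b.y
            dx * dx + dy * dy <= tolerance * tolerance
        }
}
```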
The target screen is the screen, indicated by the multi-screen interaction option, that establishes a projection relationship with the local screen. Its size may be larger than, equal to, or smaller than that of the local screen, which is not limited in this application.
When the execution body projects the current display interface to the target screen, the terminals interacting with that interface may be a single user terminal or several user terminals.
In some optional implementations of this embodiment, projecting the current display interface to the target screen in response to receiving the user's selection of the multi-screen interaction option includes: in response to receiving a local user's selection of the multi-screen interaction option, projecting to the target screen a current display interface that includes both the local user's interactions and a remote user's interactions.
In this implementation, if multiple user terminals interact with the current display interface, the execution body serves both as the terminal that transmits the current display interface to the target screen and as one of the interacting terminals, while the other devices act as additional interacting terminals; this improves adaptability to simultaneous interaction by multiple users.
Specifically, when the user selects the multi-screen interaction option, the execution body may determine whether the current operation interface supports simultaneous interaction by multiple terminals. If it does, a many-to-one control mode can be provided, in which the terminal device that selected the multi-screen interaction option acts as both picture output and control terminal, and the other terminal devices act as control terminals only.
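A minimal sketch of that many-to-one session follows; the capability check, identifiers, and event type are assumptions for illustration, none of them from the patent.

```kotlin
// Hypothetical session model for the many-to-one control mode: the terminal that
// selected the option owns the picture output and is also a controller; other
// terminals may join as controllers only.
class MultiScreenSession(private val displayOwnerId: String) {
    private val controllers = mutableSetOf(displayOwnerId)

    // Placeholder capability check for "supports simultaneous multi-terminal interaction".
    fun supportsMultiTerminal(interfaceId: String): Boolean =
        interfaceId.startsWith("game_")

    fun joinAsController(terminalId: String): Boolean = controllers.add(terminalId)

    fun routeInput(terminalId: String, event: String) {
        require(terminalId in controllers) { "terminal $terminalId has not joined" }
        println("forwarding $event from $terminalId to the shared display interface")
    }
}
```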
In some optional implementations, projecting the current display interface to the target screen in response to receiving the user's selection of the multi-screen interaction option may include: in response to receiving the user's selection of the multi-screen interaction option, projecting the current display interface to a television screen through the television's external-device adapter.
In this implementation, adopting a television screen as the target screen improves the display clarity of the current display interface and thereby the interaction efficiency.
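On Android, one way such a projection could be realized is through a presentation display, as in the sketch below. This is an assumption for illustration; the patent does not specify this API, and the actual transport (for example, Wi-Fi P2P mirroring or an HDMI adapter) is left open.

```kotlin
import android.app.Presentation
import android.content.Context
import android.hardware.display.DisplayManager
import android.view.Display
import android.view.View

// Find a presentation-capable display (e.g. a TV reached through an external-device
// adapter or a wireless display) and show the given interface view on it.
fun projectToTargetScreen(context: Context, contentView: View): Presentation? {
    val dm = context.getSystemService(Context.DISPLAY_SERVICE) as DisplayManager
    val target: Display = dm.getDisplays(DisplayManager.DISPLAY_CATEGORY_PRESENTATION)
        .firstOrNull() ?: return null          // no target screen connected
    val presentation = Presentation(context, target)
    presentation.setContentView(contentView)   // the "current display interface"
    presentation.show()
    return presentation
}
```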
As shown in fig. 2b, fig. 2b illustrates the case where a single user's selection of the multi-screen interaction option is received and the current display interface is projected to a television screen. The operator 210 projects the current display interface to the television screen while the terminal device displays an operation interface comprising virtual input options, and the onlookers 211 and 212 watch the operator 210's interactions on the television screen.
As shown in fig. 2c, fig. 2c illustrates the case where a single user's selection of the multi-screen interaction option is received and a current display interface shared by several interacting users is projected to a television screen. The operator 220 projects the current display interface to the television screen while the terminal device displays an operation interface comprising virtual input options, and the interactions of the other operators 221 and 222 with the interface are likewise shown on the television screen.
Returning to fig. 2a, in step 202, an option is presented to select an input device.
In this embodiment, once the execution body has projected the current display interface to the target screen, options for selecting an input device may be presented on the local screen and/or the target screen: for example, an option to select a virtual input device and an option to select a physical input device.
In step 203, a selection operation of an option of selecting an input device is received.
In this embodiment, after the execution body presents the input device options, it may receive the user's selection of one of them.
In step 204, in response to the selection indicating a virtual input device, an operation interface including at least one virtual input option is generated based on the current display interface projected to the target screen, and the operation interface is displayed on the local screen without displaying the projected display interface.
In this embodiment, if the selection indicates the virtual input device, the execution body may generate an operation interface corresponding to the current display interface projected to the target screen. The operation interface may include one, two, or more virtual input options. A virtual input option may take the form of a virtual interactive button or a special virtual input option such as a virtual interactive handle.
When generating the operation interface, the execution body may produce different operation interfaces for different current display interfaces, for different applications, or for different interaction stages of the same application. The specific manner of generation may follow any interface-generation technique in the prior art or developed in the future, which is not limited in this application. For example, the operation interface may be generated according to a preset association between operation interfaces and display interfaces.
After generating the operation interface, the execution body displays it on the local screen and no longer displays the current display interface projected to the target screen there. The local screen now serves as a virtual input device offering the user an operation interface for entering information. For example, when the current interactive interface is a game interface, the execution body may generate a different key layout for each game.
Figs. 2d, 2e, and 2f respectively illustrate the different operation interfaces generated for different applications to which the current display interface belongs. Fig. 2d shows the operation interface generated when the application is a first application (for example, the game GOK). Fig. 2e shows an operation interface with a background, generated when the application is a second application (for example, Naruto). Fig. 2f shows the operation interface generated when the application is a third application (for example, QQ Speed).
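The "preset association" mentioned above amounts to a lookup from the owning application (or interaction stage) to a key layout, roughly as below. All package names and layout kinds are hypothetical placeholders.

```kotlin
// Sketch of a preset association from application to key layout.
enum class LayoutKind { DPAD_AND_SKILLS, DPAD_ONLY, STEERING_AND_PEDALS, DEFAULT }

val presetLayouts = mapOf(
    "com.example.gok" to LayoutKind.DPAD_AND_SKILLS,        // MOBA-style game
    "com.example.naruto" to LayoutKind.DPAD_ONLY,           // fighting game
    "com.example.qqspeed" to LayoutKind.STEERING_AND_PEDALS // racing game
)

// Fall back to a default layout when the current application has no preset entry.
fun operationLayoutFor(packageName: String): LayoutKind =
    presetLayouts[packageName] ?: LayoutKind.DEFAULT
```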
In some optional implementations of this embodiment, generating the operation interface including at least one virtual input option based on the current display interface projected to the target screen may include: predicting, with AI big data, a user portrait of the user of the current display interface projected to the target screen; and generating an operation interface including at least one virtual input option based on the predicted user portrait.
In this implementation, when the user uses the virtual input device, the execution body may apply artificial intelligence (AI) big data, locally or in the cloud, to predict the user portrait of the user of the current display interface projected to the target screen. A user portrait is an effective tool for characterizing a target user, linking user demands with the design direction. For example, the user portrait may include the age, gender, interests, hobbies, expertise, and character traits of the current user, and may further include the layout of virtual input options, in the virtual input device corresponding to the current interactive interface, for the user category to which the current user belongs, as well as the current user's own option layouts in the virtual input devices corresponding to different interactive interfaces.
After predicting the user portrait, the execution body may generate an operation interface that conforms to the portrait, fits the current display interface, and includes at least one virtual input option.
In this implementation, generating the operation interface from the predicted user portrait improves the pertinence of the generated interface, so that it better meets the user's needs.
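A minimal sketch of portrait-driven layout generation follows, assuming the portrait is delivered as a simple profile in which a personal layout overrides a category layout, which in turn overrides the built-in layout; the field names and precedence rule are assumptions.

```kotlin
// Hypothetical user portrait carrying the layout fields mentioned above.
data class UserPortrait(
    val age: Int,
    val gender: String,
    val categoryLayout: Map<String, Pair<Float, Float>>, // layout for the user's category, option id -> (x, y)
    val personalLayout: Map<String, Pair<Float, Float>>  // this user's own layout history, possibly empty
)

// Personal history wins over the category layout, which wins over the built-in layout.
fun layoutFromPortrait(
    portrait: UserPortrait,
    builtIn: Map<String, Pair<Float, Float>>
): Map<String, Pair<Float, Float>> =
    builtIn.mapValues { (id, pos) ->
        portrait.personalLayout[id] ?: portrait.categoryLayout[id] ?: pos
    }
```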
In some optional implementations of this embodiment, generating the operation interface including at least one virtual input option based on the current display interface projected to the target screen may include: generating, based on that display interface, an operation interface comprising a preset background and at least one virtual input option, wherein the preset background is a default background or a background set by the user through a background setting option.
In this implementation, introducing into the operation interface a default background, or a background the user has set through a background setting option, improves the pertinence of the operation interface, making the virtual input options clearer or better matched to the user's needs.
In some optional implementations of this embodiment, the interaction method further includes: in response to a user's selection of a virtual input option, presenting a vibration whose frequency corresponds to the selected option.
In this implementation, when the user selects one or more virtual input options, the execution body may present a different vibration frequency for each option, prompting the user that the option corresponding to that frequency has been selected and thereby improving the recognizability of the virtual input options.
Referring to fig. 2g and 2h, fig. 2g schematically shows different vibration frequencies 231, 232 presented for the operation interface of fig. 2d, and fig. 2h shows different vibration frequencies 233, 234 presented for the operation interface of fig. 2f.
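On Android, this per-option feedback could be produced with the platform vibrator, as sketched below; the option names and waveform timings are illustrative assumptions, and VibrationEffect.createWaveform requires API level 26 or higher.

```kotlin
import android.content.Context
import android.os.VibrationEffect
import android.os.Vibrator

// Hypothetical per-option waveforms: alternating off/on durations in milliseconds.
val optionWaveforms: Map<String, LongArray> = mapOf(
    "attack" to longArrayOf(0, 40, 60, 40),
    "jump" to longArrayOf(0, 80),
    "brake" to longArrayOf(0, 20, 30, 20, 30, 20)
)

fun vibrateForOption(context: Context, optionId: String) {
    val vibrator = context.getSystemService(Context.VIBRATOR_SERVICE) as Vibrator
    val pattern = optionWaveforms[optionId] ?: return             // unknown option: no feedback
    vibrator.vibrate(VibrationEffect.createWaveform(pattern, -1)) // -1: play once, no repeat
}
```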
In some optional implementations of this embodiment, the interaction method further includes: in response to a user's selection of a generated virtual input option, presenting a movable identifier on the option; and, in response to the user moving an option that carries the movable identifier, moving the option indicated by the movement operation to the target position the movement indicates.
In this implementation, the selection operation is a predefined operation for selecting a virtual input option, and the movement operation is a predefined operation for moving a virtual input option that carries a movable identifier.
When the user selects a virtual input option, the execution body's local screen may present movable identifiers on one or all of the virtual input options; an option carrying the identifier is in a movable state. When the user performs a movement operation on such an option, the option indicated by the operation is moved to the target position the operation indicates.
The interaction method of this implementation lets virtual input options be moved to positions of the user's choosing, improving the flexibility of the option layout within the virtual input device.
In some optional implementations of this embodiment, the interaction method further includes: in response to the user entering an exit operation, deselecting the virtual input option.
In this implementation, if the user has selected a virtual input option by mistake, so that movable identifiers are shown on one or all options, an exit operation can be entered to deselect the option. The exit operation is a predefined operation for releasing the selection of a virtual input option.
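The select / move / deselect cycle can be modeled in a few lines; a pure-Kotlin sketch follows, with all names hypothetical.

```kotlin
// Sketch of the select / move / deselect cycle described above.
data class VirtualOption(var x: Float, var y: Float, var movable: Boolean = false)

class OperationInterface(private val options: MutableMap<String, VirtualOption>) {
    // Selection marks the option movable and shows the movable identifier.
    fun select(id: String) { options[id]?.movable = true }

    // A move is honored only while the option carries the movable identifier.
    fun move(id: String, targetX: Float, targetY: Float) {
        val opt = options[id] ?: return
        if (opt.movable) { opt.x = targetX; opt.y = targetY }
    }

    // The exit operation clears the selection, e.g. after a mis-touch.
    fun deselect(id: String) { options[id]?.movable = false }
}
```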
With the interaction method for multiple screens of the above embodiment of the present disclosure, the local display interface can be projected to the target screen, achieving information sharing, while the local screen displays the operation interface with its virtual input options rather than the projected display interface; this reduces the heating and energy consumption of the electronic device housing the local screen and improves the accuracy of information input on the local operation interface.
An exemplary application scenario of the interaction method for multi-screen of the present disclosure is described below in conjunction with fig. 3.
As shown in fig. 3, fig. 3 illustrates one exemplary application scenario of an interaction method for multiple screens according to the present disclosure.
As shown in fig. 3, the interaction method 300 for multi-screen operation in the electronic device 320 may include:
first, in response to receiving a user selection 302 of a multi-screen interaction option 301, projecting a current display interface 303 to a target screen 304;
thereafter, an option 305 is presented to select the input device;
thereafter, a selection operation 306 of an option 305 of the selection input device is received;
thereafter, in response to the selection operation 306 indicating that the virtual input device 307 is employed, an operation interface 309 comprising at least one virtual input option 308 is generated based on the current display interface 303 projected to the target screen 304, and the operation interface 309 is displayed on the local screen 310 without displaying the current display interface 303 projected to the target screen 304.
It should be understood that the application scenario shown in fig. 3 is merely an exemplary description of the interaction method for multiple screens and does not limit it. For example, the steps illustrated in fig. 3 may be implemented in greater detail, and further interaction steps may be added on the basis of fig. 3.
With further reference to fig. 4a, fig. 4a shows a schematic flow chart of yet another embodiment of the interaction method for multiple screens according to the present disclosure, in which a physical input device is used.
As shown in fig. 4a, the method 400 of this embodiment may include the following steps:
in step 401, in response to receiving a user selection operation of the multi-screen interaction option, the current display interface is projected to the target screen.
In this embodiment, the execution body of the multi-screen interaction method (e.g., a terminal shown in fig. 1) may receive, through an input device, the user's selection of the multi-screen interaction option.
The multi-screen interaction option is an option for projecting the current display interface of the local screen so that it is displayed on other screens. The selection operation may be any preset operation recognizable as an act of selection. The target screen is the screen, indicated by the multi-screen interaction option, that establishes a projection relationship with the local screen.
Step 402, an option to select an input device is presented.
In this embodiment, once the execution body has projected the current display interface to the target screen, options for selecting an input device may be presented on the local screen and/or the target screen: for example, an option to select a virtual input device and an option to select a physical input device.
Step 403, receiving a selection operation of an option of selecting an input device.
In this embodiment, after the execution body presents the input device options, it may receive the user's selection of one of them.
In step 404, in response to the selection indicating a physical input device, the local physical input device is started based on a preset correspondence between the local physical input device and the application or system software of the current display interface, and the local screen is blacked out.
In this embodiment, the physical input device may be physical buttons provided on the screen or housing of the electronic device. When the physical buttons are activated, the electronic device functions entirely as a physical interaction handle; the handle can be designed with a concave mode, a convex mode, or a flat sensing mode.
Figs. 4b, 4c, and 4d respectively illustrate a physical input option on the physical interaction handle in the convex, concave, and flat sensing modes.
If the selection indicates the physical input device, the execution body may start the local physical input device according to the preset correspondence between the local physical input device and the current display interface, improving the accuracy and precision of input. Further, the execution body may black out the local screen, reducing its energy consumption.
In some optional implementations of this embodiment, the starting includes: popping up the physical interaction handle.
In this implementation, when the local physical input device is started, the physical interaction handle may pop up, improving the efficiency of the user's interaction with the current display interface projected to the target screen.
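Step 404 can be sketched as a lookup table plus a small hardware abstraction; the interface, profile fields, and package name below are hypothetical, since the patent does not describe a software API for the pop-up handle.

```kotlin
// Sketch of step 404: map the current display interface to a handle profile,
// pop up the physical interaction handle, and black out the local screen.
enum class ButtonFeel { CONVEX, CONCAVE, FLAT_SENSING }
data class HandleProfile(val feel: ButtonFeel, val buttonMap: Map<String, Int>)

interface HandleHardware {            // hypothetical hardware abstraction
    fun popUpHandle(profile: HandleProfile)
    fun blankLocalScreen()
}

val presetProfiles = mapOf(           // preset correspondence, keyed by application
    "com.example.qqspeed" to HandleProfile(ButtonFeel.CONVEX, mapOf("accelerate" to 0, "brake" to 1))
)

fun startPhysicalInput(hw: HandleHardware, currentApp: String) {
    val profile = presetProfiles[currentApp] ?: return
    hw.popUpHandle(profile)           // the physical interaction handle pops up
    hw.blankLocalScreen()             // black out the local screen to save power
}
```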
Unlike the embodiment shown in fig. 2a, which uses a virtual input device, the embodiment of fig. 4a uses a physical input device, which can improve the user's operation efficiency.
It will be appreciated by those skilled in the art that steps 401, 402 and 403 in fig. 4a correspond to steps 201, 202 and 203, respectively, and thus the operations and features described in fig. 2a for steps 201, 202 and 203 are equally applicable to steps 401, 402 and 403, and are not repeated here.
As an implementation of the method shown in the above figures, the embodiment of the disclosure provides an embodiment of an interaction device for multiple screens, where the embodiment of the device corresponds to the embodiment of the method shown in fig. 2a to 4d, and the device is specifically applicable to the terminal or the server shown in fig. 1.
As shown in fig. 5, the interaction device 500 for multiple screens of this embodiment may include: a display interface projection unit 510 configured to project the current display interface to a target screen in response to receiving a user's selection of a multi-screen interaction option; a selection option presenting unit 520 configured to present options for selecting an input device; a selection operation receiving unit 530 configured to receive a selection of one of those options; and an operation interface display unit 540 configured, when the selection indicates a virtual input device, to generate an operation interface including at least one virtual input option based on the current display interface projected to the target screen, and to display the operation interface on the local screen without displaying the projected display interface there.
In some embodiments, the display interface projection unit is further configured to: in response to receiving a local user's selection of the multi-screen interaction option, project to the target screen a current display interface that includes both the local user's interactions and a remote user's interactions.
In some embodiments, the operation interface display unit is further configured to: predict, with AI big data, a user portrait of the user of the current display interface projected to the target screen; and generate an operation interface including at least one virtual input option based on the predicted user portrait.
In some embodiments, the operation interface display unit is further configured to: generate, based on the current display interface projected to the target screen, an operation interface comprising a preset background and at least one virtual input option, wherein the preset background is a default background or a background set by the user through a background setting option.
In some embodiments, the virtual input options in the operation interface display unit include a virtual interactive handle.
In some embodiments, the interaction device further comprises (not shown): an option vibration presenting unit configured to present, in response to a user's selection of a virtual input option, a vibration whose frequency corresponds to the selected option.
In some embodiments, the interaction device further comprises (not shown): a movable identifier presenting unit configured to present a movable identifier on a generated virtual input option in response to the user's selection of that option; an input option moving unit configured to move a virtual input option that carries the movable identifier to the target position indicated by the user's movement operation; and an option deselecting unit configured to deselect the virtual input option in response to the user entering an exit operation.
In some embodiments, the interaction device further comprises (not shown): an input device starting unit configured, when the selection indicates a physical input device, to start the local physical input device based on a preset correspondence between the local physical input device and the current display interface, and to black out the local screen.
In some embodiments, the input device starting unit is further configured to: pop up a physical interaction handle.
In some embodiments, the target screen in the display interface projection unit is larger than the local screen.
In some embodiments, the display interface projection unit is further configured to: project the current display interface to a television screen through the television's external-device adapter.
It should be understood that the various units recited in the apparatus 500 correspond to the various steps recited in the methods described with reference to fig. 2 a-4 d. Thus, the operations and features described above with respect to the method are equally applicable to the apparatus 500 and the various units contained therein, and are not described in detail herein.
Referring now to fig. 6, a schematic diagram of an electronic device (e.g., the server or terminal device of fig. 1) 600 suitable for implementing embodiments of the present disclosure is shown. Terminal devices in embodiments of the present disclosure may include, but are not limited to, notebook computers, desktop computers, and the like. The terminal device/server illustrated in fig. 6 is merely an example and should not impose any limitation on the functionality or scope of use of embodiments of the present disclosure.
As shown in fig. 6, the electronic device 600 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage means 608 into a random access memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the electronic device 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
In general, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, magnetic tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 shows an electronic device 600 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead. Each block shown in fig. 6 may represent one device or a plurality of devices as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via communication means 609, or from storage means 608, or from ROM 602. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing means 601.
It should be noted that the computer readable medium according to the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In an embodiment of the present disclosure, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In embodiments of the present disclosure, a computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device, or may exist alone without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: in response to receiving a user's selection of a multi-screen interaction option, project the current display interface to a target screen; present options for selecting an input device; receive a selection of one of those options; and, when the selection indicates a virtual input device, generate an operation interface including at least one virtual input option based on the current display interface projected to the target screen, and display the operation interface on the local screen without displaying the projected display interface.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in one or more programming languages, including an object oriented programming language such as Java, smalltalk, C ++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by software or by hardware. The described units may also be provided in a processor, which may, for example, be described as: a processor comprising a display interface projection unit, a selection option presentation unit, a selection operation receiving unit, and an operation interface display unit. In some cases, the names of these units do not limit the units themselves; for example, the display interface projection unit may also be described as "a unit that projects a current display interface to a target screen in response to receiving a user's selection operation on a multi-screen interaction option".
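As a concrete illustration of this unit structure, the short Java sketch below wires the four named units into an in-order pipeline. The Unit interface and the lambda bodies are assumptions made for this example; the disclosure does not prescribe any particular signatures.

public class UnitCompositionSketch {

    interface Unit { void run(); }

    public static void main(String[] args) {
        // The four units named above, invoked in order by the "processor".
        Unit[] units = {
            () -> System.out.println("display interface projection unit: project current interface to target screen"),
            () -> System.out.println("selection option presentation unit: present input device options"),
            () -> System.out.println("selection operation receiving unit: receive the user's selection"),
            () -> System.out.println("operation interface display unit: show virtual input options on the local screen"),
        };
        for (Unit u : units) {
            u.run();
        }
    }
}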
The foregoing description covers only the preferred embodiments of the present disclosure and the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention referred to in this disclosure is not limited to the specific combinations of the features described above, but also encompasses other embodiments in which the features described above, or their equivalents, are combined in any way without departing from the spirit of the invention. For example, it covers technical solutions formed by substituting the features described above with technical features of similar functions disclosed in (but not limited to) the present disclosure.

Claims (14)

1. An interaction method for multiple screens, comprising:
in response to receiving a user's selection operation on a multi-screen interaction option, projecting a current display interface to a target screen;
presenting options for selecting an input device, comprising: presenting an option for selecting a virtual input device and an option for selecting a physical input device;
receiving a selection operation on one of the input device options;
in response to the selection operation indicating the virtual input device, generating an operation interface including at least two virtual input options based on the current display interface projected to the target screen, comprising: predicting, with AI big data, a user portrait of the user with respect to the current display interface projected to the target screen; and generating, based on the predicted user portrait and the current display interface, an operation interface comprising a preset background and the at least two virtual input options, wherein the user portrait comprises a layout, for the user category to which the current user belongs, of the virtual input options in the virtual input device corresponding to the current interactive interface, the preset background comprises a background set by the user via a background setting option, the operation interface is displayed on a local screen, and the current display interface projected to the target screen is not displayed on the local screen; and
in response to the selection operation indicating the physical input device, starting to use a local physical input device based on a preset correspondence between the local physical input device and the current display interface, and blacking out the local screen, wherein the local physical input device has a convex mode, a concave mode, and a flat sensing mode.
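To make the portrait-based generation step of claim 1 concrete, here is a minimal Java sketch in which the "AI big data" prediction is replaced by a simple lookup table keyed by user category. All names and the category heuristic are hypothetical; a real system would substitute a trained model for predictPortrait.

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PortraitLayoutSketch {

    static class UserPortrait {
        final String userCategory;
        final List<String> optionLayout; // layout of the virtual input options
        UserPortrait(String userCategory, List<String> optionLayout) {
            this.userCategory = userCategory;
            this.optionLayout = optionLayout;
        }
    }

    // Stand-in for the "AI big data" prediction: a lookup from user category
    // to the virtual-input-option layout preferred by that category.
    static UserPortrait predictPortrait(String userId, String currentInterface) {
        Map<String, List<String>> layoutByCategory = new HashMap<>();
        layoutByCategory.put("casual", Arrays.asList("joystick left", "two buttons right"));
        layoutByCategory.put("expert", Arrays.asList("joystick left", "four buttons right"));
        String category = userId.length() % 2 == 0 ? "casual" : "expert"; // toy heuristic
        return new UserPortrait(category, layoutByCategory.get(category));
    }

    // Compose the operation interface from the predicted portrait and the
    // background the user set via the background setting option.
    static String buildOperationInterface(UserPortrait portrait, String presetBackground) {
        return "background=" + presetBackground + ", options=" + portrait.optionLayout;
    }

    public static void main(String[] args) {
        UserPortrait portrait = predictPortrait("user-42", "racing game interface");
        System.out.println("category: " + portrait.userCategory);
        System.out.println(buildOperationInterface(portrait, "user-chosen dark theme"));
    }
}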
2. The interaction method of claim 1, wherein projecting the current display interface to the target screen in response to receiving the user's selection operation on the multi-screen interaction option comprises:
in response to receiving a local user's selection operation on the multi-screen interaction option, projecting, to the target screen, a current display interface that includes interaction operations of the local user and interaction operations of a remote user.
3. The interaction method of any of claims 1-2, wherein the virtual input options include a virtual interactive handle.
4. The interaction method of claim 1, wherein the interaction method further comprises:
in response to a user's selection operation on a virtual input option, presenting a vibration at a vibration frequency corresponding to the selected virtual input option.
5. The interaction method of claim 1, wherein the interaction method further comprises:
in response to a user's selection operation on a generated virtual input option, presenting a movable identification on the virtual input option;
in response to the user's moving operation on a virtual input option having the movable identification, moving the virtual input option indicated by the moving operation to a target position indicated by the moving operation; and
in response to the user inputting an exit operation for the selection operation, releasing the selection of the virtual input option.
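The select/move/release interaction of claim 5 can be pictured as a small state change on the option, as in the following Java sketch; the VirtualOption fields and the event ordering are assumptions made for illustration, not the patented implementation.

public class MovableOptionSketch {

    static class VirtualOption {
        final String name;
        int x, y;
        boolean movable; // whether the movable identification is shown
        VirtualOption(String name, int x, int y) {
            this.name = name;
            this.x = x;
            this.y = y;
        }
    }

    public static void main(String[] args) {
        VirtualOption option = new VirtualOption("virtual button A", 10, 20);

        // Selection operation: present the movable identification.
        option.movable = true;
        System.out.println(option.name + ": movable identification shown");

        // Moving operation: relocate the option to the target position.
        if (option.movable) {
            option.x = 120;
            option.y = 240;
        }
        System.out.println(option.name + " moved to (" + option.x + ", " + option.y + ")");

        // Exit operation: release the selection and hide the identification.
        option.movable = false;
        System.out.println(option.name + ": selection released");
    }
}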
6. The interaction method of claim 1, wherein starting to use the local physical input device comprises: popping up a physical interaction handle.
7. The interaction method of claim 1, wherein the target screen is larger in size than the local screen.
8. The interaction method of claim 1, wherein projecting the current display interface to the target screen in response to receiving the user's selection operation on the multi-screen interaction option comprises:
in response to receiving the user's selection operation on the multi-screen interaction option, projecting the current display interface to a television screen through an external device adapter in the television.
9. An interactive apparatus for multiple screens, comprising:
a display interface projection unit configured to project a current display interface to a target screen in response to receiving a user's selection operation on a multi-screen interaction option;
a selection option presentation unit configured to present options for selecting an input device, comprising: presenting an option for selecting a virtual input device and an option for selecting a physical input device;
a selection operation receiving unit configured to receive a selection operation on one of the input device options;
an operation interface display unit configured to, in response to the selection operation indicating the virtual input device, generate an operation interface including at least two virtual input options based on the current display interface projected to the target screen, including: predicting, with AI big data, a user portrait of the user with respect to the current display interface projected to the target screen; and generating, based on the predicted user portrait and the current display interface, an operation interface comprising a preset background and the at least two virtual input options, wherein the user portrait comprises a layout, for the user category to which the current user belongs, of the virtual input options in the virtual input device corresponding to the current interactive interface, the preset background comprises a background set by the user via a background setting option, the operation interface is displayed on a local screen, and the current display interface projected to the target screen is not displayed on the local screen; and
an input device starting unit configured to, in response to the selection operation indicating the physical input device, start to use a local physical input device based on a preset correspondence between the local physical input device and the current display interface, and black out the local screen, wherein the local physical input device has a convex mode, a concave mode, and a flat sensing mode.
10. The interactive apparatus of claim 9, wherein the display interface projection unit is further configured to:
in response to receiving a local user's selection operation on the multi-screen interaction option, project, to the target screen, a current display interface that includes interaction operations of the local user and interaction operations of a remote user.
11. The interactive apparatus of claim 9, wherein the interactive apparatus further comprises:
a movable identification presenting unit configured to present a movable identification on a virtual input option in response to a user's selection operation on the generated virtual input option;
an input option moving unit configured to move the virtual input option indicated by a moving operation to a target position indicated by the moving operation, in response to the user's moving operation on a virtual input option having the movable identification; and
an option selection releasing unit configured to release the selection of the virtual input option in response to the user inputting an exit operation for the selection operation.
12. The interactive apparatus of claim 9, wherein the display interface projection unit is further configured to:
in response to receiving the user's selection operation on the multi-screen interaction option, project the current display interface to a television screen through an external device adapter in the television.
13. An electronic device, comprising:
one or more processors;
a storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-8.
14. A computer readable medium having stored thereon a computer program which, when executed by a processor, implements the method of any of claims 1-8.
CN201911366530.7A 2019-12-26 2019-12-26 Interaction method and device for multiple screens Active CN111135557B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911366530.7A CN111135557B (en) 2019-12-26 2019-12-26 Interaction method and device for multiple screens

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911366530.7A CN111135557B (en) 2019-12-26 2019-12-26 Interaction method and device for multiple screens

Publications (2)

Publication Number Publication Date
CN111135557A CN111135557A (en) 2020-05-12
CN111135557B true CN111135557B (en) 2024-01-12

Family

ID=70520382

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911366530.7A Active CN111135557B (en) 2019-12-26 2019-12-26 Interaction method and device for multiple screens

Country Status (1)

Country Link
CN (1) CN111135557B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11269453B1 (en) * 2020-08-17 2022-03-08 International Business Machines Corporation Failed user-interface resolution

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009143294A2 (en) * 2008-05-20 2009-11-26 Citrix Systems, Inc. Methods and systems for using external display devices with a mobile computing device
CN102918490A (en) * 2010-04-01 2013-02-06 思杰系统有限公司 Interacting with remote applications displayed within a virtual desktop of a tablet computing device
CN106412291A (en) * 2016-09-29 2017-02-15 努比亚技术有限公司 Equipment control method and mobile terminal
CN109885746A (en) * 2019-01-17 2019-06-14 平安城市建设科技(深圳)有限公司 Page Dynamic Distribution method, apparatus, equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8427438B2 (en) * 2009-03-26 2013-04-23 Apple Inc. Virtual input tools

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009143294A2 (en) * 2008-05-20 2009-11-26 Citrix Systems, Inc. Methods and systems for using external display devices with a mobile computing device
CN102027450A (en) * 2008-05-20 2011-04-20 思杰系统有限公司 Methods and systems for using external display devices with a mobile computing device
CN102918490A (en) * 2010-04-01 2013-02-06 思杰系统有限公司 Interacting with remote applications displayed within a virtual desktop of a tablet computing device
CN106412291A (en) * 2016-09-29 2017-02-15 努比亚技术有限公司 Equipment control method and mobile terminal
CN109885746A (en) * 2019-01-17 2019-06-14 平安城市建设科技(深圳)有限公司 Page Dynamic Distribution method, apparatus, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Wu Bin. Network Science and Computing. Beijing University of Posts and Telecommunications Press, 2019, 1st edition, pp. 220-221. *
Xu Luyao; Jiang Zengqi; Huang Tingting; Liu Yunpeng. Overview of user portrait systems based on big data. Electronics World, 2018, (02), full text. *

Also Published As

Publication number Publication date
CN111135557A (en) 2020-05-12

Similar Documents

Publication Publication Date Title
CN110046021B (en) Page display method, device, system, equipment and storage medium
CN111866537B (en) Information display method and device in live broadcast room, storage medium and electronic equipment
US10637804B2 (en) User terminal apparatus, communication system, and method of controlling user terminal apparatus which support a messenger service with additional functionality
CN113965807A (en) Message pushing method, device, terminal, server and storage medium
CN111263175A (en) Interaction control method and device for live broadcast platform, storage medium and electronic equipment
WO2023103956A1 (en) Data exchange method and apparatus, electronic device, storage medium and program product
CN111790148B (en) Information interaction method and device in game scene and computer readable medium
CN106027631B (en) Data transmission method and device
CN114727146A (en) Information processing method, device, equipment and storage medium
US20170185422A1 (en) Method and system for generating and controlling composite user interface control
CN114679628B (en) Bullet screen adding method and device, electronic equipment and storage medium
CN110134480B (en) User trigger operation processing method and device, electronic equipment and storage medium
CN111135557B (en) Interaction method and device for multiple screens
CN114238673A (en) Content display method, device, equipment and storage medium
CN111857858A (en) Method and apparatus for processing information
WO2024060943A1 (en) Comment information publishing method and apparatus, electronic device, and storage medium
US20240073488A1 (en) Live video processing method and apparatus, device and medium
WO2024002162A1 (en) Method and apparatus for interaction in live-streaming room, and device and medium
WO2023246859A1 (en) Interaction method and apparatus, electronic device, and storage medium
US20230370686A1 (en) Information display method and apparatus, and device and medium
CN109947528B (en) Information processing method and device
CN115643445A (en) Interaction processing method and device, electronic equipment and storage medium
CN115269886A (en) Media content processing method, device, equipment and storage medium
CN114489891A (en) Control method, system, device, readable medium and equipment of cloud application program
CN113419650A (en) Data moving method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant