CN112306442B - Cross-device content screen projection method, device, equipment and storage medium

Info

Publication number
CN112306442B
Authority
CN
China
Prior art keywords
content
screen
display
throwing
display screen
Prior art date
Legal status
Active
Application number
CN202011311456.1A
Other languages
Chinese (zh)
Other versions
CN112306442A (en)
Inventor
侯奕含
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202011311456.1A
Publication of CN112306442A
Priority to PCT/CN2021/118823 (WO2022105403A1)
Application granted
Publication of CN112306442B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 - Digital output to display device; cooperation and interconnection of the display device with other functional units
    • G06F 3/1454 - involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 - Interaction techniques based on GUIs for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/0487 - Interaction techniques based on GUIs using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 - Interaction techniques based on GUIs using a touch-screen or digitiser, e.g. input of commands through traced gestures

Abstract

The embodiments of this application disclose a cross-device content screen projection method, apparatus, device, and storage medium, belonging to the technical field of content sharing. A device with a display screen can successively receive content screen projection requests sent by a first device and a second device, and can display the first content and the second content in a split-screen mode. The first device and the second device each send their content screen projection request to the device with the display screen after a preset interactive operation is performed. The method therefore reduces the operational complexity of having the device with the display screen simultaneously display content projected from multiple devices, and improves the efficiency of multi-device screen projection.

Description

Cross-device content screen projection method, device, equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of content sharing, in particular to a cross-device content screen projection method, device, equipment and storage medium.
Background
With the rapid development of smart-device technology, scenarios in which multiple people interact using their respective devices are becoming increasingly common.
In the related art, multiple users may play the same game on their respective devices, or log in to different shopping applications on their own devices to compare prices of the same product. In such scenarios, if users in the same room want to view their own device and another user's device at the same time, the two users need to sit close to each other and look back and forth between the devices.
Disclosure of Invention
The embodiments of this application provide a cross-device content screen projection method, apparatus, device, and storage medium. The technical solutions are as follows:
According to one aspect of the present application, there is provided a cross-device content screen projection method applied to a device having a display screen, the method including:
receiving a first content screen projection request sent by a first device, wherein the first content screen projection request is a request sent by the first device to the device having the display screen after a preset interactive operation is performed;
playing first content in response to the first content screen projection request, wherein the first content is content played on the first device;
receiving a second content screen projection request sent by a second device, wherein the second content screen projection request is a request sent by the second device to the device having the display screen after the preset interactive operation is performed;
and in response to the second content screen projection request, playing the first content and the second content in a split-screen mode, wherein the second content is content played on the second device.
According to another aspect of the present application, there is provided a cross-device content screen projection apparatus for use in a device having a display screen, the apparatus comprising:
a first receiving module, configured to receive a first content screen projection request sent by a first device, wherein the first content screen projection request is a request sent by the first device to the device having the display screen after a preset interactive operation is performed;
a first playing module, configured to play first content in response to the first content screen projection request, wherein the first content is content played on the first device;
a second receiving module, configured to receive a second content screen projection request sent by a second device, wherein the second content screen projection request is a request sent by the second device to the device having the display screen after the preset interactive operation is performed;
and a second playing module, configured to play the first content and the second content in a split-screen mode in response to the second content screen projection request, wherein the second content is content played on the second device.
According to another aspect of the application, there is provided a cross-device content screen projection system, the system comprising a device having a display screen, a first device, at least one second device, and a remote control device, the remote control device comprising a screen projection tag and being configured to remotely control the device having the display screen;
the first device is configured to scan information in the screen projection tag;
the first device is configured to send a first content screen projection request to the device having the display screen after scanning the information in the screen projection tag;
the device having the display screen is configured to receive the first content screen projection request;
the device having the display screen is configured to play first content in response to the first content screen projection request, wherein the first content is content played on the first device;
the second device is configured to scan the information in the screen projection tag;
the second device is configured to send a second content screen projection request to the device having the display screen after scanning the information in the screen projection tag;
the device having the display screen is configured to receive the second content screen projection request;
and the device having the display screen is configured to play the first content and the second content in a split-screen mode in response to the second content screen projection request, wherein the second content is content played on the second device.
According to another aspect of the present application, there is provided a cross-device content screen projection system, the system comprising a device having a display screen, a first device comprising a first image acquisition component, and at least one second device comprising a second image acquisition component;
the first device is configured to capture a preset gesture or a preset action through the first image acquisition component;
the first device is configured to send a first content screen projection request to the device having the display screen after capturing the preset gesture or the preset action;
the device having the display screen is configured to receive the first content screen projection request;
the device having the display screen is configured to play first content in response to the first content screen projection request, wherein the first content is content played on the first device;
the second device is configured to capture the preset gesture or the preset action through the second image acquisition component;
the second device is configured to send a second content screen projection request to the device having the display screen after capturing the preset gesture or the preset action;
the device having the display screen is configured to receive the second content screen projection request;
and the device having the display screen is configured to play the first content and the second content in a split-screen mode in response to the second content screen projection request, wherein the second content is content played on the second device.
According to another aspect of the present application, there is provided an electronic device including a processor and a memory, the memory storing at least one instruction that is loaded and executed by the processor to implement the cross-device content screen projection method provided in the various aspects of the present application.
According to another aspect of the present application, there is provided a computer-readable storage medium having stored therein at least one instruction that is loaded and executed by a processor to implement the cross-device content screen projection method provided in the various aspects of the present application.
According to one aspect of the present application, a computer program product is provided that includes computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the methods provided in the various alternative implementations of the cross-device content screen projection aspect described above.
The device with the display screen can successively receive the content screen projection requests sent by the first device and the second device, and can display the first content and the second content in a split-screen mode. The first device and the second device each send their content screen projection request to the device with the display screen after a preset interactive operation is performed. Therefore, the method and apparatus reduce the operational complexity of having the device with the display screen simultaneously display content projected from multiple devices, and improve the efficiency of multi-device screen projection.
Drawings
In order to describe the technical solutions of the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of an implementation environment for cross-device content screen projection provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of a split screen mode of a device having a display screen provided in accordance with the embodiment of FIG. 1;
FIG. 3 is a schematic diagram of reading a radio frequency tag according to the embodiment shown in FIG. 2;
FIG. 4 is a flowchart of a cross-device content screen projection method provided in one exemplary embodiment of the present application;
FIG. 5 is a flowchart of a cross-device content screen projection method provided in another exemplary embodiment of the present application;
FIG. 6 is an application diagram of a cross-device content screen projection method provided based on the embodiment shown in FIG. 5;
FIG. 7 is a flowchart of a cross-device content screen projection method provided in another exemplary embodiment of the present application;
FIG. 8 is a flowchart of a cross-device content screen projection method applied to an invigilation scenario according to an embodiment of the present application;
FIG. 9 is a flowchart of a cross-device content screen projection method provided in an embodiment of the present application;
FIG. 10 is a schematic display view of a device having a display screen provided in accordance with the embodiment of FIG. 9;
FIG. 11 is a schematic illustration of a display interface of a device having a display screen provided in accordance with the embodiment of FIG. 9;
FIG. 12 is a block diagram of a cross-device content screen projection apparatus according to an exemplary embodiment of the present application;
fig. 13 is a block diagram of a device with a display screen according to an exemplary embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the appended claims.
In the description of the present application, it should be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. It should also be noted that, unless explicitly specified and limited otherwise, terms such as "connected" and "coupled" are to be construed broadly; for example, a connection may be a fixed connection, a detachable connection, or an integral connection; it may be mechanical or electrical; and it may be a direct connection or an indirect connection through an intermediate medium. The specific meanings of these terms in this application can be understood by a person of ordinary skill in the art according to the specific context. Furthermore, in the description of the present application, unless otherwise indicated, "a plurality" means two or more. "And/or" describes an association between associated objects and indicates that three relationships are possible; for example, "A and/or B" may mean that A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects are in an "or" relationship.
For ease of understanding of the schemes shown in the embodiments of the present application, several terms appearing in the embodiments of the present application are described below.
First content screen projection request: information sent by the first device to the device with the display screen, requesting that the first content played on the first device be played on the device with the display screen. In one possible manner, the first content screen projection request may be a character string conforming to a fixed format, where the character string includes a keyword and character content.
Optionally, when the preset interactive operation is scanning a screen projection tag, the screen projection tag is an electronic tag corresponding to the device with the display screen. The character content may include a first character string, which is information read by the first device from the screen projection tag of a remote control device used to remotely control the device with the display screen. For example, the first device has an NFC (Near Field Communication) module. When the first device is brought close to the remote control of the device with the display screen, it can read the first character string from the NFC tag in the remote control. Put another way, an ID (identity number) is stored in the NFC tag in the remote control, and the NFC module in the first device reads this ID and uses it as the first character string.
In another possible way, the first device has an RFID (Radio Frequency Identification) module that can read the first character string from an RFID tag in the remote control when brought close to the remote control of the device with the display screen.
It should be noted that, in addition to the NFC and RFID tags described above, the remote control device of the device with the display screen may also carry other short-range radio frequency identification tags, and the first device may correspondingly be provided with a matching short-range identification module. For example, if a radio frequency identification tag conforming to protocol A is set in the remote control device, a radio frequency identification module conforming to protocol A, capable of reading the information in that tag, can be set in the first device.
Optionally, when the preset interactive operation is capturing a preset gesture or capturing a preset action, the first character string in the first content screen projection request can be acquired from a preset application. That is, when the first device captures the preset gesture or the preset action through the first image acquisition component, the first device may start the preset application and acquire the first character string from it, or, when the preset application is already running, acquire the first character string from it directly. The process by which the second device obtains the second character string is similar and is not repeated here.
Second content screen projection request: information sent by the second device to the device with the display screen, requesting that the second content played on the second device continue to be played on the device with the display screen. In one possible manner, the second content screen projection request may be a character string conforming to a fixed format, where the character string includes a keyword and character content.
Optionally, the character content may include a second character string, which is information read by the second device from the screen projection tag of the remote control device used to remotely control the device with the display screen. The radio frequency module in the second device may be the same type of module as the radio frequency module in the first device; in other words, the two modules may conform to the same radio frequency protocol.
Character string template: a character string stored in the device with the display screen. In one possible way, the character string template is identical to the reserved character string in the remote control device corresponding to the device with the display screen. When the first device and the second device correctly read the reserved character string from the remote control device, the first character string in the first device, the second character string in the second device, and the character string template are the same character string.
For example, the reserved character string in the remote control device is "abcdef1234", and the character string template is also "abcdef1234". When the first device is brought close to the remote control device and reads the reserved character string through its radio frequency module, the generated first character string is "abcdef1234". When the device with the display screen confirms that the first character string is identical to the character string template, verification passes and the first content originally played on the first device can continue to be played on the device with the display screen. Similarly, when the second device correctly reads the reserved character string through its radio frequency module, the generated second character string is "abcdef1234", and the device with the display screen continues playing the second content when the second character string and the character string template are the same.
In another possible way, the character string template differs from the reserved character string in the remote control device, but the two strings are bound to each other. For example, the reserved character string in the remote control device is "33445566abc", and the character string template is "1122334455cba". The character string template is pre-stored in the device with the display screen, together with the binding relationship between "1122334455cba" and "33445566abc". When the device with the display screen receives the first content screen projection request and reads "33445566abc" from it, or receives the second content screen projection request and reads "33445566abc" from it, the device with the display screen passes verification because "33445566abc" and "1122334455cba" match, and continues playing the first content or the second content, respectively.
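To make the request format and the two verification schemes above concrete, the following is a minimal Kotlin sketch. The patent does not specify a wire format, so the "CAST" keyword, the class names, and the field layout below are assumptions for illustration only; the example strings are the ones used above.

```kotlin
// Illustrative sketch only: the "CAST" keyword, field layout, and class names are assumptions.

// A content screen projection request modeled as a fixed-format string:
// a keyword followed by the character content (the string read from the tag).
data class ScreenCastRequest(val keyword: String, val characterContent: String) {
    fun serialize(): String = "$keyword:$characterContent"
}

// Receiver-side verification against the character string template stored in the
// device with the display screen. It supports both schemes described above:
// an identical template, or a template bound to the reserved string.
class TemplateVerifier(
    private val template: String,
    private val bindings: Map<String, String> = emptyMap() // template -> bound reserved string
) {
    fun verify(request: ScreenCastRequest): Boolean {
        val received = request.characterContent
        return received == template || bindings[template] == received
    }
}

fun main() {
    // Scheme 1: the template is identical to the reserved string in the remote control.
    val exactVerifier = TemplateVerifier(template = "abcdef1234")
    val first = ScreenCastRequest(keyword = "CAST", characterContent = "abcdef1234")
    println(first.serialize() + " -> verified: " + exactVerifier.verify(first)) // true: play first content

    // Scheme 2: different strings bound to each other inside the display device.
    val boundVerifier = TemplateVerifier(
        template = "1122334455cba",
        bindings = mapOf("1122334455cba" to "33445566abc")
    )
    val second = ScreenCastRequest(keyword = "CAST", characterContent = "33445566abc")
    println(second.serialize() + " -> verified: " + boundVerifier.verify(second)) // true: play second content
}
```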
First content: content displayed on the first device that can be played on the device with the display screen through cross-device screen projection. The application provides two alternative playing modes for the first device, described below.
In the first alternative, the first content is originally played on the first device, and after the first device successfully projects it to the device with the display screen, the first content continues to play on the first device. For example, when the first content is a game screen, the game screen continues to be shown on the first device, and the game can still receive the user's input there. As another example, when the first content is the user interface of a shopping application, that interface remains displayed on the first device, so the user of the first device can continue to operate it and view the merchandise information shown in the shopping application.
In the second alternative, the first content is originally played on the first device, and after the first device successfully projects it to the device with the display screen, the first content stops playing on the first device. In this case, the first device may still display a control interface for the first content. For example, when the first content is a video, the first device may display controls for play, stop, fast forward, rewind, adjusting playback progress, or adjusting volume.
As for the application type to which the first content belongs, the first content may come from at least one of a social application, a video application, a reading application, a map application, a weather application, a group-buying application, an office application, a photography application, a finance application, a travel application, a shopping application, a game application, or an exam-and-learning application.
As for the second content, the user interface to which the first content belongs may be the same type of interface as the user interface to which the second content belongs. The way the second content is displayed on the second device may be the same as the way the first content is displayed on the first device; for details, refer to the description of the first content above, which is not repeated here.
Illustratively, the first content may include a first item of merchandise and the second content may include a second item of merchandise, both belonging to the same shopping application.
The cross-device content screen projection method shown in the embodiments of this application can be applied to a device having a display screen, that is, a device equipped with a display screen, typically one with a relatively large screen in the physical world. For example, the device having a display screen may be a television, a projector, a computer monitor, a wall-mounted display, the monitor of a video invigilation system, a tablet computer, or a notebook computer; the present application is not limited in this respect. In another possible case, any device that is provided with a display screen and equipped with a remote control may be implemented as the device having a display screen described in this application.
Referring to fig. 1, fig. 1 is a schematic diagram of an implementation environment of cross-device content projection according to an embodiment of the present application. In fig. 1, a device 111 with a display screen, a remote control device 112, a first device 120 and a second device 130 are included. Wherein the remote control device 112 is used for remote control of the device 111 with a display screen.
In fig. 1, the device 111 having a display screen is a television, the remote control device 112 is a television remote control, and the first device 120 and the second device 130 are both mobile phones.
The device 111 with a display screen may be a television placed in a user's home, company or public place. The device 111 with a display screen has a larger-sized display screen, and the device 111 with a display screen has a function of split-screen display. Alternatively, the display screen size of the device 111 with a display screen may be above 10 inches.
For a schematic illustration of the split screen function, please refer to fig. 2, fig. 2 is a schematic illustration of a split screen mode of a device with a display screen provided based on the embodiment shown in fig. 1. After the device 111 with a display screen enters the split-screen mode, it includes a first display area 210 and a second display area 220. The device 111 having a display screen is capable of independently displaying specified contents in the first display area 210 and the second display area 220, respectively. For example, the device 111 with a display screen displays an interface of the game G1 in the first display area 210, and displays the shopping application S1 in the second display area 220.
The first device 120 may connect with the device 111 having a display screen via Bluetooth, WiFi (Wireless Fidelity), or another short-range wireless network. Illustratively, when the first device 120 is in the same WiFi network as the device 111 having a display screen, the first device 120 is able to project the first content onto the device 111 having a display screen.
Accordingly, the second device 130 can establish a connection with the device 111 having a display screen in the same manner, and screen the second content originally displayed in the second device 130 to the device 111 having a display screen.
Illustratively, the first device 120 or the second device 130 may perform the screen-projection process according to the same screen-projection protocol as the device 111 having the display screen.
In the embodiments of this application, the screen projection protocol may be any one of AirPlay, Miracast, DLNA, Chromecast, WiDi, WHDI, or a proprietary screen projection protocol. It should be noted that these protocols are only illustrative and do not limit the screen projection protocols actually applied in the embodiments of this application.
In the embodiments of the present application, please refer to fig. 3, which is a schematic diagram of reading a radio frequency tag according to the embodiment shown in fig. 2. In the embodiments of the present application, the first device 120 and the second device 130 need to read information in the screen projection tag in the remote control device 112. Fig. 3 describes the process in which the first device 120 and the second device 130 each read the screen projection tag 310 in the remote control device. The first device 120 is provided with a first radio frequency module 320, and the second device 130 is provided with a second radio frequency module 330.
In an actual application scenario, when both the first device 120 and the second device 130 perform the card-swiping operation against the screen projection tag 310 in the remote control device 112, the order in which they do so is not limited. For ease of description, this embodiment assumes that the first device 120 swipes against the remote control device 112 first, followed by the second device 130.
In the first swipe phase 3A shown in fig. 3, the first device 120 touches the lower part of the remote control device 112. The screen projection tag 310 provided in the lower portion of the remote control device 112 is read by the first radio frequency module 320 in the first device 120, and the first device 120 generates a first character string from the information it reads.
In the second swipe phase 3B shown in fig. 3, the second device 130 touches the lower part of the remote control device 112. The screen projection tag 310 provided in the lower portion of the remote control device 112 is read by the second radio frequency module 330 in the second device 130, and the second device 130 generates a second character string from the information it reads.
In fig. 3, the first radio frequency module 320 in the first device 120 is located in the upper portion of the back of the device, and the second radio frequency module 330 in the second device 130 is located in the lower portion of the back of the device. In devices of different models, the radio frequency module may be located at different positions; correspondingly, in devices of the same model, the radio frequency module is located at the same position.
Illustratively, the system shown in FIG. 1 can include a device having a display screen, a first device, at least one second device, and a remote control device that contains a screen projection tag and is used to remotely control the device having the display screen. The first device is configured to scan information in the screen projection tag and, after scanning it, to send a first content screen projection request to the device having the display screen. The device having the display screen is configured to receive the first content screen projection request and, in response, to play the first content, the first content being content played on the first device. The second device is configured to scan the information in the screen projection tag and, after scanning it, to send a second content screen projection request to the device having the display screen. The device having the display screen is configured to receive the second content screen projection request and, in response, to play the first content and the second content in a split-screen mode, the second content being content played on the second device.
In another possible implementation, a cross-device content screen projection system includes a device having a display screen, a first device comprising a first image acquisition component, and at least one second device comprising a second image acquisition component. The first device is configured to capture a preset gesture or a preset action through the first image acquisition component and, after capturing it, to send a first content screen projection request to the device having the display screen. The device having the display screen is configured to receive the first content screen projection request and, in response, to play the first content, the first content being content played on the first device. The second device is configured to capture the preset gesture or the preset action through the second image acquisition component and, after capturing it, to send a second content screen projection request to the device having the display screen. The device having the display screen is configured to receive the second content screen projection request and, in response, to play the first content and the second content in a split-screen mode, the second content being content played on the second device.
Referring to fig. 4, fig. 4 is a flowchart of a cross-device content screen projection method according to an exemplary embodiment of the present application. The cross-device content screen projection method can be applied to the device with the display screen shown in fig. 1. In fig. 4, the cross-device content screen projection method includes the following steps.
In step 410, a first content screen projection request sent by a first device is received, where the first content screen projection request is a request sent by the first device to the device with the display screen after a preset interactive operation is performed.
In the present application, the preset interactive operation may include one of scanning a screen projection tag, capturing a preset gesture, or capturing a preset action.
When the preset interactive operation is scanning the screen projection tag, the device with the display screen can receive the first content screen projection request sent by the first device: after the first device scans the screen projection tag, it sends the first content screen projection request to the device with the display screen. In one scenario, if the screen projection tag is set in the remote control of the device with the display screen, the first device approaches or touches the area of the remote control where the screen projection tag is set, generates the first content screen projection request from the information it reads, and sends the request to the device with the display screen.
When the preset interactive operation is capturing a preset gesture, the first device can capture the user's gesture through the first image acquisition component, which may be a front-facing or rear-facing camera. The preset gesture is one of a single-hand gesture, a two-hand gesture, or a hand-and-face combined gesture. A single-hand gesture is a specified gesture made with either the left or the right hand. A two-hand gesture is a gesture made with the left and right hands cooperating. A hand-and-face combined gesture is a gesture formed by one hand together with the face, or by both hands together with the face. It should be noted that the embodiments of this application do not limit the specific form of the preset gesture. When the gesture made by the user matches the preset gesture, the first device sends the first content screen projection request to the device with the display screen, and the device with the display screen receives it accordingly.
When the preset interactive operation is capturing a preset action, the first device can acquire multiple frames of images containing the user's movement; when the features extracted from those frames match the features corresponding to the preset action, the first device determines that the preset action has been captured and sends the first content screen projection request to the device with the display screen, which receives it accordingly. As shown in the sketch below, all three triggers end with the same request being sent.
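The following minimal Kotlin sketch illustrates this sender-side flow under stated assumptions: the event types, the transport interface, and the request format are hypothetical and are not defined by the patent. The tag-read string, or the string obtained from a preset application when the trigger is a gesture or action, simply becomes the character content of the request.

```kotlin
// Sender-side sketch (assumptions: event types, transport, and request format are illustrative).
sealed interface PresetInteraction
data class TagScanned(val tagString: String) : PresetInteraction   // screen projection tag read via NFC/RFID
data class GestureCaptured(val name: String) : PresetInteraction   // preset gesture seen by the camera
data class ActionCaptured(val name: String) : PresetInteraction    // preset action recognized across frames

// Hypothetical transport toward the device with the display screen (e.g. over WiFi or Bluetooth).
fun interface CastTransport { fun send(request: String) }

class CastRequester(private val transport: CastTransport, private val presetAppString: String) {
    fun onInteraction(interaction: PresetInteraction) {
        // The character content comes from the tag for scans, or from a preset
        // application when the trigger is a gesture or action (as described above).
        val characterContent = when (interaction) {
            is TagScanned -> interaction.tagString
            is GestureCaptured, is ActionCaptured -> presetAppString
        }
        transport.send("CAST:$characterContent")
    }
}

fun main() {
    val requester = CastRequester(
        transport = { request -> println("sending to display device: $request") },
        presetAppString = "abcdef1234"
    )
    requester.onInteraction(TagScanned("abcdef1234"))      // tag-scan trigger
    requester.onInteraction(GestureCaptured("open-palm"))  // gesture trigger
}
```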
Optionally, the device with the display screen may receive the first content screen projection request sent by the first device over a short-range wireless communication channel, which may be any one of WiFi, Bluetooth, or ZigBee. The purpose of the first device sending the first content screen projection request is to have the content originally played on the first device continue to play on the device with the display screen. The first content is content played on the first device; it may be the user interface of the application currently playing on the first device, or the user interface of the application currently running in the foreground of the first device.
In step 420, in response to the first content screen projection request, the first content is played, the first content being content played on the first device.
After receiving the first content screen projection request, the device with the display screen responds to the request by continuing to play the first content originally played on the first device. The played content may be a video, a game screen, or an interface mirrored in real time from the mobile phone.
Optionally, the device with the display screen may play the first content in response to the first character string in the first content screen projection request matching the character string template, the first character string being information read by the first device from the screen projection tag.
In this application, the device with the display screen extracts the first character string from the first content screen projection request and matches it against the character string template. When the first character string and the character string template match, the device with the display screen plays the first content. In a practical application scenario, the device with the display screen may either play the first content in full screen, or, when it is currently in a multi-screen or split-screen mode, display the first content in one of the display areas.
It should be noted that the first character string is the information read by the first device from the screen projection tag. The screen projection tag can be set in the remote control device corresponding to the device with the display screen, provided separately as a radio frequency card, or attached as a radio frequency identification tag to the frame of the device with the display screen.
Step 430: receiving a second content screen projection request sent by the second device, where the second content screen projection request is a request sent by the second device to the device with the display screen after the preset interactive operation is performed.
The process by which the device with the display screen receives the second content screen projection request sent by the second device is similar to the process of receiving the first content screen projection request sent by the first device, and is not repeated here.
Optionally, the second device may be a device of the same model as the first device or of a different model. In the embodiments of this application, devices from the same vendor may be defined as the same model, or devices from the same vendor and of the same specific model may be defined as the same model of device.
Optionally, the second device may also be a device of the same type as the first device, or of a different type. In the embodiments of this application, device types may be divided into categories such as mobile phone, tablet computer, smart watch, and notebook computer.
In step 440, in response to the second content screen projection request, the first content and the second content are played in a split-screen mode, the second content being content played on the second device.
In this application, while the first content from the first device is being played on the device with the display screen, the device with the display screen can respond to a screen projection request sent by another device and simultaneously play, in split-screen mode, the first content already being played and the second content corresponding to the new request.
Optionally, in response to the second character string in the second content screen projection request matching the character string template, the first content and the second content are played simultaneously, the second character string being information read by the second device from the screen projection tag.
Since the first content is already being played on the device with the display screen, when the second character string matches the character string template, the device with the display screen decides how to display the first and second content simultaneously based on how much of the screen the first content currently occupies.
In one possible implementation, when the device with the display screen is displaying the first content in full screen, it enters split-screen mode and divides the screen into two display areas: one for the first content and the other for the second content. Optionally, these are the first display area and the second display area, respectively. The device with the display screen may also be in a multi-screen mode with more than two display areas, of which two are used for the first content and the second content.
In another possible implementation, when the device with the display screen is already displaying the first content in split-screen or multi-screen mode, it may display the second content in a display area other than the one showing the first content.
Through these processing modes, the device with the display screen can simultaneously display the first content projected from the first device and the second content projected from the second device; a sketch of this decision follows.
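A minimal Kotlin sketch of this layout decision is given below. The display-state model, the area names, and the function names are assumptions made for illustration; the patent only requires that the two contents end up in separate, non-overlapping display areas.

```kotlin
// Receiver-side sketch of the layout decision described above (state model is an assumption).
sealed interface DisplayState
data class FullScreen(val content: String) : DisplayState
data class SplitScreen(val areas: MutableMap<String, String?>) : DisplayState // area name -> content

fun onSecondCastVerified(state: DisplayState, secondContent: String): DisplayState =
    when (state) {
        // Full-screen playback of the first content: re-divide the screen into two areas.
        is FullScreen -> SplitScreen(
            mutableMapOf<String, String?>(
                "first display area" to state.content,
                "second display area" to secondContent
            )
        )
        // Already in split-screen or multi-screen mode: use an area other than the one
        // showing the first content. (Handling of the no-free-area case is not modeled here.)
        is SplitScreen -> state.apply {
            val freeArea = areas.entries.firstOrNull { it.value == null }?.key
            if (freeArea != null) areas[freeArea] = secondContent
        }
    }

fun main() {
    var state: DisplayState = FullScreen(content = "game G1 from first device")
    state = onSecondCastVerified(state, "shopping app S1 from second device")
    println(state) // both contents now occupy non-overlapping display areas
}
```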
In summary, in the cross-device content screen projection method provided by this embodiment, the device with the display screen can successively receive the content screen projection requests sent by the first device and the second device, so that the first content from the first device and the second content from the second device both continue to play on the device with the display screen. Because the first device and the second device send their screen projection requests to the device with the display screen after performing a preset interactive operation, the corresponding content is played with few operations, which improves the ability of the device with the display screen to simultaneously display the content of multiple projecting devices and improves the efficiency of multi-device screen projection.
Based on the scheme disclosed in the previous embodiment, the device with the display screen can also enter split-screen mode in advance, receive the requests from the first device and the second device successively while in split-screen mode, and display the corresponding contents simultaneously. See the following embodiment.
Referring to fig. 5, fig. 5 is a flowchart of a cross-device content screen projection method according to another exemplary embodiment of the present application. The cross-device content screen projection method can be applied to the device with the display screen. In fig. 5, the cross-device content screen projection method includes the following steps.
step 511, receiving a split screen instruction.
The device with the display screen is capable of receiving a split screen instruction. The split screen instruction may be a voice instruction, an infrared instruction, or an instruction generated when a menu control is triggered.
(1) When the split-screen instruction is a voice instruction, the device with the display screen may receive the instruction through built-in software such as a voice assistant. For example, the device with the display screen may enter a voice-activated state after receiving a wake-up word; in that state, its microphone can receive speech and generate a voice instruction from it.
Step 512: in response to the split-screen instruction, a split-screen mode is entered, in which the display of the device with the display screen includes at least a first display area and a second display area.
A device having a display screen is capable of entering a split mode in response to a split instruction. In the present application, the split screen mode includes a first display area and a second display area. The first display area is used for displaying first content, and the second display area is used for displaying second content.
For example, when the user speaks the wake-up word "Xiaobu", the device with the display screen enters a voice-activated state. If the user then says "enter split-screen mode", the device with the display screen enters the split-screen mode.
As another example, the user can open a specified menu on the device with the display screen using the remote control; when the split-screen button in that menu is clicked, the device with the display screen enters the split-screen mode.
Step 520: a first content screen projection request sent by the first device is received.
In this example, the procedure performed in step 520 is the same as that performed in step 410, and will not be described here again.
In step 530, the first content is played in the first display area in response to the first character string in the first content screen projection request matching the character string template.
When the display area of the device having the display screen is divided into a first display area and a second display area, the first display area may be an area on the left side, and the second display area may be an area on the right side.
Step 540: a second content screen projection request sent by the second device is received.
In this example, the procedure performed in step 540 is the same as that performed in step 430, and will not be described here again.
In step 550, in response to the second character string in the second content screen projection request matching the character string template, the second content is played in the second display area while the first content is played in the first display area.
In this example, the first content is already being displayed in the first display area. The device with the display screen therefore arranges for the second content to be displayed in the second display area.
The above is described below with a practical application example.
Referring to fig. 6, fig. 6 is an application schematic diagram of a cross-device content screen projection method according to the embodiment shown in fig. 5. In fig. 6, the device with the display screen is a television, the first device is phone A, the second device is phone B, and a screen projection tag bound to the device with the display screen is provided in the television's remote control.
First, the user can control the television 620 to enter a left-right split-screen mode using the remote control 610 or by voice. Illustratively, when the user controls the television 620 by voice, the television 620 displays the prompt 621, "the television will enter left-right split screen".
Then, the user taps phone A 630 against the remote control 610. The radio frequency module of phone A 630 reads the first character string from the screen projection tag in the remote control 610, includes it in the first content screen projection request, and sends the request to the television 620. Accordingly, after verifying that the first character string matches the character string template stored in the television 620, the television 620 displays the content originally shown on phone A 630 in the first display area 622 on the left.
Then, another user taps phone B 640 against the same remote control 610. The radio frequency module of phone B 640 reads the second character string from the screen projection tag in the remote control 610, includes it in the second content screen projection request, and sends the request to the television 620. Accordingly, after verifying that the second character string matches the character string template stored in the television 620, the television 620 displays the content originally shown on phone B 640 in the second display area 623 on the right.
In summary, in this embodiment the device with the display screen can enter split-screen mode in advance according to the user's voice command or an instruction from the remote control. After the first device passes verification by the device with the display screen, the first content displayed on the first device is shown in one of the display areas; likewise, after the second device passes verification, the device with the display screen shows the second content displayed on the second device in a display area other than the one showing the first content. The information used by the first device and the second device for verification is obtained from a screen projection tag bound to the device with the display screen. The device with the display screen therefore provides an easy-to-enter split-screen mode: content playing on another device can be projected into one display area of the split screen in a single step, and, with content already projected, content from yet another device can quickly be projected into another display area. The method provided by this embodiment thus improves the ability of the device with the display screen to simultaneously display screen-projected content from multiple other devices.
In the embodiments of this application, besides entering split-screen mode in advance, the device with the display screen is also capable of intelligently deciding whether to enter split-screen mode to display the first content and the second content. See the following embodiment.
Referring to fig. 7, fig. 7 is a flowchart of a cross-device content screen projection method according to another exemplary embodiment of the present application. The cross-device content screen projection method can be applied to the device with the display screen. In fig. 7, the cross-device content screen projection method includes the following steps.
step 710, receiving a first content screen-drop request sent by a first device.
In this example, the execution of step 710 is the same as that of step 410, and will not be described in detail here.
In step 720, in response to the first character string in the first content screen projection request matching the character string template, the first content is played in full screen.
Step 730: receiving a second content screen projection request sent by the second device.
In this example, the execution of step 730 is the same as that of step 430, and will not be described in detail here.
Step 741: in response to the second content screen projection request, split-screen information is acquired.
It should be noted that the split-screen information may be carried in the second content screen projection request. The split-screen information is provided to the device with the display screen so that it can decide whether to enter split-screen mode.
In step 742, in response to the split-screen information meeting a preset condition, the first content is played in the first display area and the second content is played in the second display area, where in split-screen mode the display of the device with the display screen includes at least the first display area and the second display area.
In this example, when the split-screen information meets the preset condition, the device with the display screen plays the first content in the first display area and the second content in the second display area, the two areas being non-overlapping display areas.
It should be noted that this application provides corresponding schemes for automatically entering split-screen mode depending on the specific form of the split-screen information; these are described one by one below.
(1) The split-screen information is device information related to the split screen, specifically the time at which the first device successfully projected its screen and the time at which the second device successfully projected its screen.
Step a1, in response to the second content screen-projection request, acquire the reference duration between the screen-projection time of the first device and the screen-projection time of the second device.
Step a2, in response to the reference duration being smaller than a first threshold, play the first content in the first display area and play the second content in the second display area.
In this example, the screen-projection time of the first device indicates the time at which the first content was successfully projected onto the device with the display screen, and the screen-projection time of the second device indicates the time at which the second content was successfully projected onto the device with the display screen.
Illustratively, the device with the display screen calculates the duration between the two screen-projection times, i.e. the reference duration, based on the time at which the first device successfully projected onto the device with the display screen and the time at which the second device did so. The reference duration reflects how strongly the two projections are related: the shorter the reference duration, the stronger the association between the projection from the first device and the projection from the second device. Based on this design concept, the present application plays the first content in the first display area and the second content in the second display area when the reference duration is smaller than the first threshold. Through this processing, the device with the display screen can intelligently play the first content projected by the first device and the second content projected by the second device at the same time.
Accordingly, when the reference duration is greater than or equal to the first threshold, the first content originally displayed on the device with the display screen is replaced by the second content from the second device.
In one practical example, the reference duration is 37 seconds and the first threshold is 120 seconds. In this case, the reference duration is smaller than the first threshold, so the device with the display screen plays the first content in the first display area and the second content in the second display area.
In another example, the reference duration is 560 seconds. In this case, the reference duration is greater than the first threshold, so the second content replaces the first content originally displayed on the device with the display screen.
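For illustration only, the decision in steps a1 and a2 reduces to a single time comparison. The sketch below is a minimal example of that logic; the threshold value of 120 seconds and the helper names play_split_screen and replace_with_second_content are assumptions, not part of this application.

```python
from datetime import datetime, timedelta

FIRST_THRESHOLD = timedelta(seconds=120)  # assumed value of the first threshold

def handle_second_cast(first_cast_time: datetime, second_cast_time: datetime,
                       play_split_screen, replace_with_second_content):
    """Decide, per scheme (1), whether to enter split-screen mode.

    first_cast_time:  moment the first device's content was successfully projected
    second_cast_time: moment the second device's content was successfully projected
    """
    reference_duration = abs(second_cast_time - first_cast_time)
    if reference_duration < FIRST_THRESHOLD:
        # strong association between the two projections: show both contents
        play_split_screen()
    else:
        # weak association: the second content replaces the first content
        replace_with_second_content()
```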
(2) The split-screen information is device information related to the split screen, specifically the historical screen-projection information of the second device.
Step b1, in response to the second content screen-projection request, acquire the historical screen-projection information of the second device, where the historical screen-projection information includes at least one of the number of times the second device has successfully projected onto the device with the display screen or the accumulated screen-projection duration.
Step b21, in response to the number of screen projections being greater than or equal to a second threshold, play the first content in the first display area and play the second content in the second display area.
Step b22, in response to the accumulated screen-projection duration being greater than or equal to a third threshold, play the first content in the first display area and play the second content in the second display area.
After step b1 is completed, the device with the display screen may execute step b21 or step b22.
In this example, the device with the display screen can determine, according to the accumulated screen-projection duration of the second device, whether to split the screen and display the first content and the second content simultaneously. If the accumulated screen-projection duration of the second device is long enough, the second device is strongly correlated with the device with the display screen in terms of screen projection. In this case, the device with the display screen divides its screen into two non-overlapping display areas, namely the first display area and the second display area, and then displays the first content in the first display area and the second content in the second display area.
In this example, the device with the display screen can also determine whether to split the screen according to the number of times the second device has historically succeeded in projecting onto it. When that number is greater than or equal to the second threshold, the second device is strongly correlated with the device with the display screen in terms of screen projection, and the device with the display screen displays the first content and the second content simultaneously after splitting the screen.
In contrast, when the historical screen-projection information indicates that the second device is not strongly correlated with the device with the display screen in terms of screen projection, the device with the display screen displays a dialog box on its screen. The dialog box prompts the user to choose whether the device with the display screen should display the first content and the second content in split screen, or whether it should stop displaying the first content and display the second content in the display area where the first content was originally shown.
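A similar sketch can be given for steps b1, b21 and b22, assuming the device with the display screen keeps a simple per-device history record; the record fields and the threshold values shown are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class CastHistory:
    cast_count: int            # successful projections by the second device onto this display
    cumulative_seconds: float  # accumulated projection duration on this display device

SECOND_THRESHOLD = 5       # assumed second threshold: minimum number of projections
THIRD_THRESHOLD = 3600.0   # assumed third threshold: minimum cumulative duration in seconds

def should_split_screen(history: CastHistory) -> bool:
    """Return True when the second device is strongly correlated with the display
    device in terms of screen projection (step b21 or step b22)."""
    return (history.cast_count >= SECOND_THRESHOLD
            or history.cumulative_seconds >= THIRD_THRESHOLD)

# If this returns False, the device would instead show the dialog box described above.
```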
(3) The split-screen information is content information related to the split screen, specifically the image information of the first content and the image information of the second content.
Step c1, in response to the second content screen-projection request, acquire first image information from the first content according to a preset strategy, and acquire second image information from the second content according to the same preset strategy.
Step c2, in response to the similarity between the first image information and the second image information being greater than or equal to a fourth threshold, play the first content in the first display area and play the second content in the second display area.
In this example, the device with the display screen can autonomously perform image recognition on the first content and the second content. In a specific application mode, the device with the display screen acquires the second content according to the second content screen-projection request and extracts a second key frame from it, the second key frame being an image frame captured at a designated time. At the same designated time, the device with the display screen captures an image frame from the first content, which is the first key frame. The device with the display screen extracts image features from the first key frame and from the second key frame. When the similarity between the image features of the first key frame and those of the second key frame is greater than or equal to the fourth threshold, the first content and the second content are likely to belong to the same application, or to display corresponding objects, for example the same person or the same article appears in both.
Because what is displayed in the first content and the second content is similar, it is highly likely that the user would like to view the two contents simultaneously on the device with the display screen for comparison. In this scenario, the device with the display screen therefore enters the split-screen state and displays both the first content and the second content.
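As one possible realization of the key-frame comparison described above, the sketch below extracts a frame-level feature from each key frame and compares their similarity against the fourth threshold. The grayscale-histogram feature and cosine similarity are assumptions chosen for illustration; this application does not prescribe a particular feature extractor or similarity measure.

```python
import numpy as np

FOURTH_THRESHOLD = 0.9  # assumed value of the fourth (similarity) threshold

def histogram_feature(frame: np.ndarray, bins: int = 64) -> np.ndarray:
    """Illustrative feature: normalized grayscale histogram of a key frame."""
    hist, _ = np.histogram(frame.ravel(), bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def contents_are_similar(first_key_frame: np.ndarray, second_key_frame: np.ndarray) -> bool:
    """Step c2: compare the two key frames captured at the same designated time."""
    f1 = histogram_feature(first_key_frame)
    f2 = histogram_feature(second_key_frame)
    return cosine_similarity(f1, f2) >= FOURTH_THRESHOLD
```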
(4) The split-screen information is content information related to the split screen, specifically the content identifier of the first content and the content identifier of the second content.
Step d1, in response to the second content screen-projection request, acquire a first content identifier corresponding to the first content and a second content identifier corresponding to the second content.
Step d2, in response to the first content identifier matching the second content identifier, play the first content in the first display area and play the second content in the second display area.
In this example, the content identifier may indicate the identifier of the application to which the content corresponds, may indicate the category to which the application to which the content corresponds belongs, and may also indicate the type of the object in the content.
(1) The content identifier indicates the application to which the content corresponds. For example, both the first content identifier and the second content identifier indicate the corresponding application. When the first content identifier is the identifier of application A and the second content identifier is also the identifier of application A, the device with the display screen determines that the two identifiers match, and plays the first content in the first display area while playing the second content in the second display area. In contrast, when the application indicated by the first content identifier differs from the application indicated by the second content identifier, the device with the display screen may stop displaying the first content and display the second content in the display area where the first content was originally shown.
(2) The content identifier indicates the category of the application to which the content corresponds. For example, the first content identifier and the second content identifier indicate, respectively, the category of the application corresponding to the first content and the category of the application corresponding to the second content. When the first content identifier is a game-application identifier and the second content identifier is a video-application identifier, the two identifiers do not match; when both are game-application identifiers, they match.
(3) The content identifier indicates an object in the content, which may be a person or another item. For example, if the first content identifier indicates the person Jack and the second content identifier indicates the person Brown, the two identifiers do not match. If the first content identifier indicates the person Brown and the second content identifier also indicates the person Brown, the two identifiers match.
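The three matching rules above can be summarized as a single predicate over the content identifiers. In the sketch below, an identifier is modelled as optionally carrying an application identifier, an application category, and an object label; this data model is an assumption for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ContentId:
    app_id: Optional[str] = None        # rule (1): identifier of the application
    app_category: Optional[str] = None  # rule (2): category, e.g. "game", "video"
    object_label: Optional[str] = None  # rule (3): person or item shown in the content

def identifiers_match(a: ContentId, b: ContentId) -> bool:
    """Return True when a populated field agrees, per rules (1)-(3)."""
    if a.app_id and b.app_id:
        return a.app_id == b.app_id
    if a.app_category and b.app_category:
        return a.app_category == b.app_category
    if a.object_label and b.object_label:
        return a.object_label == b.object_label
    return False

# Example: identifiers_match(ContentId(app_category="game"),
#                            ContentId(app_category="game"))  -> True
```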
In the embodiment of the present application, the four scenarios above describe how the device with the display screen automatically recognizes that the split-screen mode should be entered and displays the first content and the second content in different display areas. The device with the display screen can also provide a control that lets the user choose whether to enter the split-screen mode, so that the first content is displayed in the first display area and the second content in the second display area. The manual selection scheme is as follows.
Step e1, in response to the second content screen-projection request, display a split-screen button.
Step e2, in response to the split-screen button being triggered, play the first content in the first display area and play the second content in the second display area.
In the embodiment of the application, the device with the display screen can display the split-screen button when it receives the content screen-projection request from the second device. Illustratively, the split-screen button may be shown on the display screen of the device. The user can trigger the split-screen button using a remote control device, and when the button is triggered, the device with the display screen plays the first content in the first display area and the second content in the second display area.
Optionally, the device with the display screen can also display a close control; when the user triggers the close control, the device with the display screen stops displaying the split-screen button.
In summary, the device with the display screen first receives the first content screen-projection request from the first device and displays the corresponding first content in full screen when the request meets the preset condition. The device with the display screen then automatically assesses, in several ways, the importance of the second content or how safe the second device's projection of it is; when the second content is important, or the second device is trusted relative to the device with the display screen, the device automatically enters the split-screen mode and displays the first content and the second content simultaneously. This improves the intelligence of the device with the display screen, reduces the operations the user must perform to display the first content and the second content at the same time, and makes it more convenient for several devices to project onto one device simultaneously.
Optionally, the device with the display screen can determine whether to split the screen automatically according to the reference duration between the moments at which the first content and the second content start to be projected. When the two projection start times on the device with the display screen are close together, the correlation between the two contents is high enough; the device automatically enters the split-screen state and displays the first content and the second content simultaneously, giving it the ability to decide intelligently, from the projection behaviour itself, whether to show both contents at once.
Optionally, the historical screen-projection information of the second device reflects how closely the second device is associated with the device with the display screen. When the two devices are frequently connected and therefore closely related, the device with the display screen automatically enters the split-screen state and displays the first content and the second content simultaneously, giving it the ability to decide intelligently, from the second device's projection history, whether to show both contents at once.
Optionally, the device with the display screen can also determine whether to enter the split-screen state automatically according to the image similarity between the first content and the second content. When the similarity is high, the two contents are strongly related and the user is likely to want to view and compare them at the same time. The device with the display screen can therefore enter the split-screen state automatically in this scenario and display the first content and the second content simultaneously.
Optionally, the device with the display screen can further determine whether to enter the split-screen state according to whether the first content identifier and the second content identifier match. When the two identifiers match, the first content and the second content need to be displayed in a way that allows them to be viewed together, so the device with the display screen automatically enters the split-screen state and displays them simultaneously; in other words, when the contents shown by several devices are related, the device with the display screen shows them together automatically.
Besides the scenarios in which the need to split the screen is recognized automatically, the embodiment of the application can also be applied to a scenario in which examinees answer on mobile devices and a proctoring device monitors those devices. In this scenario, the device with the display screen is the monitoring device, and the first device and the second device are the devices used by the examinees to display the test papers. For an introduction to this application scenario, please refer to the content shown in fig. 8.
Referring to fig. 8, fig. 8 is a flowchart of a cross-device content screen projection method applied to a proctoring scenario according to an embodiment of the present application. The method in fig. 8 may be implemented by the device with a display screen shown in fig. 1, as detailed below:
Step 811, starting a device with a display screen, wherein the device with the display screen is a monitoring device of an examination room.
In this example, the device with the display screen is a monitoring device in the examination room, and the monitoring staff can know the content displayed in the examination device used by the examinee through the device with the display screen in the examination process.
It should be noted that the first device and the second device are examination devices. One first device and several second devices may be included in the present application. The first device is a device for projecting a first screen in the device with the display screen, and the second device is a device for projecting a screen in the device with the display screen after the first device finishes projecting the screen.
In this scenario, the monitoring device may be configured to provide a radio frequency card, which is a projection tag. The first device, the second device, etc. may be proximate to the radio frequency card to read information from the radio frequency card to generate the first string or the second string.
It should be noted that, in one possible manner, one radio frequency card is provided for each examination room, and the examinees taking the examination in that room scan the radio frequency card using their own examination devices.
In another possible manner, one radio frequency card may be provided at each examinee's seat.
Step 812, a first content screen-drop request sent by the first device is received.
In this example, after the first device reads information from the radio frequency card and generates the first character string, it places the first character string in the first content screen-projection request and sends the request to the device with the display screen.
Correspondingly, the device with the display screen receives a first content screen throwing request sent by the first device.
Step 813, playing the first content in response to the first character string in the first content screen-drop request matching the character string template.
After receiving the first content screen request, the device with the display screen can read the first character string from the content screen request, and play the first content when the first character string is matched with the character string template.
Wherein the first content is content displayed in a first device screen. That is, content displayed in real time in the first device will be synchronously projected onto the device with the display screen for display.
Step 814, receiving a second content screen-drop request sent by the second device.
Step 815, in response to the second string in the second content drop request matching the string template, playing the first content and the second content simultaneously.
Similarly to the way the first device projects the first content onto the device with the display screen, the second device repeats the same operations and projects the second content onto the device with the display screen. In the embodiment of the application, there are usually several or even dozens of examinees in one examination room, so one device with a display screen may correspond to one first device and several second devices.
In this design, the number of second devices may be 7, 9, 15, 19 or the like, depending on the screen size of the device with the display screen.
After the second device successfully projects the second content onto the device with the display screen, the device with the display screen displays the first content and the second content simultaneously. In this scenario, the examinees answer the examination on the first device or the second devices. Because of the particularities of electronic devices, it is difficult for a proctor to supervise the examination devices of several examinees at the same time. With the method provided by the application, the proctor can see the first content and several second contents on the device with the display screen simultaneously, and thus view the answering interfaces seen by the examinees in real time.
In step 821, in response to the first content and/or the second content including the abnormal image, the abnormal image and a time at which the abnormal image is displayed are recorded.
In this example, the device with the display screen is capable of image recognition of images in the first content and/or the second content. In one possible way, a device with a display screen is able to perform image recognition on a frame-by-frame basis. In another possible way, a device with a display screen can recognize images in the same content once every N frames. For example, a device having a display screen performs image recognition on image frames in the first content, with 30 frames of images as a set of images. A device with a display screen is capable of image recognition of a first image frame of a set of images.
The device with the display screen may be preset with a normal-image template. The device compares the image frames obtained from the first content or the second content with this template: when the similarity is higher than a preset threshold, the image frame is a normal image; when the similarity is equal to or lower than the preset threshold, the image frame is an abnormal image.
When the device with the display screen recognizes an abnormal image, it records the device on which the abnormal image appears and the time at which the abnormal image is displayed.
In another way of identifying an abnormal image, the application to which the image belongs is not in the whitelist of applications. In an examination scenario, a whitelist of applications may be maintained on the device with the display screen, listing the identifiers of the applications associated with the examination. When an examinee uses the examination device to run an application related to the examination, the identifier of the application to which the first content or the second content belongs is in the whitelist. If the examinee uses any other application, this is detected immediately by the device with the display screen. It should be noted that each image frame of the first content or the second content acquired by the device with the display screen carries the identifier of the application to which it belongs.
For example, applications belonging to the whitelist application list include an a-test application and a B-calculator application. When the device with the display screen recognizes that the application to which one image frame belongs in the first content is neither an a-test application nor a B-calculator application, it is determined that the first content is a content including an abnormal image.
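Both detection routes, the normal-image template and the application whitelist, can be combined in one check per sampled frame. In the sketch below, the whitelist entries, the sampling of one frame out of every 30, the similarity threshold, and the helper names are illustrative assumptions.

```python
from datetime import datetime

WHITELIST = {"A-test-application", "B-calculator-application"}  # example whitelist entries
SIMILARITY_THRESHOLD = 0.8  # assumed preset threshold for the normal-image template

def is_abnormal(frame, app_id: str, normal_template, similarity) -> bool:
    """A frame is abnormal when its source application is not whitelisted, or when it
    differs too much from the preset normal-image template."""
    if app_id not in WHITELIST:
        return True
    return similarity(frame, normal_template) <= SIMILARITY_THRESHOLD

def monitor(content_frames, app_ids, normal_template, similarity, log):
    """Check one frame out of every group of 30 frames and record anomalies
    together with the time at which the abnormal image is displayed."""
    for index, (frame, app_id) in enumerate(zip(content_frames, app_ids)):
        if index % 30 != 0:
            continue
        if is_abnormal(frame, app_id, normal_template, similarity):
            log.append((frame, app_id, datetime.now()))
```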
Optionally, the device with the display screen may continue to execute step 822 in addition to executing step 821.
At step 822, content including the anomaly image is highlighted, the highlighting including at least one of highlighting, displaying a preset mark, or displaying preset text.
In the embodiment of the application, in order to remind the proctor of abnormal situations, the device with the display screen can highlight the content comprising the abnormal image. For example, the first content in the first device includes an abnormal image, and the device having the display screen highlights the first content when the first content is displayed. Alternative ways of highlighting include, but are not limited to, highlighting the area, or attaching a preset mark to the first content, or attaching preset text to the first content. The embodiments of the present application are not limited to a specific manner of highlighting.
In step 830, a lock signal is sent to the device in which the abnormal image appears, where the lock signal is used to instruct the device in which the abnormal image appears to stop receiving the input signal.
In this example, the device with the display screen can also send a lock signal to the device on which the abnormal image appears, so that, under the effect of the lock signal, that device can no longer receive input signals. With this design, when the device with the display screen detects an abnormality on another device, it can directly lock the corresponding examination device and handle the abnormal examination behaviour strictly, so that users can take a paperless examination on the devices they use every day while still being strictly proctored.
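The lock signal itself is defined only by its purpose, namely instructing the target device to stop receiving input. A minimal sketch of sending such a signal over an existing connection is shown below; the message schema and the connection object are assumptions, not part of this application.

```python
import json

def send_lock_signal(connection, device_id: str) -> None:
    """Instruct the device on which the abnormal image appeared to stop accepting input.
    The JSON schema here is an assumed example; the application only defines the intent."""
    message = {"type": "LOCK", "target": device_id, "accept_input": False}
    connection.send(json.dumps(message).encode("utf-8"))
```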
In summary, the method provided by the embodiment of the application can be applied to the proctoring of paperless examinations: examinees can take the examination on the mobile devices they use every day, a stricter proctoring effect is achieved, the cost of building paperless examination rooms is reduced, and strict supervision of the examination is guaranteed.
In the embodiment of the present application, besides displaying the first content and the second content at the same time, the device with the display screen can also control the content it displays; for details, reference may be made to the embodiment shown in fig. 9.
Referring to fig. 9, fig. 9 is a flowchart of a cross-device content screen projection method according to an embodiment of the present application. The method may be applied in the device with a display screen shown in fig. 1. In fig. 9, the method includes:
Step 911, receive a first content screen-projection request sent by the first device.
In step 912, the first content is played in response to the first string in the first content drop request matching the string template.
Step 913, receiving a second content screen-drop request sent by the second device.
Step 914, in response to the second character string in the second content screen-drop request matching the character string template, playing the first content and the second content simultaneously.
In the embodiment of the present application, the execution of steps 911 to 914 may refer to the execution of steps 410 to 440, which are not described herein.
Step 921, local area information is received.
Wherein the local area information is used to indicate information of one local area in the first device or information of one local area in the second device.
In step 922, in response to the local area information, the area corresponding to the local area information is enlarged by k times.
Wherein k is greater than 1. By enlarging the region corresponding to the local area information, the device with the display screen makes it easier for the user to observe the region of interest.
In step 923, the region corresponding to the local area information, enlarged k times, is displayed in the display area corresponding to the local area information.
In the embodiment of the application, the device with the display screen can display the local area in the first content in an enlarged mode under the control of a user. In one possible approach, the local area information is information that the first device sends to the device with the display screen. In another possible way, the local area information is information transmitted by a remote control device corresponding to a device having a display screen.
When the device with the display screen acquires the local area information, the device with the display screen determines the area corresponding to the local area information. Then, the image originally displayed in the region is displayed in an enlarged manner by k times. If the display area corresponding to the local area information is the first display area, the device with the display screen displays the area corresponding to the local area information after the magnification of k times in the first display area. If the display area corresponding to the local area information is the second display area, the device with the display screen displays the area corresponding to the local area information after the magnification of k times in the second display area.
Referring to fig. 10, fig. 10 is a schematic display diagram of a device with a display screen according to the embodiment shown in fig. 9. In fig. 10, the device 111 with a display screen includes a first display area 210 and a second display area 220. A first local display area 211 is included in the first display area 210, and the local area information indicates this first local display area 211. The device 111 with a display screen then displays the image, magnified k times, in the second local display area 212.
For example, if a minimap in the first content is originally displayed in the local display area 211, the device with the display screen can further zoom in on the minimap in that area so that the user can see the details of the content of interest more clearly.
Alternatively, the device with the display screen can magnify the corresponding local region of the second content k times. For example, if the second content, like the first content, displays a game interface, the device with the display screen can place the minimap of the first content and the minimap of the second content close together, so that the user can directly compare the selected portions of the first content and the second content on the device with the display screen.
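As a minimal illustration of steps 921 to 923, the sketch below crops the indicated local area from a content frame and enlarges it k times before it is redrawn in the corresponding display area; the use of the Pillow imaging library and the example coordinates are assumptions.

```python
from PIL import Image

def magnify_local_area(frame: Image.Image, box: tuple, k: float) -> Image.Image:
    """Crop the local area (left, upper, right, lower) and enlarge it k times (k > 1)."""
    if k <= 1:
        raise ValueError("k must be greater than 1")
    region = frame.crop(box)
    width, height = region.size
    return region.resize((int(width * k), int(height * k)))

# Example: the minimap region of the first content, enlarged 2x before being shown
# in the display area corresponding to the local area information.
# magnified = magnify_local_area(first_content_frame, (0, 0, 200, 200), 2.0)
```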
In step 931, image recognition is performed on the first content to obtain a first text matching with the preset keyword.
In step 932, image recognition is performed on the second content, so as to obtain a second text that matches the preset keyword.
In step 933, the first text and the second text are displayed in a contrasting manner.
In the application, the device with the display screen can perform image recognition on the first content and the second content and acquire the text information in them. When the text information matches the preset keyword, the device with the display screen compares the first text with the second text and displays them together. The preset keyword is a keyword set in the device with the display screen. In one display manner, the first text is displayed in the first display area and the second text in the second display area, with the distance between them smaller than a fifth threshold. Optionally, the first text and the second text are displayed in a comparison area that spans the first display area and the second display area.
Illustratively, the segmentation rule for the first text and the second text is the same: if the first text is a sentence, the second text is also a sentence; if the first text is a paragraph, the second text is likewise a paragraph.
For example, the first content is the shopping page of commodity A in shopping application S2, the second content is the shopping page of commodity A in shopping application S3, and the preset keyword is "actual price". The first text extracted from the first content by the device with the display screen is "actual price: 224 yuan", while the second text it extracts from the second content is "actual price: 206 yuan". The device with the display screen then displays the first text and the second text together at the boundary between the first display area and the second display area, so that the user can directly compare the information of most interest.
Referring to fig. 11, fig. 11 is a schematic diagram of a display interface of a device with a display screen according to the embodiment shown in fig. 9. In fig. 11, the display area of the device 111 with a display screen includes a first display area 210 and a second display area 220. A first comparison display area 213 is included in the first display area 210, and a second comparison display area 221 is included in the second display area 220. The device 111 with a display screen may display the first text 11A extracted from the first content in the first comparison display area 213, and the second text 11B extracted from the second content in the second comparison display area 221, so that the user can intuitively compare the parts of the first content and the second content that are of interest.
The first content and the second content shown in fig. 11 are each a basketball commodity, and the extracted information includes the price, the sales volume, and the name of the shop selling the basketball. Optionally, the device with the display screen may also highlight the text with the lower price.
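The comparison display of steps 931 to 933 can be sketched as optical character recognition followed by keyword filtering; the ocr callable and the show_side_by_side helper below are assumed placeholders, since this application does not specify a particular recognition engine.

```python
from typing import Callable, List

def extract_matching_text(frame, keyword: str, ocr: Callable) -> List[str]:
    """Run image recognition on one content frame and keep the recognized lines
    that contain the preset keyword (e.g. "actual price")."""
    return [line for line in ocr(frame) if keyword in line]

def compare_display(first_frame, second_frame, keyword: str,
                    ocr: Callable, show_side_by_side: Callable) -> None:
    first_text = extract_matching_text(first_frame, keyword, ocr)
    second_text = extract_matching_text(second_frame, keyword, ocr)
    if first_text and second_text:
        # place the matched texts next to each other across the two display areas
        show_side_by_side(first_text, second_text)
```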
In step 941, a first commodity extraction command sent by the first device is received.
Wherein the first commodity extraction command is used for extracting a commodity specified in the first content.
In step 942, in response to the first commodity extraction command, the corresponding first commodity is extracted and stored.
Step 943, a second commodity extraction command sent by the second device is received.
Wherein the second commodity extraction command is for extracting a commodity specified in the second content.
In response to the second commodity extraction command, a corresponding second commodity is extracted and stored in step 944.
Step 945, receiving the summarizing instruction, and sending the stored first commodity and the second commodity to a device sending the summarizing instruction.
The device sending out the summarizing instruction is the first device or the second device.
In the embodiment of the application, the first device and the second device can respectively select goods to be extracted from contents which are already projected onto the device with the display screen. Wherein the merchandise may be text, images or links.
Optionally, the first commodity and the second commodity are the same type of commodity, the type being a target type.
Illustratively, besides commodities, the method and the device can also extract objects such as scenic spots, restaurants, shopping malls, celebrities, movie titles, concerts or pages.
It should be noted that, after the device with the display screen extracts and stores the corresponding object, the corresponding policy may also be executed according to the different object types. For example, when the target type is a sight, a device having a display screen automatically generates a travel route and transmits the travel route to the first device or the second device.
For another example, when the target type is a commodity, the device with the display automatically compares the lowest price commodity in the same commodity among the commodities that have been extracted, and sends a buy-in instruction to the first device or the second device, the buy-in instruction being for instructing to add the commodity to a shopping cart in a shopping application.
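Steps 941 to 945, together with the price-comparison policy just described, amount to collecting the extracted objects per device and, upon a summarizing instruction, returning them (or the lowest-priced one) to the requesting device. The data model and method names in the sketch below are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Item:
    name: str
    price: float
    source_device: str  # "first" or "second"

@dataclass
class ExtractionStore:
    items: List[Item] = field(default_factory=list)

    def extract(self, item: Item) -> None:
        """Steps 942 / 944: store an item extracted from the projected content."""
        self.items.append(item)

    def summarize(self, send: Callable) -> None:
        """Step 945: send all stored items back to the device that issued the instruction."""
        send(self.items)

    def cheapest(self, name: str) -> Item:
        """Price-comparison policy: lowest-priced instance of the same item."""
        candidates = [i for i in self.items if i.name == name]
        return min(candidates, key=lambda i: i.price)
```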
In summary, in the embodiment of the present application, the device with the display screen can simultaneously enlarge the parts of the projected first content and second content that need enlarging, thereby effectively presenting the area the user is interested in.
Optionally, the device with the display screen can also perform image recognition on the first content and the second content and display together the texts in both that match the preset keyword, which makes it easier for the user to compare the information of interest in the two contents at the same time and improves the efficiency of comparing, on the device with the display screen, the contents shown on several other devices.
Optionally, the device with the display screen can also extract and store the corresponding objects under the control of the first device and the second device respectively; the first device and the second device each select objects on their own side, and the selected objects can finally be gathered on one device, so that several devices can jointly select objects in a scenario where the device with the display screen serves as the shared reference display.
The following are device embodiments of the present application, which may be used to perform method embodiments of the present application. For details not disclosed in the device embodiments of the present application, please refer to the method embodiments of the present application.
Referring to fig. 12, fig. 12 is a block diagram of a cross-device content screen projection apparatus according to an exemplary embodiment of the present application. The cross-device content screen projection apparatus may be implemented in software, hardware, or a combination of both, as all or part of a device having a display screen. The apparatus comprises:
A first receiving module 1210, configured to receive a first content screen-throwing request sent by a first device, where the first content screen-throwing request is a request sent by the first device to the device with a display screen after performing a preset interaction operation;
a first playing module 1220, configured to respond to the first content screen-drop request, and play first content, where the first content is content played in the first device;
a second receiving module 1230, configured to receive a second content screen-throwing request sent by a second device, where the second content screen-throwing request is a request sent by the second device to the device with a display screen after the second device performs the preset interaction operation;
a second playing module 1240, configured to respond to the second content screen-drop request, and play the first content and the second content in a split screen mode, where the second content is the content played in the second device.
In an optional embodiment, the preset interaction operation related to the apparatus includes scanning a screen-throwing label, where the screen-throwing label is an electronic label corresponding to the device with the display screen; or, the preset interaction operation comprises shooting a preset gesture; or, the preset interaction operation comprises shooting a preset action.
In an optional embodiment, the apparatus further includes a third receiving module and a split screen module, where the third receiving module is configured to receive a split screen instruction; the split screen module is used for responding to the split screen instruction and entering the split screen mode, and the display screen of the device with the display screen at least comprises a first display area and a second display area in the split screen mode; the first display area is used for displaying the first content, and the second display area is used for displaying the second content.
In an optional embodiment, the second playing module 1240 is configured to obtain split-screen information in response to the second content screen-casting request; and responding to the split screen information to meet a preset condition, entering a split screen mode, wherein the display screen of the device with the display screen at least comprises a first display area and a second display area in the split screen mode, playing the first content in the first display area, and playing the second content in the second display area.
In an optional embodiment, the second playing module 1240 is configured to obtain, in response to the second content screen-throwing request, a reference duration between a screen-throwing time of the first device and a screen-throwing time of the second device; and in response to the reference time length being less than a first threshold value, playing the first content in the first display area, and playing the second content in the second display area.
In an optional embodiment, the second playing module 1240 is configured to obtain, in response to the second content screen-throwing request, historical screen-throwing information of the second device, where the historical screen-throwing information includes at least one of a screen-throwing number or a screen-throwing accumulated duration of a screen of the second device that is successful in throwing a screen to the device with a display screen; in response to the number of screen shots being greater than or equal to a second threshold, playing the first content in the first display area and the second content in the second display area; or, in response to the cumulative length of the screen being greater than or equal to a third threshold, playing the first content in the first display area and the second content in the second display area.
In an optional embodiment, the second playing module 1240 is configured to obtain, in response to the second content screen-drop request, first image information in the first content according to a preset policy, and obtain second image information in the second content according to the preset policy; and playing the first content in the first display area and the second content in the second display area in response to the similarity between the first image information and the second image information being greater than or equal to a fourth threshold.
In an optional embodiment, the second playing module 1240 is configured to obtain, in response to the second content screen-drop request, a first content identifier corresponding to the first content and a second content identifier corresponding to the second content; and in response to the first content identification and the second content identification matching, playing the first content in the first display area and playing the second content in the second display area.
In an alternative embodiment, the device further comprises a button display module for displaying a split screen button in response to the second content screen projection request; the second playing module 1240 is configured to enter the split-screen mode in response to the split-screen button being triggered, where the display screen of the device with a display screen includes at least a first display area and a second display area.
In an optional embodiment, the apparatus further includes an anomaly recording module and a highlighting module, where the anomaly recording module is configured to record, in response to the first content and/or the second content including an anomaly image, the anomaly image and a time at which the anomaly image is displayed, where an application to which the anomaly image belongs does not belong to a whitelist application list; the highlighting module is used for highlighting the content comprising the abnormal image, and the highlighting mode comprises at least one of highlighting, displaying a preset mark or displaying preset text.
In an alternative embodiment, the apparatus further comprises a device locking module for sending a locking signal to a device in which the abnormal image appears, the locking signal being used to instruct the device in which the abnormal image appears to stop receiving the input signal.
In an alternative embodiment, the apparatus further includes a fourth receiving module, a local amplifying module, and a local display module, where the fourth receiving module is configured to receive local area information, where the local area information is used to indicate information of one local area in the first device, or information of one local area in the second device; the local amplifying module is used for responding to the local area information and amplifying the area corresponding to the local area information by k times, wherein k is larger than 1; the local display module is configured to display, in a display area corresponding to the local area information, an area corresponding to the local area information that is amplified k times, where when the local area information is used to indicate one local area in the first content, the display area corresponding to the local area information is a first display area; when the local area information is used for indicating one local area in the second content, the display area corresponding to the local area information is a second display area.
In an optional embodiment, the apparatus further includes a first recognition module, a second recognition module, and an abutting display module, where the first recognition module is configured to perform image recognition on the first content to obtain a first text that matches a preset keyword, where the preset keyword is a keyword set in the device with a display screen; the second recognition module is used for carrying out image recognition on the second content to obtain a second text matched with the preset keyword; and the contrast display module is used for carrying out contrast display on the first text and the second text.
In an alternative embodiment, the apparatus further comprises a fifth receiving module, a first processing module, a sixth receiving module, a second processing module, and a third processing module; the fifth receiving module is configured to receive a first commodity extraction command sent by the first device, where the first commodity extraction command is used to extract a commodity specified in the first content; the first processing module is configured to respond to the first commodity extraction command and extract and store the corresponding first commodity; the sixth receiving module is configured to receive a second commodity extraction command sent by the second device, where the second commodity extraction command is used to extract a commodity specified in the second content; the second processing module is configured to respond to the second commodity extraction command and extract and store the corresponding second commodity; the third processing module is configured to receive a summarizing instruction and send the stored first commodity and second commodity to the device that sends the summarizing instruction, where the device that sends the summarizing instruction is the first device or the second device.
In an optional embodiment, the first commodity and the second commodity involved in the device are commodities belonging to the same shopping application, and the summarized instruction includes an identifier of the shopping application corresponding to the first commodity and the second commodity together.
In summary, in the cross-device content screen projection apparatus provided in this embodiment, the device with a display screen can sequentially receive the content screen-projection requests sent by the first device and the second device, continue to play the first content of the first device on the device with the display screen, and continue to play the second content of the second device on the device with the display screen. The first device and the second device obtain the corresponding character strings from the radio frequency tag that is bound to the device with the display screen, and when a character string matches the built-in character string template, the device with the display screen plays the corresponding content. This improves the ability of the device with the display screen to display the contents of several devices at the same time with few operations, and improves the efficiency of multi-device screen projection.
Referring to fig. 13, fig. 13 is a block diagram of a device with a display screen according to an exemplary embodiment of the present application. As shown in fig. 13, the device with a display screen includes a processor 1320, a memory 1340, a communication component 1360, and a display screen 1380, where at least one instruction is stored in the memory 1340, and the instruction is loaded and executed by the processor 1320 to implement the cross-device content screen projection method according to the various method embodiments of the present application.
In this application, the device 1300 having a display screen is an electronic device having a display function. The device 1300 with a display screen receives a first content screen-throwing request sent by a first device, wherein the first content screen-throwing request is used for continuing to play first content in the device with the display screen, and the first content is played content in the first device; responding to the matching of a first character string in the first content screen throwing request and a character string template, and playing the first content, wherein the first character string is information read by the first device from a screen throwing label, and the screen throwing label has binding relation with the device with a display screen; receiving a second content screen projection request sent by a second device, wherein the second content screen projection request is used for continuing to play second content in the device with the display screen, and the second content is the content played in the second device; and responding to matching of a second character string in the second content screen-throwing request with a character string template, and simultaneously playing the first content and the second content, wherein the second character string is information read from the screen-throwing label by the second device.
Processor 1320 may include one or more processing cores. Processor 1320 uses various interfaces and lines to connect the various parts of the overall device 1300 having a display screen, and performs the various functions of the device 1300 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in memory 1340 and by invoking data stored in memory 1340. Alternatively, the processor 1320 may be implemented in at least one hardware form of digital signal processing (Digital Signal Processing, DSP), field programmable gate array (Field-Programmable Gate Array, FPGA), or programmable logic array (Programmable Logic Array, PLA). Processor 1320 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and so on; the GPU is responsible for rendering and drawing the content to be displayed on the display screen; the modem handles wireless communications. It will be appreciated that the modem may also be implemented on a separate chip rather than being integrated into processor 1320.
Memory 1340 may include random access Memory (Random Access Memory, RAM) or Read-Only Memory (ROM). Optionally, the memory 1340 includes a non-transitory computer-readable medium (non-transitory computer-readable storage medium). Memory 1340 may be used to store instructions, programs, code, sets of codes, or instruction sets. The memory 1340 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for at least one function (e.g., a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described below, etc.; the storage data area may store data and the like referred to in the following respective method embodiments.
The communication component 1360 is configured to establish communication connections with other devices through specified communication protocols, including WiFi, Bluetooth, ZigBee, and other communication protocols; the communication protocol is not limited in this embodiment of the present application. The communication component 1360 of the device 1300 with a display screen can be used to establish communication connections with multiple devices simultaneously.
The display 1380 is used to display images rendered by the processor. Alternatively, when the device having a display screen is a large screen display device such as a television, the display screen 1380 may be a large-sized television screen. The television screen may be 30, 32, 34, 40, 42, 48, 55, 60, 65, or 100 inches in size. In other possible implementations, if the device with a display screen is a computer display or the like, display screen 1380 may be implemented in 18 inches, 20 inches, 22 inches, 24 inches, 32 inches, or the like.
Embodiments of the present application also provide a computer readable medium storing at least one instruction that is loaded and executed by the processor to implement the cross-device content screen projection method described in the above embodiments.
It should be noted that, when the cross-device content screen projection apparatus provided in the above embodiment executes the cross-device content screen projection method, the division into the above functional modules is only an example. In practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the cross-device content screen projection apparatus provided in the above embodiment and the embodiments of the cross-device content screen projection method belong to the same concept; the detailed implementation process is described in the method embodiments and is not repeated here.
The above embodiment numbers of the present application are for description only and do not represent the advantages or disadvantages of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is merely illustrative of the possible embodiments of the present application and is not intended to limit the present application, but any modifications, equivalents, improvements, etc. that fall within the spirit and principles of the present application are intended to be included within the scope of the present application.

Claims (18)

1. A cross-device content screen projection method, for use in a device having a display screen, the method comprising:
receiving a first content screen projection request sent by a first device, wherein the first content screen projection request is a request sent by the first device to the device with the display screen after a preset interactive operation is performed;
playing first content in response to the first content screen projection request, wherein the first content is content played in the first device;
receiving a second content screen projection request sent by a second device, wherein the second content screen projection request is a request sent by the second device to the device with the display screen after the preset interactive operation is performed;
acquiring split-screen information in response to the second content screen projection request, wherein the split-screen information is information for determining whether to enter a split-screen mode, and comprises at least one of a screen projection time, historical screen projection information, image information, and a content identifier;
entering the split-screen mode in response to the split-screen information meeting a preset condition, wherein in the split-screen mode the display screen of the device with the display screen comprises at least a first display area and a second display area;
playing the first content in the first display area, and playing second content in the second display area, wherein the second content is content played in the second device;
performing image recognition on the first content to obtain a first text matched with a preset keyword, wherein the preset keyword is a keyword set in the device with the display screen;
performing image recognition on the second content to obtain a second text matched with the preset keyword;
and comparing and displaying the first text and the second text.
2. The method according to claim 1, wherein
the preset interactive operation comprises scanning a screen projection label, wherein the screen projection label is an electronic label corresponding to the device with the display screen;
or,
the preset interactive operation comprises capturing a preset gesture;
or,
the preset interactive operation comprises capturing a preset action.
3. The method according to claim 1 or 2, further comprising, before receiving the first content screen projection request sent by the first device:
receiving a split-screen instruction;
entering the split-screen mode in response to the split-screen instruction, wherein in the split-screen mode the display screen of the device with the display screen comprises at least a first display area and a second display area;
wherein the first display area is used for displaying the first content, and the second display area is used for displaying the second content.
4. The method according to claim 1 or 2, wherein the acquiring split-screen information in response to the second content screen projection request comprises:
acquiring, in response to the second content screen projection request, a reference duration between the screen projection time of the first device and the screen projection time of the second device;
and the entering the split-screen mode in response to the split-screen information meeting the preset condition comprises:
entering the split-screen mode in response to the reference duration being less than a first threshold.
5. The method according to claim 1 or 2, wherein the acquiring split-screen information in response to the second content screen projection request comprises:
acquiring, in response to the second content screen projection request, historical screen projection information of the second device, wherein the historical screen projection information comprises at least one of a number of screen projections or an accumulated screen projection duration of successful screen projections from the second device to the first device;
and the entering the split-screen mode in response to the split-screen information meeting the preset condition comprises:
entering the split-screen mode in response to the number of screen projections being greater than or equal to a second threshold;
or,
entering the split-screen mode in response to the accumulated screen projection duration being greater than or equal to a third threshold.
6. The method according to claim 1 or 2, wherein the acquiring split-screen information in response to the second content screen projection request comprises:
acquiring, in response to the second content screen projection request, first image information in the first content and second image information in the second content;
and the entering the split-screen mode in response to the split-screen information meeting the preset condition comprises:
entering the split-screen mode in response to a similarity between the first image information and the second image information being greater than or equal to a fourth threshold.
7. The method according to claim 4, wherein the acquiring split-screen information in response to the second content screen projection request comprises:
acquiring, in response to the second content screen projection request, a first content identifier corresponding to the first content and a second content identifier corresponding to the second content;
and the entering the split-screen mode in response to the split-screen information meeting the preset condition comprises:
entering the split-screen mode in response to the first content identifier matching the second content identifier.
8. The method according to claim 1 or 2, wherein after the receiving the second content screen projection request sent by the second device, the method further comprises:
displaying a split-screen button in response to the second content screen projection request;
and entering the split-screen mode in response to the split-screen button being triggered, wherein in the split-screen mode the display screen of the device with the display screen comprises at least a first display area and a second display area.
9. The method according to claim 1 or 2, further comprising:
recording, in response to an abnormal image appearing in the first content and/or the second content, the abnormal image and a display time of the abnormal image, wherein an application to which the abnormal image belongs is not in a whitelist of applications;
and displaying the content comprising the abnormal image in a prominent manner, wherein the prominent display manner comprises at least one of highlighting, displaying a preset mark, or displaying a preset text.
10. The method according to claim 9, further comprising:
sending a locking signal to the device to which the abnormal image belongs, wherein the locking signal is used to instruct the device to which the abnormal image belongs to stop receiving an input signal.
11. The method according to claim 1 or 2, wherein after the playing the first content in the first display area and the playing the second content in the second display area, the method further comprises:
receiving local area information, wherein the local area information is used to indicate a local area in the first content or a local area in the second content;
magnifying, in response to the local area information, the area indicated by the local area information by k times, wherein k is greater than 1;
displaying, in the display area corresponding to the local area information, the area indicated by the local area information after being magnified by k times;
wherein, when the local area information indicates a local area in the first content, the display area corresponding to the local area information is the first display area; and when the local area information indicates a local area in the second content, the display area corresponding to the local area information is the second display area.
12. The method according to claim 1, wherein the method further comprises:
receiving a first commodity extraction command sent by the first device, wherein the first commodity extraction command is used to extract a commodity specified in the first content;
extracting and storing a corresponding first commodity in response to the first commodity extraction command;
receiving a second commodity extraction command sent by the second device, wherein the second commodity extraction command is used to extract a commodity specified in the second content;
extracting and storing a corresponding second commodity in response to the second commodity extraction command;
and receiving a summarizing instruction, and sending the stored first commodity and second commodity to the device that sent the summarizing instruction, wherein the device that sent the summarizing instruction is the first device or the second device.
13. The method according to claim 12, wherein the first commodity and the second commodity belong to a same shopping application, and the summarizing instruction comprises an identifier of the shopping application to which both the first commodity and the second commodity correspond.
14. A cross-device content screen projection apparatus, for use in a device having a display screen, the apparatus comprising:
a first receiving module, configured to receive a first content screen projection request sent by a first device, wherein the first content screen projection request is a request sent by the first device to the device with the display screen after a preset interactive operation is performed;
a first playing module, configured to play first content in response to the first content screen projection request, wherein the first content is content played in the first device;
a second receiving module, configured to receive a second content screen projection request sent by a second device, wherein the second content screen projection request is a request sent by the second device to the device with the display screen after the preset interactive operation is performed;
a second playing module, configured to acquire split-screen information in response to the second content screen projection request, wherein the split-screen information is information for determining whether to enter a split-screen mode, and comprises at least one of a screen projection time, historical screen projection information, image information, and a content identifier; enter the split-screen mode in response to the split-screen information meeting a preset condition, wherein in the split-screen mode the display screen of the device with the display screen comprises at least a first display area and a second display area; and play the first content in the first display area and play second content in the second display area, wherein the second content is content played in the second device;
an image recognition module, configured to perform image recognition on the first content to obtain a first text matched with a preset keyword, wherein the preset keyword is a keyword set in the device with the display screen;
the image recognition module being further configured to perform image recognition on the second content to obtain a second text matched with the preset keyword;
and a comparison display module, configured to compare and display the first text and the second text.
15. An electronic device, comprising a processor, a memory coupled to the processor, and program instructions stored on the memory, wherein the processor, when executing the program instructions, implements the cross-device content screen projection method according to any one of claims 1 to 13.
16. A computer-readable storage medium having program instructions stored therein, wherein the program instructions, when executed by a processor, implement the cross-device content screen projection method according to any one of claims 1 to 13.
17. A cross-device content screen projection system, comprising a device with a display screen, a first device, at least one second device, and a remote control device, wherein the remote control device comprises a screen projection label and is used to remotely control the device with the display screen;
the first device is configured to scan information in the screen projection label;
the first device is configured to send a first content screen projection request to the device with the display screen after scanning the information in the screen projection label;
the device with the display screen is configured to receive the first content screen projection request;
the device with the display screen is configured to play first content in response to the first content screen projection request, wherein the first content is content played in the first device;
the second device is configured to scan the information in the screen projection label;
the second device is configured to send a second content screen projection request to the device with the display screen after scanning the information in the screen projection label;
the device with the display screen is configured to receive the second content screen projection request;
the device with the display screen is configured to acquire split-screen information in response to the second content screen projection request, wherein the split-screen information is information for determining whether to enter a split-screen mode, and comprises at least one of a screen projection time, historical screen projection information, image information, and a content identifier;
the device with the display screen is configured to enter the split-screen mode in response to the split-screen information meeting a preset condition, wherein in the split-screen mode the display screen of the device with the display screen comprises at least a first display area and a second display area;
the device with the display screen is configured to play the first content in the first display area and play second content in the second display area, wherein the second content is content played in the second device;
the device with the display screen is configured to perform image recognition on the first content to obtain a first text matched with a preset keyword, wherein the preset keyword is a keyword set in the device with the display screen;
the device with the display screen is configured to perform image recognition on the second content to obtain a second text matched with the preset keyword;
and the device with the display screen is configured to compare and display the first text and the second text.
18. A cross-device content screen projection system, the system comprising a device with a display screen, a first device comprising a first image acquisition component, and at least one second device comprising a second image acquisition component;
the first device is configured to capture a preset gesture or a preset action through the first image acquisition component;
the first device is configured to send a first content screen projection request to the device with the display screen after capturing the preset gesture or the preset action;
the device with the display screen is configured to receive the first content screen projection request;
the device with the display screen is configured to play first content in response to the first content screen projection request, wherein the first content is content played in the first device;
the second device is configured to capture the preset gesture or the preset action through the second image acquisition component;
the second device is configured to send a second content screen projection request to the device with the display screen after capturing the preset gesture or the preset action;
the device with the display screen is configured to receive the second content screen projection request;
the device with the display screen is configured to acquire split-screen information in response to the second content screen projection request, wherein the split-screen information is information for determining whether to enter a split-screen mode, and comprises at least one of a screen projection time, historical screen projection information, image information, and a content identifier;
the device with the display screen is configured to enter the split-screen mode in response to the split-screen information meeting a preset condition, wherein in the split-screen mode the display screen of the device with the display screen comprises at least a first display area and a second display area;
and the device with the display screen is configured to play the first content in the first display area and play second content in the second display area, wherein the second content is content played in the second device.
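For illustration only, and not as a limitation taken from the claims, the split-screen entry conditions recited in claims 4 to 7 above can be summarized in the following Python sketch; the threshold values, the field names, and the choice to combine the alternative conditions with a single OR are assumptions made for readability, since each condition is recited in a separate dependent claim.

```python
from dataclasses import dataclass

@dataclass
class SplitScreenInfo:
    reference_duration: float    # seconds between the two projection times (claim 4)
    projection_count: int        # historical successful projections (claim 5)
    accumulated_duration: float  # accumulated projection duration, seconds (claim 5)
    image_similarity: float      # similarity of first/second image information (claim 6)
    first_content_id: str        # content identifier of the first content (claim 7)
    second_content_id: str       # content identifier of the second content (claim 7)

def should_enter_split_screen(info: SplitScreenInfo,
                              first_threshold: float = 30.0,
                              second_threshold: int = 3,
                              third_threshold: float = 600.0,
                              fourth_threshold: float = 0.8) -> bool:
    # In this sketch, meeting any one of the conditions is treated as
    # sufficient to enter the split-screen mode.
    return (info.reference_duration < first_threshold
            or info.projection_count >= second_threshold
            or info.accumulated_duration >= third_threshold
            or info.image_similarity >= fourth_threshold
            or info.first_content_id == info.second_content_id)

# Usage example
info = SplitScreenInfo(reference_duration=12.0, projection_count=0,
                       accumulated_duration=0.0, image_similarity=0.2,
                       first_content_id="movie-001", second_content_id="movie-002")
print(should_enter_split_screen(info))  # True: the reference duration is under the first threshold
```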
CN202011311456.1A 2020-11-20 2020-11-20 Cross-device content screen projection method, device, equipment and storage medium Active CN112306442B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011311456.1A CN112306442B (en) 2020-11-20 2020-11-20 Cross-device content screen projection method, device, equipment and storage medium
PCT/CN2021/118823 WO2022105403A1 (en) 2020-11-20 2021-09-16 Cross-device content projection method and apparatus, and devices and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011311456.1A CN112306442B (en) 2020-11-20 2020-11-20 Cross-device content screen projection method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112306442A CN112306442A (en) 2021-02-02
CN112306442B true CN112306442B (en) 2023-05-12

Family

ID=74335309

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011311456.1A Active CN112306442B (en) 2020-11-20 2020-11-20 Cross-device content screen projection method, device, equipment and storage medium

Country Status (2)

Country Link
CN (1) CN112306442B (en)
WO (1) WO2022105403A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112306442B (en) * 2020-11-20 2023-05-12 Oppo广东移动通信有限公司 Cross-device content screen projection method, device, equipment and storage medium
CN113190196B (en) * 2021-04-27 2023-09-05 北京京东振世信息技术有限公司 Multi-device linkage realization method and device, medium and electronic device
CN113010135B (en) * 2021-04-29 2024-03-12 深圳Tcl新技术有限公司 Data processing method and device, display terminal and storage medium
CN113382293A (en) * 2021-06-11 2021-09-10 北京字节跳动网络技术有限公司 Content display method, device, equipment and computer readable storage medium
CN113691850A (en) * 2021-08-25 2021-11-23 深圳康佳电子科技有限公司 Screen projection control method and device, intelligent terminal and computer readable storage medium
CN116684456B (en) * 2023-08-03 2023-10-03 云账户技术(天津)有限公司 Large-screen visual deployment method, device, equipment and medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018203595A1 (en) * 2017-05-04 2018-11-08 임성현 Projector capable of touch interaction
CN111061445A (en) * 2019-04-26 2020-04-24 华为技术有限公司 Screen projection method and computing equipment

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103856809A (en) * 2012-12-03 2014-06-11 中国移动通信集团公司 Method, system and terminal equipment for multipoint at the same screen
WO2015075767A1 (en) * 2013-11-19 2015-05-28 日立マクセル株式会社 Projection-type video display device
US9992441B2 (en) * 2015-01-05 2018-06-05 Lattice Semiconductor Corporation Displaying multiple videos on sink device using display information of source device
CN105487796A (en) * 2015-11-25 2016-04-13 努比亚技术有限公司 Sub-screen display method and terminal
US10536739B2 (en) * 2017-01-04 2020-01-14 Samsung Electronics Co., Ltd. Display apparatus and control method thereof
CN109783181B (en) * 2019-01-31 2019-12-20 掌阅科技股份有限公司 Screen adaptive display method, electronic device and computer storage medium
CN110381345B (en) * 2019-07-05 2020-12-15 华为技术有限公司 Screen projection display method and electronic equipment
CN111131866B (en) * 2019-11-25 2021-06-15 华为技术有限公司 Screen-projecting audio and video playing method and electronic equipment
CN111913628B (en) * 2020-06-22 2022-05-06 维沃移动通信有限公司 Sharing method and device and electronic equipment
CN112306442B (en) * 2020-11-20 2023-05-12 Oppo广东移动通信有限公司 Cross-device content screen projection method, device, equipment and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018203595A1 (en) * 2017-05-04 2018-11-08 임성현 Projector capable of touch interaction
CN111061445A (en) * 2019-04-26 2020-04-24 华为技术有限公司 Screen projection method and computing equipment

Also Published As

Publication number Publication date
WO2022105403A1 (en) 2022-05-27
CN112306442A (en) 2021-02-02

Similar Documents

Publication Publication Date Title
CN112306442B (en) Cross-device content screen projection method, device, equipment and storage medium
US20210382562A1 (en) Sub-display designation for remote content source device
TWI434242B (en) Interactive information system, interactive information method, and computer readable medium thereof
CN114302185B (en) Display device and information association method
CN111897507B (en) Screen projection method and device, second terminal and storage medium
US20030095154A1 (en) Method and apparatus for a gesture-based user interface
US11809479B2 (en) Content push method and apparatus, and device
US11706485B2 (en) Display device and content recommendation method
US20130265448A1 (en) Analyzing Human Gestural Commands
RU2635238C1 (en) Method, device and terminal for playing music on basis of photoalbum with people's photographs
US11715444B2 (en) Notification handling in a user interface
JP2013143141A (en) Display apparatus, remote control apparatus, and searching methods thereof
CN106341606A (en) Device control method and mobile terminal
WO2021126395A1 (en) Sub-display input areas and hidden inputs
US20150331598A1 (en) Display device and operating method thereof
US20210185520A1 (en) Sub-display pairing
JP2013157984A (en) Method for providing ui and video receiving apparatus using the same
WO2019119643A1 (en) Interaction terminal and method for mobile live broadcast, and computer-readable storage medium
WO2022078172A1 (en) Display device and content display method
CN113051435B (en) Server and medium resource dotting method
CN111724638B (en) AR interactive learning method and electronic equipment
CN114390329B (en) Display device and image recognition method
TWI595406B (en) Display apparatus and method for delivering message thereof
US11610044B2 (en) Dynamic management of content in an electronic presentation
CN112462939A (en) Interactive projection method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant