WO2022166395A1 - Screen combination method and apparatus - Google Patents

Screen combination method and apparatus

Info

Publication number
WO2022166395A1
Authority
WO
WIPO (PCT)
Prior art keywords
screen
image
host
orientation
information
Prior art date
Application number
PCT/CN2021/136884
Other languages
English (en)
French (fr)
Inventor
Xie Zhiqiang (谢志强)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to EP21924388.8A (published as EP4273691A1)
Priority to US18/264,517 (published as US20240045638A1)
Publication of WO2022166395A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1423Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F3/1446Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display display composed of modules, e.g. video walls
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757Matching configurations of points or features
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2300/00Aspects of the constitution of display devices
    • G09G2300/02Composition of display devices
    • G09G2300/026Video wall, i.e. juxtaposition of a plurality of screens to create a display screen of bigger dimensions
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2356/00Detection of the display position w.r.t. other display screens

Definitions

  • the present application relates to the field of terminals, and in particular, to a method and device for combining screens.
  • the position and orientation information of each screen needs to be determined.
  • the orientation and position information of the screens can be input to the control host manually by the user.
  • the positions of the monitors can be manually marked to complete the setting of the screen position and orientation relationship of the multiple monitors.
  • the orientation combination of multiple displays can be completed by dragging and dropping the marked displays manually (via a mouse or a touch screen).
  • the embodiments of the present application provide a screen combination method and device, which can automatically complete screen splicing and improve user experience.
  • an embodiment of the present application provides a method for combining screens, which is applied to a screen splicing system.
  • the screen splicing system includes at least two screens and a host, the at least two screens including a first screen and a second screen. The method includes: the first screen and the second screen form a first screen group and are communicatively connected; the host sends a first instruction to the first screen and a second instruction to the second screen; the first screen captures a first image according to the first instruction; the second screen captures a second image according to the second instruction; and orientation information of the first screen and the second screen is determined according to the first image and the second image.
  • In this way, the relative orientation relationship of the two devices can be identified from the images (photos) taken by the devices (the first screen and the second screen), without the user having to set it manually, which improves the user experience.
  • the host is integrated into the first screen or the second screen; or the host is independent of the first screen or the second screen.
  • the first screen or the second screen may be a TV
  • the host may be a device such as a set-top box or a router.
  • the host can be regarded as a processing module in the first screen or the second screen.
  • determining the orientation information of the first screen and the second screen according to the first image and the second image includes: the first screen sends the first image to the second screen; the second screen sends the second image to the first screen; the first screen and the second screen each determine the orientation information of the first screen and the second screen according to the first image and the second image; the first screen sends the orientation information it determined to the host, and the second screen sends the orientation information it determined to the host; the host then determines the orientation information of the first screen and the second screen according to the orientation information determined by the first screen and the orientation information determined by the second screen.
  • In this case, some of the orientation information between devices is redundant; the redundant information may be left unused, or used to cross-check the identification result.
  • determining the orientation information of the first screen and the second screen according to the first image and the second image includes: the first screen sends the first image to the host; the second screen sends the second image to the host; and the host determines the orientation information of the first screen and the second screen according to the first image and the second image. That is, the host alone identifies the orientation relationship of each device in the screen group, and the other devices (e.g., the second screen) do not need to perform orientation identification, which saves power on those devices.
  • the method further includes: the first screen and the second screen send a first short-range signal to each other at a preset frequency, and each determines the distance between the first screen and the second screen according to the received signal strength indicator (RSSI) of the first short-range signal. When the distance between the first screen and the second screen is less than or equal to the maximum combined radius corresponding to the first screen and the second screen, the first screen and the second screen form a first screen group. The maximum combined radius corresponding to the first screen and the second screen is determined from the sizes of the two screens and the positions of their antennas. In this way, the first screen and the second screen can decide from the first short-range signal whether to perform screen combination; that is, the combination is triggered automatically, and the user does not need to perform complicated operations, which improves the user experience.
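The summary does not specify how RSSI is converted to distance. A minimal sketch, assuming the common log-distance path-loss model (the tx_power_dbm and path_loss_exponent values below are illustrative, not from the patent):

```python
def rssi_to_distance(rssi_dbm: float, tx_power_dbm: float = -40.0,
                     path_loss_exponent: float = 2.5) -> float:
    """Estimate distance in meters from RSSI with the log-distance model.

    tx_power_dbm is the expected RSSI at 1 m; the exponent is roughly 2 in
    free space and larger indoors.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def should_form_group(rssi_dbm: float, max_combined_radius_m: float) -> bool:
    """Form a screen group when the estimated distance is within the maximum
    combined radius derived from screen sizes and antenna positions."""
    return rssi_to_distance(rssi_dbm) <= max_combined_radius_m
```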
  • the method further includes: displaying first prompt information on the first screen and/or the second screen, the first prompt information being used to prompt the user that a nearby device has been detected and to ask whether to perform screen splicing; the first screen and/or the second screen then obtains the user's instruction confirming the screen splicing. In this way, whether to group the screens is decided by the user's operation, which avoids erroneously triggering screen grouping automatically.
  • determining the orientation information of the first screen and the second screen according to the first image and the second image includes: performing image matching on the first image and the second image according to an image matching algorithm to determine the overlapping area of the first image and the second image; and determining the orientation of the first screen relative to the second screen according to the position of the overlapping area in the first image and in the second image.
  • determining the orientation of the first screen relative to the second screen according to the position of the overlapping area includes: if the overlapping area is located in the lower half of the first image and in the upper half of the second image, it is determined that the first screen is located above the second screen; if the overlapping area is located in the lower left corner of the first image and in the upper right corner of the second image, it is determined that the first screen is located at the upper right of the second screen; if the overlapping area is located in the left half of the first image and in the right half of the second image, it is determined that the first screen is located to the right of the second screen; if the overlapping area is located in the upper left corner of the first image and in the lower right corner of the second image, it is determined that the first screen is located at the lower right of the second screen; if the overlapping area is located in the upper half of the first image and in the lower half of the second image, it is determined that the first screen is located below the second screen; if the overlapping area is located in the upper right corner of the first image and in the lower left corner of the second image, it is determined that the first screen is located at the lower left of the second screen; if the overlapping area is located in the right half of the first image and in the left half of the second image, it is determined that the first screen is located to the left of the second screen; and if the overlapping area is located in the lower right corner of the first image and in the upper left corner of the second image, it is determined that the first screen is located at the upper left of the second screen.
  • the image matching algorithm includes at least one of the scale-invariant feature transform (SIFT) algorithm, the speeded-up robust features (SURF) algorithm, or a fast nearest-neighbor search algorithm.
  • the image matching algorithm may be other algorithms, which are not limited in this application.
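As a rough illustration of this matching step, the sketch below uses OpenCV's SIFT detector with a FLANN-based (fast nearest-neighbor) matcher to locate the overlapping region, then applies the half-of-image rules described above. The 0.7 ratio-test threshold and the minimum match count are conventional choices, not values taken from the patent:

```python
import cv2
import numpy as np

def relative_orientation(img1_path: str, img2_path: str) -> str:
    """Return where the first screen sits relative to the second screen,
    judged from the position of the matched (overlapping) region."""
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # FLANN (kd-tree) matching with Lowe's ratio test.
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    matches = flann.knnMatch(des1, des2, k=2)
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]
    if len(good) < 10:
        return "no overlap detected"

    # Normalized centroid of the overlapping region in each image.
    c1 = np.mean([kp1[m.queryIdx].pt for m in good], axis=0) / (img1.shape[1], img1.shape[0])
    c2 = np.mean([kp2[m.trainIdx].pt for m in good], axis=0) / (img2.shape[1], img2.shape[0])

    # Overlap in the lower half of image 1 and the upper half of image 2
    # means screen 1 is above screen 2; the left/right rule is analogous.
    vertical = "above" if c1[1] > 0.5 > c2[1] else "below" if c1[1] < 0.5 < c2[1] else ""
    horizontal = "right" if c1[0] < 0.5 < c2[0] else "left" if c1[0] > 0.5 > c2[0] else ""
    return (vertical + " " + horizontal).strip() or "same position"
```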
  • determining the orientation information of the first screen and the second screen according to the first image and the second image includes: if it is determined that the first image and the second image both include a target object, determining the orientation of the first screen relative to the second screen according to the position of the target object in the first image and in the second image; the target object includes any one of a human face, a human action, or an item of furniture.
  • determining the orientation of the first screen relative to the second screen according to the position of the target object in the first image and the second image includes: if the target object is located in the lower half of the first image and in the upper half of the second image, it is determined that the first screen is located above the second screen; if the target object is located in the lower left corner of the first image and in the upper right corner of the second image, it is determined that the first screen is located at the upper right of the second screen; if the target object is located in the left half of the first image and in the right half of the second image, it is determined that the first screen is located to the right of the second screen; if the target object is located in the upper left corner of the first image and in the lower right corner of the second image, it is determined that the first screen is located at the lower right of the second screen; and so on, mirroring the overlap-based rules above for the remaining positions.
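For the face variant of the target-object rule, a hypothetical sketch using OpenCV's stock Haar frontal-face cascade (the description only requires that the same target object be located in both images; the detector choice here is an assumption):

```python
import cv2

def face_center(image_path: str):
    """Normalized (x, y) center of the first detected face, or None."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return ((x + w / 2) / img.shape[1], (y + h / 2) / img.shape[0])

# The face position feeds the same rules as the overlap centroid: a face in
# the lower half of the first image and the upper half of the second image
# implies the first screen is above the second screen.
```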
  • the method further includes: the host sends layout information to the first screen and the second screen, the layout information including at least one combination mode; in response to the user selecting a combination mode from the at least one combination mode, the host sends operation information to the first screen and the second screen, and the first screen and/or the second screen, according to the operation information, instructs the user to perform a first gesture or action at a first position and a second gesture or action at a second position. Determining the orientation information of the first screen and the second screen according to the first image and the second image then includes: if it is determined that the area containing the first gesture or action in the first image is greater than or equal to a preset threshold, determining that the first screen is located at the first position; and if it is determined that the area containing the second gesture or action in the second image is greater than or equal to the preset threshold, determining that the second screen is located at the second position.
  • the host is integrated into the first screen or the second screen, the first screen and the second screen form a first screen group, and the method further includes: the first screen or the second screen scores the resource status of the first screen and the second screen, where the resource status includes at least one of central processing unit (CPU) processing capability, read-only memory (ROM) storage capacity, or random access memory (RAM) storage capacity; if the score of the first screen is higher than that of the second screen, the host is integrated into the first screen, and the first screen can be considered the master device; if the score of the second screen is higher than that of the first screen, the host is integrated into the second screen, and the second screen can be considered the master device.
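A toy version of this master-device election, with illustrative weights (the description says only that CPU, ROM, and RAM status are scored, not how the scores are combined):

```python
from dataclasses import dataclass

@dataclass
class ScreenResources:
    cpu_score: float    # e.g., a normalized CPU benchmark result
    rom_free_mb: float  # free flash (ROM) storage
    ram_free_mb: float  # free RAM

def resource_score(r: ScreenResources, weights=(0.5, 0.2, 0.3)) -> float:
    """Weighted sum over the three resource dimensions; weights are assumptions."""
    return (weights[0] * r.cpu_score
            + weights[1] * r.rom_free_mb
            + weights[2] * r.ram_free_mb)

def elect_master(first: ScreenResources, second: ScreenResources) -> str:
    """The higher-scoring screen integrates the host (master device)."""
    return "first" if resource_score(first) >= resource_score(second) else "second"
```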
  • the method further includes: the host determines the display information corresponding to the first screen and to the second screen according to the orientation information of the first screen and the second screen; the host sends the display information corresponding to the first screen to the first screen, and the first screen displays the corresponding picture according to that display information; the host sends the display information corresponding to the second screen to the second screen, and after receiving it, the second screen displays the corresponding picture according to that display information.
  • In this way, the first screen and the second screen display their respective pictures according to the display information determined by the host, which enables the display of a larger combined image.
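If the screens are equally sized and arranged in a regular grid, the per-screen display information could reduce to a crop rectangle of the combined frame. This is an assumption about the layout, since the summary does not define the format of the display information:

```python
def display_region(frame_w: int, frame_h: int, grid_cols: int, grid_rows: int,
                   col: int, row: int) -> tuple[int, int, int, int]:
    """(x, y, w, h) of the combined frame that the screen at grid position
    (col, row) should display, derived from the orientation information."""
    tile_w, tile_h = frame_w // grid_cols, frame_h // grid_rows
    return (col * tile_w, row * tile_h, tile_w, tile_h)

# Two screens side by side splitting a 4K frame:
left = display_region(3840, 2160, 2, 1, col=0, row=0)   # (0, 0, 1920, 2160)
right = display_region(3840, 2160, 2, 1, col=1, row=0)  # (1920, 0, 1920, 2160)
```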
  • the screen splicing system further includes a third screen
  • the method further includes: the first screen and the third screen send a second short-range signal to each other; the second screen and the third screen send a third short-range signal to each other; the distance between the first screen and the third screen is determined according to the RSSI of the second short-range signal; the distance between the second screen and the third screen is determined according to the RSSI of the third short-range signal; when the distance between the first screen and the third screen is less than or equal to the maximum combined radius corresponding to the first screen and the third screen, the first screen, the second screen, and the third screen form a second screen group, where the maximum combined radius corresponding to the first screen and the third screen is determined from the sizes of the first screen and the third screen and the positions of their antennas; or, when the distance between the second screen and the third screen is less than or equal to the maximum combined radius corresponding to the second screen and the third screen, the first screen, the second screen, and the third screen form a second screen group. In this way, whether to perform screen combination between the first screen and the third screen can be determined from the second short-range signal, and whether to perform screen combination between the second screen and the third screen can be determined from the third short-range signal, without requiring complicated operations by the user, which improves the user experience.
  • the method further includes: displaying second prompt information on the first screen and/or the second screen, the second prompt information being used to prompt the user that a newly added device has been detected and to ask whether to perform screen splicing; the first screen and/or the second screen then acquires the user's instruction confirming the screen splicing. In this way, whether to group the screens is decided by the user's operation, which avoids erroneously triggering screen grouping automatically.
  • the method further includes: detecting whether the first screen and/or the second screen meets a first condition; if the first condition is met, the first screen and/or the second screen removes the third screen from the second screen group. That is, the first screen or the second screen can automatically detect that a screen (e.g., the third screen) has been removed and can then prompt the user, so that the user is kept aware of the state of the screen group.
  • the first condition includes: the heartbeat connection between the third screen and the first screen is disconnected, or the heartbeat connection between the third screen and the second screen is disconnected; or the host receives a user operation deleting the third screen; or the distance between the first screen and the third screen is greater than the maximum combined radius corresponding to the first screen and the third screen; or the distance between the second screen and the third screen is greater than the maximum combined radius corresponding to the second screen and the third screen.
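The heartbeat part of the first condition could be tracked with a simple monitor like the sketch below; the timeout value is illustrative, as no interval is given in this excerpt:

```python
import time

HEARTBEAT_TIMEOUT_S = 5.0  # assumed value; the description gives none

class HeartbeatMonitor:
    """Record the last heartbeat per screen and report screens whose
    heartbeat connection should be treated as disconnected."""
    def __init__(self) -> None:
        self.last_seen: dict[str, float] = {}

    def beat(self, screen_id: str) -> None:
        self.last_seen[screen_id] = time.monotonic()

    def disconnected(self) -> list[str]:
        now = time.monotonic()
        return [sid for sid, t in self.last_seen.items()
                if now - t > HEARTBEAT_TIMEOUT_S]
```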
  • the method further includes: the host re-determines display information corresponding to the first screen and the second screen respectively according to the orientation information of the first screen and the second screen. That is, the host can adaptively adjust the display information of the screen group according to the changes of the devices in the screen group.
  • the method further includes: the host sends a third instruction to the third screen, a fourth instruction to the first screen, and a fifth instruction to the second screen; the third screen captures a third image according to the third instruction; the first screen captures a fourth image according to the fourth instruction; the second screen captures a fifth image according to the fifth instruction; the third screen sends the third image to the host; the first screen sends the fourth image to the host; the second screen sends the fifth image to the host; and after receiving the third image, the fourth image, and the fifth image, the host determines the orientation information of the first screen, the second screen, and the third screen according to the three images.
  • That is, when a device is added to the screen group, each device in the screen group can capture an image again, so as to re-determine the relative orientation relationship of the devices in the screen group.
  • the third screen is removed from the second screen group, and the method further includes: the host sends a sixth instruction to the first screen and a seventh instruction to the second screen; the first screen captures a sixth image according to the sixth instruction; the second screen captures a seventh image according to the seventh instruction; the first screen sends the sixth image to the host; the second screen sends the seventh image to the host; and the host determines the orientation information of the first screen and the second screen according to the sixth image and the seventh image. That is, when a device in the screen group is removed, the screen group can be regarded as reorganized, and each remaining device can capture images again so as to re-determine the relative orientation relationships within the group.
  • an embodiment of the present application provides a method for combining screens, which is applied to a screen splicing system.
  • the screen splicing system includes at least two screens and a host, and the at least two screens include a first screen and a second screen.
  • the first screen and the second screen form a first screen group and are communicatively connected. The method includes: the host sends a first instruction to the first screen, the first instruction being used to instruct the first screen to capture a first image; the host sends a second instruction to the second screen, the second instruction being used to instruct the second screen to capture a second image; and the host determines the orientation information of the first screen and the second screen according to the first image and the second image.
  • In this way, the relative orientation relationship of the two devices can be identified from the images (photos) taken by the devices (the first screen and the second screen), without the user having to set it manually, which improves the user experience.
  • the embodiment of the present application can automatically identify the combination intention between devices and start the screen assembly procedure by dynamically monitoring the distance between the devices, without requiring manual setting by the user, which is more intelligent and convenient.
  • the host is integrated into the first screen or the second screen; or the host is independent of the first screen or the second screen.
  • the host determining the orientation information of the first screen and the second screen according to the first image and the second image includes: the host receives the first image from the first screen; the host receives the second image from the second screen; and the host determines the orientation information of the first screen and the second screen according to the first image and the second image.
  • determining the orientation information of the first screen and the second screen according to the first image and the second image includes: the host performs image matching on the first image and the second image according to an image matching algorithm to determine the overlapping area of the first image and the second image; and the orientation of the first screen relative to the second screen is determined according to the position of the overlapping area in the first image and in the second image.
  • determining the orientation of the first screen relative to the second screen according to the position of the overlapping area includes: if the overlapping area is located in the lower half of the first image and in the upper half of the second image, it is determined that the first screen is located above the second screen; if the overlapping area is located in the lower left corner of the first image and in the upper right corner of the second image, it is determined that the first screen is located at the upper right of the second screen; if the overlapping area is located in the left half of the first image and in the right half of the second image, it is determined that the first screen is located to the right of the second screen; if the overlapping area is located in the upper left corner of the first image and in the lower right corner of the second image, it is determined that the first screen is located at the lower right of the second screen; if the overlapping area is located in the upper half of the first image and in the lower half of the second image, it is determined that the first screen is located below the second screen; if the overlapping area is located in the upper right corner of the first image and in the lower left corner of the second image, it is determined that the first screen is located at the lower left of the second screen; if the overlapping area is located in the right half of the first image and in the left half of the second image, it is determined that the first screen is located to the left of the second screen; and if the overlapping area is located in the lower right corner of the first image and in the upper left corner of the second image, it is determined that the first screen is located at the upper left of the second screen.
  • the image matching algorithm includes at least one of the scale-invariant feature transform (SIFT) algorithm, the speeded-up robust features (SURF) algorithm, or a fast nearest-neighbor search algorithm.
  • determining the orientation information of the first screen and the second screen according to the first image and the second image includes: if it is determined that the first image and the second image both include a target object, determining the orientation of the first screen relative to the second screen according to the position of the target object in the first image and in the second image.
  • the method further includes: the host sends layout information to the first screen and the second screen, the layout information including at least one combination mode; in response to the user selecting a combination mode from the at least one combination mode, the host sends operation information to the first screen and the second screen, the operation information being used to instruct the user to perform a first gesture or action at a first position and a second gesture or action at a second position. Determining the orientation information of the first screen and the second screen according to the first image and the second image then includes: if it is determined that the area containing the first gesture or action in the first image is greater than or equal to a preset threshold, determining that the first screen is located at the first position; and if it is determined that the area containing the second gesture or action in the second image is greater than or equal to the preset threshold, determining that the second screen is located at the second position.
  • the method further includes: the host determines the display information corresponding to the first screen and to the second screen according to the orientation information of the first screen and the second screen; the host sends the display information corresponding to the first screen to the first screen, and sends the display information corresponding to the second screen to the second screen.
  • an embodiment of the present application provides an electronic device, which may be a first screen or a second screen. The electronic device includes a wireless communication module, a memory, and one or more processors, the wireless communication module and the memory being coupled with the processors. The memory is used to store computer program code, and the computer program code includes computer instructions; when the computer instructions are executed by the processors, the electronic device performs the method described in the first aspect or the second aspect and any possible implementation thereof.
  • an embodiment of the present application provides a chip system, where the chip system includes one or more interface circuits and one or more processors.
  • the interface circuit and the processor are interconnected by wires.
  • the above-described chip system may be applied to an electronic device (eg, a first screen or a second screen) including a communication module and a memory.
  • the interface circuit is configured to receive signals from the memory and send the received signals to the processor, the signals including computer instructions stored in the memory.
  • when the processor executes the computer instructions, the electronic device can perform the method described in any of the above aspects and any of their possible implementations.
  • an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium includes computer instructions.
  • the computer instructions when executed on an electronic device (eg, a first screen or a second screen), cause the electronic device to perform the method as described in the first aspect and any of its possible implementations.
  • an embodiment of the present application provides a computer program product which, when run on a computer, causes the computer to execute the method described in the first aspect or the second aspect and any possible implementation thereof.
  • an embodiment of the present application provides a screen splicing system, including a first screen, a second screen, and a host.
  • the first screen, the second screen, and the host can perform the method described in the first aspect and any possible implementation thereof.
  • FIG. 1A is a schematic diagram of a display interface of a screen combination in the prior art
  • FIG. 1B is a schematic diagram of a system architecture provided by an embodiment of the present application.
  • FIG. 1C is a schematic diagram of another system architecture provided by an embodiment of the present application.
  • FIG. 2A is a schematic diagram of a hardware structure of an electronic device provided by an embodiment of the application.
  • FIG. 2B is a schematic diagram of a software architecture of an electronic device according to an embodiment of the present application.
  • FIG. 3A is a schematic diagram of connection of a plurality of devices according to an embodiment of the present application.
  • FIG. 3B is a schematic diagram of yet another connection of multiple devices according to an embodiment of the present application.
  • FIG. 3C is a schematic display diagram provided by an embodiment of the present application.
  • FIG. 3D is another schematic display diagram provided by an embodiment of the present application.
  • FIG. 3E is another schematic display diagram provided by an embodiment of the present application.
  • FIG. 3F is another schematic display diagram provided by an embodiment of the present application.
  • FIG. 3G is another schematic display diagram provided by an embodiment of the present application.
  • FIG. 3H is another schematic display diagram provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of signal interaction provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of determining an antenna distance between two devices according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of an overlapping area of images captured by a TV 101 and a TV 102 according to an embodiment of the present application;
  • FIG. 7 is a schematic diagram of another overlapping area of images captured by the TV 101 and the TV 102 according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of the position of a human face in images captured by the TV 101 and the TV 102 according to an embodiment of the present application;
  • FIG. 9 is a schematic diagram of sorting of a device according to an embodiment of the present application.
  • FIG. 10 is another schematic diagram of signal interaction provided by an embodiment of the present application.
  • FIG. 11A is a schematic diagram of sorting of another device according to an embodiment of the present application.
  • FIG. 11B is a schematic diagram of sorting of another device according to an embodiment of the present application.
  • FIG. 12 is another schematic diagram of signal interaction provided by an embodiment of the present application.
  • FIG. 13 is another schematic display diagram provided by an embodiment of the present application.
  • FIG. 14 is another schematic diagram of signal interaction provided by an embodiment of the present application.
  • FIG. 15 is a schematic structural diagram of a chip system provided by an embodiment of the present application.
  • the present application provides a screen combination method.
  • With this method, a screen combination scenario can be detected automatically, the overlapping area of the pictures captured by the cameras can be identified automatically, and the relative orientation of the screens is calculated from the position of that overlapping area, so as to complete the screen combination process and provide users with a simple and intelligent screen splicing experience.
  • FIG. 1B is a schematic structural diagram of a screen splicing system according to an embodiment of the present application.
  • the system may include one or more electronic devices, such as router 100 , television 101 , television 102 , television 103 , and television 104 .
  • the TV 101 , the TV 102 , the TV 103 and the TV 104 can be connected to the same local area network based on the router 100 .
  • the screen splicing system may also include more electronic devices, which is not limited in this application.
  • FIG. 1C is a schematic structural diagram of another screen splicing system provided by an embodiment of the present application.
  • the system may include one or more electronic devices, which may include, for example, television 101 , television 102 , television 103 , and television 104 .
  • the TV 101, the TV 102, the TV 103, and the TV 104 can be connected by a short-range communication technology (e.g., WIFI direct connection technology, Bluetooth technology, etc.).
  • the screen splicing system may also include more electronic devices, which is not limited in this application.
  • the TV 101, the TV 102, the TV 103, or the TV 104 may be the screen 110, and the screen 110 may include: a processor 111, a memory 112, a wireless communication processing module 113, a power switch 114, a wired LAN communication processing module 115, an HDMI communication processing module 116, a universal serial bus (USB) communication processing module 117, a display screen 118, an audio module 119, a speaker 119A, a microphone 119B, and so on.
  • the processor 111 may be used to read and execute computer readable instructions.
  • the processor 111 may mainly include a controller, an arithmetic unit, and a register.
  • the controller is mainly responsible for instruction decoding, and sends out control signals for the operations corresponding to the instructions.
  • the register is mainly responsible for temporarily storing the register operands and intermediate operation results produced during the execution of an instruction.
  • the hardware architecture of the processor 111 may be an application specific integrated circuit (ASIC) architecture, a MIPS architecture, an ARM architecture, an NP architecture, or the like.
  • the processor 111 may be used to parse the signals received by the wireless communication processing module 113 and/or the wired LAN communication processing module 115 .
  • the processor 111 can be used to perform corresponding processing operations according to the analysis result of the signal, such as responding to a data request, or controlling the display of the display screen 118 and/or the output of the audio module 119 according to the control request, and so on.
  • the processor 111 may also be configured to generate signals sent out by the wireless communication processing module 113 and/or the wired LAN communication processing module 115, such as Bluetooth broadcast signals, beacon signals, and the like.
  • Memory 112 is coupled to processor 111 for storing various software programs and/or sets of instructions.
  • memory 112 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
  • the memory 112 may store operating systems, such as embedded operating systems such as uCOS, VxWorks, RTLinux, and the like.
  • the memory 112 may also store communication programs that can be used to communicate with other devices.
  • the wireless communication processing module 113 may include a Bluetooth (BT) communication processing module 113A and a WLAN communication processing module 113B.
  • one or more of the Bluetooth (BT) communication processing module 113A and the WLAN communication processing module 113B can listen to signals transmitted by other devices, such as probe requests, scan signals, etc., and can send response signals , such as probe response, scan response, etc., so that other devices can discover the screen 110, establish a wireless communication connection with other devices, and communicate with other devices through one or more wireless communication technologies in Bluetooth or WLAN.
  • the WLAN communication processing module 113B may include one or more solutions for WLAN communication among Wi-Fi direct, Wi-Fi LAN or Wi-Fi softAP.
  • one or more of the Bluetooth (BT) communication processing module 113A and the WLAN communication processing module 113B may also transmit signals, such as broadcasting Bluetooth signals, beacon signals, so that other devices can discover the screen 110, And establish a wireless communication connection with other devices, and communicate with other devices through one or more wireless communication technologies in Bluetooth or WLAN.
  • the screen 110 can be connected to the Internet through a WLAN wireless communication technology, so as to establish a communication connection with a server on the Internet (eg, a channel identification server, an on-demand resource server, etc.).
  • the wireless communication processing module 113 may also include an infrared communication processing module 113C.
  • the infrared communication processing module 113C can communicate with other devices (eg, remote controllers) through infrared remote control technology.
  • the power switch 114 may be used to control the supply of power to the display 118 from the power source.
  • the wired LAN communication processing module 115 can be used to communicate with other devices in the same LAN through the wired LAN, and can also be used to connect to the WAN through the wired LAN, and can communicate with the devices in the WAN.
  • the HDMI communication processing module 116 can be used to communicate with devices such as a set-top box through an HDMI port.
  • the HDMI communication processing module 116 may receive media content sent by a set-top box through an HDMI port, and so on.
  • the USB communication processing module 117 can be used to communicate with other devices through the USB interface.
  • Display screen 118 may be used to display images, video, and the like.
  • the display screen 118 can be a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED) display, an active-matrix organic light emitting diode (active-matrix organic light emitting diode, AMOLED) Display, flexible light-emitting diode (flexible light-emitting diode, FLED) display, quantum dot light-emitting diode (quantum dot emitting diodes, QLED) display and so on.
  • the audio module 119 can be used to convert digital audio signal to analog audio signal output, and can also be used to convert analog audio input to digital audio signal. Audio module 119 may also be used to encode and decode audio signals. In some embodiments, the audio module 119 may be arranged in the processor 111 , or some functional modules of the audio module 119 may be arranged in the processor 111 . The audio module 119 can transmit audio signals to the wireless communication module 113 through a bus interface (such as a UART interface, etc.), so as to realize the function of playing audio signals through a Bluetooth speaker.
  • the speaker 119A may be used to convert the audio signal transmitted by the audio module 119 into a sound signal.
  • the screen 110 may also include a microphone 119B, also referred to as a "mic", for converting sound signals into electrical signals. When speaking, the user can input a sound signal into the microphone 119B.
  • Camera 120 may be used to capture still images or video.
  • screen 110 may have more or fewer components than those shown in FIG. 2A, may combine two or more components, or may have different component configurations.
  • the various components shown in Figure 2A may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing or application specific integrated circuits.
  • the application package may include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message, etc.
  • the application layer may further include a screen splicing management service, and the screen splicing management service is used to manage screen splicing (screen combination) among multiple devices.
  • the screen splicing management service can be integrated into a system APP or a third-party APP, such as a smart life APP, a smart interconnection APP, a setting application, etc., which is not limited in this application.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include an activity manager, a window manager, a content provider, a view system, a resource manager, a notification manager, etc., which are not limited in this embodiment of the present application.
  • Activity Manager used to manage the life cycle of each application. Applications usually run in the operating system in the form of Activity. For each Activity, there will be a corresponding application record (ActivityRecord) in the activity manager, and this ActivityRecord records the state of the application's Activity. The activity manager can use this ActivityRecord as an identifier to schedule the Activity process of the application.
  • WindowManagerService used to manage the graphical user interface (graphical user interface, GUI) resources used on the screen, which can be used for: obtaining the display screen size, creating and destroying windows, displaying and hiding windows, Layout, focus management, input method and wallpaper management, etc.
  • the system library and kernel layer below the application framework layer can be called the underlying system.
  • the underlying system includes the underlying display system used to provide display services.
  • the underlying display system includes the display driver in the kernel layer and the surface manager in the system library, and so on.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on. View systems can be used to build applications.
  • a display interface can consist of one or more views.
  • the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
  • the resource manager provides various resources for the application, such as localization strings, icons, pictures, layout files, video files and so on.
  • the notification manager enables applications to display notification information in the status bar, which can be used to convey notification-type messages, and can disappear automatically after a brief pause without user interaction.
  • the notification manager is used to notify download completion, message reminders, etc.
  • the notification manager can also display notifications in the status bar at the top of the system in the form of graphs or scroll bar text, such as notifications of applications running in the background, and notifications on the screen in the form of dialog windows. For example, text information is prompted in the status bar, a prompt tone is issued, the terminal vibrates, and the indicator light flashes.
  • Android Runtime includes core libraries and virtual machines.
  • Android runtime is responsible for scheduling and management of the Android system.
  • the core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core libraries of Android.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, safety and exception management, and garbage collection.
  • the system library may include multiple functional modules. For example: surface manager, Media Libraries, OpenGL ES, SGL, etc.
  • the Surface Manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • OpenGL ES is used to implement 3D graphics drawing, image rendering, compositing, and layer processing.
  • SGL is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display driver, camera driver, audio driver, and sensor driver.
  • the TV 101, the TV 102, the TV 103, and the TV 104 may each include hardware such as a processor, a display screen, a camera, and a communication module.
  • the television 101 , the television 102 , the television 103 and the television 104 can be connected to each other through a communication unit to perform communication.
  • the TV 101, the TV 102, the TV 103, and the TV 104 may each include an application layer, an application framework layer, an Android runtime and system libraries, and a kernel layer.
  • the TV 101, the TV 102, the TV 103, and the TV 104 can automatically perform screen splicing through the screen splicing management service.
  • the main interface 300 of the television 101 is shown.
  • when the TV 101 determines that the distance D1 between the TV 101 and the TV 102 is less than or equal to the maximum combined radius R1 corresponding to the TV 101 and the TV 102 (for the concept of the maximum combined radius, refer to the related description of step 406a below), as shown in (b) of FIG. 3C, the TV 101 can display a pop-up box 301 to prompt the user that a device has been detected nearby. The pop-up box 301 can include a yes button 302 and a no button 303, so that the user can choose whether to perform screen combination.
  • if the user selects the yes button 302 (e.g., selects the button 302 through a remote control or a touch screen), the user may be prompted that screen combination is in progress.
  • the television 101 may also prompt the user with the identification or ID of the nearby device.
  • the TV 101 may display a pop-up box 304, prompting the user that the device 222xxx (222xxx is the ID of the TV 102) has been detected nearby; the pop-up box 304 may include a yes button 302 and a no button 303, so that the user can choose whether to combine the current device with the TV in the living room.
  • the TV 101 and the TV 102 can jointly display the desktop of the TV 101 (for example, when the TV 101 is the master device; the process of determining the master device can refer to the relevant description of step 409 below), or, as shown in (d) in FIG. 3E, the TV 101 and the TV 102 can each separately display the desktop of the TV 101.
  • a TV 103 may be added to the screen group (for example, the first screen group) composed of the TV 101 and the TV 102 (that is, when the TV 101 and the TV 102 are already spliced together, the TV 103 is further spliced together with the TV 101 and the TV 102).
  • the TV 103 can gradually approach the screen group composed of the TV 101 and the TV 102; then, as shown in (b) in FIG. 3F, a pop-up box 305 may be displayed on the TV 101 and the TV 102, prompting the user that the device 123xxx (123xxx is the ID of the TV 103) has been detected. The pop-up box 305 may include a yes button 302 and a no button 303 for the user to choose whether to add the device to the screen group.
  • the television 103 may also prompt the user with the identification or ID of the nearby device.
  • the TV 103 can display a pop-up box 306, prompting the user that the device 111xxx (111xxx can be the ID of the TV 101) has been detected.
  • the TV 101, the TV 102, and the TV 103 may form a new screen group (eg, a second screen group).
  • the screen group formed by the TV 101 , the TV 102 and the TV 103 can jointly display the corresponding display content.
  • the TV 103 may be deleted from the screen group (for example, when the TV 101, the TV 102, and the TV 103 are spliced together, the TV 103 is removed). Exemplarily, as shown in (b) of FIG. 3G, the TV 101 and the TV 102 may display a pop-up box 307, prompting the user that the device 123xxx has been removed from the current screen group.
  • An OK button 308 may be included in the pop-up box 307 .
  • the television 101 determines that the information in the pop-up box 307 is known to the user, and the pop-up box 307 can be hidden.
  • the pop-up frame 307 can be automatically hidden after appearing for a few seconds (for example, 2s), so as to avoid affecting the display contents of the TV 101 and the TV 102 .
  • the screen group formed by the TV 101 , the TV 102 and the TV 103 can jointly display the corresponding display content.
  • the TV 103 is deleted from the constituted screen group (for example, when the TV 101, the TV 102 and the TV 103 are spliced together, the TV 103 is removed).
  • the TV 102 and the TV 103 can still maintain their previous display contents, and the TV 101 and the TV 102 can display a pop-up box 309, asking the user to confirm whether to remove the device 123xxx from the current screen group.
  • the pop-up box 309 may include an OK button 310 and a cancel button 311 .
  • if the user selects the OK button 310, the TV 101 and the TV 102 may jointly display the corresponding display content (the display content of the TV 101 and the TV 102 may be determined by the TV 101, the master device), and the television 103 individually displays its corresponding display content (the content displayed by the television 103 may be determined by the processor of the television 103).
  • if the user selects the cancel button 311, as shown in (b) of FIG. 3H, the TV 101, the TV 102, and the TV 103 can still keep their previous display contents.
  • the TV 101 starts the screen splicing management service.
  • the screen splicing management service can be started.
  • the screen splicing management service may be integrated into a system APP or a third-party APP on the TV 101, such as a smart life APP, a smart interconnection APP, a setting application, etc., which is not limited in this application.
  • the TV 102 starts the screen splicing management service.
  • the screen splicing management service can be started.
  • the screen splicing management service reference may be made to the relevant description in step 401, which will not be repeated here.
  • the TV 101 and the TV 102 establish a network connection, and share the device information of the TV 101 and the TV 102 .
  • the TV 101 and the TV 102 can be connected to the same local area network, so that the TV 101 and the TV 102 can establish a network connection.
  • the screen splicing management service of the TV 101 can complete the discovery of other nearby devices (e.g., the TV 102) installed with the screen splicing management service, based on short-range communication technology (e.g., Bluetooth/WIFI proximity discovery technology).
  • the screen splicing management service can complete the discovery of other nearby screen devices (e.g., the TV 101) based on short-range communication technology (e.g., Bluetooth/WIFI proximity discovery technology).
  • the TV 101 and the TV 102 can directly discover and connect to each other through technologies such as Bluetooth/WIFI direct connection.
  • the TV 101 creates a list of nearby devices.
  • the screen splicing management service of the TV 101 can exchange information with the screen splicing management service of other devices (eg, the TV 102 ) connected to the TV 101 to obtain a list of nearby devices.
  • the list of nearby devices established by the television 101 may be as shown in Table 1.
  • the TV 101 can also be connected to more devices, for example, the TV 103, the TV 104 and other devices can be connected.
  • the list of nearby devices established by the television 101 can be as shown in Table 2.
  • the TV 101 may also obtain the name of each device, size information (for example, the length and width of the device), antenna information (the installation position of the antenna in the device, the type of the antenna, the accuracy of the device) from each device in the nearby device list. , size, etc.) and other information.
  • the television 102 establishes a list of nearby devices.
  • the list of nearby devices established by the television 102 may be as shown in Table 3.
  • the TV 102 can also be connected to more devices, for example, the TV 103, the TV 104 and other devices can be connected.
  • the list of nearby devices established by the television 102 may be as shown in Table 4.
• the TV 102 can also obtain, from each device in the nearby device list, the name of each device, size information (for example, the length and width of the device), antenna information (the installation position of the antenna in the device, the type, accuracy and size of the antenna, etc.) and other information.
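• As an illustrative aside, such a nearby-device list entry could be modeled as follows (a minimal sketch; the field names and types are assumptions for illustration, not taken from this application):

```python
from dataclasses import dataclass

@dataclass
class NearbyDevice:
    device_id: str              # MAC/ID of the device, e.g. "222xxx"
    name: str                   # human-readable name, e.g. "living room TV"
    width_cm: float             # panel width (size information)
    height_cm: float            # panel height (size information)
    antenna_offset_cm: float    # distance from the panel center to the antenna
    antenna_accuracy_cm: float  # distance recognition accuracy of the antenna

# A device's nearby-device list is then simply a list of such entries,
# refreshed as devices are discovered over Bluetooth/WIFI.
nearby_devices: list[NearbyDevice] = []
```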
  • a short-distance signal is sent between the television 101 and the television 102.
  • the TV 101 can measure the distance between the TV 101 and each device in the nearby device list separately through a short-range communication technology (eg, Bluetooth/WIFI signal ranging technology). For example, the TV 101 may obtain the distance D1 between the two devices based on the received signal strength indication (RSSI) of the short-range signal sent by the TV 102.
  • Television 102 may measure the distance between television 102 and each device in television 102's list of nearby devices. For example, the TV 102 can obtain the distance D1 between the two devices based on the RSSI of the short-range signal sent by the TV 101 . Alternatively, the television 101 may notify the television 102 of the measured distance D1.
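• This application does not spell out how an RSSI reading is converted to a distance; a common choice is the log-distance path-loss model, sketched below (the reference power tx_power_dbm and path-loss exponent n are calibration assumptions, not values from this application):

```python
def rssi_to_distance(rssi_dbm: float, tx_power_dbm: float = -59.0, n: float = 2.0) -> float:
    """Estimate the distance in meters from a received signal strength.

    tx_power_dbm is the expected RSSI at 1 m; n is the path-loss exponent
    (~2.0 in free space, larger indoors). Both must be calibrated per device.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * n))

# Example: an RSSI of -65 dBm with the defaults gives roughly 2 m.
d1 = rssi_to_distance(-65.0)
```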
  • the TV 101 determines that the distance D1 between the TV 101 and the TV 102 is less than or equal to the maximum combined radius R1 corresponding to the TV 101 and the TV 102.
• the TV 101 can measure the distance between the TV 101 and each device in the nearby device list, and compare the distance between the two devices with the maximum combined radius corresponding to the two devices.
• For example, the TV 101 may measure the distance D1 between the TV 101 and the TV 102, and compare D1 with the maximum combined radius R1 corresponding to the TV 101 and the TV 102.
  • the maximum combined radius corresponding to the TV 101 and the TV 102 may be determined according to the size of the two devices (the TV 101 and the TV 102 ) and the positions of the antennas.
  • the antenna may be, for example, a Bluetooth antenna, a WIFI antenna, or the like.
  • the combined radius corresponding to the two devices in the diagonal splicing scenario is the largest.
• r1 is the maximum distance from the center point of device 1 to its edge;
• d1 is the distance between the antenna of device 1 and the center point of device 1;
• r2 is the maximum distance from the center point of device 2 to its edge;
• d2 is the distance between the antenna of device 2 and the center point of device 2;
• the distance recognition accuracy of the antenna is a cm.
• If the distance between the TV 101 and the TV 102 is less than or equal to the maximum combined radius corresponding to the TV 101 and the TV 102, it means that the two devices have an assembly intention (combination intention) or are in an assembled state (combination state).
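• Based on the quantities r1, d1, r2, d2 and the ranging accuracy a defined above, the maximum combined radius can be read as the largest possible antenna-to-antenna distance of two adjoining devices, reached in the diagonal splicing scenario. A minimal sketch, assuming the radius is simply the sum r1 + d1 + r2 + d2 + a (the exact formula is not reproduced in this excerpt):

```python
def max_combined_radius(r1: float, d1: float, r2: float, d2: float, a: float) -> float:
    """Largest antenna-to-antenna distance two touching devices can have.

    r1/r2: maximum center-to-edge distance of each device (half diagonal);
    d1/d2: distance from each device's center to its antenna;
    a: distance recognition accuracy of the antenna, same unit throughout.
    The worst case is diagonal splicing with both antennas facing away.
    """
    return r1 + d1 + r2 + d2 + a

def has_combination_intention(measured_distance: float, radius: float) -> bool:
    # Two devices are marked ready-to-splice when D <= R.
    return measured_distance <= radius
```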
• Take the case where the TV 101 and the TV 102 are of the same size as an example.
  • the TV 101 determines that the distance between the TV 101 and the TV 102 is less than or equal to R1
• the TV 101 and the TV 102 can be marked as ready to combine (ready to splice). That is, when the TV 101 determines that the distance (placement interval) between the TV 101 and the TV 102 is less than or equal to the maximum combination radius R1 corresponding to the TV 101 and the TV 102, it determines that the TV 101 and the TV 102 have a combination intention and can prepare for combination.
  • the TV 101 can group and organize the devices marked in the ready-to-assemble state to form a screen-combination-ready device group, that is, the TV 101 and the TV 102 can form a screen-combination-ready device group.
• the TV 101 can also measure the distance D2 between the TV 101 and the TV 103, and compare D2 with the maximum combined radius R2 corresponding to the TV 101 and the TV 103.
• the TV 101 can also measure the distance D3 between the TV 101 and the TV 104, and compare D3 with the maximum combined radius R3 corresponding to the TV 101 and the TV 104.
  • the TV 102 determines that the distance D1 between the TV 101 and the TV 102 is less than or equal to the maximum combined radius R1 corresponding to the TV 101 and the TV 102 .
• the TV 102 can measure the distance between the TV 102 and each device in the nearby device list, and compare the distance between the two devices with the maximum combined radius corresponding to the two devices.
• For the specific process, reference may be made to the relevant description of step 406a, which is not repeated here.
  • the television 102 sends the first information to the television 101, where the first information includes the distance information measured by the television 102.
• the TV 101 may receive first information from the TV 102, and the first information may include the distance between the TV 102 and each device in the nearby device list of the TV 102, and/or the comparison result, determined by the TV 102, between the distance between two devices and the maximum combined radius corresponding to the two devices.
  • the television 101 may also receive distance information and/or comparison results measured by other devices from other devices.
  • the television 101 may also receive the second information from the television 103 .
• the second information may include the distance between the TV 103 and each device in the nearby device list of the TV 103, and/or the comparison result, determined by the TV 103, between the distance between two devices and the maximum combined radius corresponding to the two devices.
  • the television 101 may also receive third information from the television 104 .
• the third information may include the distance between the TV 104 and each device in the nearby device list of the TV 104, and/or the comparison result, determined by the TV 104, between the distance between two devices and the maximum combined radius corresponding to the two devices.
• the TV 101 can determine the distance between every two devices in the current local area network, and/or the comparison result between the distance between every two devices and the maximum combined radius corresponding to the two devices. Based on these results, the TV 101 can determine the multiple devices that currently need to be spliced together, and the multiple devices that need to be spliced together can form a screen group.
• the same local area network may include multiple screen combination preparation device groups (referred to as screen groups for short), and each screen group may include at least two devices; the at least two devices may be spliced together, and the at least two devices can be directly or indirectly connected.
  • TV 101 and TV 102 can form a screen group, and TV 101 and TV 102 can be directly connected (distance D1 between TV 101 and TV 102 is less than or equal to the maximum combined radius corresponding to TV 101 and TV 102).
  • the television 101 sends second information to the television 102, where the second information includes the distance information measured by the television 101.
• For the specific process, reference may be made to step 406c, which will not be repeated here.
• If the TV 101 determines that the distance D1 between the TV 101 and the TV 102 is smaller than the maximum combined radius R1 corresponding to the TV 101 and the TV 102, that is, determines that the TV 101 and the TV 102 need to form a screen group, the TV 101 can perform step 407.
  • the TV 101 displays first prompt information, where the first prompt information is used to prompt the user whether to group screens.
  • the user can set the screen combination strategy on the TV 101 in advance, for example, can set to automatically perform the screen combination or manually perform the screen combination.
  • the TV 101 can automatically start the screen combination detection program through the short-range communication technology.
• the screen combination detection program detects whether the distance between two devices is less than or equal to the maximum combination radius corresponding to the two devices, so as to determine whether screen combination splicing is required.
  • the detection program for screen combination splicing can be automatically started through specific scenarios such as power-on and wake-up from standby.
  • the user can enter a screen splicing management service such as a smart life APP, a smart interconnection APP or a setting application, and manually (for example, by clicking a specific control) start the detection procedure of the screen assembling and splicing.
  • the TV 101 can give an interface prompt, and determine whether to group the screen according to the user's operation, so as to avoid the error of automatically triggering the screen grouping.
  • the main interface 300 of the television 101 is shown.
• When the TV 101 determines that the distance D1 between the TV 101 and the TV 102 is less than or equal to the maximum combined radius R1 corresponding to the TV 101 and the TV 102, as shown in (b) of FIG. , the TV 101 may display a pop-up box 301.
  • the pop-up box 301 may include a yes button 302 and a no button 303, so that the user can choose whether to perform screen combination.
  • the television 101 may also prompt the user for the identification or ID of the device nearby.
• the TV 101 may pop up a pop-up box 304, prompting the user that the device 222xxx (222xxx is the ID of the TV 102) is detected nearby; the pop-up box 304 may include a yes button 302 and a no button 303, so that the user can choose whether to combine the current device with the TV in the living room.
  • the television 102 may also display the first prompt information.
• If both the TV 101 and the TV 102 provide interface prompts for the user to choose whether to perform screen combination, and the user has confirmed on one device (for example, on the TV 101), the TV 101 can send the user's confirmation information to the TV 102, without requiring the user to confirm on each device one by one.
• Alternatively, any device in the screen group may give an interface prompt (for example, display the first prompt information), so that the user can choose whether to perform the screen combination; that is, the TV 102 may not give the interface prompt.
  • Step 409 may be performed in response to the user clicking the button that agrees with the combination of screens.
• If the screen combination is set to be performed automatically, step 407 and step 408 do not need to be performed.
  • the TV 101 and the TV 102 form a screen group, and the main device is elected as the TV 101.
  • the TV 101 can weight the resources of each device according to the resource status of each device in the current screen group, sort the resources from high to low, and use the device with the highest real-time resource score as the master device.
  • the resource situation of the device may include hardware resource capabilities such as central processing unit (CPU)/read only memory (ROM)/random access memory (RAM).
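• A minimal sketch of such a resource-weighted election (the weights, resource fields and normalization maxima are illustrative assumptions; this application only says resources are weighted and the highest real-time score wins):

```python
def elect_master(devices: dict[str, dict[str, float]]) -> str:
    """Pick the device with the highest weighted resource score.

    `devices` maps a device ID to its real-time resource readings,
    e.g. {"cpu_free": 0.7, "ram_free_mb": 800, "rom_free_mb": 4000}.
    """
    weights = {"cpu_free": 0.5, "ram_free_mb": 0.3, "rom_free_mb": 0.2}

    def score(res: dict[str, float]) -> float:
        # Normalize RAM/ROM against assumed maxima so all terms are in [0, 1].
        norm = {
            "cpu_free": res["cpu_free"],
            "ram_free_mb": res["ram_free_mb"] / 1024.0,
            "rom_free_mb": res["rom_free_mb"] / 8192.0,
        }
        return sum(weights[k] * norm[k] for k in weights)

    return max(devices, key=lambda dev: score(devices[dev]))

master = elect_master({
    "TV101": {"cpu_free": 0.8, "ram_free_mb": 900, "rom_free_mb": 5000},
    "TV102": {"cpu_free": 0.5, "ram_free_mb": 600, "rom_free_mb": 3000},
})  # -> "TV101"
```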
  • the user can manually select the master device.
• For example, the user can enter the setting application of the device to select the main device; or, after the main device is automatically elected, the TV 101 can pop up a pop-up box to remind the user of the identity of the current main device (for example, it can remind the user that the current main device is the living room TV (i.e., the TV 101)); the user can confirm the TV 101 as the main device based on the OK button in the pop-up box, or modify the main device based on the modify button in the pop-up box.
• When the TV 101 is used as the main device, it can be understood as including a host for controlling the screen group and a screen for displaying images, and the host is integrated in the TV 101.
• Take the TV 101 as the main device as an example:
  • the master device sends a first notification message to the TV 102, where the first notification message is used to notify the TV 102 to take a photo and perform orientation recognition.
• The main device (i.e., the TV 101) can use a camera to take pictures, and the TV 101 can send a first notification message to the TV 102, where the first notification message is used to notify the TV 102 to use its camera to take a photo (image/picture), and to perform orientation recognition based on the captured photo and the photos obtained from other devices.
  • the television 101 takes a picture.
  • the TV 101 can control the camera built in the TV 101 to take a picture through the screen splicing management service.
• After receiving the first notification message, the television 102 takes a photo.
  • the TV 102 can control the camera built in the TV 102 to take a picture through the screen splicing management service.
  • TV 101 and TV 102 can negotiate to take pictures at the same time.
  • the TV 101 sends the photo taken by the TV 101 to the TV 102 .
  • the TV 102 sends the photo taken by the TV 102 to the TV 101 .
  • the TV 101 determines the orientation relationship between the TV 101 and the TV 102 according to the photos taken by the TV 101 and the photos captured by the TV 102.
• After the TV 101 receives the photo taken by the TV 102 from the TV 102, it can perform image matching (comparison) between the photo captured by itself and the photo captured by the TV 102 through an image matching algorithm, and determine the overlapping area (i.e., the similar image part/image content) of the two photos.
• Image matching determines the overlapping part of two photos by analyzing the correspondence between the image content, features, structure, relationships, texture and gray levels of the two photos, as well as their similarity and consistency.
• the image matching algorithm may include the scale-invariant feature transform (SIFT), speeded-up robust features (SURF), the fast nearest neighbor search algorithm (FLANN-based matcher), and the like.
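• Using OpenCV, the overlap detection described above might look like the following sketch (SIFT keypoints matched with a FLANN-based matcher; the centroid of the matched keypoints approximates where the overlapping area sits in each photo — an illustrative realization, not the exact procedure of this application):

```python
import cv2
import numpy as np

def find_overlap_centroids(img1_path: str, img2_path: str):
    """Return the centroid of the matched keypoints in each photo, plus the
    image shapes, or None when the photos share too little content."""
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # FLANN with KD-trees is the usual pairing for SIFT descriptors.
    flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
    matches = flann.knnMatch(des1, des2, k=2)

    # Lowe's ratio test keeps only distinctive matches.
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.7 * m[1].distance]
    if len(good) < 10:
        return None  # not enough overlap between the two photos

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    return pts1.mean(axis=0), pts2.mean(axis=0), img1.shape, img2.shape
```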
  • the television 101 can determine the relative positional relationship (relative orientation relationship) of the television 101 and the television 102 based on the position of the overlapping area on the photo taken by the television 101 . That is, the relative positional relationship between the television 101 and the television 102 is determined through the mapping relationship between the position of the overlapping area and the imaging orientation.
  • the relative azimuth relationship between the TV 101 and the TV 102 may be, for example, that the TV 101 is located in directions such as up, down, left, right, upper left, lower left, upper right, lower right, etc. of the TV 102 .
  • the relative orientation relationship between the television 101 and the television 102 may be that the television 102 is located in the upper, lower, left, right, upper left, lower left, upper right, lower right and other directions of the television 101 .
  • the splicing modes of the TV 101 and the TV 102 may include three modes: up-down splicing, left-right splicing, or diagonal splicing.
• When the TV 101 is located above or below the TV 102, the splicing mode of the TV 101 and the TV 102 can be up-down splicing; when the TV 101 is positioned on the left or right of the TV 102, the splicing mode of the TV 101 and the TV 102 can be left-right splicing; when the TV 101 is located at the upper left, lower left, upper right or lower right of the TV 102, the splicing mode of the TV 101 and the TV 102 can be diagonal splicing.
• the dotted frame represents the photo taken by the TV 101
• the solid line frame represents the photo captured by the TV 102
• the orientation of the TV 101 relative to the TV 102 is shown in Table 6.
• If the overlapping area is located in the lower right area (lower right corner) of the photo taken by the TV 101 and in the upper left area (upper left corner) of the photo captured by the TV 102, it is determined that the TV 101 is located at the upper left (upper left corner) of the TV 102.
• If the overlapping area is located in the lower half area of the photo taken by the TV 101 and in the upper half area of the photo taken by the TV 102, it is determined that the TV 101 is located directly above the TV 102; as shown in (c) in FIG.
• Image matching may be performed on the photo taken by the TV 101 and the photo taken by the TV 102 to identify the overlapping area of the two photos; then, calculate which azimuth area of each photo the overlapping area is located in, and look up the relative positions of the TV 101 and the TV 102 through Table 6. For example, if the overlapping area is in the lower half of the photo taken by the TV 101, the orientation of the TV 101 relative to the TV 102 is upward, that is, the TV 101 is located above the TV 102.
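• The Table 6 lookup can be captured in a few lines: classify which azimuth area of the photo the overlap falls in, then mirror that to get the relative position of the two screens. A minimal sketch (the 3x3 sector grid is an illustrative choice):

```python
def sector(cx: float, cy: float, w: int, h: int) -> tuple[str, str]:
    """Classify a point into a horizontal and a vertical third of the image."""
    horiz = "left" if cx < w / 3 else ("right" if cx > 2 * w / 3 else "center")
    vert = "up" if cy < h / 3 else ("down" if cy > 2 * h / 3 else "middle")
    return horiz, vert

def orientation_of_screen1(cx: float, cy: float, w: int, h: int) -> str:
    """Where screen 1 sits relative to screen 2, per the Table 6 mapping.

    If the overlap lies at the lower right of screen 1's photo, screen 1 is
    at the upper left of screen 2, and so on: the overlap position in
    photo 1 is simply mirrored.
    """
    horiz, vert = sector(cx, cy, w, h)
    opposite = {"left": "right", "right": "left", "center": "",
                "up": "lower", "down": "upper", "middle": ""}
    v, hz = opposite[vert], opposite[horiz]
    return (v + " " + hz).strip() or "overlapping"

# Overlap centroid in the lower half of photo 1 -> screen 1 is above screen 2.
print(orientation_of_screen1(cx=320, cy=700, w=640, h=720))  # "upper"
```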
• The photo taken by each device may be divided into several sub-areas (e.g., 6/9/12, etc., which is not limited in this application); each sub-region in the photo taken by the TV 101 is matched with each sub-region in the photo taken by the TV 102 to determine the numbers of the matched sub-regions; according to the numbers of the matched sub-regions, it is determined which azimuth area the matched sub-regions are located in in the photo taken by the TV 101 and in the photo taken by the TV 102, and then the relative positions of the TV 101 and the TV 102 are found through Table 6.
• the photo taken by the TV 101 can be divided into 6 sub-regions, numbered 1, 4, 7, 2, 5 and 8
• the photo taken by the TV 102 can be divided into 6 sub-regions, numbered 2, 5, 8, 3, 6 and 9
• the matched sub-regions are 2, 5 and 8
  • the lookup table 6 shows that the TV 101 is located to the left of the TV 102 , that is, the TV 102 is located to the right of the TV 101 .
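• The numbered sub-region variant can be sketched as pure index arithmetic: given the cells each photo covers and the matched cells, the side of the photo on which the matches sit yields the direction (row-major 3x3 numbering as in the example above; only the horizontal axis is shown, the vertical one is analogous):

```python
def relative_direction(cells1: list[int], cells2: list[int]) -> str:
    """Horizontal direction of device 1 relative to device 2, derived from
    the numbered sub-regions (row-major 3x3 numbering: 1, 2, 3 is the top
    row, so 1, 4, 7 form the left column and 2, 5, 8 the middle column).
    """
    matched = set(cells1) & set(cells2)
    col = lambda n: (n - 1) % 3

    cols1 = sorted({col(c) for c in cells1})
    mid = (cols1[0] + cols1[-1]) / 2
    matched_col = sum(col(c) for c in matched) / len(matched)

    # Matched cells on the right side of photo 1 -> device 1 is on the left.
    if matched_col > mid:
        return "left"
    if matched_col < mid:
        return "right"
    return "same column"

print(relative_direction([1, 4, 7, 2, 5, 8], [2, 5, 8, 3, 6, 9]))  # -> "left"
```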
• An operation prompt may be displayed on the TV 101 and/or the TV 102, so that the user can ensure that a specific identifier (for example, a human face) can be seen in the pictures captured by the cameras of the TV 101 and/or the TV 102, and then the relative orientation between the devices is determined according to the position of the face in the photos taken by the TV 101 and/or the TV 102.
• The position of the specific identifier in the photo taken by each device may involve multiple dimensions, for example, an up-down dimension and a left-right dimension.
• The dimension that is the same for both devices can be ignored, and the dimension that differs can be used as the basis for judging the orientation between the devices. Exemplarily, as shown in FIG.
• If the sub-regions of the photos taken by the TV 101 and the TV 102 respectively include 1, 2, 4, 5, 7, 8 and 2, 3, 5, 6, 8, 9, and the human face is located in region 2, then since region 2 is located at the upper right of the photo taken by the TV 101 and at the upper left of the photo taken by the TV 102, it can be determined through look-up Table 7 that the TV 101 is located to the left of the TV 102. That is, the dimension with the same direction is ignored (the "up" shared by upper right and upper left), and the differing directions "left" and "right" are used as the basis for judging the orientation between the devices.
• Related programs can be preset in the screen splicing management service to prompt the user, through screen display, voice prompts, etc., to take cooperative measures that speed up the camera's recognition of a specific position, thereby speeding up the recognition of the relative positions from the pictures; alternatively, a specific image can be directly presented to the camera of a device to mark the orientation of the corresponding device.
  • the television 102 determines the orientation relationship between the television 101 and the television 102 according to the photo taken by the television 102 and the photo taken by the television 101.
• For the specific process, reference may be made to the description in step 412a, with the execution subject replaced accordingly, which will not be repeated here.
  • the television 102 sends the television 101 the azimuth relationship between the television 101 and the television 102 determined by the television 102.
  • the master device determines the relative orientation relationship of all devices in the screen group.
• The main device can collect and summarize the relative orientation information between every two devices in the screen group, make a unified arrangement in a coordinate system according to the orientation information, and record the serial number, coordinates and other information for each device separately.
  • the orientation of all devices in the screen group can be represented by an array, for example, it can be (device 1, device 2, the direction of device 1 relative to device 2).
  • the orientation of TV 101 relative to TV 102 can be: (TV 101, TV 102, up), indicating that TV 101 is located above TV 102.
  • the orientation of the TV 102 relative to the TV 101 may be: (TV 102 , TV 101 , down), indicating that the TV 102 is located below the TV 101 .
• If the host device determines that the orientation of the TV 101 relative to the TV 102 is (TV 101, TV 102, left), that is, the TV 101 is located on the left side of the TV 102, then the TV 101 and the TV 102 can be arranged left and right in the coordinate system, and ordered as (1) and (2), that is, the TV 101 and the TV 102 from left to right respectively.
• If the host device determines that the orientation of the TV 101 relative to the TV 102 is (TV 101, TV 102, up), that is, the TV 101 is located above the TV 102, then the TV 101 and the TV 102 can be arranged top to bottom in the coordinate system, and ordered as (1) and (2), that is, the TV 101 and the TV 102 from top to bottom respectively.
  • the numbering manner of the devices in the screen group by the master device may be one by one from the upper left to the lower right direction.
• The devices spliced together may be arranged in an n*m matrix, where n may represent rows, m may represent columns, n is an integer greater than or equal to 1, m is an integer greater than or equal to 1, and n and m are not both 1 at the same time. For example, as shown in (a) of FIG.
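• Summarizing the pairwise triples into one layout amounts to propagating direction offsets through a small graph. A minimal sketch (the offset table and breadth-first traversal are one illustrative realization of the "unified arrangement in a coordinate system", not the method prescribed by this application):

```python
from collections import deque

# (dx, dy): grid offset of device A relative to device B, x to the right
# and y downward, for each direction word used in the orientation triples.
OFFSETS = {
    "up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0),
    "upper left": (-1, -1), "upper right": (1, -1),
    "lower left": (-1, 1), "lower right": (1, 1),
}

def arrange(triples):
    """Assign grid coordinates from triples like ("TV101", "TV102", "left"),
    read as: TV101 lies to the left of TV102."""
    edges = {}
    for a, b, d in triples:
        dx, dy = OFFSETS[d]
        edges.setdefault(b, []).append((a, (dx, dy)))    # a = b + offset
        edges.setdefault(a, []).append((b, (-dx, -dy)))  # b = a - offset

    start = triples[0][0]
    coords, queue = {start: (0, 0)}, deque([start])
    while queue:
        cur = queue.popleft()
        for other, (dx, dy) in edges.get(cur, []):
            if other not in coords:
                x, y = coords[cur]
                coords[other] = (x + dx, y + dy)
                queue.append(other)
    return coords

# Reading order (upper left to lower right) then numbers the devices 1..n.
layout = arrange([("TV103", "TV101", "left"), ("TV101", "TV102", "left")])
order = sorted(layout, key=lambda d: (layout[d][1], layout[d][0]))
print(order)  # ['TV103', 'TV101', 'TV102']
```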
  • the master device synchronizes the basic information of screen group splicing to the TV 102.
  • the master device can synchronize the basic information of screen group splicing to all devices in the screen group.
  • Each device in the screen group can receive synchronization messages sent by the master device.
  • the synchronization message includes basic information of screen group splicing.
  • the basic information of screen group splicing includes the number of devices included in the screen group, the MAC/ID of each device, the master-slave information (that is, the information of the master device and the slave device), and the orientation information between the devices.
• the current screen group splicing basic information may include the number of devices included in the screen group (for example, 2), the MAC/ID of each device (for example, the IDs of the TV 101 and the TV 102), the master-slave information (for example, the master device is the TV 101 and the slave device is the TV 102), and the orientation information between the devices (for example, the TV 101 and the TV 102 are in a left-right splicing state).
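• The synchronized basic information could be serialized as a small JSON document, for example (the field names are illustrative assumptions; this application only lists the contents):

```python
import json

splicing_basic_info = {
    "device_count": 2,
    "devices": ["111xxx", "222xxx"],               # MAC/ID of each device
    "master": "111xxx",                            # master-slave information
    "slaves": ["222xxx"],
    "orientations": [["TV101", "TV102", "left"]],  # pairwise orientation info
}

# The master broadcasts this to every device in the screen group over the
# existing connection, so all members share one view of the group.
payload = json.dumps(splicing_basic_info).encode("utf-8")
```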
  • a heartbeat link can be established between every two devices to maintain the combination relationship between the devices in real time.
• For example, the TV 101 can send a heartbeat monitoring data frame (also called a heartbeat packet) to the TV 102 every 1 minute (or 30s, 2 minutes, 3 minutes, etc.); the TV 102 can send a response frame after receiving the heartbeat monitoring data frame, in which case the TV 101 determines that the connection is normal; otherwise, it indicates that the connection is disconnected or abnormal.
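• A heartbeat of this kind can be sketched with a plain UDP exchange (the port, payload, interval and timeout are illustrative assumptions):

```python
import socket
import time

def heartbeat_loop(peer_addr: tuple[str, int], interval_s: float = 60.0) -> None:
    """Send a heartbeat packet every `interval_s` seconds and wait for a
    response frame; report the link as abnormal if none arrives in time."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(5.0)  # how long to wait for the peer's response frame
    while True:
        sock.sendto(b"HEARTBEAT", peer_addr)
        try:
            data, _ = sock.recvfrom(64)
            connected = data == b"HEARTBEAT_ACK"
        except socket.timeout:
            connected = False  # link disconnected or abnormal
        if not connected:
            print("peer lost: refresh the screen-group basic information")
            break
        time.sleep(interval_s)
```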
  • the TV 101 determines the display information of the TV 101 and the TV 102 respectively according to the basic information of the screen group splicing.
  • the TV 101 can respectively determine the display information of the TV 101 and the TV 102 according to the basic information of the screen group splicing. That is, during the operation of the screen group system, the main device can realize the screen group output display arrangement, interface focus switching, etc. based on the basic information of screen group splicing.
  • the television 101 sends the display information of the television 102 to the television 102.
  • the television 101 displays a corresponding display screen according to the display information of the television 101 .
  • the television 102 displays a corresponding display screen according to the display information of the television 102 .
  • the TV 101 may divide the display content of the TV 101 into N shares (eg, 2 shares) and distribute them to each device in the screen group (eg, the TV 101 (itself) and the TV 102 ).
  • N is less than or equal to the number of devices included in the screen group.
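• For an n*m arrangement, dividing the display content into N shares reduces to handing each screen a crop rectangle of the source frame. A minimal sketch (the even grid split is an illustrative simplification that ignores bezels):

```python
def crop_rects(frame_w: int, frame_h: int, rows: int, cols: int):
    """Yield (x, y, w, h) crop rectangles, one per screen, in reading order.

    The master renders the full frame and sends each device its rectangle
    (or the pre-cropped pixels) as that device's display information.
    """
    w, h = frame_w // cols, frame_h // rows
    for r in range(rows):
        for c in range(cols):
            yield (c * w, r * h, w, h)

# Two screens spliced left-right (1 row, 2 columns) over a 4K source frame:
print(list(crop_rects(3840, 2160, rows=1, cols=2)))
# [(0, 0, 1920, 2160), (1920, 0, 1920, 2160)]
```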
  • the display contents of the TV 101 and the TV 102 before splicing are shown as (a) and (b) in FIG. 3E, respectively.
  • the TV 101 and the TV 102 may collectively display the desktop of the main device (eg, TV 101 ), or, as shown in (d) of FIG. 3E , the TVs 101 and 102 may separately display the desktop of the main device (eg, TV 101 ).
  • each device in the screen group can continuously detect the increase or decrease of the device, and refresh the basic information of the screen group splicing.
  • adding a device may be a new device added to the current screen group, and reducing a device may be actively removing/removing some devices from the current screen group, or some devices are powered off and passively offline.
  • the television 101 can detect whether a new device is added through short-range communication.
  • the TV 101 can detect and confirm whether the peer device is offline according to the heartbeat link.
  • the television 101 may determine whether a certain device is offline through short-range communication detection.
  • the user can manually remove a device from the management interface.
  • the screen group splicing method may further include:
  • a short-range signal is sent between the television 101 and the television 102.
  • a short-range signal may be periodically transmitted between the television 101 and the television 102 .
  • a short-range signal is sent between the television 102 and the television 103.
  • a short-range signal may be periodically transmitted between the television 102 and the television 103 .
  • the TV 101/TV 102 determines, according to the short-distance signal, that the TV 103 needs to be added to the current screen group.
  • the TV 101 can measure the distance between the TV 101 and the TV 103 according to the short-range signal
  • the TV 102 can measure the distance between the TV 102 and the TV 103 according to the short-range signal.
• the distance D2 between the TV 101 and the TV 103 is less than or equal to the maximum combined radius R2 corresponding to the TV 101 and the TV 103, that is, D2≤R2;
• the distance D4 between the TV 102 and the TV 103 is less than or equal to the maximum combined radius R4 corresponding to the TV 102 and the TV 103, that is, D4≤R4.
  • the television 101 compares the distance D2 between the television 101 and the television 103 and the size of the corresponding maximum combined radius R2 between the television 101 and the television 103 .
  • the television 101 may also acquire information about the distance D4 between the television 102 and the television 103 and the information about the maximum combined radius R4 between the television 102 and the television 103 from the television 102 .
• If the TV 101 determines that the distance D2 between the TV 101 and the TV 103 is less than or equal to the maximum combined radius R2 between the TV 101 and the TV 103, that is, D2≤R2, the TV 101 determines that the TV 101, the TV 102 and the TV 103 can form a screen group.
  • the process of determining the maximum combined radius R2 between the TV 101 and the TV 103 may refer to the relevant description in step 406a, which will not be repeated here.
  • the TV 102 can compare the distance D4 between the TV 102 and the TV 103 and the size of the corresponding maximum combined radius R4 between the TV 102 and the TV 103 . If D4>R4, the TV 102 can also obtain the information of D2 ⁇ R2 from the TV 101, so as to determine that the TV 103 needs to be added to the current screen group.
• The TV 101 and the TV 102 can be within the corresponding maximum combined radius (that is, the maximum combined radius corresponding to the TV 101 and the TV 102).
• The TV 101 and the TV 103 may be within the corresponding maximum combined radius (that is, the maximum combined radius corresponding to the TV 101 and the TV 103), while the TV 102 and the TV 103 may not be within the corresponding maximum combined radius (that is, the maximum combined radius corresponding to the TV 102 and the TV 103).
• The TV 103 can be indirectly spliced with the TV 102 (the distance between the TV 103 and the TV 102 is greater than the maximum combined radius corresponding to the TV 103 and the TV 102), and the TV 103 can be directly spliced with the TV 101 (the distance between the TV 103 and the TV 101 is less than or equal to the maximum combined radius corresponding to the TV 103 and the TV 101). Since the TV 101 and the TV 102 are spliced together, and the TV 101 and the TV 103 are spliced together, the TV 101, the TV 102 and the TV 103 are spliced together.
  • the TV 101 , the TV 102 , and the TV 103 can perform orientation recognition between each other (ie, identify the splicing mode between the two devices).
• For example, the TV 101 and the TV 102 can perform orientation recognition according to the photos taken (i.e., identify whether the TV 101 and the TV 102 are spliced up-down, left-right, or diagonally), the TV 101 and the TV 103 can perform orientation recognition according to the photos taken, and the TV 102 and the TV 103 can perform orientation recognition according to the photos taken.
• If the TV 103 is already in the local area network, the TV 101/TV 102 can directly execute step 416a. If the TV 103 is a device newly added to the local area network, the TV 101 and the TV 102 can establish a connection with the TV 103 based on the local area network and discover each other based on the short-range communication technology, or the TV 101 and the TV 102 can establish a direct connection with the TV 103; the TV 101 and the TV 102 can refresh their lists of nearby devices, and the TV 103 can create a new list of nearby devices; then, the TV 101/TV 102 can execute step 416a.
  • the TV 101 and the TV 102 display second prompt information, where the second prompt information is used to prompt the user to detect a newly added device in the current screen group.
  • a pop-up box 305 may pop up on the TV 101 and the TV 102, prompting the user to detect the device 123xxx (123xxx is the ID of the TV 103).
• the pop-up box 305 may include a yes button 302 and a no button 303, so that the user can choose whether to add the device to the screen group.
  • the television 103 may also prompt the user for the identification or ID of the device nearby.
• For example, the TV 103 can pop up a pop-up box 306, prompting the user that the device 111xxx (111xxx can be the ID of the TV 101) is detected; the pop-up box 306 can include a yes button 302 and a no button 303, so that the user can choose whether to perform screen combination between the current device and the detected device.
  • Step 417 may be executed in response to the user's operation of clicking the button for agreeing to add the newly added device to the screen group.
  • the TV 101, the TV 102 and the TV 103 form a screen group, and the TV 101 is elected as the main device.
• For the master device election process, reference may be made to the description in step 409, which is not repeated here.
• When the TV 101 is used as the main device, it can be understood as including a host for controlling the screen group and a screen for displaying images, and the host is integrated in the TV 101.
• Take the TV 101 as the main device as an example:
  • the television 101 sends the first notification message to the television 102.
  • the first notification message is used to notify the TV 102 to take a photo, and perform orientation recognition based on the photo taken by the TV 102 and the photos obtained from other devices.
  • the television 101 sends a second notification message to the television 103.
  • the second notification message is used to notify the TV 103 to take a photo, and to perform orientation recognition based on the photo taken by the TV 103 and the photos obtained from other devices.
  • the television 101 takes a picture.
  • the TV 101 can control the camera built in the TV 101 to take a picture through the screen splicing management service.
• After receiving the first notification message sent by the main device, the television 102 takes a photo.
  • the TV 102 can control the camera built in the TV 102 to take a picture through the screen splicing management service.
  • TV 101 and TV 102 can negotiate to take pictures at the same time.
• After receiving the second notification message sent by the main device, the television 103 takes a photo.
  • the TV 103 can control the camera built in the TV 103 to take a picture through the screen splicing management service.
  • TV 101 and TV 103 can negotiate to take pictures at the same time.
  • the TV 101, the TV 102, and the TV 103 can negotiate to take pictures at the same time.
  • steps 419d-419i can be executed:
  • the TV 101 sends the photo taken by the TV 101 to the TV 102 .
  • the TV 102 sends the photo taken by the TV 102 to the TV 103 .
  • the TV 102 sends the photo taken by the TV 102 to the TV 101 .
  • the TV 103 sends the photo taken by the TV 103 to the TV 102.
  • the TV 101 sends the photo taken by the TV 101 to the TV 103.
  • the television 103 transmits the photo taken by the television 103 to the television 101.
• 420a, the TV 101 acquires the photos taken by the TV 102 and the TV 103, and identifies the azimuth relationship between the TV 101 and the TV 102 and the azimuth relationship between the TV 101 and the TV 103, respectively.
• For the corresponding orientation identification process, reference may be made to the relevant description of step 412a, which will not be repeated here.
• 420b, the TV 102 acquires the photos taken by the TV 101 and the TV 103, and identifies the azimuth relationship between the TV 102 and the TV 101 and the azimuth relationship between the TV 102 and the TV 103, respectively.
• For the corresponding orientation identification process, reference may be made to the relevant description of step 412a, which will not be repeated here.
• 420c, the TV 103 acquires the photos taken by the TV 101 and the TV 102, and identifies the azimuth relationship between the TV 103 and the TV 101 and the azimuth relationship between the TV 103 and the TV 102, respectively.
• For the corresponding orientation identification process, reference may be made to the relevant description of step 412a, which will not be repeated here.
  • the television 102 sends the azimuth relationship between the television 102 and other devices to the television 101.
  • the television 103 sends the azimuth relationship between the television 103 and other devices to the television 101.
  • the master device determines the relative orientation relationship of all devices in the screen group.
  • the relative azimuth relationship between the TV 101 and the TV 102 may be, for example, that the TV 101 is located in the upper, lower, left, right, upper left, lower left, upper right, lower right and other directions of the TV 102 .
• The relative orientations are identified between every two devices located within the maximum combined radius (that is, the distance between the two devices is less than or equal to the maximum combined radius corresponding to the two devices).
• By identifying the relative orientation between every two such devices, the orientation of each device relative to the other devices in the screen group can be obtained.
• The sorting process for determining the three devices can be: first traverse the orientations between every two devices to determine that the TV 103 is located leftmost of the remaining two devices (the TV 101 and the TV 102); then traverse the orientations between every two devices to determine that the TV 101 is located to the left of the other device (the TV 102); then traverse the orientations between every two devices to determine that the TV 102 is located rightmost; finally, determine that the order of the three devices TV 103, TV 101 and TV 102 is (1), (2), (3), that is, the TV 103, the TV 101 and the TV 102 from left to right respectively.
  • the orientation information between some devices is redundant, and the information may not be used, or the identification result may be checked with reference to the redundant information.
  • the overall screen group orientation identification is completed according to the orientations between the TV 103 and the TV 101 and between the TV 101 and the TV 102 .
  • the orientation information between the TV 103 and the TV 102 is redundant.
  • the orientation of the overall screen group may be verified based on the orientation between the television 103 and the television 102, so as to improve the accuracy of identifying the orientation of the overall screen group.
  • the TV 101 , the TV 102 , and the TV 103 are also spliced from the bottom to the top (vertically spliced), and the relative orientation relationship of each device is referred to the above related description, which is not repeated here.
• the screen group includes 9 devices, namely TV 101, TV 102, TV 103, TV 104, TV 105, TV 106, TV 107, TV 108, and TV 109.
• The relative orientation relationship of all the devices in the screen group can be obtained by collecting and summarizing the orientation relationships between every two devices.
• The sorting process for determining the three devices can be: first read the relative orientation of the TV 101 and the TV 102 to determine that the TV 101 is on the left side of the TV 102; then read the relative orientation of the TV 101 and the TV 103 to determine that the TV 103 is on the right side of the TV 101; so far, the relative orientation of the TV 102 and the TV 103 cannot be determined, and it is necessary to further read the relative orientation of the TV 102 and the TV 103; finally, determine that the order of the three devices TV 101, TV 102 and TV 103 is (1), (2), (3), that is, the TV 101, the TV 102 and the TV 103 from left to right respectively.
• Take three devices arranged diagonally as an example, for example, the TV 101, the TV 105 and the TV 109. The order of identification between the devices from top left to bottom right is: the TV 101 and the TV 105, the TV 101 and the TV 109, the TV 105 and the TV 109. By traversing the relative orientations of every two devices, it can be determined that the top-left device is the TV 101, the TV 105 is located at the lower right of the TV 101, and the TV 109 is located at the lower right of the TV 105; therefore, it is finally determined that the order of the three devices TV 101, TV 105 and TV 109 is (1), (5), (9), that is, the TV 101, the TV 105 and the TV 109 from upper left to lower right respectively.
  • steps 420a to 421 can be replaced with step S1:
• S1, the TV 101 obtains the photos taken by the TV 102 and the TV 103, and respectively identifies the orientation relationship between the TV 101 and the TV 102, the orientation relationship between the TV 101 and the TV 103, and the orientation relationship between the TV 102 and the TV 103. That is, the orientation relationships of the devices in the screen group can be identified by the master device alone, so that the TV 102 and the TV 103 do not need to perform orientation identification, which can save the power consumption of the TV 102 and the TV 103.
  • the master device synchronizes the basic information of screen group splicing to the TV 102.
  • the master device synchronizes the basic information of screen group splicing to the TV 103.
  • the master device synchronizes the basic information of screen group splicing to each device in the screen group.
  • the current screen group splicing basic information may include the number of devices included in the screen group (for example, 3), the MAC/ID of each device (for example, the IDs of TV 101, TV 102, and TV 103), master-slave Information (for example, the master device is TV 101, and the slave devices include TV 102 and TV 103), orientation information between devices (for example, TV 103, TV 101 and TV 102 are spliced sequentially from left to right).
  • the TV 101 determines the display information of the TV 101, the TV 102 and the TV 103 respectively according to the basic information of the screen group splicing.
• The TV 101 can respectively determine the display information of the TV 101, the TV 102 and the TV 103 according to the basic information of the screen group splicing. That is, during the operation of the screen group system, the main device can realize the screen group output display arrangement, interface focus switching, etc. based on the basic information of screen group splicing. For example, the TV 101 may divide the display content of the TV 101 into N shares (e.g., 3 shares) and distribute them to each device in the screen group (e.g., the TV 101 (itself), the TV 102 and the TV 103), where N is less than or equal to the number of devices included in the screen group.
  • the television 101 sends the display information of the television 102 to the television 102 .
  • the television 101 transmits the display information of the television 103 to the television 103.
  • the television 101 displays a corresponding display screen according to the display information of the television 101 .
  • the television 102 displays a corresponding display screen according to the display information of the television 102.
  • the television 103 displays a corresponding display screen according to the display information of the television 103.
  • a device may be removed from a screen group.
  • deleting a device may be actively removing/removing some devices from the current screen group, or some devices are powered off and passively offline.
  • each device in the screen group can detect and confirm whether a certain device is offline according to the heartbeat link.
  • each device in the screen group can determine whether a certain device is offline through short-range communication detection.
  • information of the device deleted by the user may be marked, so that each device in the screen group determines that a certain device is offline.
  • the screen group splicing method may further include:
  • a short-range signal is sent between the television 101 and the television 102.
  • a short-range signal may be periodically sent between the TV 101 and the TV 102, so as to measure the distance between the TV 101 and the TV 102 according to the short-range signal.
  • a short-range signal is sent between the television 102 and the television 103.
  • a short-range signal may be periodically sent between the TV 101 and the TV 103, so as to measure the distance between the TV 101 and the TV 103 according to the short-range signal.
  • the TV 101/TV 102 deletes the TV 103 from the current screen group according to the short-range signal.
• For example, the TV 101 can compare the distance D2 between the TV 101 and the TV 103 with the corresponding maximum combined radius R2 between the TV 101 and the TV 103. If the TV 101 determines that the distance D2 between the TV 101 and the TV 103 is greater than the maximum combined radius R2 between the TV 101 and the TV 103, that is, D2>R2, the TV 101 determines that the TV 101 and the TV 103 are not in the splicing state, and the TV 103 is deleted from the screen group composed of the TV 101, the TV 102 and the TV 103.
  • the TV 102 can obtain the information of D2>R2 from the TV 101, so as to determine that the TV 103 needs to be deleted in the current screen group.
  • the TV 101 and the TV 102 display third prompt information, where the third prompt information is used to prompt the user to detect that a device in the current screen group has been removed.
  • the screen group formed by the TV 101 , the TV 102 and the TV 103 can jointly display the corresponding display content.
• When the TV 103 is deleted from the screen group (for example, when the TV 101, the TV 102 and the TV 103 are spliced together, the TV 103 is removed), exemplarily, as shown in (b) of FIG. 3G, the TV 101 and the TV 102 may pop up a pop-up box 307, prompting the user that the device 123xxx in the current screen group has been removed.
  • An OK button 308 may be included in the pop-up box 307 .
• In response to the user's operation of clicking the OK button 308, the television 101 determines that the user has acknowledged the information in the pop-up box 307, and the pop-up box 307 can be hidden.
  • the pop-up frame 307 can be automatically hidden after appearing for a few seconds (for example, 2s), so as to avoid affecting the display contents of the TV 101 and the TV 102 .
• Alternatively, the user may be prompted that a device has been removed, and in response to the user's confirmation of removing the device, the device may be removed from the current screen group.
  • the screen group formed by the TV 101, the TV 102 and the TV 103 can display the corresponding display content together.
• When the TV 103 is removed from the group (e.g., the TV 103 is removed when the TV 101, the TV 102, and the TV 103 are spliced together), exemplarily, as shown in (b) of FIG. 3H, the TV 101, the TV 102 and the TV 103 can still maintain their previous display contents, and the TV 101 and the TV 102 can pop up a pop-up box 309, prompting the user to confirm whether to remove the device 123xxx from the current screen group.
  • the pop-up box 309 may include an OK button 310 and a cancel button 311 .
• In response to the user's operation of clicking the OK button 310, it is determined that the TV 103 is removed from the screen group, as shown in (c) in FIG.
• The TV 101 and the TV 102 can jointly display the corresponding display content (the display content may be determined by the processor of the TV 101 (the host device)), and the TV 103 displays the corresponding display content alone (the content displayed by the TV 103 may be determined by the processor of the TV 103).
• If the user clicks the cancel button 311, as shown in (b) of FIG. 3H, the television 101, the television 102, and the television 103 may still maintain their previous display contents.
• If the removed device is the master device in the current screen group, the remaining devices in the screen group can re-elect the master device.
  • the master device refreshes the basic information of screen group splicing, and synchronizes it to all devices in the screen group, so that each device in the screen group knows which device is removed from the screen group.
  • the refreshed basic information of screen group splicing may include the number of devices included in the screen group (for example, 2), the MAC/ID of each device (for example, the IDs of TV 101 and TV 102 ), master-slave information (For example, the master device is the TV 101, and the slave device is the TV 102), orientation information between the devices (for example, the TV 101 and the TV 102 are in a left-right splicing state).
  • the main device can realize the screen output display arrangement, interface focus switching, etc.
  • the TV 101 may divide the display content of the TV 101 into N shares (eg, 2 shares) and distribute them to each device in the screen group (eg, the TV 101 (itself) and the TV 102 ).
  • N is less than or equal to the number of devices included in the screen group.
• When a device in the screen group is removed, it may be considered that the screen group has been reorganized, and the relative orientation relationship of each device in the screen group may be re-determined; for example, steps 410-414 may be re-executed.
• The camera that comes with each device can be used to take photos, and the photos taken by each device can be identified and compared, for example, to determine the orientation in which the overlapping area is located in each photo, so as to identify the relative orientation relationship between the two devices without manual setting by the user, which can improve the user experience.
  • the embodiment of the present application can automatically identify the combination intention between the devices and start the screen assembly program by dynamically monitoring the distance between the devices, without requiring manual setting by the user, which is more intelligent and convenient.
  • the azimuth relationship between the devices can be determined by means of human-computer interaction.
  • different actions (gestures) or objects may be used in the areas directly in front of the cameras of TV 101 and TV 102 to indicate where different devices are located.
  • the TV 101 and the TV 102 may prompt the user to select the arrangement between the devices.
• The arrangement of the devices may include, for example: (1) arranged up and down; (2) arranged left and right. As shown in (b) of FIG. , if the user selects the left-right arrangement, the TV 101 and the TV 102 may prompt the user to make "gesture 1" in the area directly in front of the camera of the first device from the left, and make "gesture 2" in the area directly in front of the camera of the second device from the left.
  • the television 101 can detect whether a human hand appears in the field of view of the camera, and if it is determined that a human hand appears, it can capture an image.
  • the TV 102 can detect whether a human hand appears in the field of view of the camera, and if it is determined that a human hand appears, it can capture an image.
• The television 101 determines whether the gesture in the image captured by itself matches "gesture 1" or "gesture 2", and if it matches "gesture 1", determines that the television 101 is the first device from the left.
• The TV 102 can determine whether the gesture in the image captured by itself matches "gesture 1" or "gesture 2", and if it matches "gesture 2", it determines that the TV 102 is the second device from the left. In this way, it can be determined that the television 101 is to the left of the television 102, and the user's participation and interest in the process of screen splicing can be improved.
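• Once each device has recognized the gesture in front of its own camera, assigning positions is a simple lookup. A sketch, assuming gesture recognition itself is available elsewhere and returns a label per device (the function and label names are illustrative):

```python
def positions_from_gestures(recognized: dict[str, str]) -> dict[str, int]:
    """Map each device to its slot in the user-selected left-right layout.

    `recognized` maps a device ID to the gesture label its camera saw,
    e.g. {"TV101": "gesture 1"}; "gesture 1" marks the first device from
    the left and "gesture 2" the second, per the on-screen prompt.
    """
    slot_of_gesture = {"gesture 1": 1, "gesture 2": 2}
    return {dev: slot_of_gesture[g] for dev, g in recognized.items()}

slots = positions_from_gestures({"TV101": "gesture 1", "TV102": "gesture 2"})
# slots == {"TV101": 1, "TV102": 2}: the TV 101 is to the left of the TV 102.
```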
  • an embodiment of the present application provides a screen combining method, which is applied to a screen splicing system.
  • the screen splicing system includes at least two screens and a host.
  • the at least two screens include a first screen and a second screen.
• the host is integrated in the first screen or the second screen; or, the host is independent of the first screen and the second screen.
  • the method includes:
  • the first screen and the second screen form a first screen group, and the first screen and the second screen are connected in communication.
• The method further includes: the first screen and the second screen send a first short-range signal to each other at a preset frequency; the first screen or the second screen determines the distance between the first screen and the second screen based on the received signal strength indication (RSSI) of the first short-range signal transmitted between the first screen and the second screen; when the distance between the first screen and the second screen is less than or equal to the maximum combined radius corresponding to the first screen and the second screen, the first screen and the second screen form the first screen group; wherein the maximum combined radius corresponding to the first screen and the second screen is determined based on the sizes of the first screen and the second screen and the positions of their antennas.
• For example, the first screen and/or the second screen may display the first prompt information, and the first prompt information is used to prompt the user that a nearby device is detected and ask whether to perform screen splicing.
  • the host sends a first instruction to the first screen.
  • the first indication may be a signal sent by the host to the camera of the television 101 .
  • the host sends a second instruction to the second screen.
  • the TV 101 can send a second instruction to the second screen (eg, the TV 102 ), and the second instruction can refer to the first notification message above , which will not be repeated here.
  • the first screen captures a first image according to the first instruction.
  • the first image refers to an image (photo/picture) captured by the first screen (eg, TV 101 ).
  • the second screen captures a second image according to the second instruction.
  • the second image refers to an image (photo/picture) captured by the second screen (eg, the TV 102 ).
• Determining the orientation information of the first screen and the second screen according to the first image and the second image includes: the first screen sends the first image to the second screen; the second screen sends the second image to the first screen; the first screen and the second screen respectively determine the orientation information of the first screen and the second screen according to the first image and the second image; the first screen and the second screen respectively send, to the host, the orientation information determined by the first screen and the orientation information determined by the second screen; and the host determines the orientation information of the first screen and the second screen according to the orientation information determined by the first screen and the orientation information determined by the second screen.
• Alternatively, determining the orientation information of the first screen and the second screen according to the first image and the second image includes: the first screen sends the first image to the host; the second screen sends the second image to the host; and the host determines the orientation information of the first screen and the second screen according to the first image and the second image.
  • determining the orientation information of the first screen and the second screen according to the first image and the second image includes: performing image matching on the first image and the second image according to an image matching algorithm to determine an overlapping area of the first image and the second image; and determining the orientation of the first screen relative to the second screen according to the position of the overlapping area in the first image and its position in the second image.
  • the image matching algorithm includes at least one of the scale-invariant feature transform (SIFT) algorithm, the speeded-up robust features (SURF) algorithm, and a fast nearest-neighbor search algorithm.
  • if the overlapping area is located in the lower half of the first image and in the upper half of the second image, it is determined that the first screen is located above the second screen; if the overlapping area is located in the lower-left corner of the first image and in the upper-right corner of the second image, it is determined that the first screen is located at the upper right of the second screen; if the overlapping area is located in the left half of the first image and in the right half of the second image, it is determined that the first screen is located to the right of the second screen.
  • if the overlapping area is located in the upper-left corner of the first image and in the lower-right corner of the second image, it is determined that the first screen is located at the lower right of the second screen; if the overlapping area is located in the upper half of the first image and in the lower half of the second image, it is determined that the first screen is located below the second screen; if the overlapping area is located in the upper-right corner of the first image and in the lower-left corner of the second image, it is determined that the first screen is located at the lower left of the second screen; if the overlapping area is located in the right half of the first image and in the left half of the second image, it is determined that the first screen is located to the left of the second screen; if the overlapping area is located in the lower-right area of the first image and in the upper-left area of the second image, it is determined that the first screen is located at the upper left of the second screen.
  • if it is determined that the first image and the second image include a target object, the orientation of the first screen relative to the second screen is determined according to the positions of the target object in the first image and the second image.
  • the method further includes: the host sends layout information to the first screen and the second screen, where the layout information includes at least one combination mode; in response to the user's operation of selecting a combination mode from the at least one combination mode, the host sends operation information to the first screen and the second screen, and the first screen and/or the second screen instruct the user, according to the operation information, to make a first gesture or action at a first position and a second gesture or action at a second position; determining the orientation information of the first screen and the second screen according to the first image and the second image includes: if it is determined that the area of the first image containing the first gesture or action is greater than or equal to a preset threshold, determining that the first screen is located at the first position; if it is determined that the area of the second image containing the second gesture or action is greater than or equal to the preset threshold, determining that the second screen is located at the second position.
  • the first screen or the second screen scores the resource status of the first screen and the second screen, where the resource status includes at least one of central processing unit (CPU) processing capability, read-only memory (ROM) storage capability, and random access memory (RAM) storage capability; if the score of the first screen is higher than the score of the second screen, the host is integrated in the first screen; if the score of the second screen is higher than the score of the first screen, the host is integrated in the second screen.
  • the host determines display information respectively corresponding to the first screen and the second screen according to the orientation information of the first screen and the second screen; the host sends the display information corresponding to the first screen to the first screen, and the first screen displays the corresponding display picture according to that display information; the host sends the display information corresponding to the second screen to the second screen, and after receiving it, the second screen displays the corresponding display picture according to that display information.
  • the screen splicing system further includes a third screen.
  • the method further includes: the first screen and the third screen send a second short-range signal to each other, and the second screen and the third screen send a third short-range signal to each other; the distance between the first screen and the third screen is determined according to the RSSI of the second short-range signal, and the distance between the second screen and the third screen is determined according to the RSSI of the third short-range signal; when the distance between the first screen and the third screen is less than or equal to the maximum combination radius corresponding to the first screen and the third screen, the first screen, the second screen, and the third screen form a second screen group, where the maximum combination radius corresponding to the first screen and the third screen is determined according to the sizes of the first screen and the third screen and the positions of their antennas; or, when the distance between the second screen and the third screen is less than or equal to the maximum combination radius corresponding to the second screen and the third screen, the first screen, the second screen, and the third screen form the second screen group, where the maximum combination radius corresponding to the second screen and the third screen is determined according to the sizes of the second screen and the third screen and the positions of their antennas.
  • the first screen and/or the second screen display second prompt information, where the second prompt information is used to prompt the user that a newly added device has been detected and ask whether to perform screen splicing.
  • if a first condition is met, the method further includes: displaying third prompt information on the first screen and/or the second screen, where the third prompt information is used to prompt the user that the third screen has been removed from the current screen group.
  • the first condition includes: the heartbeat connection between the third screen and the first screen is broken, or the heartbeat connection between the third screen and the second screen is broken; or the host receives an operation of the user deleting the third screen; or the distance between the first screen and the third screen is greater than the maximum combination radius corresponding to the first screen and the third screen; or the distance between the second screen and the third screen is greater than the maximum combination radius corresponding to the second screen and the third screen.
  • the method further includes: the host re-determines display information respectively corresponding to the first screen and the second screen according to the orientation information of the first screen and the second screen.
  • the first screen in the embodiment shown in FIG. 14 may be the TV 101 in the foregoing embodiments, the second screen may be the TV 102, and the third screen may be the TV 103.
  • for parts of the embodiment shown in FIG. 14 that are not described in detail, reference may be made to the foregoing embodiments; details are not repeated here.
  • a camera built into the device (the first screen or the second screen) can be used to take photos, and the photos taken by the devices can be identified and compared; for example, the position of the overlapping area in each photo can be determined, thereby identifying the relative orientation relationship of the two devices without manual setting by the user, which can improve user experience.
  • the embodiment of the present application can automatically identify the combination intention between devices and start the screen assembly procedure by dynamically monitoring the distance between the devices, without manual setting by the user, which is more intelligent and convenient.
  • the chip system includes at least one processor 1501 and at least one interface circuit 1502 .
  • the processor 1501 and the interface circuit 1502 may be interconnected by wires.
  • the interface circuit 1502 may be used to receive signals from other apparatuses (for example, the memory of the first screen, the memory of the second screen, or the memory of the third screen).
  • the interface circuit 1502 may be used to send signals to other apparatuses (for example, the processor 1501).
  • the interface circuit 1502 may read instructions stored in the memory of the device and send the instructions to the processor 1501.
  • when the instructions are executed by the processor 1501, the first screen or the second screen (the screen 110 shown in FIG. 2A) can be caused to perform the steps in the foregoing embodiments.
  • the chip system may also include other discrete devices, which are not specifically limited in this embodiment of the present application.
  • other embodiments of this application provide a first screen (the screen 110 shown in FIG. 2A), where the first screen may include: a communication module, a memory, and one or more processors.
  • the communication module and memory are coupled to the processor.
  • the memory is used to store computer program code comprising computer instructions.
  • Embodiments of the present application further provide a computer-readable storage medium, where the computer-readable storage medium includes computer instructions; when the computer instructions run on a first screen or a second screen (the screen 110 shown in FIG. 2A), the screen 110 is caused to perform the functions or steps performed by the television 101 or the television 102 in the foregoing method embodiments.
  • the embodiments of the present application further provide a computer program product; when the computer program product runs on a computer, the computer is caused to perform the functions or steps performed by the first screen (for example, the television 101) or the second screen (for example, the television 102) in the foregoing method embodiments.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are only illustrative.
  • the division into the modules or units is merely a division by logical function, and in actual implementation there may be other division manners.
  • multiple units or components may be combined or integrated into another device, or some features may be omitted or not implemented.
  • the mutual coupling, direct coupling, or communication connection shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between devices or units may be in electrical, mechanical, or other forms.
  • the units described as separate components may or may not be physically separate, and the components shown as units may be one physical unit or multiple physical units; that is, they may be located in one place or distributed to multiple different places. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium.
  • the technical solutions of the embodiments of this application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions to cause a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of this application.
  • the aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Abstract

Embodiments of this application provide a screen combination method and apparatus, relate to the terminal field, and can complete screen splicing automatically, improving user experience. The embodiments of this application apply to a screen splicing system that includes at least two screens and a host, the at least two screens including a first screen and a second screen. The method includes: the first screen and the second screen form a first screen group, and the first screen and the second screen are communicatively connected; the host sends a first instruction to the first screen and a second instruction to the second screen; the first screen captures a first image according to the first instruction; the second screen captures a second image according to the second instruction; and orientation information of the first screen and the second screen is determined according to the first image and the second image.

Description

Screen combination method and apparatus
This application claims priority to Chinese Patent Application No. 202110171975.0, filed with the China National Intellectual Property Administration on February 8, 2021 and entitled "Screen combination method and apparatus", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the terminal field, and in particular, to a screen combination method and apparatus.
Background
With the widespread use of network technologies, scenarios ranging from command-and-monitoring centers and network management centers to temporary meetings and technical lectures all call for larger display pictures. To achieve this effect, the screens of multiple screen-equipped devices can be spliced together to provide a larger combined picture.
When multiple screens are combined, the position and orientation information of each screen needs to be determined. At present, the orientation and position information of the screens can be entered into the control host manually by the user. For example, in a scenario in which multiple monitors are connected to one computer host, the positions of the monitors can be marked manually to complete the setting of the screen position and orientation relationship of the monitor combination. As shown in FIG. 1A, in the settings interface, the monitor markers can be dragged manually (with a mouse or a touchscreen) to complete the orientation combination of the monitors.
In the above method, the position and orientation information in the screen combination relationship must be entered manually by the user; the combination process is complicated and the steps are tedious, resulting in a poor user experience.
Summary
Embodiments of this application provide a screen combination method and apparatus that can complete screen splicing automatically and improve user experience.
According to a first aspect, an embodiment of this application provides a screen combination method, applied to a screen splicing system, where the screen splicing system includes at least two screens and a host, and the at least two screens include a first screen and a second screen. The method includes: the first screen and the second screen form a first screen group, and the first screen and the second screen are communicatively connected; the host sends a first instruction to the first screen and a second instruction to the second screen; the first screen captures a first image according to the first instruction; the second screen captures a second image according to the second instruction; and orientation information of the first screen and the second screen is determined according to the first image and the second image.
Based on the method provided in the embodiments of this application, in the screen combination and splicing process, the relative orientation relationship of two devices can be identified from the images (photos) captured by the devices (the first screen or the second screen), without manual setting by the user, which can improve user experience.
In a possible implementation, the host is integrated in the first screen or the second screen; or the host is independent of the first screen and the second screen. For example, the first screen or the second screen may be a television; when the host is independent of the first screen and the second screen, the host may be a device such as a set-top box or a router. When the host is integrated in the first screen or the second screen, the host may be regarded as a processing module of the first screen or the second screen.
In a possible implementation, determining the orientation information of the first screen and the second screen according to the first image and the second image includes: the first screen sends the first image to the second screen; the second screen sends the second image to the first screen; the first screen and the second screen each determine orientation information of the first screen and the second screen according to the first image and the second image; the first screen sends the orientation information it determined to the host, and the second screen sends the orientation information it determined to the host; and the host determines the orientation information of the first screen and the second screen according to the orientation information determined by the first screen and the orientation information determined by the second screen. In some cases, the orientation information between some devices is redundant; this information may be left unused, or the redundant information may be used as a reference to verify the identification result.
In a possible implementation, determining the orientation information of the first screen and the second screen according to the first image and the second image includes: the first screen sends the first image to the host; the second screen sends the second image to the host; and the host determines the orientation information of the first screen and the second screen according to the first image and the second image. That is, the host can identify the orientation relationships of the devices in the screen group, and the other devices (for example, the second screen) do not need to perform orientation identification, which can save power on those devices.
In a possible implementation, before the first screen and the second screen form the first screen group, the method further includes: the first screen and the second screen send a first short-range signal to each other at a preset frequency, and the first screen or the second screen determines the distance between the first screen and the second screen according to the received signal strength indication (RSSI) of the first short-range signal transmitted between them; when the distance between the first screen and the second screen is less than or equal to the maximum combination radius corresponding to the first screen and the second screen, the first screen and the second screen form the first screen group, where the maximum combination radius corresponding to the first screen and the second screen is determined according to the sizes of the two screens and the positions of their antennas. In this way, whether to perform screen combination can be determined between the first screen and the second screen according to the first short-range signal; that is, screen combination can be performed automatically without complicated user operations, which can improve user experience.
In a possible implementation, before the first screen and the second screen form the first screen group, the method further includes: the first screen and/or the second screen display first prompt information, where the first prompt information is used to prompt the user that a nearby device has been detected and ask whether to perform screen splicing; the first screen and/or the second screen obtain an indication from the user, where the indication is used to confirm that screen splicing is to be performed. In this way, whether to form a screen group can be determined according to the user's operation, avoiding errors from automatically triggered screen combination.
In a possible implementation, determining the orientation information of the first screen and the second screen according to the first image and the second image includes: performing image matching on the first image and the second image according to an image matching algorithm to determine an overlapping area of the first image and the second image; and determining the orientation of the first screen relative to the second screen according to the position of the overlapping area in the first image and its position in the second image. In this way, by determining the position of the overlapping area in the different photos, the relative orientation relationship of the two devices (the first screen and the second screen) can be identified without manual setting by the user, which can improve user experience.
In a possible implementation, determining the orientation of the first screen relative to the second screen according to the position of the overlapping area in the first image includes: if the overlapping area is located in the lower half of the first image and in the upper half of the second image, determining that the first screen is located above the second screen; if the overlapping area is located in the lower-left corner of the first image and in the upper-right corner of the second image, determining that the first screen is located at the upper right of the second screen; if the overlapping area is located in the left half of the first image and in the right half of the second image, determining that the first screen is located to the right of the second screen; if the overlapping area is located in the upper-left corner of the first image and in the lower-right corner of the second image, determining that the first screen is located at the lower right of the second screen; if the overlapping area is located in the upper half of the first image and in the lower half of the second image, determining that the first screen is located below the second screen; if the overlapping area is located in the upper-right corner of the first image and in the lower-left corner of the second image, determining that the first screen is located at the lower left of the second screen; if the overlapping area is located in the right half of the first image and in the left half of the second image, determining that the first screen is located to the left of the second screen; if the overlapping area is located in the lower-right area of the first image and in the upper-left area of the second image, determining that the first screen is located at the upper left of the second screen. In this way, by determining the position of the overlapping area in the different photos, the relative orientation relationship of the two devices (the first screen and the second screen) can be identified without manual setting by the user, which can improve user experience.
In a possible implementation, the image matching algorithm includes at least one of the scale-invariant feature transform (SIFT) algorithm, the speeded-up robust features (SURF) algorithm, and a fast nearest-neighbor search algorithm. Of course, the image matching algorithm may be another algorithm; this application does not impose a limitation.
In a possible implementation, determining the orientation information of the first screen and the second screen according to the first image and the second image includes: if it is determined that the first image and the second image include a target object, determining the orientation of the first screen relative to the second screen according to the positions of the target object in the first image and the second image, where the target object includes any one of a human face, a human body action, or an item of furniture. In this way, by determining the positions of the target object in the different photos, the relative orientation relationship of the two devices (the first screen and the second screen) can be identified without manual setting by the user, which can improve user experience.
In a possible implementation, determining the orientation of the first screen relative to the second screen according to the positions of the target object in the first image and the second image includes: if the target object is located in the lower half of the first image and in the upper half of the second image, determining that the first screen is located above the second screen; if the target object is located in the lower-left corner of the first image and in the upper-right corner of the second image, determining that the first screen is located at the upper right of the second screen; if the target object is located in the left half of the first image and in the right half of the second image, determining that the first screen is located to the right of the second screen; if the target object is located in the lower-left corner of the first image and in the lower-right corner of the second image, determining that the first screen is located to the right of the second screen; if the target object is located in the upper-left corner of the first image and in the upper-right corner of the second image, determining that the first screen is located to the right of the second screen; if the target object is located in the upper-left corner of the first image and in the lower-right corner of the second image, determining that the first screen is located at the lower right of the second screen; if the target object is located in the upper half of the first image and in the lower half of the second image, determining that the first screen is located below the second screen; if the target object is located in the upper-right corner of the first image and in the lower-left corner of the second image, determining that the first screen is located at the lower left of the second screen; if the target object is located in the right half of the first image and in the left half of the second image, determining that the first screen is located to the left of the second screen; if the target object is located in the lower-right corner of the first image and in the lower-left corner of the second image, determining that the first screen is located to the left of the second screen; if the target object is located in the upper-right corner of the first image and in the upper-left corner of the second image, determining that the first screen is located to the left of the second screen; if the target object is located in the lower-right area of the first image and in the upper-left area of the second image, determining that the first screen is located at the upper left of the second screen. In this way, by determining the positions of the target object in the different photos, the relative orientation relationship of the two devices (the first screen and the second screen) can be identified without manual setting by the user, which can improve user experience.
In a possible implementation, before the first screen captures the first image according to the first instruction and the second screen captures the second image according to the second instruction, the method further includes: the host sends layout information to the first screen and the second screen, where the layout information includes at least one combination mode; in response to the user's operation of selecting a combination mode from the at least one combination mode, the host sends operation information to the first screen and the second screen, and the first screen and/or the second screen instruct the user, according to the operation information, to make a first gesture or action at a first position and a second gesture or action at a second position. Determining the orientation information of the first screen and the second screen according to the first image and the second image includes: if it is determined that the area of the first image containing the first gesture or action is greater than or equal to a preset threshold, determining that the first screen is located at the first position; if it is determined that the area of the second image containing the second gesture or action is greater than or equal to the preset threshold, determining that the second screen is located at the second position. This solution, which determines the positions of the devices based on the user's gestures, can increase the user's sense of participation and enjoyment during screen splicing.
In a possible implementation, the host is integrated in the first screen or the second screen, and the first screen and the second screen form the first screen group; the method further includes: the first screen or the second screen scores the resource status of the first screen and the second screen, where the resource status includes at least one of central processing unit (CPU) processing capability, read-only memory (ROM) storage capability, and random access memory (RAM) storage capability; if the score of the first screen is higher than that of the second screen, the host is integrated in the first screen; if the score of the second screen is higher than that of the first screen, the host is integrated in the second screen. When the host is integrated in the first screen, the first screen can be regarded as the master device; when the host is integrated in the second screen, the second screen can be regarded as the master device.
In a possible implementation, the method further includes: the host determines, according to the orientation information of the first screen and the second screen, display information respectively corresponding to the first screen and the second screen; the host sends the display information corresponding to the first screen to the first screen; the first screen displays the corresponding display picture according to the display information corresponding to the first screen; the host sends the display information corresponding to the second screen to the second screen; and after receiving the display information corresponding to the second screen, the second screen displays the corresponding display picture according to it. In this way, the first screen and the second screen can display the corresponding display pictures according to the display information determined by the host, achieving the display effect of a larger picture.
In a possible implementation, the screen splicing system further includes a third screen, and the method further includes: the first screen and the third screen send a second short-range signal to each other; the second screen and the third screen send a third short-range signal to each other; the distance between the first screen and the third screen is determined according to the RSSI of the second short-range signal; the distance between the second screen and the third screen is determined according to the RSSI of the third short-range signal; when the distance between the first screen and the third screen is less than or equal to the maximum combination radius corresponding to the first screen and the third screen, the first screen, the second screen, and the third screen form a second screen group, where the maximum combination radius corresponding to the first screen and the third screen is determined according to the sizes of those two screens and the positions of their antennas; or, when the distance between the second screen and the third screen is less than or equal to the maximum combination radius corresponding to the second screen and the third screen, the first screen, the second screen, and the third screen form the second screen group, where the maximum combination radius corresponding to the second screen and the third screen is determined according to the sizes of those two screens and the positions of their antennas. In this way, whether to perform screen combination can be determined between the first screen and the third screen according to the second short-range signal, or between the second screen and the third screen according to the third short-range signal, without complicated user operations, which can improve user experience.
In a possible implementation, the method further includes: the first screen and/or the second screen display second prompt information, where the second prompt information is used to prompt the user that a newly added device has been detected and ask whether to perform screen splicing; the first screen and/or the second screen obtain an indication from the user, where the indication is used to confirm that screen splicing is to be performed. In this way, whether to form a screen group can be determined according to the user's operation, avoiding errors from automatically triggered screen combination.
In a possible implementation, the method further includes: the first screen and/or the second screen detect whether a first condition is met; if the first condition is met, the first screen and/or the second screen remove the third screen from the second screen group. That is, the first screen or the second screen can automatically detect whether a screen (for example, the third screen) has been removed, and can then prompt the user so that the user is kept informed about the state of the screen group.
In a possible implementation, the first condition includes: the heartbeat connection between the third screen and the first screen is broken, or the heartbeat connection between the third screen and the second screen is broken; or the host receives the user's operation of deleting the third screen; or the distance between the first screen and the third screen is greater than the maximum combination radius corresponding to the first screen and the third screen; or the distance between the second screen and the third screen is greater than the maximum combination radius corresponding to the second screen and the third screen.
In a possible implementation, the method further includes: the host re-determines, according to the orientation information of the first screen and the second screen, the display information respectively corresponding to the first screen and the second screen. That is, the host can adaptively adjust the display information of the screen group as the devices in the screen group change.
In a possible implementation, the method further includes: the host sends a third instruction to the third screen, a fourth instruction to the first screen, and a fifth instruction to the second screen; the third screen captures a third image according to the third instruction; the first screen captures a fourth image according to the fourth instruction; the second screen captures a fifth image according to the fifth instruction; the third screen sends the third image to the host; the second screen sends the fifth image to the host; and after receiving the fourth image and the fifth image, the host determines orientation information of the first screen, the second screen, and the third screen according to the third image, the fourth image, and the fifth image. That is, when a device is newly added to the screen group, the screen group can be considered to have been reorganized, and the devices in the screen group can be made to capture images again so that the relative orientation relationships of the devices in the screen group can be re-determined.
In a possible implementation, after the third screen is removed from the second screen group, the method further includes: the host sends a sixth instruction to the first screen and a seventh instruction to the second screen; the first screen captures a sixth image according to the sixth instruction; the second screen captures a seventh image according to the seventh instruction; the first screen sends the sixth image to the host; the second screen sends the seventh image to the host; and the host determines the orientation information of the first screen and the second screen according to the sixth image and the seventh image. That is, when a device is removed from the screen group, the screen group can be considered to have been reorganized, and the devices in the screen group can be made to capture images again so that their relative orientation relationships can be re-determined.
According to a second aspect, an embodiment of this application provides a screen combination method, applied to a screen splicing system, where the screen splicing system includes at least two screens and a host, the at least two screens include a first screen and a second screen, the first screen and the second screen form a first screen group, and the first screen and the second screen are communicatively connected. The method includes: the host sends a first instruction to the first screen, where the first instruction is used to instruct the first screen to capture a first image; the host sends a second instruction to the second screen, where the second instruction is used to instruct the second screen to capture a second image; and the host determines orientation information of the first screen and the second screen according to the first image and the second image.
Based on the method provided in the embodiments of this application, in the screen combination and splicing process, the relative orientation relationship of two devices can be identified from the images (photos) captured by the devices (the first screen or the second screen), without manual setting by the user, which can improve user experience. Moreover, by dynamically monitoring the distance between devices, the embodiments of this application can automatically identify the combination intention between devices and start the screen assembly procedure without manual setting by the user, which is more intelligent and convenient.
In a possible implementation, the host is integrated in the first screen or the second screen; or the host is independent of the first screen and the second screen.
In a possible implementation, the host determining the orientation information of the first screen and the second screen according to the first image and the second image includes: the host receives the first image from the first screen; the host receives the second image from the second screen; and the host determines the orientation information of the first screen and the second screen according to the first image and the second image.
In a possible implementation, determining the orientation information of the first screen and the second screen according to the first image and the second image includes: the host performs image matching on the first image and the second image according to an image matching algorithm to determine an overlapping area of the first image and the second image; and the orientation of the first screen relative to the second screen is determined according to the position of the overlapping area in the first image and its position in the second image.
In a possible implementation, determining the orientation of the first screen relative to the second screen according to the position of the overlapping area in the first image includes: if the overlapping area is located in the lower half of the first image and in the upper half of the second image, determining that the first screen is located above the second screen; if the overlapping area is located in the lower-left corner of the first image and in the upper-right corner of the second image, determining that the first screen is located at the upper right of the second screen; if the overlapping area is located in the left half of the first image and in the right half of the second image, determining that the first screen is located to the right of the second screen; if the overlapping area is located in the upper-left corner of the first image and in the lower-right corner of the second image, determining that the first screen is located at the lower right of the second screen; if the overlapping area is located in the upper half of the first image and in the lower half of the second image, determining that the first screen is located below the second screen; if the overlapping area is located in the upper-right corner of the first image and in the lower-left corner of the second image, determining that the first screen is located at the lower left of the second screen; if the overlapping area is located in the right half of the first image and in the left half of the second image, determining that the first screen is located to the left of the second screen; if the overlapping area is located in the lower-right area of the first image and in the upper-left area of the second image, determining that the first screen is located at the upper left of the second screen.
In a possible implementation, the image matching algorithm includes at least one of the scale-invariant feature transform (SIFT) algorithm, the speeded-up robust features (SURF) algorithm, and a fast nearest-neighbor search algorithm.
In a possible implementation, determining the orientation information of the first screen and the second screen according to the first image and the second image includes: if it is determined that the first image and the second image include a target object, determining the orientation of the first screen relative to the second screen according to the positions of the target object in the first image and the second image.
In a possible implementation, before the first screen captures the first image according to the first instruction and the second screen captures the second image according to the second instruction, the method further includes: the host sends layout information to the first screen and the second screen, where the layout information includes at least one combination mode; in response to the user's operation of selecting a combination mode from the at least one combination mode, the host sends operation information to the first screen and the second screen, where the operation information is used to instruct the user to make a first gesture or action at a first position and a second gesture or action at a second position. Determining the orientation information of the first screen and the second screen according to the first image and the second image includes: if it is determined that the area of the first image containing the first gesture or action is greater than or equal to a preset threshold, determining that the first screen is located at the first position; if it is determined that the area of the second image containing the second gesture or action is greater than or equal to the preset threshold, determining that the second screen is located at the second position.
In a possible implementation, the method further includes: the host determines, according to the orientation information of the first screen and the second screen, display information respectively corresponding to the first screen and the second screen; the host sends the display information corresponding to the first screen to the first screen; and the host sends the display information corresponding to the second screen to the second screen.
For the beneficial effects of the implementations of the second aspect, refer to the beneficial effects of the corresponding implementations of the first aspect; details are not repeated here.
According to a third aspect, an embodiment of this application provides an electronic device. The electronic device may be the first screen or the second screen and includes: a wireless communication module, a memory, and one or more processors, where the wireless communication module and the memory are coupled to the processors; the memory is used to store computer program code, and the computer program code includes computer instructions; when the computer instructions are executed by the processors, the electronic device is caused to perform the method according to the first aspect or the second aspect and any one of their possible implementations.
According to a fourth aspect, an embodiment of this application provides a chip system. The chip system includes one or more interface circuits and one or more processors, interconnected by wires. The chip system can be applied to an electronic device that includes a communication module and a memory (for example, the first screen or the second screen). The interface circuit is used to receive signals from the memory and send the received signals to the processor, where the signals include the computer instructions stored in the memory. When the processor executes the computer instructions, the electronic device can perform the method according to any aspect and any one of its possible implementations.
According to a fifth aspect, an embodiment of this application provides a computer-readable storage medium that includes computer instructions. When the computer instructions run on an electronic device (for example, the first screen or the second screen), the electronic device is caused to perform the method according to the first aspect and any one of its possible implementations.
According to a sixth aspect, an embodiment of this application provides a computer program product. When the computer program product runs on a computer, the computer is caused to perform the method according to the first aspect or the second aspect and any one of their possible implementations.
According to a seventh aspect, an embodiment of this application provides a software upgrade system, including a first screen, a second screen, and a host. The first screen, the second screen, and the host can perform the method according to the first aspect and any one of its possible implementations.
Brief Description of Drawings
FIG. 1A is a schematic diagram of a display interface of a screen combination in the prior art;
FIG. 1B is a schematic diagram of a system architecture according to an embodiment of this application;
FIG. 1C is a schematic diagram of another system architecture according to an embodiment of this application;
FIG. 2A is a schematic diagram of a hardware structure of an electronic device according to an embodiment of this application;
FIG. 2B is a schematic diagram of a software architecture of an electronic device according to an embodiment of this application;
FIG. 3A is a schematic connection diagram of multiple devices according to an embodiment of this application;
FIG. 3B is another schematic connection diagram of multiple devices according to an embodiment of this application;
FIG. 3C is a schematic display diagram according to an embodiment of this application;
FIG. 3D is another schematic display diagram according to an embodiment of this application;
FIG. 3E is another schematic display diagram according to an embodiment of this application;
FIG. 3F is another schematic display diagram according to an embodiment of this application;
FIG. 3G is another schematic display diagram according to an embodiment of this application;
FIG. 3H is another schematic display diagram according to an embodiment of this application;
FIG. 4 is a schematic diagram of signal interaction according to an embodiment of this application;
FIG. 5 is a schematic diagram of determining the antenna distance between two devices according to an embodiment of this application;
FIG. 6 is a schematic diagram of the overlapping area of images captured by the television 101 and the television 102 according to an embodiment of this application;
FIG. 7 is another schematic diagram of the overlapping area of images captured by the television 101 and the television 102 according to an embodiment of this application;
FIG. 8 is a schematic diagram of the positions of a human face in images captured by the television 101 and the television 102 according to an embodiment of this application;
FIG. 9 is a schematic diagram of device ordering according to an embodiment of this application;
FIG. 10 is another schematic diagram of signal interaction according to an embodiment of this application;
FIG. 11A is another schematic diagram of device ordering according to an embodiment of this application;
FIG. 11B is another schematic diagram of device ordering according to an embodiment of this application;
FIG. 12 is another schematic diagram of signal interaction according to an embodiment of this application;
FIG. 13 is another schematic display diagram according to an embodiment of this application;
FIG. 14 is another schematic diagram of signal interaction according to an embodiment of this application;
FIG. 15 is a schematic structural diagram of the chip system according to an embodiment of this application.
Detailed Description
This application provides a screen combination method. For screen terminal devices with built-in cameras, without any other specific sensors or manual input of orientation, a screen combination scenario can be detected automatically, and the relative orientation of the screens can be computed automatically based on the position of the overlapping area in the pictures captured by the cameras, thereby completing the screen combination process and giving the user a simple, intelligent screen splicing experience.
FIG. 1B is a schematic architectural diagram of a screen splicing system according to an embodiment of this application. As shown in FIG. 1B, the system may include one or more electronic devices, for example a router 100, a television 101, a television 102, a television 103, and a television 104. The televisions 101, 102, 103, and 104 can be connected to the same local area network through the router 100. Of course, the screen splicing system may include more electronic devices; this application does not impose a limitation.
FIG. 1C is a schematic architectural diagram of another screen splicing system according to an embodiment of this application. The system may include one or more electronic devices, for example a television 101, a television 102, a television 103, and a television 104. The televisions 101, 102, 103, and 104 can be connected to one another in pairs through short-range communication technologies (for example, Wi-Fi direct connection, Bluetooth, and the like). Of course, the screen splicing system may include more electronic devices; this application does not impose a limitation.
As shown in FIG. 2A, the television 101, 102, 103, or 104 may be a screen 110. The screen 110 may include: a processor 111, a memory 112, a wireless communication processing module 113, a power switch 114, a wired LAN communication processing module 115, an HDMI communication processing module 116, a universal serial bus (USB) communication processing module 117, a display 118, an audio module 119, a speaker 119A, a microphone 119B, and the like. Among them:
The processor 111 may be configured to read and execute computer-readable instructions. In specific implementation, the processor 111 may mainly include a controller, an arithmetic unit, and registers. The controller is mainly responsible for instruction decoding and issues control signals for the operations corresponding to the instructions. The arithmetic unit is mainly responsible for temporarily storing register operands, intermediate operation results, and the like during instruction execution. In specific implementation, the hardware architecture of the processor 111 may be an application-specific integrated circuit (ASIC) architecture, a MIPS architecture, an ARM architecture, an NP architecture, or the like.
In some embodiments, the processor 111 may be configured to parse signals received by the wireless communication processing module 113 and/or the wired LAN communication processing module 115. The processor 111 may perform corresponding processing operations according to the parsing results, for example respond to a data request, or control the display of the display 118 and/or the output of the audio module 119 according to a control request, and so on.
In some embodiments, the processor 111 may also be configured to generate signals to be sent out by the wireless communication processing module 113 and/or the wired LAN communication processing module 115, such as Bluetooth broadcast signals and beacon signals.
The memory 112 is coupled to the processor 111 and is configured to store various software programs and/or multiple sets of instructions. In specific implementation, the memory 112 may include high-speed random access memory and may also include non-volatile memory, for example one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The memory 112 may store an operating system, for example an embedded operating system such as uCOS, VxWorks, or RTLinux. The memory 112 may also store a communication program that can be used to communicate with other devices.
The wireless communication processing module 113 may include a Bluetooth (BT) communication processing module 113A and a WLAN communication processing module 113B.
In some embodiments, one or more of the Bluetooth (BT) communication processing module 113A and the WLAN communication processing module 113B may listen for signals transmitted by other devices, such as probe requests and scan signals, and may send response signals, such as probe responses and scan responses, so that other devices can discover the screen 110, establish a wireless communication connection with it, and communicate with it through one or more of the Bluetooth and WLAN wireless communication technologies. The WLAN communication processing module 113B may include one or more WLAN communication solutions among Wi-Fi direct, Wi-Fi LAN, or Wi-Fi softAP.
In other embodiments, one or more of the Bluetooth (BT) communication processing module 113A and the WLAN communication processing module 113B may also transmit signals, such as broadcast Bluetooth signals and beacon signals, so that other devices can discover the screen 110, establish a wireless communication connection with it, and communicate through one or more of the Bluetooth and WLAN wireless communication technologies.
In some embodiments, the screen 110 can connect to the Internet through WLAN wireless communication technology, thereby establishing communication connections with servers on the Internet (for example, a channel identification server, an on-demand resource server, and the like).
The wireless communication processing module 113 may also include an infrared communication processing module 113C. The infrared communication processing module 113C can communicate with other devices (such as a remote control) through infrared remote control technology.
The power switch 114 may be configured to control the power supply to the display 118.
The wired LAN communication processing module 115 may be configured to communicate with other devices in the same LAN through a wired LAN, and may also be configured to connect to a WAN through the wired LAN and communicate with devices in the WAN.
The HDMI communication processing module 116 may be configured to communicate with devices such as a set-top box through an HDMI port. For example, the HDMI communication processing module 116 can receive, through the HDMI port, media content sent by the set-top box, and so on.
The USB communication processing module 117 may be configured to communicate with other devices through a USB interface.
The display 118 may be configured to display images, videos, and the like. The display 118 may be a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, an active-matrix organic light-emitting diode (AMOLED) display, a flexible light-emitting diode (FLED) display, a quantum dot light-emitting diode (QLED) display, or the like.
The audio module 119 may be configured to convert a digital audio signal into an analog audio signal for output, and may also be configured to convert an analog audio input into a digital audio signal. The audio module 119 may also be configured to encode and decode audio signals. In some embodiments, the audio module 119 may be disposed in the processor 111, or some functional modules of the audio module 119 may be disposed in the processor 111. The audio module 119 may transfer audio signals to the wireless communication module 113 through a bus interface (for example, a UART interface) to implement the function of playing audio signals through a Bluetooth speaker.
The speaker 119A may be configured to convert an audio signal sent by the audio module 119 into a sound signal.
In some embodiments, the screen 110 may further include a microphone 119B, also called a "mic" or "mike", configured to convert sound signals into electrical signals. When sending a voice control instruction, the user can speak, and the sound signal is input into the microphone 119B.
The camera 120 may be configured to capture still images or video.
It can be understood that the screen 110 may have more or fewer components than shown in FIG. 2A, may combine two or more components, or may have a different component configuration. The components shown in FIG. 2A may be implemented in hardware including one or more signal processing or application-specific integrated circuits, in software, or in a combination of hardware and software.
As shown in FIG. 2B, the application package may include applications such as Camera, Gallery, Calendar, Phone, Maps, Navigation, WLAN, Bluetooth, Music, Video, and Messages.
In this embodiment of this application, the application layer may further include a screen splicing management service, which is used to manage screen splicing (screen combination) among multiple devices. The screen splicing management service may be integrated in a system app or a third-party app, for example a Smart Life app, a smart interconnection app, a Settings application, and the like; this application does not impose a limitation.
The application framework layer provides an application programming interface (API) and a programming framework for the applications at the application layer. The application framework layer includes some predefined functions.
As shown in FIG. 2B, the application framework layer may include an activity manager, a window manager, a content provider, a view system, a resource manager, a notification manager, and the like; this embodiment of this application does not impose any limitation on this.
Activity Manager: used to manage the life cycle of each application. Applications usually run in the operating system in the form of activities. For each activity, there is a corresponding application record (ActivityRecord) in the activity manager, and this ActivityRecord records the state of that application's activity. The activity manager can use this ActivityRecord as an identifier to schedule the application's activity process.
Window Manager (WindowManagerService): used to manage the graphical user interface (GUI) resources used on the screen, and specifically can be used for: obtaining the display size, creating and destroying windows, displaying and hiding windows, window layout, focus management, input method and wallpaper management, and the like.
The system libraries and kernel layer below the application framework layer may be called the underlying system. The underlying system includes an underlying display system for providing display services; for example, the underlying display system includes the display driver in the kernel layer and the surface manager in the system libraries.
The content provider is used to store and retrieve data and make the data accessible to applications. The data may include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, and so on. The view system includes visual controls, such as controls for displaying text and controls for displaying pictures. The view system can be used to build applications. A display interface may be composed of one or more views. For example, a display interface including an SMS notification icon may include a view for displaying text and a view for displaying pictures. The resource manager provides applications with various resources, such as localized strings, icons, pictures, layout files, and video files. The notification manager enables applications to display notification information in the status bar; it can be used to convey notification-type messages and can disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify of download completion, message reminders, and so on. The notification manager may also present notifications in the status bar at the top of the system in the form of charts or scrolling text, for example notifications of applications running in the background, or notifications that appear on the screen in the form of dialog windows. For example, text information is prompted in the status bar, a prompt tone is emitted, the terminal vibrates, or an indicator light blinks.
As shown in FIG. 2B, the Android Runtime includes core libraries and a virtual machine. The Android runtime is responsible for scheduling and management of the Android system. The core libraries consist of two parts: one part is the function functions that the java language needs to call, and the other part is the core libraries of Android. The application layer and the application framework layer run in the virtual machine. The virtual machine executes the java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
As shown in FIG. 2B, the system libraries may include multiple functional modules, for example: a surface manager, media libraries, OpenGL ES, SGL, and the like.
The surface manager is used to manage the display subsystem and provides fusion of 2D and 3D layers for multiple applications.
The media libraries support playback and recording of many common audio and video formats, as well as still image files. The media libraries can support multiple audio and video encoding formats, for example MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
OpenGL ES is used to implement three-dimensional graphics drawing, image rendering, compositing, layer processing, and the like.
SGL is the drawing engine for 2D drawing.
As shown in FIG. 2B, the kernel layer is the layer between hardware and software. The kernel layer contains at least a display driver, a camera driver, an audio driver, and a sensor driver.
As shown in FIG. 3A, which provides a schematic connection diagram of the televisions 101, 102, 103, and 104, each of the televisions 101, 102, 103, and 104 may include hardware modules such as a processor, a display, a camera, and a communication unit. The televisions 101, 102, 103, and 104 can be connected to one another through the communication units for communication.
As shown in FIG. 3B, which provides another schematic connection diagram of the televisions 101, 102, 103, and 104, each television may include an application layer, an application framework layer, the Android runtime and system libraries, and a kernel layer. In this embodiment of this application, the televisions 101, 102, 103, and 104 can perform screen splicing automatically through the screen splicing management service.
The technical solutions in the embodiments of this application are described below with reference to the accompanying drawings. In the description of this application, unless otherwise specified, "at least one" means one or more, and "multiple" means two or more than two. In addition, to describe the technical solutions of the embodiments of this application clearly, words such as "first" and "second" are used to distinguish between identical or similar items whose functions and roles are basically the same. Those skilled in the art can understand that the words "first", "second", and the like do not limit the quantity or execution order, and do not mean that the items are necessarily different.
For ease of understanding, the screen combination method provided in the embodiments of this application is described in detail below with reference to the accompanying drawings.
As shown in (a) of FIG. 3C, the home screen 300 of the television 101 is shown. When the television 101 determines that the distance D1 between the television 101 and the television 102 is less than or equal to the maximum combination radius R1 corresponding to the two televisions (for the concept of the maximum combination radius, see the description of step 406a below), as shown in (b) of FIG. 3C, the television 101 can display a pop-up box 301 prompting the user that a nearby device has been detected. The pop-up box 301 may include a Yes button 302 and a No button 303, so that the user can choose whether to perform screen combination. As shown in (c) of FIG. 3C, in response to the user selecting the Yes button 302 (for example, with the remote control or the touchscreen), the user can be shown that screen combination is in progress, as shown in (d) of FIG. 3C.
Optionally, the television 101 may also prompt the user with the identifier or ID of the nearby device. For example, as shown in FIG. 3D, the television 101 can display a pop-up box 304 prompting the user that device 222xxx (222xxx is the ID of the television 102) has been detected nearby. The pop-up box 304 may include a Yes button 302 and a No button 303 so that the user can choose whether to combine the current device with the living-room television for screen combination.
Assuming that the display contents of the televisions 101 and 102 before splicing are as shown in (a) and (b) of FIG. 3E respectively, after screen splicing, as shown in (c) of FIG. 3E, the televisions 101 and 102 can jointly display the desktop of the television 101 (for example, the television 101 is the master device; for the process of determining the master device, see the description of step 409 below); or, as shown in (d) of FIG. 3E, the televisions 101 and 102 can each display the desktop of the television 101.
In some embodiments, if the television 103 is newly added to the screen group (for example, the first screen group) consisting of the televisions 101 and 102 (that is, when the televisions 101 and 102 are spliced together, the newly added television 103 needs to be further spliced with the televisions 101 and 102), for example, as shown in (a) of FIG. 3F, the television 103 can gradually approach the screen group consisting of the televisions 101 and 102. Then, as shown in (b) of FIG. 3F, a pop-up box 305 can be displayed on the televisions 101 and 102, prompting the user that device 123xxx has been detected (123xxx is the ID of the television 103); the pop-up box 305 may include a Yes button 302 and a No button 303 so that the user can choose whether to add the device to the screen group. Optionally, the television 103 may also prompt the user with the identifier or ID of a nearby device. For example, the television 103 can display a pop-up box 306 prompting the user that device 111xxx has been detected (111xxx may be the ID of the television 101); the pop-up box 306 may include a Yes button 302 and a No button 303 so that the user can choose whether to perform screen combination. In response to the user tapping the Yes button 302, as shown in (c) of FIG. 3F, the televisions 101, 102, and 103 can form a new screen group (for example, a second screen group).
In some embodiments, as shown in (a) of FIG. 3G, the screen group consisting of the televisions 101, 102, and 103 can jointly display the corresponding display content. If the television 103 needs to be deleted from that screen group (for example, the television 103 is removed while the three televisions are spliced together), as shown in (b) of FIG. 3G, the televisions 101 and 102 can display a pop-up box 307 prompting the user that device 123xxx has been removed from the current screen group. The pop-up box 307 may include an OK button 308. In response to the user tapping the OK button 308, the television 101 determines that the user is aware of the information in the pop-up box 307 and can hide it. Alternatively, the pop-up box 307 can be hidden automatically after a few seconds (for example, 2 s) to avoid affecting the display content of the televisions 101 and 102.
In other embodiments, as shown in (a) of FIG. 3H, the screen group consisting of the televisions 101, 102, and 103 can jointly display the corresponding display content. If the television 103 needs to be deleted from that screen group (for example, the television 103 is removed while the three televisions are spliced together), as shown in (b) of FIG. 3H, the three televisions can keep their previous display content, and the televisions 101 and 102 can display a pop-up box 309 asking the user to confirm removing device 123xxx from the current screen group. The pop-up box 309 may include an OK button 310 and a Cancel button 311. In response to the user tapping the OK button 310, as shown in (c) of FIG. 3H, the televisions 101 and 102 can jointly display the corresponding display content (which may be determined by the processor of the television 101 (the master device)), and the television 103 displays its own corresponding display content separately (which may be determined by the processor of the television 103). In addition, if the user taps the Cancel button 311, as shown in (b) of FIG. 3H, the three televisions can keep their previous display content.
As shown in FIG. 4, taking screen splicing of the television 101 and the television 102 as an example, the specific implementation of the screen combination method provided in this embodiment of this application is described, including the following procedure:
401. The television 101 starts the screen splicing management service.
After the television 101 is powered on, it can start the screen splicing management service. The screen splicing management service may be integrated in a system app or a third-party app on the television 101, for example a Smart Life app, a smart interconnection app, a Settings application, and the like; this application does not impose a limitation.
402. The television 102 starts the screen splicing management service.
After the television 102 is powered on, it can start the screen splicing management service. For the screen splicing management service, refer to the description in step 401; details are not repeated here.
403. The television 101 and the television 102 establish a network connection and share the device information of the television 101 and the television 102.
In one implementation, the television 101 and the television 102 can join the same local area network to establish the network connection.
After the television 101 starts the screen splicing management service, the service can discover other nearby devices on which the screen splicing management service is installed (for example, the television 102) based on short-range communication technologies (for example, the proximity discovery technology of Bluetooth/Wi-Fi).
Similarly, after the television 102 starts the screen splicing management service, the service can discover other nearby screen devices (for example, the television 101) based on short-range communication technologies (for example, the proximity discovery technology of Bluetooth/Wi-Fi).
In some embodiments, the televisions 101 and 102 can discover and connect to each other directly through technologies such as Bluetooth/Wi-Fi direct connection.
404a. The television 101 builds a nearby device list.
The screen splicing management service of the television 101 can exchange information with the screen splicing management services of the other devices connected to the television 101 (for example, the television 102) to obtain a nearby device list. For example, the nearby device list built by the television 101 may be as shown in Table 1.
Table 1
Nearby devices (connected) | MAC/ID
Television 102 | MAC2/ID2
Of course, the television 101 may also be connected to more devices, for example the television 103 and the television 104. In that case, the nearby device list built by the television 101 may be as shown in Table 2.
Table 2
Nearby devices (connected) | MAC/ID
Television 102 | MAC2/ID2
Television 103 | MAC3/ID3
Television 104 | MAC4/ID4
Optionally, the television 101 can also obtain, from each device in the nearby device list, information such as the device's name, size information (for example, the device's length and width), and antenna information (the installation position of the antenna in the device, and the antenna's type, accuracy, size, and the like).
404b. The television 102 builds a nearby device list.
For example, the nearby device list built by the television 102 may be as shown in Table 3.
Table 3
Nearby devices (connected) | MAC/ID
Television 101 | MAC1/ID1
Of course, the television 102 may also be connected to more devices, for example the television 103 and the television 104. In that case, the nearby device list built by the television 102 may be as shown in Table 4.
Table 4
Nearby devices (connected) | MAC/ID
Television 101 | MAC1/ID1
Television 103 | MAC3/ID3
Television 104 | MAC4/ID4
Optionally, the television 102 can also obtain, from each device in the nearby device list, information such as the device's name, size information (for example, the device's length and width), and antenna information (the installation position of the antenna in the device, and the antenna's type, accuracy, size, and the like).
405. The television 101 and the television 102 send short-range signals to each other.
In one possible design, the television 101 can measure the distance between itself and each device in the nearby device list through short-range communication technologies (for example, Bluetooth/Wi-Fi signal ranging). For example, the television 101 can obtain the distance D1 between the two devices based on the received signal strength indication (RSSI) of the short-range signal sent by the television 102.
The television 102 can measure the distance between itself and each device in its nearby device list. For example, the television 102 can obtain the distance D1 between the two devices based on the RSSI of the short-range signal sent by the television 101. Alternatively, the television 101 can notify the television 102 of the measured distance D1.
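The text does not specify how the RSSI reading is converted into a distance. As an illustration only (not part of this application), a minimal Python sketch using the common log-distance path-loss model might look as follows; the reference power rssi_at_1m and the path-loss exponent n are illustrative assumptions that a real deployment would calibrate.

```python
def rssi_to_distance(rssi_dbm: float, rssi_at_1m: float = -45.0, n: float = 2.5) -> float:
    """Estimate the distance in meters from an RSSI reading.

    Uses the log-distance path-loss model: RSSI(d) = RSSI(1m) - 10*n*log10(d).
    Both default parameters are assumptions for illustration, not values from
    this application.
    """
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * n))

# Example: a -60 dBm reading maps to roughly 4 m under these assumed parameters.
d1 = rssi_to_distance(-60.0)
```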
406a. The television 101 determines that the distance D1 between the television 101 and the television 102 is less than or equal to the maximum combination radius R1 corresponding to the television 101 and the television 102.
The television 101 can measure the distance between itself and each device in the nearby device list, and compare the distance between each pair of devices with the maximum combination radius corresponding to that pair of devices.
For example, the television 101 can measure the distance D1 between the television 101 and the television 102 and compare D1 with the maximum combination radius R1 between the television 101 and the television 102.
The maximum combination radius corresponding to the television 101 and the television 102 may be determined according to the sizes of the two devices (the televisions 101 and 102) and the positions of their antennas. The antenna may be, for example, a Bluetooth antenna or a Wi-Fi antenna.
For example, as shown in (a) of FIG. 5, assume the two devices (for example, the televisions 101 and 102) have the same size, with height h and width w, w >= h; with the upper-left corner of the device as the (0, 0) coordinate of the coordinate system, the coordinates of the screen center point are (x, y), and the coordinates of the antenna may be (x1, y1). The distance identification accuracy of the antenna may be a centimeters.
In the scenario where the two devices are combined left-right, assume the antenna is at the middle of the device in the vertical direction, that is, y1 = y and x1 > x; then the combination radius corresponding to the two devices = w + 2*(x1 - x) + 2*a. Similarly, in the scenario where the two devices are combined top-bottom, assume the antenna is at the middle of the device in the horizontal direction, that is, x1 = x and y1 > y; then the combination radius corresponding to the two devices = h + 2*(y1 - y) + 2*a.
As shown in (b) of FIG. 5, in the scenario where the two devices are combined diagonally, assume the antennas are at the edges of the devices, for example at the upper left and lower right respectively. In this case, the combination radius corresponding to the two devices = 2*r + 2*d + 2*a, where d is the distance from the antenna to the center point of the device,
d = sqrt((x1 - x)^2 + (y1 - y)^2),
and r is the maximum distance from the center point of the device to its edge,
r = sqrt(w^2 + h^2) / 2.
It can be understood that the combination radius corresponding to two devices is largest in the diagonal splicing scenario. To keep the combination radius valid in as many cases as possible, the maximum combination radius corresponding to the two devices may be R1 = 2*r + 2*d + 2*a; that is, the maximum combination radius corresponding to the two devices may be determined according to the distance between the antennas of the television 101 and the television 102 in the diagonal splicing scenario.
The above describes the algorithm for the maximum combination radius corresponding to two devices of the same size. If two devices of different sizes are combined, as shown in Table 5, assume the size parameters of device 1 are r1 and d1 and the size parameters of device 2 are r2 and d2; then the maximum combination radius of device 1 and device 2 is R2 = (r1 + r2) + (d1 + d2) + 2*a, where r1 is the maximum distance from the center point of device 1 to its edge, d1 is the distance from the antenna of device 1 to the center point of device 1, r2 is the maximum distance from the center point of device 2 to its edge, d2 is the distance from the antenna of device 2 to the center point of device 2, and the distance identification accuracy of the antenna is a centimeters.
Table 5 (the size parameters r1, d1 of device 1 and r2, d2 of device 2; rendered as images in the source).
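To make the radius formulas above concrete, here is a minimal Python sketch (an illustration only) that computes r, d, and the maximum combination radius R = (r1 + r2) + (d1 + d2) + 2*a for two devices of possibly different sizes; the dataclass layout and the example figures are assumptions for illustration.

```python
import math
from dataclasses import dataclass

@dataclass
class Device:
    w: float   # screen width in cm
    h: float   # screen height in cm
    ax: float  # antenna x, with (0, 0) at the top-left corner, in cm
    ay: float  # antenna y, in cm

    @property
    def r(self) -> float:
        # Maximum distance from the screen center to an edge: half the diagonal.
        return math.hypot(self.w, self.h) / 2

    @property
    def d(self) -> float:
        # Distance from the antenna to the screen center (w/2, h/2).
        return math.hypot(self.ax - self.w / 2, self.ay - self.h / 2)

def max_combination_radius(dev1: Device, dev2: Device, a: float = 5.0) -> float:
    """R = (r1 + r2) + (d1 + d2) + 2*a, with a the ranging accuracy in cm."""
    return (dev1.r + dev2.r) + (dev1.d + dev2.d) + 2 * a

tv101 = Device(w=120, h=70, ax=10, ay=5)    # illustrative dimensions
tv102 = Device(w=100, h=60, ax=90, ay=55)
R = max_combination_radius(tv101, tv102)
# The two devices form a screen group when the measured distance D satisfies D <= R.
```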
If the distance between the television 101 and the television 102 is less than or equal to the maximum combination radius corresponding to the two televisions, it indicates that the two devices have a splicing intention (combination intention) or are in a spliced state (combined state).
The following takes the televisions 101 and 102 being of the same size as an example. When the television 101 determines that the distance between the television 101 and the television 102 is less than or equal to R1, it can mark the television 101 and the television 102 as being in the ready-to-combine state (ready-to-splice state). That is, when the television 101 determines that the distance (placement interval) between the two televisions is less than or equal to the maximum combination radius R1 corresponding to them, it determines that the two televisions have a combination intention and can prepare for combination. The television 101 can group the devices marked as ready-to-combine to form a screen-combination preparation device group; that is, the televisions 101 and 102 can form one screen-combination preparation device group.
The television 101 can also measure the distance D2 between the television 101 and the television 103 and compare D2 with the maximum combination radius R2 between them. The television 101 can also measure the distance D3 between the television 101 and the television 104 and compare D3 with the maximum combination radius R3 between them. For the specific process, refer to the description above; details are not repeated here.
406b. The television 102 determines that the distance D1 between the television 101 and the television 102 is less than or equal to the maximum combination radius R1 corresponding to the television 101 and the television 102.
The television 102 can measure the distance between itself and each device in the nearby device list, and compare the distance between each pair of devices with the maximum combination radius corresponding to that pair.
For the specific process, refer to the description of step 406a; details are not repeated here.
406c. The television 102 sends first information to the television 101, where the first information includes the distance information measured by the television 102.
The television 101 can receive the first information from the television 102. The first information may include the distance between the television 102 and each device in the television 102's nearby device list, and/or the comparison results, determined by the television 102, between the distance of each pair of devices and the maximum combination radius corresponding to that pair.
Optionally, the television 101 can also receive, from other devices, the distance information measured by those devices and/or their comparison results.
For another example, the television 101 can also receive second information from the television 103. The second information may include the distance between the television 103 and each device in the television 103's nearby device list, and/or the comparison results, determined by the television 103, between the distance of each pair of devices and the maximum combination radius corresponding to that pair.
For still another example, the television 101 can also receive third information from the television 104. The third information may include the distance between the television 104 and each device in the television 104's nearby device list, and/or the comparison results, determined by the television 104, between the distance of each pair of devices and the corresponding maximum combination radius.
In this way, the television 101 can determine the distance between every two devices in the current local area network, and/or the comparison result between the distance of every two devices and the maximum combination radius corresponding to those two devices, so that the television 101 can determine the multiple devices that currently need to be spliced together; the multiple devices to be spliced together can form one screen group.
It should be understood that the same local area network may include multiple screen-combination preparation device groups (screen groups for short). Each screen group may include at least two devices that can be spliced together, and the at least two devices may be directly or indirectly connected. For example, the televisions 101 and 102 can form one screen group and can be directly connected (the distance D1 between them is less than or equal to the maximum combination radius corresponding to them).
406d. The television 101 sends second information to the television 102, where the second information includes the distance information measured by the television 101.
For the specific process, refer to step 406c; details are not repeated here.
If the television 101 determines that the distance D1 between the televisions 101 and 102 is less than the maximum combination radius R1 corresponding to them, that is, determines that the two televisions need to form a screen group, the television 101 can perform step 407.
407. The television 101 displays first prompt information, where the first prompt information is used to prompt the user whether to form a screen group.
The user can configure the screen combination policy on the television 101 in advance, for example automatic screen combination or manual screen combination.
If the user configures automatic screen combination, the television 101 can automatically start the screen combination and splicing detection procedure through short-range communication technology. The detection procedure checks whether the distance between each pair of devices is less than or equal to the maximum combination radius corresponding to the two devices, so as to determine whether screen combination and splicing is needed. Optionally, the detection procedure can be started automatically in specific scenarios such as power-on startup or wake-up from standby.
If the user configures manual screen combination, the user can enter the screen splicing management service of, for example, the Smart Life app, the smart interconnection app, or the Settings application, and manually start the screen assembly and splicing detection procedure (for example, by tapping a specific control). The television 101 can give an interface prompt and determine whether to form a screen group according to the user's operation, which can avoid errors from automatically triggered screen combination.
For example, (a) of FIG. 3C shows the home screen 300 of the television 101. When the television 101 determines that the distance D1 between the televisions 101 and 102 is less than or equal to the maximum combination radius R1 corresponding to them, as shown in (b) of FIG. 3C, the television 101 can display a pop-up box 301 prompting the user that a nearby device has been detected; the pop-up box 301 may include a Yes button 302 and a No button 303 so that the user can choose whether to perform screen combination.
Optionally, the television 101 can also prompt the user with the identifier or ID of the nearby device. For example, as shown in FIG. 3D, the television 101 can display a pop-up box 304 prompting the user that device 222xxx (222xxx is the ID of the television 102) has been detected nearby; the pop-up box 304 may include a Yes button 302 and a No button 303 so that the user can choose whether to combine the current device with the living-room television for screen combination.
Optionally, the television 102 can also display the first prompt information. When both televisions give interface prompts for the user to choose whether to perform screen combination, if the user has confirmed on one device (for example, on the television 101), the television 101 can send the user's confirmation to the television 102 without requiring the user to confirm on each device one by one.
In some embodiments, any one device in the screen group (for example, the television 101) can give the interface prompt (for example, display the first prompt information) so that the user can choose whether to perform screen combination; that is, the television 102 can omit the interface prompt.
408a. The user taps the button agreeing to screen combination.
For example, as shown in (c) of FIG. 3C, in response to the user selecting the Yes button 302 (for example, with the remote control or the touchscreen), the user can be shown that screen combination is in progress, as shown in (d) of FIG. 3C. In response to the user tapping the button agreeing to screen combination, step 409 can be performed. Alternatively, if the user has configured automatic screen combination, there is no need to prompt on the interface whether to perform screen combination (that is, steps 407 and 408 can be skipped), and step 409 can be performed directly.
409. The television 101 and the television 102 form a screen group, and the television 101 is elected as the master device.
The television 101 can weight and score the resources of each device in the current screen group according to each device's resource status, sort the devices by resource score from high to low, and use the device with the highest real-time resource score as the master device. A device's resource status may include hardware resource capabilities such as the central processing unit (CPU), read-only memory (ROM), and random access memory (RAM). After the master device is determined, the remaining devices in the screen group can serve as slave devices. For example, if the television 101 serves as the master device, the television 102 can serve as a slave device.
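The weighting itself is not specified in the text. A minimal Python sketch of the election, assuming equal weights over the CPU/ROM/RAM scores (an assumption for illustration only), might look as follows.

```python
def elect_master(devices, weights=None):
    """Pick the device ID with the highest weighted resource score.

    devices maps a device ID to per-resource scores, for example
    {"tv101": {"cpu": 0.9, "rom": 0.7, "ram": 0.8}, ...}.
    Equal weights are an assumption; the application only states that the
    resource scores are weighted and the highest-scoring device becomes master.
    """
    weights = weights or {"cpu": 1.0, "rom": 1.0, "ram": 1.0}

    def score(dev_id):
        return sum(weights[k] * v for k, v in devices[dev_id].items())

    return max(devices, key=score)

master = elect_master({
    "tv101": {"cpu": 0.9, "rom": 0.7, "ram": 0.8},
    "tv102": {"cpu": 0.6, "rom": 0.8, "ram": 0.7},
})  # -> "tv101"
```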
Optionally, the user can select the master device manually. For example, the user can enter the Settings application of a device to select the master device; or, after the master device is elected automatically, the television 101 can display a pop-up box prompting the user with the identifier of the current master device (for example, reminding the user that the current master device is the living-room television, that is, the television 101); the user can confirm the television 101 as the master device through the OK button in the pop-up box, or change the master device through the Modify button. In this case, when the television 101 serves as the master device, it can be understood as including a host for controlling the screen group and a screen for displaying images, with the host integrated in the television 101.
The following description takes the television 101 as the master device:
410. The master device sends a first notification message to the television 102, where the first notification message is used to notify the television 102 to take a photo and perform orientation identification.
For example, in the scenario where the televisions 101 and 102 are combined, the master device (that is, the television 101) can take a photo with its camera; at the same time, the television 101 can send the first notification message to the television 102, which notifies the television 102 to take a photo (image/picture) with its camera and to perform orientation identification according to the photo it takes and the photos obtained from other devices.
411a. The television 101 takes a photo.
The television 101 can control its built-in camera to take a photo through the screen splicing management service.
411b. After receiving the first notification message, the television 102 takes a photo.
The television 102 can control its built-in camera to take a photo through the screen splicing management service. The televisions 101 and 102 can negotiate to take the photos at the same moment.
412a. The television 101 sends the photo taken by the television 101 to the television 102.
412b. The television 102 sends the photo taken by the television 102 to the television 101.
412c. The television 101 determines the orientation relationship between the television 101 and the television 102 according to the photo taken by the television 101 and the photo taken by the television 102.
After receiving from the television 102 the photo it took, the television 101 can perform image matching (comparison) between its own photo and the television 102's photo through an image matching algorithm to determine the overlapping area of the two photos (that is, the similar image part/image content). Image matching determines the overlapping part of two photos by analyzing the correspondence, similarity, and consistency of the image content, features, structures, relationships, textures, grayscale, and the like of the two photos.
For example, image matching algorithms may include the scale-invariant feature transform (SIFT), speeded-up robust features (SURF), a fast nearest-neighbor search algorithm (FLANN-based matcher), and the like.
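As one concrete (non-authoritative) illustration of this matching step, the following Python sketch uses OpenCV's SIFT detector with a FLANN-based matcher to find the matched keypoints between the two photos; the parameter values are typical defaults, not values taken from this application, and the sketch assumes the opencv-python package is installed.

```python
import cv2
import numpy as np

def matched_points(img1: np.ndarray, img2: np.ndarray):
    """Return matched keypoint coordinates (pts1, pts2) between two grayscale images."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    # FLANN with KD-trees is the usual choice for SIFT's float descriptors.
    flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
    matches = flann.knnMatch(des1, des2, k=2)
    # Lowe's ratio test filters out ambiguous matches.
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    return pts1, pts2
```

The cloud of matched points approximates the overlapping area; its position within each photo is then mapped to a relative orientation as described next.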
Then, the television 101 can determine the relative position relationship (relative orientation relationship) between the televisions 101 and 102 based on the position of the overlapping area in the photo taken by the television 101; that is, the relative position relationship of the two televisions is determined through the mapping between the overlap position and the shooting orientation. The relative orientation relationship between the televisions 101 and 102 may be, for example, that the television 101 is above, below, to the left of, to the right of, at the upper left, lower left, upper right, or lower right of the television 102; or, conversely, that the television 102 is in one of those directions relative to the television 101.
The splicing modes of the televisions 101 and 102 may include three modes: top-bottom splicing, left-right splicing, and diagonal splicing. For example, when the television 101 is above or below the television 102, the splicing mode may be top-bottom splicing; when the television 101 is to the left or right of the television 102, the splicing mode may be left-right splicing; when the television 101 is at the upper left, lower left, upper right, or lower right of the television 102, the splicing mode may be diagonal splicing.
For example, as shown in FIG. 6, the dashed boxes represent photos taken by the television 101 and the solid boxes represent photos taken by the television 102; the orientation of the television 101 relative to the television 102 is as shown in Table 6.
Table 6
Overlap position in the television 101's photo | Overlap position in the television 102's photo | Television 101 relative to television 102
Lower half | Upper half | Above
Lower-left corner | Upper-right corner | Upper right
Left half | Right half | Right
Upper-left corner | Lower-right corner | Lower right
Upper half | Lower half | Below
Upper-right corner | Lower-left corner | Lower left
Right half | Left half | Left
Lower-right area | Upper-left area | Upper left
For example, as shown in (a) of FIG. 6, if the overlapping area is located in the lower-right area (lower-right corner) of the photo taken by the television 101 and in the upper-left area (upper-left corner) of the photo taken by the television 102, it is determined that the television 101 is at the upper left (upper-left corner) of the television 102. As shown in (b) of FIG. 6, if the overlapping area is located in the lower half (directly below) of the television 101's photo and in the upper half (directly above) of the television 102's photo, it is determined that the television 101 is above (directly above) the television 102. As shown in (c) of FIG. 6, if the overlapping area is located in the lower-left corner of the television 101's photo and in the upper-right corner of the television 102's photo, it is determined that the television 101 is at the upper right (upper-right corner) of the television 102. As shown in (d) of FIG. 6, if the overlapping area is located in the right half of the television 101's photo and in the left half of the television 102's photo, it is determined that the television 101 is to the left of the television 102. As shown in (e) of FIG. 6, if the overlapping area is located in the left half of the television 101's photo and in the right half of the television 102's photo, it is determined that the television 101 is to the right of the television 102. As shown in (f) of FIG. 6, if the overlapping area is located in the upper-right corner of the television 101's photo and in the lower-left corner of the television 102's photo, it is determined that the television 101 is at the lower left of the television 102. As shown in (g) of FIG. 6, if the overlapping area is located in the upper half of the television 101's photo and in the lower half of the television 102's photo, it is determined that the television 101 is below the television 102. As shown in (h) of FIG. 6, if the overlapping area is located in the upper-left corner of the television 101's photo and in the lower-right corner of the television 102's photo, it is determined that the television 101 is at the lower right (lower-right corner) of the television 102.
In one possible design, image matching can be performed on the photos taken by the televisions 101 and 102 to identify the overlapping area of the two photos. Then, the orientation zone in which the overlapping area falls is computed separately for each photo, and the relative position of the televisions 101 and 102 is then looked up through Table 6. For example, if the overlapping area is in the lower half of the television 101's photo, the orientation of the television 101 relative to the television 102 is "above"; that is, the television 101 is above the television 102.
In another possible design, the photo taken by each device (for example, the television 101 or 102) can be divided into several sub-areas (for example, 6/9/12, and so on; this application does not impose a limitation). Each sub-area of the television 101's photo is matched against each sub-area of the television 102's photo to determine the numbers of the matched sub-areas; according to the numbers of the matched sub-areas, the orientation zone in which the matched sub-areas fall in each photo is determined, and the relative position of the televisions 101 and 102 is then looked up through Table 6.
For example, as shown in FIG. 7, assume the photo taken by the television 101 can be divided into six sub-areas ①④⑦②⑤⑧ and the photo taken by the television 102 can be divided into six sub-areas ②⑤⑧③⑥⑨; the matched sub-areas are ②⑤⑧. Since ②⑤⑧ are in the right half of the television 101's photo, it can be found from Table 6 that the television 101 is to the left of the television 102, that is, the television 102 is to the right of the television 101. Alternatively, since ②⑤⑧ are in the left half of the television 102's photo, it can be found from Table 6 that the television 101 is to the left of the television 102.
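A minimal Python sketch (an illustration only) of the Table 6 lookup: the centroid of the matched (overlapping) points from the previous sketch is classified into a coarse zone of photo 1, and that zone is flipped into the relative orientation; the 3x3 zoning is an assumption, since the text also allows halves or 6/12 sub-areas.

```python
import numpy as np

def relative_orientation(pts1: np.ndarray, size1: tuple) -> str:
    """Coarse position of screen 1 relative to screen 2, per the Table 6 logic,
    from the zone of the overlap centroid in photo 1 (photo 2 mirrors it).
    Example: overlap in the lower half of photo 1 -> screen 1 is above screen 2."""
    w, h = size1
    cx, cy = pts1.mean(axis=0)
    col = "left" if cx < w / 3 else ("right" if cx > 2 * w / 3 else "")
    row = "top" if cy < h / 3 else ("bottom" if cy > 2 * h / 3 else "")
    # Flip: a low overlap means this screen sits higher; a left overlap means
    # this screen sits to the right, and so on.
    vert = {"bottom": "upper", "top": "lower", "": ""}[row]
    horiz = {"left": "right", "right": "left", "": ""}[col]
    return (vert + " " + horiz).strip() or "same position"

# Overlap centroid in the lower-left corner of a 1920x1080 photo -> "upper right".
print(relative_orientation(np.array([[100.0, 1000.0]]), (1920, 1080)))
```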
In yet another possible design, during the relative orientation identification between the devices being combined, a specific identifiable object can be added in the field of view of the devices' cameras, such as a human face, a human body action, a specific object, or an appliance.
For example, after the screen combination and assembly procedure starts, an operation prompt can first be displayed on the television 101 and/or the television 102, asking the user to make sure that the specific identifiable object (for example, a human face) is visible in the camera pictures of the television 101 and/or the television 102; the relative orientation between the devices is then determined according to the positions of the face in the photos taken by the television 101 and/or the television 102.
It should be noted that the position of the specific identifiable object in the photo taken by each device may have multiple direction dimensions, for example a top-bottom dimension and a left-right dimension. The identical direction dimension can be ignored, and the differing direction dimension is used as the basis for judging the orientation between the devices. For example, as shown in FIG. 8, assume the sub-areas of the photos taken by the televisions 101 and 102 are ①②④⑤⑦⑧ and ②③⑤⑥⑧⑨ respectively, and the face is in area ②. Since area ② is at the upper right of the television 101's photo and at the upper left of the television 102's photo, it can be determined from Table 7 that the television 101 is to the left of the television 102. That is, the identical direction dimension is ignored (the "upper" shared by "upper right" and "upper left"), and direction dimensions such as "left" and "right" are used as the basis for judging the orientation between the devices.
Table 7 (the mapping from the positions of the target object in the two photos to the orientation of the television 101 relative to the television 102; rendered as images in the source, it follows the target-object enumeration given in the summary above, for example: object in the lower half of photo 1 and the upper half of photo 2 -> above; object in the upper right of photo 1 and the upper left of photo 2 -> left).
Optionally, a relevant program can be preset in the screen splicing management service to prompt the user, through on-screen display, sound prompts, and the like, to cooperate, so as to speed up the camera's identification of a specific position and thereby the identification of the relative positions of the pictures; or a specific image input can be given directly to a certain device's camera to mark that device's orientation.
412d. Optionally, the television 102 determines the orientation relationship between the televisions 101 and 102 according to the photo taken by the television 102 and the photo taken by the television 101.
For the specific process, refer to the description in step 412c, simply swapping the executing entity and the like; details are not repeated here.
412e. The television 102 sends, to the television 101, the orientation relationship between the televisions 101 and 102 determined by the television 102.
413a. The master device determines the relative orientation relationships of all devices in the screen group.
The master device can collect and summarize the relative orientation information between every two devices in the screen group, arrange the devices uniformly in one coordinate system according to the orientation information, and record a number, coordinates, and other information for each device.
The orientations of all devices in the screen group can be represented by tuples, for example (device 1, device 2, direction of device 1 relative to device 2). For example, assuming the screen group includes only the televisions 101 and 102, the orientation of the television 101 relative to the television 102 may be (television 101, television 102, above), indicating that the television 101 is above the television 102; or the orientation of the television 102 relative to the television 101 may be (television 102, television 101, below), indicating that the television 102 is below the television 101.
As shown in (a) of FIG. 9, if the master device determines that the orientation of the television 101 relative to the television 102 is (television 101, television 102, left), that is, the television 101 is on the left side of the television 102, the two televisions can be arranged left-right in the coordinate system, and their ordering can be (1), (2), that is, from left to right: television 101, television 102.
As shown in (b) of FIG. 9, if the master device determines that the orientation of the television 101 relative to the television 102 is (television 101, television 102, above), that is, the television 101 is above the television 102, the two televisions can be arranged top-down in the coordinate system, and their ordering can be (1), (2), that is, from top to bottom: television 101, television 102.
In one possible design, the master device can number the devices in the screen group one by one in the direction from top-left to bottom-right. For example, the devices spliced together can be arranged in an n*m matrix, where n represents rows and m represents columns, n and m are integers greater than or equal to 1, and n and m are not both 1. For example, as shown in (a) of FIG. 9, assume n = 1 and m = 2; numbering can start from the topmost device in column 1, and after the devices in column 1 are numbered, continue from the topmost device in column 2 until the devices in column 2 are numbered; in this way all n*m devices are numbered. For another example, as shown in (b) of FIG. 9, assume n = 2 and m = 1; numbering can start from the leftmost device in row 1, and after the devices in row 1 are numbered, continue from the leftmost device in row 2 until the devices in row 2 are numbered; in this way all n*m devices are numbered.
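A minimal Python sketch of this numbering (an illustration only), assuming each device has already been assigned a (row, column) cell in the n*m layout from the pairwise orientations:

```python
def number_devices(grid_positions: dict) -> list:
    """Order device IDs from top-left to bottom-right, column by column.

    grid_positions maps a device ID to its (row, col) cell in the n x m layout,
    e.g. {"tv101": (0, 0), "tv102": (0, 1)} for a 1 x 2 left-right splice.
    Sorting by (col, row) walks each column top to bottom, left to right.
    """
    return sorted(grid_positions,
                  key=lambda dev: (grid_positions[dev][1], grid_positions[dev][0]))

order = number_devices({"tv102": (0, 1), "tv101": (0, 0)})  # -> ["tv101", "tv102"]
```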
413b. The master device synchronizes the screen group splicing basic information to the television 102.
The master device can synchronize the screen group splicing basic information to all devices in the screen group. Each device in the screen group can receive the synchronization message sent by the master device, where the synchronization message includes the screen group splicing basic information. The screen group splicing basic information includes the number of devices in the screen group, the MAC/ID of each device, master-slave information (that is, information about the master device and the slave devices), orientation information between the devices, and the like. For example, the current screen group splicing basic information may include the number of devices in the screen group (for example, 2), the MAC/ID of each device (for example, the IDs of the televisions 101 and 102), master-slave information (for example, the master device is the television 101 and the slave device is the television 102), and orientation information between the devices (for example, the televisions 101 and 102 are in the left-right spliced state).
After each device in the screen group receives the synchronization message sent by the master device, a heartbeat link can be established between every two devices to maintain the combination relationship between the devices in real time. For example, assume a heartbeat link is established between the televisions 101 and 102: the television 101 can send a heartbeat monitoring data frame (also called a heartbeat packet) to the television 102 every 1 minute (or every 30 s, 2 minutes, 3 minutes, and so on); if the television 102 sends a response frame after receiving the heartbeat monitoring data frame, the television 101 determines that the connection is normal; otherwise, the connection is broken or abnormal.
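A toy Python sketch of this heartbeat maintenance (an illustration only): the 60-second interval comes from the text, while the TCP framing, the frame contents, and the 5-second reply timeout are assumptions for illustration.

```python
import socket
import time

def heartbeat_loop(peer_addr: tuple, interval_s: float = 60.0) -> None:
    """Send a heartbeat frame every interval_s seconds; treat a missing or
    unexpected response frame as a broken/abnormal link, after which the
    peer would be marked offline and removed from the screen group."""
    while True:
        try:
            with socket.create_connection(peer_addr, timeout=5.0) as sock:
                sock.sendall(b"HEARTBEAT")
                if sock.recv(16) != b"ALIVE":
                    raise ConnectionError("unexpected response frame")
        except OSError:
            print("link broken or abnormal -> mark peer as offline")
            break
        time.sleep(interval_s)
```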
414a. The television 101 determines the display information of the television 101 and of the television 102 according to the screen group splicing basic information.
The television 101 can determine the display information of the televisions 101 and 102 respectively according to the screen group splicing basic information. That is, while the screen group system is running, the master device can implement the picture output display arrangement, interface focus switching, and the like of the screen group based on the screen group splicing basic information.
414b. The television 101 sends the display information of the television 102 to the television 102.
414c. The television 101 displays the corresponding display picture according to the display information of the television 101.
414d. The television 102 displays the corresponding display picture according to the display information of the television 102.
For example, the television 101 can divide its display content into N parts (for example, 2 parts) and distribute them to the devices in the screen group (for example, the television 101 (itself) and the television 102), where N is less than or equal to the number of devices in the screen group.
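A minimal Python sketch of this partition step (an illustration only), assuming the display content is a frame buffer split into equal tiles along the screen-group grid:

```python
import numpy as np

def split_frame(frame: np.ndarray, rows: int, cols: int) -> list:
    """Split one frame into rows x cols tiles, ordered top-left to bottom-right,
    so each tile can be sent to the screen at the matching grid position."""
    h, w = frame.shape[:2]
    tiles = []
    for r in range(rows):
        for c in range(cols):
            tiles.append(frame[r * h // rows:(r + 1) * h // rows,
                               c * w // cols:(c + 1) * w // cols])
    return tiles

# Left-right splice of the televisions 101 and 102: one row, two columns.
frame = np.zeros((1080, 3840, 3), dtype=np.uint8)  # illustrative resolution
left_half, right_half = split_frame(frame, rows=1, cols=2)
```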
For example, assume the display contents of the televisions 101 and 102 before splicing are as shown in (a) and (b) of FIG. 3E respectively. After splicing, as shown in (c) of FIG. 3E, the televisions 101 and 102 can jointly display the desktop of the master device (for example, the television 101); or, as shown in (d) of FIG. 3E, the televisions 101 and 102 can each display the desktop of the master device (for example, the television 101).
It should be noted that, while the screen combination is in use, each device in the screen group can continuously detect the addition and removal of devices and refresh the screen group splicing basic information. A newly added device may be a device newly joining the current screen group; a removed device may be a device actively dismantled/removed from the current screen group, or a device that is powered off and goes offline passively. The television 101 can detect whether a new device joins through short-range communication. The television 101 can detect and confirm whether a peer device is offline based on the heartbeat link; or the television 101 can determine whether a device is offline through short-range communication detection; or the user can manually remove a device from the management interface.
In some embodiments, if the television 103 is newly added to the screen group consisting of the televisions 101 and 102, as shown in FIG. 10, the screen group splicing method may further include:
415a. The televisions 101 and 102 send short-range signals to each other.
Short-range signals can be sent periodically between the televisions 101 and 102.
415b. The televisions 102 and 103 send short-range signals to each other.
Short-range signals can be sent periodically between the televisions 102 and 103.
415c. The television 101/the television 102 determines, according to the short-range signals, that the television 103 needs to be added to the current screen group.
The television 101 can measure the distance between the televisions 101 and 103 according to the short-range signals, and the television 102 can measure the distance between the televisions 102 and 103 according to the short-range signals.
If either of the following conditions is met, it can be considered that the television 103 needs to be added to the current screen group: (1) the distance D2 between the televisions 101 and 103 is less than or equal to the maximum combination radius R2 corresponding to them, that is, D2 <= R2; (2) the distance D4 between the televisions 102 and 103 is less than or equal to the maximum combination radius R4 corresponding to them, that is, D4 <= R4.
The television 101 compares the distance D2 between the televisions 101 and 103 with the maximum combination radius R2 corresponding to them. Optionally, the television 101 can also obtain, from the television 102, the distance D4 between the televisions 102 and 103 and the maximum combination radius R4 corresponding to them.
If the television 101 determines that the distance D2 between the televisions 101 and 103 is less than or equal to the maximum combination radius R2 between them, that is, D2 <= R2, the television 101 determines that the televisions 101, 102, and 103 can form a screen group. For the process of determining the maximum combination radius R2 between the televisions 101 and 103, refer to the description in step 406a; details are not repeated here.
The television 102 can compare the distance D4 between the televisions 102 and 103 with the maximum combination radius R4 corresponding to them. If D4 > R4, the television 102 can also obtain the information that D2 <= R2 from the television 101, and thus determine that the television 103 needs to be added to the current screen group.
For example, in the scenario where the three devices (the televisions 101, 102, and 103) are combined, the televisions 101 and 102 may be within the corresponding maximum combination radius (that is, the maximum combination radius corresponding to the televisions 101 and 102), the televisions 101 and 103 may be within the corresponding maximum combination radius, and the televisions 102 and 103 may not be within the corresponding maximum combination radius. That is, the television 103 can be spliced with the television 102 indirectly (the distance between the televisions 102 and 103 is greater than their corresponding maximum combination radius) and spliced with the television 101 directly (the distance between the televisions 101 and 103 is less than or equal to their corresponding maximum combination radius). Since the televisions 101 and 102 are spliced together and the televisions 101 and 103 are spliced together, the televisions 101, 102, and 103 are spliced together.
The televisions 101, 102, and 103 can perform pairwise orientation identification (that is, identify the splicing mode between each pair of devices). For example, the televisions 101 and 102 can perform orientation identification according to the photos they take (that is, identify whether they are spliced top-bottom, left-right, or diagonally), the televisions 101 and 103 can perform orientation identification according to the photos they take, and the televisions 102 and 103 can perform orientation identification according to the photos they take.
It should be noted that if the television 103 is a device that has already joined the local area network, the television 101/the television 102 can directly perform step 416a. If the television 103 is a device newly joining the local area network, the televisions 101 and 102 can establish connections with the television 103 based on the local area network and discover one another based on short-range communication technology; or the televisions 101 and 102 can establish direct connections with the television 103. The televisions 101 and 102 can refresh their nearby device lists, and the television 103 can build a new nearby device list; then the television 101/the television 102 can perform step 416a.
416a. The televisions 101 and 102 display second prompt information, where the second prompt information is used to prompt the user that a newly added device has been detected for the current screen group.
For example, as shown in (b) of FIG. 3F, a pop-up box 305 can be displayed on the televisions 101 and 102, prompting the user that device 123xxx has been detected (123xxx is the ID of the television 103). The pop-up box 305 may include a Yes button 302 and a No button 303 so that the user can choose whether to add the device to the screen group. Optionally, the television 103 may also prompt the user with the identifier or ID of a nearby device. For example, the television 103 can display a pop-up box 306 prompting the user that device 111xxx has been detected (111xxx may be the ID of the television 101); the pop-up box 306 may include a Yes button 302 and a No button 303 so that the user can choose whether to combine the current device with the television 103 for screen combination.
416b. The user taps the button agreeing to add the newly added device to the screen group.
In response to the user tapping the button agreeing to add the newly added device to the screen group, step 417 can be performed.
417. The televisions 101, 102, and 103 form a screen group, and the television 101 is elected as the master device.
For the master device election process, refer to the description in step 409; details are not repeated here. In this case, when the television 101 serves as the master device, it can be understood as including a host for controlling the screen group and a screen for displaying images, with the host integrated in the television 101.
The following description takes the television 101 as the master device:
418a. The television 101 sends a first notification message to the television 102.
The first notification message is used to notify the television 102 to take a photo and to perform orientation identification according to the photo it takes and the photos obtained from other devices.
418b. The television 101 sends a second notification message to the television 103.
The second notification message is used to notify the television 103 to take a photo and to perform orientation identification according to the photo it takes and the photos obtained from other devices.
419a. The television 101 takes a photo.
The television 101 can control its built-in camera to take a photo through the screen splicing management service.
419b. After receiving the first notification message sent by the master device, the television 102 takes a photo.
The television 102 can control its built-in camera to take a photo through the screen splicing management service. The televisions 101 and 102 can negotiate to take the photos at the same moment.
419c. After receiving the second notification message sent by the master device, the television 103 takes a photo.
The television 103 can control its built-in camera to take a photo through the screen splicing management service. The televisions 101 and 103 can negotiate to take the photos at the same moment.
It can be understood that the televisions 101, 102, and 103 can negotiate to take the photos at the same moment.
The photos taken by the televisions 101, 102, and 103 can be shared; that is, steps 419d-419i can be performed:
419d. The television 101 sends the photo taken by the television 101 to the television 102.
419e. The television 102 sends the photo taken by the television 102 to the television 103.
419f. The television 102 sends the photo taken by the television 102 to the television 101.
419g. The television 103 sends the photo taken by the television 103 to the television 102.
419h. The television 101 sends the photo taken by the television 101 to the television 103.
419i. The television 103 sends the photo taken by the television 103 to the television 101.
420a. The television 101 obtains the photos taken by the televisions 102 and 103, and identifies the orientation relationships between the television 101 and each of the televisions 102 and 103.
For the corresponding orientation identification process, refer to the description of step 412c; details are not repeated here.
420b. The television 102 obtains the photos taken by the televisions 101 and 103, and identifies the orientation relationships between the television 102 and each of the televisions 101 and 103.
For the corresponding orientation identification process, refer to the description of step 412c; details are not repeated here.
420c. The television 103 obtains the photos taken by the televisions 101 and 102, and identifies the orientation relationships between the television 103 and each of the televisions 101 and 102.
For the corresponding orientation identification process, refer to the description of step 412c; details are not repeated here.
420d. The television 102 sends, to the television 101, the orientation relationships between the television 102 and the other devices.
420e. The television 103 sends, to the television 101, the orientation relationships between the television 103 and the other devices.
421. The master device determines the relative orientation relationships of all devices in the screen group.
For example, the relative orientation relationship between the televisions 101 and 102 may be that the television 101 is above, below, to the left of, to the right of, at the upper left, lower left, upper right, or lower right of the television 102.
If the screen group includes more than two devices, relative orientations are identified pairwise between the devices. Every two devices within the maximum combination radius (that is, the distance between the two devices is less than or equal to the maximum combination radius corresponding to them) can be regarded as one same-radius screen group; by identifying the relative orientation of each same-radius screen group, the orientation of each device relative to the other devices can be identified.
For example, assume the screen group includes three devices, the televisions 101, 102, and 103; by identifying relative orientations pairwise between the devices, the orientation of each device relative to the other devices can be identified.
As shown in FIG. 11A, taking three horizontally arranged devices as an example, for example the televisions 101, 102, and 103, and assuming the pairwise identification from left to right yields the pairs (television 103, television 101) and (television 101, television 102), the process of determining the ordering of the three devices may be: first traverse the pairwise orientations to determine that the television 103 is leftmost relative to the remaining two devices (the televisions 101 and 102); then traverse the pairwise orientations to determine that the television 101 is to the left of the remaining device (the television 102); then traverse the pairwise orientations to determine that the television 102 is rightmost; and finally determine the ordering of the three devices (the televisions 103, 101, and 102) as (1), (2), (3), that is, from left to right: television 103, television 101, television 102.
In some cases, the orientation information between some devices is redundant; this information may be left unused, or the redundant information may be used as a reference to verify the identification result. As shown in FIG. 11A, the overall screen group orientation identification is completed from the orientations between the televisions 103 and 101 and between the televisions 101 and 102; in this case, the orientation information between the televisions 103 and 102 is redundant. Optionally, the overall screen group orientation can be verified based on the orientation between the televisions 103 and 102 to improve the accuracy of the overall identification.
In addition, the televisions 101, 102, and 103 may also be spliced top-bottom (vertically); for the relative orientation relationship of each device, refer to the description above; details are not repeated here.
For another example, assume the screen group includes nine devices, the televisions 101 to 109. After the pairwise orientation information between the devices is identified, the pairwise orientation relationships can be aggregated to summarize the relative orientation relationships of all devices in the screen group.
As shown in FIG. 11B, taking three horizontally arranged devices as an example, for example the televisions 101, 102, and 103, where the pairwise identification from left to right yields the pairs (television 101, television 102), (television 101, television 103), and (television 102, television 103), the process of determining the ordering of the three devices may be: first read the relative orientation of the televisions 101 and 102 to determine that the television 101 is on the left of the television 102; then read the relative orientation of the televisions 101 and 103 to determine that the television 103 is on the right of the television 101 — at this point the relative positions of the televisions 102 and 103 cannot yet be determined, so the relative orientation of the televisions 102 and 103 needs to be read further; and finally determine the ordering of the three devices (the televisions 101, 102, and 103) as (1), (2), (3), that is, from left to right: television 101, television 102, television 103.
Taking three diagonally arranged devices as an example, for example the televisions 101, 105, and 109, where the pairwise identification from top-left to bottom-right yields the pairs (television 101, television 105), (television 101, television 109), and (television 105, television 109): by traversing the pairwise relative orientations, it can be determined that the top-left device is the television 101, the television 105 is at the lower right of the television 101, and the television 109 is in turn at the lower right of the television 105; therefore the ordering of the three devices is finally determined as (1), (5), (9), that is, from top-left to bottom-right: television 101, television 105, television 109.
It should be noted that the above is one example of a method for determining the relative orientation relationships of all devices in a screen group; in practice there are many other methods for determining them, and this application does not impose a limitation.
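As one concrete (non-authoritative) way to realize the traversal described above for a horizontal row, the following Python sketch derives the left-to-right ordering from the pairwise orientation tuples with a topological sort; redundant pairs become consistent extra edges, while contradictory pairs surface as a cycle error.

```python
from graphlib import TopologicalSorter

def left_to_right_order(relations: list) -> list:
    """Derive a horizontal ordering from pairwise tuples (a, b, direction),
    where direction == "left" means a is to the left of b (as in FIG. 11A/11B).
    Horizontal relations only; contradictory input raises graphlib.CycleError."""
    graph = {}
    for a, b, direction in relations:
        left, right = (a, b) if direction == "left" else (b, a)
        graph.setdefault(right, set()).add(left)   # left neighbor precedes
        graph.setdefault(left, set())
    return list(TopologicalSorter(graph).static_order())

order = left_to_right_order([
    ("tv103", "tv101", "left"),
    ("tv101", "tv102", "left"),
    ("tv103", "tv102", "left"),   # redundant, but consistent
])  # -> ["tv103", "tv101", "tv102"]
```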
In other embodiments, steps 420a to 421 can be replaced with step S1:
S1. The television 101 obtains the photos taken by the televisions 102 and 103, and identifies the orientation relationship between the televisions 101 and 102, the orientation relationship between the televisions 101 and 103, and the orientation relationship between the televisions 102 and 103. That is, the master device can identify the orientation relationships of the devices in the screen group; in this way, the televisions 102 and 103 do not need to perform orientation identification, which can save the power consumption of the televisions 102 and 103.
422a. The master device synchronizes the screen group splicing basic information to the television 102.
422b. The master device synchronizes the screen group splicing basic information to the television 103.
The master device synchronizes the screen group splicing basic information to every device in the screen group.
For example, the current screen group splicing basic information may include the number of devices in the screen group (for example, 3), the MAC/ID of each device (for example, the IDs of the televisions 101, 102, and 103), master-slave information (for example, the master device is the television 101 and the slave devices include the televisions 102 and 103), and orientation information between the devices (for example, the televisions 103, 101, and 102 are spliced from left to right).
422c. The television 101 determines the display information of the televisions 101, 102, and 103 according to the screen group splicing basic information.
The television 101 can determine the display information of each device according to the screen group splicing basic information. That is, while the screen group system is running, the master device can implement the picture output display arrangement, interface focus switching, and the like of the screen group based on the screen group splicing basic information. For example, the television 101 can divide its display content into N parts (for example, 3 parts) and distribute them to the devices in the screen group (for example, the television 101 (itself), the television 102, and the television 103), where N is less than or equal to the number of devices in the screen group.
422d. The television 101 sends the display information of the television 102 to the television 102.
422e. The television 101 sends the display information of the television 103 to the television 103.
422f. The television 101 displays the corresponding display picture according to the display information of the television 101.
422g. The television 102 displays the corresponding display picture according to the display information of the television 102.
422h. The television 103 displays the corresponding display picture according to the display information of the television 103.
In some embodiments, a device can be deleted from the screen group. Deleting a device may mean actively dismantling/removing a device from the current screen group, or a device being powered off and going offline passively. For example, each device in the screen group can detect and confirm whether a certain device is offline based on the heartbeat link; or each device in the screen group can determine whether a certain device is offline through short-range communication detection; or, in response to the user's operation of manually deleting a certain device from the management interface, the information of the device deleted by the user can be marked, so that each device in the screen group determines that the device is offline.
If the television 103 is deleted (removed) from the screen group consisting of the televisions 101, 102, and 103, as shown in FIG. 12, the screen group splicing method may further include:
423a. The televisions 101 and 102 send short-range signals to each other.
Short-range signals can be sent periodically between the televisions 101 and 102 so that the distance between them can be measured according to the short-range signals.
423b. The television 102 and the television 103 send short-range signals to each other.
Short-range signals can also be sent periodically between the television 101 and the television 103 so that the distance between the televisions 101 and 103 can be measured according to the short-range signals.
423c. The television 101/the television 102 deletes the television 103 from the current screen group according to the short-range signals.
For example, if the television 101 detects through short-range signals whether the television 103 is offline, the television 101 can compare the distance D2 between the televisions 101 and 103 with the maximum combination radius R2 corresponding to them. If the television 101 determines that the distance D2 between the televisions 101 and 103 is greater than the maximum combination radius R2 between them, that is, D2 > R2, the television 101 determines that the televisions 101 and 103 are not in the spliced state and deletes the television 103 from the screen group consisting of the televisions 101, 102, and 103.
The television 102 can obtain the information that D2 > R2 from the television 101 and thus determine that the television 103 needs to be deleted from the current screen group.
424. The televisions 101 and 102 display third prompt information, where the third prompt information is used to prompt the user that a device has been detected as removed from the current screen group.
In some embodiments, as shown in (a) of FIG. 3G, the screen group consisting of the televisions 101, 102, and 103 can jointly display the corresponding display content. If the television 103 needs to be deleted from that screen group (for example, the television 103 is removed while the three televisions are spliced together), as shown in (b) of FIG. 3G, the televisions 101 and 102 can display a pop-up box 307 prompting the user that device 123xxx has been removed from the current screen group. The pop-up box 307 may include an OK button 308. In response to the user tapping the OK button 308, the television 101 determines that the user is aware of the information in the pop-up box 307 and can hide it. Alternatively, the pop-up box 307 can be hidden automatically after a few seconds (for example, 2 s) to avoid affecting the display content of the televisions 101 and 102.
In other embodiments, when the television 101/the television 102 detects that the television 103 has been removed, the user can be prompted that a device has been removed, and in response to the user's operation confirming the removal, the device can be removed from the current screen group. For example, as shown in (a) of FIG. 3H, the screen group consisting of the televisions 101, 102, and 103 can jointly display the corresponding display content. If the television 103 needs to be deleted from that screen group (for example, the television 103 is removed while the three televisions are spliced together), as shown in (b) of FIG. 3H, the three televisions can keep their previous display content, and the televisions 101 and 102 can display a pop-up box 309 asking the user to confirm removing device 123xxx from the current screen group. The pop-up box 309 may include an OK button 310 and a Cancel button 311. In response to the user tapping the OK button 310, it is confirmed that the television 103 is removed from the screen group; as shown in (c) of FIG. 3H, the televisions 101 and 102 can jointly display the corresponding display content (which may be determined by the processor of the television 101 (the master device)), and the television 103 displays its own corresponding display content separately (which may be determined by the processor of the television 103). In addition, if the user taps the Cancel button 311, as shown in (b) of FIG. 3H, the three televisions can keep their previous display content.
It should be noted that if the removed device is the master device of the current screen group, the remaining devices in the screen group can re-elect a master device.
The master device refreshes the screen group splicing basic information and synchronizes it to all devices in the screen group so that each device in the screen group knows which device has been removed from the screen group. For example, the refreshed screen group splicing basic information may include the number of devices in the screen group (for example, 2), the MAC/ID of each device (for example, the IDs of the televisions 101 and 102), master-slave information (for example, the master device is the television 101 and the slave device is the television 102), and orientation information between the devices (for example, the televisions 101 and 102 are in the left-right spliced state). While the screen group system is running, the master device can implement the picture output display arrangement, interface focus switching, and the like based on the screen group splicing basic information. For example, the television 101 can divide its display content into N parts (for example, 2 parts) and distribute them to the devices in the screen group (for example, the television 101 (itself) and the television 102), where N is less than or equal to the number of devices in the screen group.
It should be noted that when a device is removed from the screen group, the screen group can be considered to have been reorganized, and the relative orientation relationships of the devices in the screen group can be re-determined; for example, steps 410-414 can be performed again.
Based on the method provided in this embodiment of this application, in the screen combination and splicing process, the devices' built-in cameras can be used to take photos, and the photos taken by the devices can be identified and compared; for example, the position of the overlapping area in each photo can be determined, and the relative orientation relationship of the two devices can thereby be identified without manual setting by the user, which can improve user experience. Moreover, by dynamically monitoring the distance between devices, this embodiment of this application can automatically identify the combination intention between devices and start the screen assembly procedure without manual setting by the user, which is more intelligent and convenient.
In addition, in some embodiments, the orientation relationship between the devices can be determined through human-computer interaction. For example, different actions (gestures) or objects can be used in the area directly in front of the cameras of the televisions 101 and 102 to indicate the positions of the different devices. For example, as shown in (a) of FIG. 13, the televisions 101 and 102 can first prompt the user to select the arrangement of the devices, which may include, for example: (1) top-bottom arrangement; (2) left-right arrangement. As shown in (b) of FIG. 13, in response to the user selecting (2) left-right arrangement, the televisions 101 and 102 can prompt the user: "Make gesture 1 in the area directly in front of the camera of the first device from the left, and make gesture 2 in the area directly in front of the camera of the second device from the left." After reading the prompt, the user can make gesture 1 in the area directly in front of the camera of the first device from the left (for example, the television 101) and gesture 2 in the area directly in front of the camera of the second device from the left (for example, the television 102). The television 101 can detect whether a human hand appears in its camera's field of view and, if so, capture an image; at the same time, the television 102 can detect whether a human hand appears in its camera's field of view and, if so, capture an image. The television 101 determines whether the gesture in the image it captured matches gesture 1 or gesture 2; if it matches gesture 1, the television 101 is determined to be the first device from the left. The television 102 determines whether the gesture in the image it captured matches gesture 1 or gesture 2; if it matches gesture 2, the television 102 is determined to be the second device from the left. In this way, it can be determined that the television 101 is on the left of the television 102. This can improve the user's sense of participation and enjoyment during screen splicing.
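A rough Python sketch of this interaction (an illustration only): the gesture recognizer itself is out of scope here and is stubbed out as a hypothetical classify_gesture helper, which is not part of this application.

```python
def classify_gesture(image) -> str:
    """Hypothetical recognizer stub; a real one might be a small hand-pose model."""
    raise NotImplementedError

def assign_position(captured_image, expected: dict):
    """Return this device's position if the gesture seen by its camera matches
    one of the expected gestures, e.g. {"gesture1": "left-1", "gesture2": "left-2"}.
    Returns None when no expected gesture is recognized."""
    label = classify_gesture(captured_image)  # hypothetical helper, see above
    return expected.get(label)

# On the television 101: returns "left-1" if the user showed gesture 1
# in front of its camera, marking it as the first device from the left.
```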
As shown in FIG. 14, an embodiment of this application provides a screen combination method, applied to a screen splicing system, where the screen splicing system includes at least two screens and a host, the at least two screens include a first screen and a second screen, and the host is integrated in the first screen or the second screen, or the host is independent of the first screen and the second screen. The method includes:
1401. The first screen and the second screen form a first screen group, and the first screen and the second screen are communicatively connected.
Optionally, before the first screen and the second screen form the first screen group, the method further includes: the first screen and the second screen send a first short-range signal to each other at a preset frequency, and the first screen or the second screen determines the distance between the first screen and the second screen according to the received signal strength indication (RSSI) of the first short-range signal transmitted between them; when the distance between the first screen and the second screen is less than or equal to the maximum combination radius corresponding to the two screens, the first screen and the second screen form the first screen group, where the maximum combination radius corresponding to the first screen and the second screen is determined according to the sizes of the two screens and the positions of their antennas.
Optionally, before the first screen and the second screen form the first screen group, the first screen and/or the second screen can display first prompt information, where the first prompt information is used to prompt the user that a nearby device has been detected and ask whether to perform screen splicing.
1402. The host sends a first instruction to the first screen.
In some embodiments, if the host is placed in the first screen (for example, the television 101), the first instruction may be a signal sent by the host to the camera of the television 101.
1403. The host sends a second instruction to the second screen.
In some embodiments, if the host is placed in the first screen (for example, the television 101), the television 101 can send the second instruction to the second screen (for example, the television 102); for the second instruction, refer to the first notification message above; details are not repeated here.
1404. The first screen captures a first image according to the first instruction.
The first image refers to an image (photo/picture) captured by the first screen (for example, the television 101).
1405. The second screen captures a second image according to the second instruction.
The second image refers to an image (photo/picture) captured by the second screen (for example, the television 102).
1406. Orientation information of the first screen and the second screen is determined according to the first image and the second image.
In some embodiments, determining the orientation information of the first screen and the second screen according to the first image and the second image includes: the first screen sends the first image to the second screen; the second screen sends the second image to the first screen; the first screen and the second screen each determine orientation information of the first screen and the second screen according to the first image and the second image; the first screen and the second screen respectively send the orientation information determined by the first screen and the orientation information determined by the second screen to the host; and the host determines the orientation information of the first screen and the second screen according to the orientation information determined by the first screen and the orientation information determined by the second screen.
In other embodiments, determining the orientation information of the first screen and the second screen according to the first image and the second image includes: the first screen sends the first image to the host; the second screen sends the second image to the host; and the host determines the orientation information of the first screen and the second screen according to the first image and the second image.
In one possible design, determining the orientation information of the first screen and the second screen according to the first image and the second image includes: performing image matching on the first image and the second image according to an image matching algorithm to determine an overlapping area of the two images; and determining the orientation of the first screen relative to the second screen according to the position of the overlapping area in the first image and its position in the second image. The image matching algorithm includes at least one of the scale-invariant feature transform (SIFT) algorithm, the speeded-up robust features (SURF) algorithm, and a fast nearest-neighbor search algorithm.
For example, if the overlapping area is located in the lower half of the first image and in the upper half of the second image, it is determined that the first screen is above the second screen; if the overlapping area is located in the lower-left corner of the first image and in the upper-right corner of the second image, the first screen is at the upper right of the second screen; if the overlapping area is located in the left half of the first image and in the right half of the second image, the first screen is to the right of the second screen; if the overlapping area is located in the upper-left corner of the first image and in the lower-right corner of the second image, the first screen is at the lower right of the second screen; if the overlapping area is located in the upper half of the first image and in the lower half of the second image, the first screen is below the second screen; if the overlapping area is located in the upper-right corner of the first image and in the lower-left corner of the second image, the first screen is at the lower left of the second screen; if the overlapping area is located in the right half of the first image and in the left half of the second image, the first screen is to the left of the second screen; if the overlapping area is located in the lower-right area of the first image and in the upper-left area of the second image, the first screen is at the upper left of the second screen.
In another possible design, if it is determined that the first image and the second image include a target object, the orientation of the first screen relative to the second screen is determined according to the positions of the target object in the first image and the second image.
In still other embodiments, before the first screen captures the first image according to the first instruction and the second screen captures the second image according to the second instruction, the method further includes: the host sends layout information to the first screen and the second screen, where the layout information includes at least one combination mode; in response to the user's operation of selecting a combination mode from the at least one combination mode, the host sends operation information to the first screen and the second screen, and the first screen and/or the second screen instruct the user, according to the operation information, to make a first gesture or action at a first position and a second gesture or action at a second position. Determining the orientation information of the first screen and the second screen according to the first image and the second image then includes: if it is determined that the area of the first image containing the first gesture or action is greater than or equal to a preset threshold, determining that the first screen is located at the first position; if it is determined that the area of the second image containing the second gesture or action is greater than or equal to the preset threshold, determining that the second screen is located at the second position.
Optionally, the first screen or the second screen scores the resource status of the first screen and the second screen, where the resource status includes at least one of central processing unit (CPU) processing capability, read-only memory (ROM) storage capability, and random access memory (RAM) storage capability; if the score of the first screen is higher than that of the second screen, the host is integrated in the first screen; if the score of the second screen is higher than that of the first screen, the host is integrated in the second screen.
Optionally, the host determines, according to the orientation information of the first screen and the second screen, display information respectively corresponding to the first screen and the second screen; the host sends the display information corresponding to the first screen to the first screen, and the first screen displays the corresponding display picture according to it; the host sends the display information corresponding to the second screen to the second screen, and after receiving it, the second screen displays the corresponding display picture according to it.
Optionally, the screen splicing system further includes a third screen, and the method further includes: the first screen and the third screen send a second short-range signal to each other, and the second screen and the third screen send a third short-range signal to each other; the distance between the first screen and the third screen is determined according to the RSSI of the second short-range signal, and the distance between the second screen and the third screen is determined according to the RSSI of the third short-range signal; when the distance between the first screen and the third screen is less than or equal to the maximum combination radius corresponding to the first screen and the third screen, the first screen, the second screen, and the third screen form a second screen group, where the maximum combination radius corresponding to the first screen and the third screen is determined according to the sizes of those two screens and the positions of their antennas; or, when the distance between the second screen and the third screen is less than or equal to the maximum combination radius corresponding to the second screen and the third screen, the first screen, the second screen, and the third screen form the second screen group, where the maximum combination radius corresponding to the second screen and the third screen is determined according to the sizes of those two screens and the positions of their antennas.
Optionally, the first screen and/or the second screen display second prompt information, where the second prompt information is used to prompt the user that a newly added device has been detected and ask whether to perform screen splicing.
In one possible design, if a first condition is met, the method further includes: the first screen and/or the second screen display third prompt information, where the third prompt information is used to prompt the user that the third screen has been removed from the current screen group.
The first condition includes: the heartbeat connection between the third screen and the first screen is broken, or the heartbeat connection between the third screen and the second screen is broken; or the host receives the user's operation of deleting the third screen; or the distance between the first screen and the third screen is greater than the maximum combination radius corresponding to the first screen and the third screen; or the distance between the second screen and the third screen is greater than the maximum combination radius corresponding to the second screen and the third screen.
After the third screen is removed from the current screen group, the method further includes: the host re-determines, according to the orientation information of the first screen and the second screen, the display information respectively corresponding to the first screen and the second screen.
It should be noted that the first screen in the embodiment shown in FIG. 14 may be the television 101 in the foregoing embodiments, the second screen may be the television 102, and the third screen may be the television 103. For parts of the embodiment shown in FIG. 14 that are not described in detail, refer to the foregoing embodiments; details are not repeated here.
Based on the method provided in this embodiment of this application, in the screen combination and splicing process, the built-in camera of a device (the first screen or the second screen) can be used to take photos, and the photos taken by the devices can be identified and compared; for example, the position of the overlapping area in each photo can be determined, and the relative orientation relationship of the two devices can thereby be identified without manual setting by the user, which can improve user experience. Moreover, by dynamically monitoring the distance between devices, this embodiment of this application can automatically identify the combination intention between devices and start the screen assembly procedure without manual setting by the user, which is more intelligent and convenient.
Another embodiment of this application provides a chip system. As shown in FIG. 15, the chip system includes at least one processor 1501 and at least one interface circuit 1502. The processor 1501 and the interface circuit 1502 can be interconnected by wires. For example, the interface circuit 1502 can be used to receive signals from other apparatuses (for example, the memory of the first screen, the memory of the second screen, or the memory of the third screen). For another example, the interface circuit 1502 can be used to send signals to other apparatuses (for example, the processor 1501).
For example, the interface circuit 1502 can read instructions stored in the memory of the device and send the instructions to the processor 1501. When the instructions are executed by the processor 1501, the first screen or the second screen (the screen 110 shown in FIG. 2A) can be caused to perform the steps in the foregoing embodiments.
Of course, the chip system may also include other discrete devices; this embodiment of this application does not impose a specific limitation on this.
Other embodiments of this application provide a first screen (the screen 110 shown in FIG. 2A). The first screen may include: a communication module, a memory, and one or more processors. The communication module and the memory are coupled to the processors. The memory is used to store computer program code, and the computer program code includes computer instructions.
An embodiment of this application further provides a computer-readable storage medium that includes computer instructions. When the computer instructions run on the first screen or the second screen (the screen 110 shown in FIG. 2A), the screen 110 is caused to perform the functions or steps performed by the television 101 or the television 102 in the foregoing method embodiments.
An embodiment of this application further provides a computer program product. When the computer program product runs on a computer, the computer is caused to perform the functions or steps performed by the first screen (for example, the television 101) or the second screen (for example, the television 102) in the foregoing method embodiments.
From the description of the above implementations, those skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the above functional modules is used as an example; in practical applications, the above functions can be assigned to different functional modules as needed, that is, the internal structure of the apparatus is divided into different functional modules to complete all or some of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; for example, the division into the modules or units is merely a division by logical function, and in actual implementation there may be other division manners; for example, multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not implemented. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between apparatuses or units may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may be one physical unit or multiple physical units, that is, they may be located in one place or distributed to multiple different places. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium. Based on this understanding, the technical solutions of the embodiments of this application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions to cause a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or some of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above content is merely specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (26)

  1. A screen combination method, applied to a screen splicing system, wherein the screen splicing system comprises at least two screens and a host, and the at least two screens comprise a first screen and a second screen, the method comprising:
    the first screen and the second screen forming a first screen group, the first screen and the second screen being communicatively connected;
    the host sending a first instruction to the first screen and a second instruction to the second screen;
    the first screen capturing a first image according to the first instruction;
    the second screen capturing a second image according to the second instruction; and
    determining orientation information of the first screen and the second screen according to the first image and the second image.
  2. The method according to claim 1, wherein the determining the orientation information of the first screen and the second screen according to the first image and the second image comprises:
    the first screen sending the first image to the second screen;
    the second screen sending the second image to the first screen;
    the first screen and the second screen each determining orientation information of the first screen and the second screen according to the first image and the second image;
    the first screen sending the orientation information determined by the first screen to the host;
    the second screen sending the orientation information determined by the second screen to the host; and
    the host determining the orientation information of the first screen and the second screen according to the orientation information determined by the first screen and the orientation information determined by the second screen.
  3. The method according to claim 1 or 2, wherein the determining the orientation information of the first screen and the second screen according to the first image and the second image comprises:
    the first screen sending the first image to the host;
    the second screen sending the second image to the host; and
    the host determining the orientation information of the first screen and the second screen according to the first image and the second image.
  4. The method according to any one of claims 1 to 3, wherein before the first screen and the second screen form the first screen group, the method further comprises:
    the first screen and the second screen sending a first short-range signal to each other at a preset frequency, and the first screen or the second screen determining the distance between the first screen and the second screen according to a received signal strength indication (RSSI) of the first short-range signal transmitted between the first screen and the second screen; and
    when the distance between the first screen and the second screen is less than or equal to a maximum combination radius corresponding to the first screen and the second screen, the first screen and the second screen forming the first screen group,
    wherein the maximum combination radius corresponding to the first screen and the second screen is determined according to the sizes of the first screen and the second screen and the positions of their antennas.
  5. The method according to any one of claims 1 to 4, wherein before the first screen and the second screen form the first screen group, the method further comprises:
    the first screen and/or the second screen displaying first prompt information; and
    the first screen and/or the second screen obtaining an indication from a user, wherein the indication is used to confirm that screen splicing is to be performed.
  6. The method according to any one of claims 1 to 5, wherein the determining the orientation information of the first screen and the second screen according to the first image and the second image comprises:
    performing image matching on the first image and the second image according to an image matching algorithm, and determining an overlapping area of the first image and the second image; and
    determining the orientation of the first screen relative to the second screen according to the position of the overlapping area in the first image and the position of the overlapping area in the second image.
  7. The method according to claim 6, wherein the determining the orientation of the first screen relative to the second screen according to the position of the overlapping area in the first image comprises:
    if the overlapping area is located in the lower half of the first image and in the upper half of the second image, determining that the first screen is located above the second screen;
    if the overlapping area is located in the lower-left corner of the first image and in the upper-right corner of the second image, determining that the first screen is located at the upper right of the second screen;
    if the overlapping area is located in the left half of the first image and in the right half of the second image, determining that the first screen is located to the right of the second screen;
    if the overlapping area is located in the upper-left corner of the first image and in the lower-right corner of the second image, determining that the first screen is located at the lower right of the second screen;
    if the overlapping area is located in the upper half of the first image and in the lower half of the second image, determining that the first screen is located below the second screen;
    if the overlapping area is located in the upper-right corner of the first image and in the lower-left corner of the second image, determining that the first screen is located at the lower left of the second screen;
    if the overlapping area is located in the right half of the first image and in the left half of the second image, determining that the first screen is located to the left of the second screen; and
    if the overlapping area is located in the lower-right area of the first image and in the upper-left area of the second image, determining that the first screen is located at the upper left of the second screen.
  8. The method according to claim 6 or 7, wherein
    the image matching algorithm comprises at least one of the scale-invariant feature transform (SIFT) algorithm, the speeded-up robust features (SURF) algorithm, and a fast nearest-neighbor search algorithm.
  9. The method according to any one of claims 1 to 8, wherein the determining the orientation information of the first screen and the second screen according to the first image and the second image comprises:
    if it is determined that the first image and the second image comprise a target object, determining the orientation of the first screen relative to the second screen according to the positions of the target object in the first image and the second image.
  10. The method according to any one of claims 1 to 9, wherein before the first screen captures the first image according to the first instruction and the second screen captures the second image according to the second instruction, the method further comprises:
    the host sending layout information to the first screen and the second screen, wherein the layout information comprises at least one combination mode; and
    in response to a user's operation of selecting a combination mode from the at least one combination mode, the host sending operation information to the first screen and the second screen, and the first screen and/or the second screen instructing the user, according to the operation information, to make a first gesture or action at a first position and a second gesture or action at a second position; and
    the determining the orientation information of the first screen and the second screen according to the first image and the second image comprises:
    if it is determined that the area of the first image containing the first gesture or action is greater than or equal to a preset threshold, determining that the first screen is located at the first position; and if it is determined that the area of the second image containing the second gesture or action is greater than or equal to the preset threshold, determining that the second screen is located at the second position.
  11. 根据权利要求1-10任一项所述的方法,其特征在于,所述主机集成于所述第一屏幕或所述第二屏幕中,所述第一屏幕及所述第二屏幕组成第一屏组,所述方法还包括:
    所述第一屏幕或所述第二屏幕对所述第一屏幕和所述第二屏幕的资源情况进行评分;其中,所述资源情况包括中央处理单元CPU处理能力、只读存储器ROM存储能力或随机存取存储器RAM存储能力中的至少一项;
    若所述第一屏幕的评分高于所述第二屏幕的评分,所述主机集成于所述第一屏幕中;
    若所述第二屏幕的评分高于所述第一屏幕的评分,所述主机集成于所述第二屏幕中。
  12. 根据权利要求1-11任一项所述的方法,其特征在于,所述方法还包括:
    所述主机根据所述第一屏幕与所述第二屏幕的方位信息确定所述第一屏幕和所述第二屏幕分别对应的显示信息;
    所述主机向所述第一屏幕发送所述第一屏幕对应的显示信息;
    所述第一屏幕根据所述第一屏幕对应的显示信息显示对应的显示画面;
    所述主机向所述第二屏幕发送所述第二屏幕对应的显示信息;
    所述第二屏幕接收所述第二屏幕对应的显示信息后,根据所述第二屏幕对应的显示信息显示对应的显示画面。
  13. 根据权利要求1-12任一项所述的方法,其特征在于,所述屏幕拼接系统还包括第三屏幕,所述方法还包括:
    所述第一屏幕与所述第三屏幕互相发送第二短距信号;
    所述第二屏幕与所述第三屏幕互相发送第三短距信号;
    根据所述第二短距信号的RSSI确定所述第一屏幕与所述第三屏幕的距离;根据所述第三短距信号的RSSI确定所述第二屏幕与所述第三屏幕的距离;
    当所述第一屏幕和第三屏幕的距离小于等于所述第一屏幕和所述第三屏幕对应的最大组合半径时,将所述第一屏幕、所述第二屏幕及所述第三屏幕组成第二屏组;其中,所述第一屏幕和所述第三屏幕对应的所述最大组合半径是根据所述第一屏幕和所述第三屏幕的尺寸和天线的位置确定的;或者
    当所述第二屏幕和第三屏幕的距离小于等于所述第二屏幕和所述第三屏幕对应的最大组合半径时,将所述第一屏幕、所述第二屏幕及所述第三屏幕组成第二屏组;其中,所述第二屏幕和所述第三屏幕对应的所述最大组合半径是根据所述第二屏幕和所述第三屏幕的尺寸和天线的位置确定的。
  14. 根据权利要求13所述的方法,其特征在于,所述方法还包括:
    所述第一屏幕和/或所述第二屏幕显示第二提示信息;
    所述第一屏幕和/或所述第二屏幕获取用户的指示,所述指示用于确认进行屏幕拼接。
  15. The method according to claim 13 or 14, wherein the method further comprises:
    the first screen and/or the second screen detecting whether a first condition is met;
    if the first condition is met, the first screen and/or the second screen removing the third screen from the second screen group.
  16. The method according to claim 15, wherein the first condition comprises:
    the heartbeat connection between the third screen and the first screen is broken, or the heartbeat connection between the third screen and the second screen is broken; or
    the host receives an operation by which a user deletes the third screen; or
    the distance between the first screen and the third screen is greater than the maximum combination radius corresponding to the first screen and the third screen, or the distance between the second screen and the third screen is greater than the maximum combination radius corresponding to the second screen and the third screen.
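The first condition of claim 16 is a disjunction of three tests. A sketch, assuming the last heartbeat timestamp is tracked per screen; the 3-second timeout is an assumption, since the claim only requires that the heartbeat connection is broken:

```python
import time

HEARTBEAT_TIMEOUT_S = 3.0  # illustrative timeout, not specified by the claim


def should_remove_third_screen(last_heartbeat_ts: float,
                               user_deleted: bool,
                               distance_m: float,
                               max_radius_m: float) -> bool:
    """True when any branch of the first condition holds, so the third
    screen is removed from the second screen group (claims 15-16)."""
    heartbeat_lost = time.monotonic() - last_heartbeat_ts > HEARTBEAT_TIMEOUT_S
    return heartbeat_lost or user_deleted or distance_m > max_radius_m
```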
  17. A screen combination method, applied to a screen splicing system, the screen splicing system comprising at least two screens and a host, the at least two screens comprising a first screen and a second screen, the first screen and the second screen forming a first screen group, and the first screen and the second screen being communicatively connected, wherein the method comprises:
    the host sending a first instruction to the first screen, the first instruction being used to instruct the first screen to capture a first image;
    the host sending a second instruction to the second screen, the second instruction being used to instruct the second screen to capture a second image;
    the host determining orientation information of the first screen and the second screen according to the first image and the second image.
  18. The method according to claim 17, wherein the host determining orientation information of the first screen and the second screen according to the first image and the second image comprises:
    the host receiving the first image from the first screen;
    the host receiving the second image from the second screen;
    the host determining the orientation information of the first screen and the second screen according to the first image and the second image.
  19. The method according to claim 17 or 18, wherein the determining orientation information of the first screen and the second screen according to the first image and the second image comprises:
    the host performing image matching on the first image and the second image according to an image matching algorithm to determine an overlapping region of the first image and the second image;
    determining the orientation of the first screen relative to the second screen according to the position of the overlapping region in the first image and in the second image.
  20. The method according to claim 19, wherein the determining the orientation of the first screen relative to the second screen according to the position of the overlapping region in the first image comprises:
    if the overlapping region is located in the lower half of the first image and in the upper half of the second image, determining that the first screen is located above the second screen;
    if the overlapping region is located in the lower-left corner of the first image and in the upper-right corner of the second image, determining that the first screen is located to the upper right of the second screen;
    if the overlapping region is located in the left half of the first image and in the right half of the second image, determining that the first screen is located to the right of the second screen;
    if the overlapping region is located in the upper-left corner of the first image and in the lower-right corner of the second image, determining that the first screen is located to the lower right of the second screen;
    if the overlapping region is located in the upper half of the first image and in the lower half of the second image, determining that the first screen is located below the second screen;
    if the overlapping region is located in the upper-right corner of the first image and in the lower-left corner of the second image, determining that the first screen is located to the lower left of the second screen;
    if the overlapping region is located in the right half of the first image and in the left half of the second image, determining that the first screen is located to the left of the second screen;
    if the overlapping region is located in the lower-right corner of the first image and in the upper-left corner of the second image, determining that the first screen is located to the upper left of the second screen.
  21. The method according to claim 19 or 20, wherein
    the image matching algorithm comprises at least one of a scale-invariant feature transform (SIFT) algorithm, a speeded-up robust features (SURF) algorithm, or a fast nearest-neighbor search algorithm.
  22. The method according to claim 17, wherein the determining orientation information of the first screen and the second screen according to the first image and the second image comprises:
    if it is determined that the first image and the second image include a target object, determining the orientation of the first screen relative to the second screen according to the position of the target object in the first image and in the second image.
  23. The method according to claim 17, wherein before the first screen captures the first image according to the first instruction and the second screen captures the second image according to the second instruction, the method further comprises:
    the host sending layout information to the first screen and the second screen, the layout information including at least one combination mode;
    in response to an operation by which a user selects one combination mode from the at least one combination mode, the host sending operation information to the first screen and the second screen, the operation information being used to instruct the user to perform a first gesture or action at a first position and a second gesture or action at a second position;
    and the determining orientation information of the first screen and the second screen according to the first image and the second image comprises:
    if it is determined that the region of the first image containing the first gesture or action is greater than or equal to a preset threshold, determining that the first screen is located at the first position; and if it is determined that the region of the second image containing the second gesture or action is greater than or equal to the preset threshold, determining that the second screen is located at the second position.
  24. The method according to any one of claims 17-23, wherein the method further comprises:
    the host determining, according to the orientation information of the first screen and the second screen, display information corresponding to each of the first screen and the second screen;
    the host sending the display information corresponding to the first screen to the first screen;
    the host sending the display information corresponding to the second screen to the second screen.
  25. An electronic device, wherein the electronic device comprises: a display screen, a wireless communication module, a memory, and one or more processors; the wireless communication module and the memory are coupled to the processor;
    wherein the memory is configured to store computer program code, the computer program code comprising computer instructions; and when the computer instructions are executed by the processor, the electronic device is caused to perform the method according to any one of claims 17-24.
  26. A computer-readable storage medium, comprising computer instructions;
    wherein when the computer instructions run on an electronic device, the electronic device is caused to perform the method according to any one of claims 17-24.
PCT/CN2021/136884 2021-02-08 2021-12-09 Screen combination method and apparatus WO2022166395A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21924388.8A EP4273691A1 (en) 2021-02-08 2021-12-09 Screen combination method and apparatus
US18/264,517 US20240045638A1 (en) 2021-02-08 2021-12-09 Screen Combination Method and Apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110171975.0 2021-02-08
CN202110171975.0A CN114942735A (zh) 2021-02-08 Screen combination method and apparatus

Publications (1)

Publication Number Publication Date
WO2022166395A1 (zh)

Family

ID: 82741872

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/136884 WO2022166395A1 (zh) 2021-02-08 2021-12-09 Screen combination method and apparatus

Country Status (4)

Country Link
US (1) US20240045638A1 (zh)
EP (1) EP4273691A1 (zh)
CN (1) CN114942735A (zh)
WO (1) WO2022166395A1 (zh)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104731541A (zh) * 2015-03-17 2015-06-24 Lenovo (Beijing) Co., Ltd. Control method, electronic device and system
US20170038923A1 (en) * 2015-08-07 2017-02-09 Canon Kabushiki Kaisha Information processing apparatus, display control method, and program
CN106020757A (zh) * 2016-05-16 2016-10-12 Lenovo (Beijing) Co., Ltd. Screen splicing method and electronic device
CN108304148A (zh) * 2017-01-11 2018-07-20 ZTE Corporation Method and device for multi-screen splicing display
CN108509167A (zh) * 2018-02-12 2018-09-07 Qisda (Suzhou) Co., Ltd. Screen splicing method and screen system

Also Published As

Publication number Publication date
US20240045638A1 (en) 2024-02-08
EP4273691A1 (en) 2023-11-08
CN114942735A (zh) 2022-08-26


Legal Events

121  EP: the EPO has been informed by WIPO that EP was designated in this application
     Ref document number: 21924388; Country of ref document: EP; Kind code of ref document: A1
WWE  WIPO information: entry into national phase
     Ref document number: 18264517; Country of ref document: US
ENP  Entry into the national phase
     Ref document number: 2021924388; Country of ref document: EP; Effective date: 20230803
NENP Non-entry into the national phase
     Ref country code: DE