CN113163144B - Wireless Presentation System - Google Patents


Info

Publication number
CN113163144B
Authority
CN
China
Prior art keywords
video data
transmitting
control unit
receiving end
transmission module
Prior art date
Legal status
Active
Application number
CN202010014552.3A
Other languages
Chinese (zh)
Other versions
CN113163144A (en)
Inventor
施嘉南
吴镇吉
游琳源
江进富
曾荣堃
吴壮为
Current Assignee
BenQ Intelligent Technology Shanghai Co Ltd
BenQ Corp
Original Assignee
BenQ Intelligent Technology Shanghai Co Ltd
BenQ Corp
Priority date
Filing date
Publication date
Application filed by BenQ Intelligent Technology Shanghai Co Ltd and BenQ Corp
Priority to CN202010014552.3A (CN113163144B)
Priority to EP21150540.9A (EP3849175A1)
Priority to US17/143,221 (US11245867B2)
Priority to EP21150536.7A (EP3849174A1)
Priority to US17/143,214 (US11956563B2)
Publication of CN113163144A
Priority to US18/203,108 (US20230300287A1)
Application granted
Publication of CN113163144B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/76: Television signal recording
    • H04N 5/765: Interface circuits between an apparatus for recording and another apparatus
    • H04N 9/00: Details of colour television systems
    • H04N 9/12: Picture reproducers
    • H04N 9/31: Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N 9/3141: Constructional details thereof
    • H04N 9/3179: Video signal processing therefor

Abstract

The invention provides a wireless presentation system comprising a first transmitting end device and a receiving end device, wherein the first transmitting end device transmits first video data. The receiving end device comprises a receiving end control unit, which assigns a first identification code to the first transmitting end device. The receiving end control unit synthesizes the first video data with a first identification image corresponding to the first identification code into first synthesized video data, and outputs the first synthesized video data to a display device. The invention helps conference participants quickly identify which presenter's video source corresponds to a given picture within a split screen, so that questions can be raised with that presenter.

Description

Wireless presentation system
Technical Field
The invention relates to a wireless presentation system capable of identifying video signal sources.
Background
Conventionally, when a presenter wants to use an information processing device (such as a notebook computer) to give a presentation on a display device, a video signal line (such as an HDMI cable) is used to connect the information processing device to the display device (such as a projector or a large-sized television), so that the presentation video data output by the information processing device can be transmitted to and displayed on the display device. However, connecting the information processing device and the display device with a wired video signal line has its limitations. For example, when presentation data stored on a different information processing device is to be shown during a conference, the presenter must first unplug the video signal line from the original information processing device and then connect it to the other information processing device, which often interrupts the conference agenda.
To address this inconvenience, multi-user wireless presentation system products have appeared on the market. Such a product generally includes a plurality of transmitting end devices and a single receiving end device. Each conference participant can connect a transmitting end device to the video output port of his or her own information processing device, while the receiving end device is connected to the video input port of the display device. When only one presenter presses the projection button on a transmitting end device, that transmitting end device performs wireless video data transmission with the receiving end device, and the display device displays only the video data output by that single information processing device. When several presenters press the projection buttons on their transmitting end devices, multiple streams of video data are transmitted wirelessly to the receiving end device, and the display device displays the video data output by the plurality of information processing devices in a split-screen manner. In this split-screen display state, however, the conference participants are easily confused, because they cannot tell which presenter's information processing device is the source of each video signal in the split screen of the display device. Therefore, when a participant has a question about a particular picture within the split screen, the participant must first ask around to confirm whose picture it is before raising the question with the relevant presenter.
Disclosure of Invention
The present invention provides a wireless presentation system capable of identifying the source of video signals, so as to effectively solve the above-mentioned problems encountered in the prior art.
According to a first aspect of the present invention, a wireless presentation system is provided, which includes a first transmitting end device and a receiving end device, wherein the first transmitting end device transmits first video data. The receiving end device includes a receiving end control unit, which assigns a first identification code to the first transmitting end device. The receiving end control unit synthesizes the first video data with a first identification image corresponding to the first identification code into first synthesized video data, and outputs the first synthesized video data to a display device.
According to a second aspect of the present invention, a wireless presentation system is provided, which includes a first transmitting end device and a receiving end device. The first transmitting end device transmits first video data and a first identification code. The receiving end device includes a receiving end control unit, which synthesizes the first video data with a first identification image corresponding to the first identification code into first synthesized video data and outputs the first synthesized video data to a display device.
According to a third aspect of the present invention, a wireless presentation system is provided, which includes a first transmitting end device, a second transmitting end device and a receiving end device. The first transmitting end device includes a first transmitting end control unit, which synthesizes first video data with a first identification image corresponding to a first identification code into first synthesized video data, and the first transmitting end device transmits the first synthesized video data. The second transmitting end device includes a second transmitting end control unit, which synthesizes second video data with a second identification image corresponding to a second identification code into second synthesized video data, and the second transmitting end device transmits the second synthesized video data. The receiving end device receives the first synthesized video data and the second synthesized video data, synthesizes them into split-screen video data, and outputs the split-screen video data to a display device.
According to a fourth aspect of the present invention, a wireless presentation system is provided, which includes a first transmitting end device and a receiving end device. The first transmitting end device includes a first transmitting end control unit and a first transmission module. The first transmitting end control unit receives first video data, and the first transmission module transmits the first video data. The receiving end device is coupled to a display device and includes a second transmission module and a receiving end control unit. The second transmission module selectively establishes a connection with the first transmission module and receives the first video data. The receiving end control unit outputs the first video data to the display device; when the first video data is displayed on the display device, it corresponds to a first video area. The receiving end control unit receives a position of a user operation provided by a user input device. When the receiving end control unit determines that the position of the user operation is located in the first video area, it sends a prompt instruction to the first transmitting end device, so that the first transmitting end device emits a first audible and visual signal.
Compared with the prior art, the invention avoids the problem that, in the split-screen display state of the conventional approach, participants cannot identify which presenter's information processing device is the source of each video signal in the split screen of the display device. Conference participants can quickly identify the presenter whose video source corresponds to a given picture within the split screen, and then raise questions for discussion.
For a better understanding of the above and other aspects of the invention, reference is made in detail to the following embodiments, which are illustrated in the accompanying drawings.
Drawings
FIG. 1 is a flowchart illustrating a method for identifying a source of a video signal according to a first embodiment of the present invention.
FIG. 2 is a detailed flowchart of the method of identifying the source of the video signal in FIG. 1.
Fig. 3 is a block diagram of a wireless presentation system according to a first embodiment of the present invention.
FIG. 4A is a schematic diagram illustrating an application scenario of the wireless presentation system of FIG. 3.
FIG. 4B is a schematic diagram of the image area and the identification area of the embodiment shown in FIG. 4A.
FIG. 4C is a schematic diagram illustrating the image area and the identification area of another embodiment.
FIG. 5 is a flowchart illustrating a method for identifying a source of a video signal according to a second embodiment of the present invention.
FIG. 6 is a detailed flowchart of the method of identifying the source of the video signal in FIG. 5.
FIG. 7 is a flowchart illustrating a method for identifying a source of a video signal according to a third embodiment of the present invention.
Fig. 8 is a block diagram of a wireless presentation system according to a third embodiment of the invention.
FIG. 9 is a schematic diagram illustrating an application scenario of the wireless presentation system of FIG. 8.
FIG. 10 is a flowchart illustrating a method for identifying a source of a video signal according to a fourth embodiment of the present invention.
FIG. 11 is a detailed flowchart of the method of identifying the source of the video signal in FIG. 10.
Fig. 12 is a block diagram of a wireless presentation system according to a fourth embodiment of the invention.
Detailed Description
For a further understanding of the objects, construction, features and functions of the invention, reference should be made to the following detailed description of the preferred embodiments.
Referring to fig. 1, a flowchart of a method for identifying the source of a video signal according to a first embodiment of the invention is shown. Referring to fig. 3, a block diagram of a wireless presentation system 300 according to the first embodiment of the invention is shown. As shown in fig. 1, the method for identifying the source of a video signal comprises the following steps. Step 102: the receiving end control unit 324 in the receiving end device 320 assigns a first identification code to the first transmitting end device 310. Step 104: the first video data is transmitted by the first transmitting end device 310. Step 106: the receiving end control unit 324 synthesizes the first video data with a first identification image corresponding to the first identification code. Step 108: the receiving end control unit 324 outputs the first synthesized video data to the display device 360.
As shown in fig. 3, the wireless presentation system includes a first transmitting device 310 and a receiving device 320, and optionally includes a second transmitting device 330. The first transmitting device 310 includes a first transmitting control unit 312 and a first transmission module 314, the first transmitting control unit 312 is configured to receive the first video data Vid1 from the first video source 350, and the first transmission module 314 is configured to transmit the first video data Vid1. Similarly, the second transmitting device 330 includes a second transmitting control unit 332 and a third transmitting module 334, wherein the second transmitting control unit 332 is configured to receive the second video data Vid2 from the second video source 352, and the third transmitting module 334 is configured to transmit the second video data Vid2. The receiver device 320 includes a second transmission module 322 and a receiver control unit 324. The second transmission module 322 selectively establishes a wireless transmission WL1 with the first transmission module 314 and receives the first video data Vid1, and the second transmission module 322 selectively establishes a wireless transmission WL2 with the third transmission module 334 and receives the second video data Vid2.
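For readers who prefer to see the decomposition of fig. 3 in code form, the following Python sketch mirrors the transmitter/receiver structure described above. It is an illustrative outline only; all class and attribute names are assumptions made for this sketch and do not appear in the patent.

```python
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class TransmissionModule:
    """A wireless link endpoint such as the first, second or third transmission module."""
    peer: Optional["TransmissionModule"] = None

    def connect(self, other: "TransmissionModule") -> None:
        # Selectively establish a wireless link (WL1 or WL2 in fig. 3).
        self.peer, other.peer = other, self

@dataclass
class TransmitterDevice:
    """A transmitting end device: a control unit plus a transmission module."""
    name: str
    module: TransmissionModule = field(default_factory=TransmissionModule)
    identification_code: Optional[int] = None   # assigned by the receiver in the first embodiment

@dataclass
class ReceiverDevice:
    """The receiving end device: its own transmission module plus a control unit."""
    module: TransmissionModule = field(default_factory=TransmissionModule)
    connected: List[TransmitterDevice] = field(default_factory=list)  # in connection order
```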
Referring to fig. 2, an example of a detailed flowchart of the method for identifying the source of the video signal in fig. 1 is shown. Corresponding to step 102, when the receiving end device 320 establishes a connection with the first transmitting end device 310, the receiving end control unit 324 assigns a first identification code to the first transmitting end device 310; the detailed steps include steps 204 to 206 shown in fig. 2. In step 204, it is determined whether the receiving end device 320 has established a connection with a transmitting end device. If a connection has been established, step 206 is performed. In step 206, the receiving end device 320 records the order in which the transmitting end device established its connection and assigns to it the identification code, and corresponding color, for that order. Regarding step 204, after the receiving end device 320 and the first transmitting end device 310 are powered on, the user can press the pairing buttons on the receiving end device 320 and the first transmitting end device 310, after which the two devices attempt to establish a connection. After the connection between the first transmission module 314 of the first transmitting end device 310 and the second transmission module 322 of the receiving end device 320 is established, step 206 is performed: the receiving end device 320 and the first transmitting end device 310 exchange and update their internal information, and the receiving end device 320 records the order in which the first transmitting end device 310 established its connection and assigns the identification code of the corresponding order, with its color, to the transmitting end device. The first transmitting end control unit 312 and the receiving end control unit 324 both store a preset list of the lamp colors corresponding to the identification codes; for example, the first transmitting end device that successfully establishes a connection with the receiving end device 320 is assigned the identification code #1, whose corresponding first identification image is red, and the second transmitting end device that successfully establishes a connection is assigned the identification code #2, whose corresponding second identification image is green. Thus, when the first transmitting end device 310 is the first transmitting end device to establish a connection, the receiving end control unit 324 records that the first transmitting end device 310 established its connection first in order and assigns the identification code #1 to the first transmitting end device 310. Similarly, when the second transmitting end device 330 is powered on later and becomes the second transmitting end device to successfully establish a connection with the receiving end device 320, the receiving end control unit 324 records that the second transmitting end device 330 established its connection second in order and assigns the identification code #2 to the second transmitting end device 330. By looking up the preset list of corresponding lamp colors, the receiving end control unit 324 knows that the first identification image corresponding to the identification code #1 is red and the second identification image corresponding to the identification code #2 is green.
In the present embodiment, the first identification image and the second identification image are illustrated as red and green, respectively, but the invention is not limited thereto; the first identification image and the second identification image may instead be, for example, the Arabic numerals "1" and "2" or the letters "A" and "B".
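A minimal sketch of the assignment logic of steps 204 to 206 follows, assuming a preset code-to-color list like the one described above. The class and method names are illustrative only and are not taken from the patent.

```python
# Preset list shared by the transmitting end and receiving end control units:
# identification code -> identification image (here, a lamp/block color).
ID_COLOR_TABLE = {1: "red", 2: "green", 3: "blue", 4: "yellow"}

class ReceiverControlUnit:
    def __init__(self):
        self.connection_order = []   # transmitting end devices, in the order they connected
        self.assigned = {}           # transmitter name -> identification code

    def on_connection_established(self, transmitter_name):
        """Step 206: record the connection order and assign the code for that order."""
        self.connection_order.append(transmitter_name)
        code = len(self.connection_order)        # 1st connection -> #1, 2nd -> #2, ...
        self.assigned[transmitter_name] = code
        return code, ID_COLOR_TABLE[code]        # e.g. (1, "red") for the first device
```

Because both sides hold the same table, a code such as #1 translates into the same red identification image on screen and on the transmitter's indicator lamp.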
Corresponding to step 104, the detailed steps include step 208 shown in fig. 2. In step 208, it is determined whether a transmitting end device is transmitting video data to the receiving end device 320; if so, step 210 is performed. When the first transmission module 314 of the first transmitting end device 310 has established a connection with the second transmission module 322 of the receiving end device 320 but the first user has not yet pressed the projection button on the first transmitting end device 310, the first transmitting end device 310 does not transmit the first video data Vid1 to the receiving end device 320. When the first user presses the projection button on the first transmitting end device 310, the first transmitting end device 310 starts transmitting the first video data Vid1 to the receiving end device 320, so that the condition of step 208 is satisfied and step 210 is performed. Similarly, when the third transmission module 334 of the second transmitting end device 330 is connected to the second transmission module 322 of the receiving end device 320, the second transmitting end device 330 starts transmitting the second video data Vid2 to the receiving end device 320 after the second user presses the projection button on the second transmitting end device 330.
As shown in fig. 2, the method for identifying the source of video signals of the present embodiment may optionally include step 209, which determines whether more than one transmitting end device is transmitting video data to the receiving end device 320; if so, step 210 is performed, and if not, step 210 is skipped. Step 209 allows the receiving end control unit 324 to dynamically turn off the video synthesis circuit inside the receiving end control unit 324 and thereby dynamically reduce its power consumption, achieving a power-saving effect. Step 209 applies to the following situations. Although both the first transmitting end device 310 and the second transmitting end device 330 are already connected to the receiving end device 320: (1) when only the first transmitting end device 310 is transmitting the first video data Vid1 to the receiving end device 320 and the whole frame of the display device 360 displays the first video data Vid1, the participants can readily understand that the presenter currently operating the first transmitting end device 310 is the one outputting the first video data Vid1; the receiving end control unit 324 can therefore omit displaying the color corresponding to the identification code on the display device 360, for example by turning off its internal video synthesis circuit and skipping step 210, so as to reduce the power consumption of the receiving end control unit 324. (2) When the first transmitting end device 310 is transmitting the first video data Vid1 to the receiving end device 320 and the second transmitting end device 330 is transmitting the second video data Vid2 to the receiving end device 320, the display device 360 displays the first video data Vid1 and the second video data Vid2 in a split-screen mode and the participants cannot readily identify the video signal sources; only then does the receiving end control unit 324 need to display the colors corresponding to the identification codes on the display device 360, for example by enabling its internal video synthesis circuit, and step 210 is performed.
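The gating decision of step 209 can be pictured as follows, under the assumption that the receiving end control unit exposes simple calls for switching its internal video synthesis circuit on and off; `enable_synthesis` and `disable_synthesis` are hypothetical names used only for this sketch.

```python
def update_synthesis_circuit(receiver_control_unit, active_streams):
    """Step 209: composite identification images only when the screen is split.

    active_streams: the transmitting end devices currently sending video data.
    """
    if len(active_streams) > 1:
        # Split-screen display: viewers need the identification colors,
        # so the video synthesis circuit is turned on (proceed to step 210).
        receiver_control_unit.enable_synthesis()
    else:
        # A single full-screen source (or none): the presenter is obvious,
        # so the circuit is turned off to save power (skip step 210).
        receiver_control_unit.disable_synthesis()
```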
Corresponding to steps 106 to 108, the detailed steps include step 210 shown in fig. 2: the receiving end device 320 displays, on the display device 360, the color corresponding to the identification code assigned to the transmitting end device, and the indicator on the transmitting end device displays the same color. When the first transmitting end device 310 transmits the first video data Vid1 to the second transmission module 322, the receiving end control unit 324 synthesizes the first video data Vid1 and the first identification image (e.g. a red identification image) into the first synthesized video data vid_c1 and outputs the first synthesized video data vid_c1 to the display device 360. When the first synthesized video data vid_c1 is displayed on the display device 360, as shown in fig. 4A, it corresponds to a first video area 462 and a first identification area 464; the first video area 462 is adjacent to the first identification area 464, the first video area 462 displays the first video data Vid1, and the first identification area 464 displays the first identification image. The receiving end control unit 324 also sends a prompt instruction requesting the first indicating device 316 of the first transmitting end device 310 to display the first identification image. In this embodiment, the receiving end control unit 324 synthesizes the first video data Vid1 and the red identification image into the first synthesized video data vid_c1, so that the corresponding red identification image is displayed on the display device 360 and the indicator on the first transmitting end device 310 displays red, corresponding to the identification code #1. Likewise, the receiving end control unit 324 synthesizes the second video data Vid2 with the green identification image into the second synthesized video data vid_c2 and outputs the second synthesized video data vid_c2 to the display device 360, so that the corresponding green identification image is displayed on the display device 360 and the second indicator 336 (e.g. an indicator lamp) on the second transmitting end device 330 displays green, corresponding to the identification code #2.
Fig. 4A and fig. 4B are, respectively, a schematic diagram of an application scenario of the wireless presentation system of the present invention and a schematic diagram of the video areas and identification areas of this embodiment. The first video source 350 (e.g. a notebook computer) operated by the first user P1 outputs the first video data Vid1 to the first transmitting end device 310, and the second video source 352 (e.g. a notebook computer) operated by the second user P2 outputs the second video data Vid2 to the second transmitting end device 330. Both the first transmitting end device 310 and the second transmitting end device 330 establish connections with the receiving end device 320. Because the first transmitting end device 310 is the first transmitting end device to establish a connection with the receiving end device 320, it is assigned the identification code #1. When the first synthesized video data vid_c1 is displayed on the display device 360, it corresponds to the first video area 462 and the first identification area 464 located in the left half of the display device 360; the first video area 462 is adjacent to the first identification area 464, the first video area 462 displays the first video data Vid1, and the first identification area 464 displays the red identification image corresponding to the identification code #1. As shown in fig. 4B, the first identification area 464 is located at the upper left corner of the first video area 462, where a red color block is displayed. Similarly, because the second transmitting end device 330 is the second transmitting end device to establish a connection with the receiving end device 320, it is assigned the identification code #2. When the second synthesized video data vid_c2 is displayed on the display device 360, it corresponds to the second video area 466 and the second identification area 468 located in the right half of the display device 360; the second video area 466 is adjacent to the second identification area 468, the second video area 466 displays the second video data Vid2, and the second identification area 468 displays the green identification image corresponding to the identification code #2. The second identification area 468 is located at the upper left corner of the second video area 466, where a green color block is displayed. As shown in fig. 4B, the receiving end control unit 324 assigns the second identification code (e.g. identification code #2) to the second transmitting end device 330, the second video data Vid2 is transmitted via the second transmitting end device 330, the receiving end control unit 324 synthesizes the second video data Vid2 with the second identification image (e.g. the green identification image) corresponding to the second identification code into the second synthesized video data vid_c2, and the receiving end control unit 324 outputs the first synthesized video data vid_c1 and the second synthesized video data vid_c2 to the display device 360, where they are displayed in a split-screen mode. For example, the receiving end control unit 324 synthesizes the first synthesized video data vid_c1 and the second synthesized video data vid_c2 into split-screen video data and outputs the split-screen video data to the display device 360.
Referring to fig. 4C, a schematic diagram of the video areas and identification areas of another embodiment is shown, in which the first identification area 464 surrounds the outer periphery of the first video area 462, where a red frame is displayed. The second identification area 468 surrounds the outer periphery of the second video area 466, where a green frame is displayed.
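The compositing itself amounts to drawing the identification color either as a corner block (fig. 4B) or as a surrounding frame (fig. 4C). The following numpy sketch assumes each frame is an H x W x 3 RGB array; the sizes and color values are illustrative assumptions, not values from the patent.

```python
import numpy as np

RED, GREEN = (255, 0, 0), (0, 255, 0)

def composite_identification(frame: np.ndarray, color, style: str = "corner",
                             block=(40, 80), border: int = 8) -> np.ndarray:
    """Overlay an identification image onto video data.

    style="corner": a color block in the upper left corner (fig. 4B).
    style="border": a color frame around the whole video area (fig. 4C).
    """
    out = frame.copy()
    if style == "corner":
        h, w = block
        out[:h, :w] = color
    else:
        out[:border, :] = color       # top edge
        out[-border:, :] = color      # bottom edge
        out[:, :border] = color       # left edge
        out[:, -border:] = color      # right edge
    return out

# e.g. first_composited = composite_identification(first_frame, RED, style="corner")
```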
The first transmitting end device 310 has a first indicator 316, and the receiving end control unit 324 sends a prompt instruction so that the first indicator of the first transmitting end device 310 displays the red identification image. The participants can thus associate the red identification image shown in the first identification area 464 next to the first video area 462 with the red light displayed by the first indicator, and easily understand that the video signal source of the first video area 462 in the left half of the display device 360 is the first video source 350 operated by the first user. Similarly, the second transmitting end device 330 has a second indicator 336, and the receiving end control unit 324 sends a prompt instruction so that the second indicator of the second transmitting end device 330 displays the green identification image. The participants can thus associate the green identification image shown in the second identification area 468 next to the second video area 466 with the green light displayed by the second indicator, and easily understand that the video signal source of the second video area 466 in the right half of the display device 360 is the second video source 352 operated by the second user. In the present embodiment, the identification area is illustrated as being located at a corner of the video area or surrounding its outer periphery, but the invention is not limited thereto.
In the first embodiment, the wireless presentation system of the present invention may optionally perform the following step: when the first transmitting end device 310 stops transmitting the first video data Vid1 to the second transmission module 322, the receiving end control unit 324 cancels the correspondence between the first identification code and the first transmitting end device 310. The detailed steps corresponding to this cancellation include steps 212 and 216 shown in fig. 2. Step 212: determine whether the transmitting end device has stopped transmitting video data to the receiving end device 320. If so, step 216 is performed: the receiving end device 320 takes back the identification code, and corresponding color, that had been assigned to the transmitting end device for its connection order. For example, the receiving end control unit 324 takes back the identification code #1 and the corresponding red identification image originally assigned to the first transmitting end device 310, and may later assign the identification code #1 and the corresponding red identification image to another, newly connected transmitting end device when that device establishes a connection with the receiving end device 320.
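Steps 212 and 216 can be summarized as a small pool of identification codes that are handed out on connection and taken back when a transmitter stops sending, so the lowest free code (and its color) can be reused by the next device. This is a sketch under those assumptions; the class name and fields are illustrative, not part of the patent.

```python
class IdentificationCodePool:
    """Hands out the lowest free identification code and takes codes back (step 216)."""

    def __init__(self, codes=(1, 2, 3, 4)):
        self.free = sorted(codes)
        self.in_use = {}                          # transmitter name -> identification code

    def assign(self, transmitter_name):
        code = self.free.pop(0)                   # lowest free code first, e.g. #1 -> red
        self.in_use[transmitter_name] = code
        return code

    def release(self, transmitter_name):
        # Called when the transmitter stops sending video data to the receiver.
        code = self.in_use.pop(transmitter_name)
        self.free.append(code)
        self.free.sort()                          # the recovered code is available again
        return code

# pool = IdentificationCodePool()
# pool.assign("tx-A")                         # -> 1 (red)
# pool.release("tx-A"); pool.assign("tx-B")   # -> 1 again, as described above
```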
In the first embodiment, the wireless presentation system of the present invention may optionally perform step 214 shown in fig. 2: determine whether the transmitting end device has been powered off. Thus, as long as the first transmitting end device 310 continues to transmit the first video data Vid1 to the second transmission module 322 and has not been powered off, the receiving end control unit 324 continues to output the first synthesized video data vid_c1 to the display device 360.
Referring to fig. 5, a flowchart of a method for identifying the source of a video signal according to a second embodiment of the invention is shown. The method of the second embodiment can use the hardware architecture of the wireless presentation system shown in the block diagram of fig. 3; only the firmware programs in the first transmitting end control unit 312, the second transmitting end control unit 332 and the receiving end control unit 324 need to be modified so that these units together execute the method steps shown in figs. 5 to 6. As shown in fig. 5, the method for identifying the source of a video signal comprises the following steps. Step 502: the first identification code is transmitted by the first transmitting end device 310 to the receiving end control unit 324 in the receiving end device 320. Step 504: the first video data Vid1 is transmitted by the first transmitting end device 310 to the receiving end control unit 324 in the receiving end device 320. Step 506: the receiving end control unit 324 synthesizes the first video data Vid1 with a first identification image corresponding to the first identification code into first synthesized video data vid_c1. Step 508: the receiving end control unit 324 outputs the first synthesized video data vid_c1 to the display device 360.
Referring to fig. 6, an example of a detailed flowchart of the method for identifying the source of the video signal in fig. 5 is shown. Corresponding to step 502, when the receiving end device 320 establishes a connection with the first transmitting end device 310, the first identification code is transmitted by the first transmitting end device 310 to the receiving end control unit 324 in the receiving end device 320; the detailed steps include steps 604 to 606 shown in fig. 6. Step 604: determine whether the receiving end device 320 has established a connection with a transmitting end device. If a connection has been established, step 606 is performed. Step 606: the first transmitting end device 310 transmits its built-in identification code, with its corresponding color, to the receiving end control unit 324 in the receiving end device 320. Regarding step 604, after the receiving end device 320 and the first transmitting end device 310 are powered on, the user can press the pairing buttons on the receiving end device 320 and the first transmitting end device 310, after which the two devices attempt to establish a connection. After the connection between the first transmission module 314 of the first transmitting end device 310 and the second transmission module 322 of the receiving end device 320 is established, step 606 is performed: the receiving end device 320 and the first transmitting end device 310 exchange and update their internal information, and the first transmitting end device 310 transmits the first identification code to the receiving end control unit 324 in the receiving end device 320. The first transmitting end control unit 312 and the receiving end control unit 324 both store a preset list of the lamp colors corresponding to the identification codes; for example, the first identification image corresponding to the identification code #1 is set to red when the first transmitting end device 310 leaves the factory, the second identification image corresponding to the identification code #2 is set to green when the second transmitting end device 330 leaves the factory, and so on. Thus, when the first transmitting end device 310 and the receiving end device 320 establish a connection, the receiving end control unit 324 records that the first transmitting end device 310 has the identification code #1. Similarly, when the second transmitting end device 330 and the receiving end device 320 later establish a connection, the receiving end control unit 324 records that the second transmitting end device 330 has the identification code #2. By looking up the preset list of corresponding lamp colors, the receiving end control unit 324 knows that the first identification image corresponding to the identification code #1 is red and the second identification image corresponding to the identification code #2 is green. In the present embodiment, the first identification image and the second identification image are illustrated as red and green, respectively, but the invention is not limited thereto; they may instead be, for example, the Arabic numerals "1" and "2" or the letters "A" and "B".
In the present embodiment, the first identification code in the first transmitting end device and the second identification code in the second transmitting end device may be, for example, color identification codes allocated to the transmitting end devices at the factory, but the invention is not limited thereto. The first identification code may also be selected from the group consisting of a hardware serial number, user input information, and a network address. For example, the first video source 350 providing the first video data Vid1 may be an information processing device (such as a notebook computer) used by the first user that runs an operating system (such as the Microsoft Windows operating system); the first identification code is then user information provided by the operating system (such as the first user's personal information set in the Microsoft Windows operating system), for example "Peter", and the information processing device transmits "Peter" to the first transmitting end device 310 as the first identification code. As another example, the source of the first video data Vid1 may be an information processing device (such as a notebook computer) with a camera that captures a portrait of the first user; the information processing device transmits the first user's portrait to the first transmitting end device 310 as the first identification code. As yet another example, the source of the first video data Vid1 may be an information processing device (such as a notebook computer) connected to the Internet and having a network IP address; the information processing device transmits the network IP address to the first transmitting end device 310 as the first identification code. Alternatively, the first transmitting end device 310 may have a factory hardware serial number, and the first identification code is that factory hardware serial number. As a further example, the information processing device may run communication software (e.g. Line, Skype, etc.) and obtain the personal information of the logged-in user, such as "Peter"; the information processing device transmits "Peter" to the first transmitting end device 310 as the first identification code. Any of the above information that can identify the information processing device can be used as the first identification code. When the first identification code is "Peter", the factory hardware serial number or the network IP address, the first identification image is an image of that text. When the first identification code is the first user's portrait, the first identification image is the first user's portrait.
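Since the identification code can be a factory color index, a text string (user name, hardware serial number or IP address) or a captured portrait, the mapping from code to identification image could be sketched as follows. The return format is an assumption made purely for illustration; the patent only requires that the resulting image be composited next to the video data.

```python
ID_COLOR_TABLE = {1: "red", 2: "green"}            # factory-assigned color indices

def identification_image(identification_code):
    """Return a description of the identification image to composite with the video data."""
    if isinstance(identification_code, int):
        # Factory color index, e.g. #1 -> a red block / lamp color.
        return {"type": "color", "value": ID_COLOR_TABLE[identification_code]}
    if isinstance(identification_code, bytes):
        # A captured portrait of the user, already encoded as image data.
        return {"type": "portrait", "value": identification_code}
    # A user name ("Peter"), hardware serial number or network IP address:
    # rendered as a small text image overlaid next to the picture.
    return {"type": "text", "value": str(identification_code)}
```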
Corresponding to step 504, the detailed steps include step 608 shown in fig. 6: determine whether a transmitting end device is transmitting video data to the receiving end device 320; if so, step 610 is performed. The details of step 608 are the same as those of step 208.
As shown in fig. 6, the method for identifying the source of video signals of the present embodiment may optionally include step 609, which determines whether more than one transmitting end device is transmitting video data to the receiving end device 320; if so, step 610 is performed, and if not, step 610 is skipped. The details of step 609 are the same as those of step 209.
Corresponding to steps 506-508, the detailed steps include step 610 shown in FIG. 6: the receiving end device 320 displays the color corresponding to the identification code in the display device 360 according to the identification code transmitted by the transmitting end device, and the transmitting end device indicator displays the color corresponding to the identification code. The details of step 610 are the same as those of step 210.
In the second embodiment, the wireless presentation system of the present invention further performs step 612 shown in fig. 6: determine whether the transmitting end device has been powered off. Thus, as long as the first transmitting end device 310 continues to transmit the first video data Vid1 to the receiving end device 320 and has not been powered off, the receiving end control unit 324 continues to output the first synthesized video data vid_c1 to the display device 360.
Referring to fig. 7, a flowchart of a method for identifying the source of a video signal according to a third embodiment of the invention is shown. Referring to fig. 8, a block diagram of a wireless presentation system 800 according to the third embodiment of the invention is shown. As shown in fig. 7, the method for identifying the source of a video signal comprises the following steps. Step 702: the first transmitting end device 810 transmits the first video data Vid1 to the receiving end device 820, and the receiving end device 820 outputs the first video data Vid1 to the display device 860, where the first video data Vid1 corresponds to a first video area 862 in the display screen of the display device 860. Step 704: the user input device 870 provides the position of a user operation. Step 706: the receiving end control unit 824 determines whether the position of the user operation is located in the first video area 862. Step 708: when the position of the user operation is located in the first video area 862, the receiving end control unit 824 sends a prompt instruction to the first transmitting end device 810, so that the first transmitting end device 810 emits a first audible and visual signal.
As shown in fig. 8, the wireless presentation system includes a first transmitting device 810 and a receiving device 820, and optionally includes a second transmitting device 830. The first transmitting device 810 includes a first transmitting control unit 812, a first transmitting module 814 and a first indicating device 816. The first transmit-side control unit 812 is configured to receive the first video data Vid1 from the first video source 850, and the first transmission module 814 can transmit the first video data Vid1. Similarly, the second transmitter device 830 includes a second transmitter control unit 832, a third transmission module 834, and a second indicator 836. The second transmit side control unit 832 is configured to receive the second video data Vid2 from the second video source 852, and the third transmission module 834 is configured to transmit the second video data Vid2. The receiver device 820 includes a second transmission module 822 and a receiver control unit 824. The second transmission module 822 selectively establishes a connection with the first transmission module 814 and receives the first video data Vid1, and the second transmission module 822 selectively establishes a connection with the third transmission module 834 and receives the second video data Vid2. The receiving end control unit 824 is configured to output the first video data Vid1 and the second video data Vid2 to the display device 860, when the first video data Vid1 is displayed in the display device 860, the first video data Vid1 corresponds to the first video area 862, the second video data Vid2 corresponds to the second video area 866, and the first video area 862 and the second video area 866 are displayed in a split-screen mode. The receiving end control unit 824 receives a user operated position provided by the user input device 870.
Referring to fig. 9, a schematic diagram of an application scenario of the wireless presentation system in fig. 8 is shown. In this embodiment, the user input device 870 is a touch panel of a large-sized television, which lets the user perform pressing or touch operations, but the invention is not limited thereto. The receiving end control unit 824 outputs the first video data Vid1 and the second video data Vid2 to the display device 860. When the first video data Vid1 is displayed on the display device 860, it corresponds to a first video area 862 in the left half of the display device 860, and the second video data Vid2 corresponds to a second video area 866 in the right half of the display device 860, the first video area 862 and the second video area 866 being displayed in a split-screen mode. When the user applies a first user operation Ipt1 (e.g. a pressing or touch operation) in the first video area 862 in the left half, the touch panel provides the position coordinates of the user operation to the receiving end control unit 824. When the receiving end control unit 824 receives the position of the user operation provided by the user input device 870 and determines that this position is located in the first video area 862, the receiving end control unit 824 sends a prompt instruction to the first transmitting end device 810, so that the first indicating device 816 of the first transmitting end device 810 emits a first audible and visual signal, for example an indicator lamp on the first transmitting end device 810 emitting a red light signal, or a buzzer emitting a short sound. Similarly, when the user applies a second user operation Ipt2 (e.g. a pressing or touch operation) in the second video area 866 in the right half, the touch panel provides the position coordinates of the user operation to the receiving end control unit 824. When the receiving end control unit 824 receives the position of the user operation provided by the user input device 870 and determines that this position is located in the second video area 866, the receiving end control unit 824 sends a prompt instruction to the second transmitting end device 830, so that the second indicating device 836 of the second transmitting end device 830 emits a second audible and visual signal that is different from the first audible and visual signal, for example a green light on the second transmitting end device 830, or a buzzer emitting a short sound.
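The lookup from a touch position to the transmitter that should emit its audible and visual signal is essentially a hit test over the split-screen regions. The following sketch assumes a 1920 x 1080 display split into left and right halves; the coordinates, region names and the `send_prompt` callback are illustrative assumptions, not values from the patent.

```python
from typing import Optional, Tuple

# (x0, y0, x1, y1) rectangles of the video areas in display coordinates,
# keyed by the transmitting end device that supplies each area.
REGIONS = {
    "first_transmitter":  (0,   0,  960, 1080),   # first video area, left half
    "second_transmitter": (960, 0, 1920, 1080),   # second video area, right half
}

def transmitter_at(position: Tuple[int, int]) -> Optional[str]:
    """Return the transmitter whose video area contains the touched position."""
    x, y = position
    for transmitter, (x0, y0, x1, y1) in REGIONS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return transmitter
    return None

def on_user_operation(position, send_prompt):
    """Receiving end side: forward a prompt instruction to the matching transmitter,
    which then lights its indicator and/or sounds its buzzer."""
    target = transmitter_at(position)
    if target is not None:
        send_prompt(target)
```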
Referring to fig. 10, a flowchart of a method for identifying the source of a video signal according to a fourth embodiment of the invention is shown. Referring to fig. 12, a block diagram of a wireless presentation system 1200 according to the fourth embodiment of the invention is shown. As shown in fig. 10, the method for identifying the source of a video signal comprises the following steps. In step 1002, a first transmitting end control unit in a first transmitting end device synthesizes first video data with a first identification image into first synthesized video data, where the first identification image corresponds to a first identification code in the first transmitting end device. In step 1004, the first synthesized video data is transmitted by the first transmitting end device to a receiving end control unit in the receiving end device. In step 1006, the receiving end control unit outputs the first synthesized video data to the display device.
As shown in fig. 12, the wireless presentation system includes a first transmitting end device 1210, a second transmitting end device 1230 and a receiving end device 1220. The first transmitting end device 1210 includes a first transmitting end control unit 1212, a first indicating device 1216 and a first transmission module 1214, where the first transmitting end control unit 1212 receives the first video data Vid1 from a first video source 1250. The first transmitting end control unit 1212 may selectively synthesize the first video data Vid1 with a first identification image into the first synthesized video data vid_c1, the first identification image corresponding to the first identification code in the first transmitting end device. The first transmission module 1214 transmits the first video data Vid1 or the first synthesized video data vid_c1. Similarly, the second transmitting end device 1230 includes a second transmitting end control unit 1232, a second indicating device 1236 and a third transmission module 1234, where the second transmitting end control unit 1232 receives the second video data Vid2 from a second video source 1252. The second transmitting end control unit 1232 may selectively synthesize the second video data Vid2 with a second identification image into the second synthesized video data vid_c2, the second identification image corresponding to the second identification code in the second transmitting end device. The third transmission module 1234 transmits the second video data Vid2 or the second synthesized video data vid_c2. The receiving end device 1220 includes a second transmission module 1222 and a receiving end control unit 1224. The second transmission module 1222 can selectively establish a wireless transmission link WL1 with the first transmission module 1214 and receive the first video data Vid1 or the first synthesized video data vid_c1, and can selectively establish a wireless transmission link WL2 with the third transmission module 1234 and receive the second video data Vid2 or the second synthesized video data vid_c2. When the receiving end control unit 1224 receives the first synthesized video data vid_c1 and the second synthesized video data vid_c2 at the same time, it outputs them to the display device 1260 so that they are displayed on the display device 1260 in a split-screen mode. For example, the receiving end control unit 1224 synthesizes the first synthesized video data vid_c1 and the second synthesized video data vid_c2 into split-screen video data vid_c3 and outputs the split-screen video data vid_c3 to the display device 1260. In this embodiment, the first transmitting end control unit 1212, the second transmitting end control unit 1232 and the receiving end control unit 1224 each contain a synthesis circuit for synthesizing the video data they handle.
Referring to fig. 11, an example of a detailed flowchart of the method for identifying the source of the video signal in fig. 10 is shown. Wherein step 1002 shown in fig. 10 comprises step 1104 shown in fig. 11, step 1004 shown in fig. 10 comprises step 1108 shown in fig. 11, and step 1006 shown in fig. 10 comprises step 1112 shown in fig. 11.
As shown in fig. 11, in step 1104 the receiving end control unit 1224 notifies the first transmitting end control unit 1212 to synthesize the first video data Vid1 and the first identification image into the first synthesized video data vid_c1. After the first transmission module 1214 in the first transmitting end device 1210 establishes a connection with the second transmission module 1222 in the receiving end device 1220, the receiving end device 1220 and the first transmitting end device 1210 exchange and update their internal information, and the first transmitting end device 1210 transmits the first identification code to the receiving end control unit 1224 in the receiving end device 1220. The first transmitting end control unit 1212 and the receiving end control unit 1224 both store a preset list of the lamp colors corresponding to the identification codes; for example, the first identification image corresponding to the identification code #1 is set to red when the first transmitting end device 1210 leaves the factory, the second identification image corresponding to the identification code #2 is set to green when the second transmitting end device 1230 leaves the factory, and so on. Thus, when the first transmitting end device 1210 and the receiving end device 1220 establish a connection, the receiving end control unit 1224 records that the first transmitting end device 1210 has the identification code #1. Similarly, when the second transmitting end device 1230 and the receiving end device 1220 later establish a connection, the receiving end control unit 1224 records that the second transmitting end device 1230 has the identification code #2. By looking up the preset list of corresponding lamp colors, the receiving end control unit 1224 knows that the first identification image corresponding to the identification code #1 is red and the second identification image corresponding to the identification code #2 is green. In the present embodiment, the first identification image and the second identification image are illustrated as red and green, respectively, but the invention is not limited thereto; they may instead be, for example, the Arabic numerals "1" and "2" or the letters "A" and "B".
In step 1106, the receiving end control unit 1224 notifies the second transmitting end control unit 1232 to synthesize the second video data Vid2 with the second identification image into the second synthesized video data vid_c2.
In step 1108, the first synthesized video data vid_c1 is transmitted to the receiving end control unit 1224 via the first transmitting end device 1210. In step 1110, the second synthesized video data vid_c2 is transmitted to the receiving control unit 1224 via the second transmitting device 1230. In step 1112, the receiving end control unit 1224 synthesizes the first synthesized video data vid_c1 and the second synthesized video data vid_c2 into a split-frame video data vid_c3, and the receiving end control unit 1224 outputs the split-frame video data vid_c3 to the display device 1260.
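The split-screen composition of step 1112 can be pictured as a side-by-side concatenation of the two already composited streams. A rough numpy sketch follows, assuming both frames are RGB arrays of equal height (any scaling needed to make them match is omitted):

```python
import numpy as np

def compose_split_frame(first_composited: np.ndarray,
                        second_composited: np.ndarray) -> np.ndarray:
    """Place the first composited video in the left half and the second in the
    right half, producing the split-screen video data sent to the display device."""
    assert first_composited.shape[0] == second_composited.shape[0], "equal heights assumed"
    return np.hstack([first_composited, second_composited])
```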
As shown in fig. 11, the method for identifying the source of the video signal of the present embodiment may optionally include steps 1102, 1114 and 1116. In step 1102, the receiving end control unit 1224 in the receiving end device 1220 determines whether it is simultaneously receiving (a) the first video data Vid1 transmitted by the first transmitting end device 1210 and (b) the second video data Vid2 transmitted by the second transmitting end device 1230; if so, step 1104 is performed, and if not, step 1114 is performed. In other words, in step 1102 the receiving end control unit 1224 determines whether more than one transmitting end device is transmitting video data to the receiving end device 1220. Steps 1114 to 1116 apply to the following situations. Although both the first transmitting end device 1210 and the second transmitting end device 1230 are already connected to the receiving end device 1220: (1) when only the first transmitting end device 1210 is transmitting the first video data Vid1 to the receiving end device 1220 and the whole frame of the display device 1260 displays the first video data Vid1, the participants can readily understand that the presenter currently operating the first transmitting end device 1210 is the one outputting the first video data Vid1; the first transmitting end control unit 1212 can therefore omit displaying the color corresponding to the identification code on the display device 1260, for example by turning off the video synthesis circuit in the first transmitting end control unit 1212 to reduce its power consumption. (2) When the first transmitting end device 1210 is transmitting the first video data Vid1 to the receiving end device 1220 and the second transmitting end device 1230 is simultaneously transmitting the second video data Vid2 to the receiving end device 1220, the display device 1260 displays the first video data Vid1 and the second video data Vid2 in a split-screen mode and the participants cannot readily identify the video signal sources; only then does the first transmitting end control unit 1212 need to display the color corresponding to the identification code on the display device 1260, for example by enabling the video synthesis circuit in the first transmitting end control unit 1212. Thus, for the two situations (1) and (2), the first transmitting end control unit 1212 can dynamically turn its video synthesis circuit off or on, thereby dynamically reducing its power consumption and achieving a power-saving effect.
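In this fourth embodiment the gating runs on the transmitting end, so the receiving end has to tell each transmitter whether to composite before sending. A minimal sketch of that notification follows; the message fields and the `enable_synthesis`/`disable_synthesis` calls are assumed names used only for this illustration, not part of the patent.

```python
def notify_transmitters(transmitters, streaming_count, send_message):
    """Receiving end side of steps 1102/1104/1114: ask the transmitters to turn their
    video synthesis circuits on only when more than one stream will be displayed."""
    compose = streaming_count > 1
    for transmitter in transmitters:
        # e.g. a small control message sent over the wireless link
        send_message(transmitter, {"cmd": "set_compose", "enabled": compose})

def on_set_compose(transmitter_control_unit, message):
    """Transmitting end side: enable or shut down the local synthesis circuit."""
    if message["enabled"]:
        transmitter_control_unit.enable_synthesis()
    else:
        transmitter_control_unit.disable_synthesis()   # saves power during full-screen display
```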
As described above, step 1114 corresponds, for example, to the situation in which the receiving end control unit 1224 receives only one of (a) the first video data Vid1 transmitted by the first transmitting end device 1210 and (b) the second video data Vid2 transmitted by the second transmitting end device 1230; steps 1114 to 1116 are illustrated here with the case of receiving the first video data Vid1 transmitted by the first transmitting end device 1210. In step 1114, the first video data Vid1 is transmitted by the first transmitting end device 1210 to the receiving end control unit 1224. In step 1116, the receiving end control unit 1224 displays the first video data Vid1 on the display device 1260. As shown in fig. 11, steps 1114 to 1116 are illustrated with the receiving end control unit 1224 receiving the first video data Vid1 but not the second video data Vid2, but the invention is not limited thereto. When the receiving end control unit 1224 instead receives the second video data Vid2 but not the first video data Vid1, steps 1114 to 1116 are adapted accordingly: in step 1114, the second video data Vid2 is transmitted by the second transmitting end device 1230 to the receiving end control unit 1224, and in step 1116 the receiving end control unit 1224 displays the second video data Vid2 on the display device.
Corresponding to step 1104, the first transmitting-end control unit 1212 of the first transmitting-end device 1210 synthesizes the first video data Vid1 with the first identification image (e.g. a red identification image) into the first synthesized video data Vid_C1 according to the first identification code, and outputs the first synthesized video data Vid_C1 to the receiving-end device 1220. When the first synthesized video data Vid_C1 is displayed in the display device 1260, the first synthesized video data Vid_C1 corresponds to a first video area and a first recognition area; the first video area is adjacent to the first recognition area, the first video area displays the first video data Vid1, and the first recognition area displays the first identification image. The first video area and the first recognition area of the present embodiment are the same as the first video area 462 and the first recognition area 464 of the first embodiment shown in figs. 4A to 4C. The first indication device 1216 of the first transmitting-end device 1210 simultaneously displays the first identification image (e.g. the red identification image). In this embodiment, the first transmitting-end control unit 1212 synthesizes the first video data Vid1 and the red identification image into the first synthesized video data Vid_C1, so that the corresponding red identification image is displayed in the display device 1260, and the indicator lamp on the first transmitting-end device 1210 displays the red color corresponding to identification code #1. By analogy, the second transmitting-end control unit 1232 synthesizes the second video data Vid2 and the green identification image into the second synthesized video data Vid_C2 and outputs the second synthesized video data Vid_C2 to the receiving-end device 1220, so that the corresponding green identification image is displayed in the display device 1260, and the second indication device 1236 (e.g. an indicator lamp) on the second transmitting-end device 1230 displays the green color corresponding to identification code #2.
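As a purely illustrative sketch, not the patented circuit, the composition of a video frame with an identification area adjacent to the video area could look as follows in Python with NumPy; the strip height, the color mapping, and the function name compose_with_id are assumptions made for this example.

```python
import numpy as np

# Assumed mapping of identification codes to colors: #1 -> red, #2 -> green.
ID_COLORS = {1: (255, 0, 0), 2: (0, 255, 0)}

def compose_with_id(frame: np.ndarray, id_code: int, strip_px: int = 16) -> np.ndarray:
    """Stack an identification-color strip on top of an RGB frame (H, W, 3),
    so that the identification area sits adjacent to the video area."""
    strip = np.zeros((strip_px, frame.shape[1], 3), dtype=frame.dtype)
    strip[:, :] = ID_COLORS[id_code]
    return np.vstack([strip, frame])

# Example: a 720p frame from the device holding identification code #1
# gains a red strip along its top edge.
vid1 = np.zeros((720, 1280, 3), dtype=np.uint8)
vid_c1 = compose_with_id(vid1, id_code=1)
print(vid_c1.shape)  # (736, 1280, 3)
```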
In the present embodiment, the first identification code in the first transmitting-end device and the second identification code in the second transmitting-end device may be, for example, color identification codes assigned to the transmitting-end devices at the factory, but the invention is not limited thereto. The first identification code may also be selected from the group consisting of a hardware serial number, user input information, and a network address. For example, the first video source 1250 providing the first video data Vid1 is an information processing device (such as a notebook computer) used by the first user; the information processing device executes an operating system (such as the Microsoft Windows operating system), and the first identification code is user information provided by the operating system (such as the personal information of the first user set in the Microsoft Windows control panel), for example "Peter"; the information processing device transmits "Peter" to the first transmitting-end device 1210 as the first identification code. As another example, the source of the first video data Vid1 is an information processing device (such as a notebook computer) with a camera for capturing an image of the first user; the information processing device transmits the image of the first user to the first transmitting-end device 1210 as the first identification code. As yet another example, the source of the first video data Vid1 is an information processing device (such as a notebook computer) connected to the Internet and having a network IP address; the information processing device transmits the network IP address to the first transmitting-end device 1210 as the first identification code. As a further example, the first transmitting-end device 1210 has a factory hardware serial number, and the first identification code is that factory hardware serial number. As still another example, the information processing device executes communication software (e.g. Line, Skype, etc.) and obtains the personal information of the logged-in user, such as "Peter"; the information processing device transmits "Peter" to the first transmitting-end device 1210 as the first identification code. Any of the above information capable of identifying the information processing device can be used as the first identification code. When the first identification code is "Peter", the factory hardware serial number, or the network IP address, the first identification image is an image of the corresponding text. When the first identification code is the image of the first user, the first identification image is that image of the first user.
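The choice among the identification-code sources listed above can be expressed as a simple fallback, sketched below for illustration only; the preference order and the function name pick_identification_code are assumptions, not part of the disclosure.

```python
def pick_identification_code(os_user=None, user_photo=None, ip_address=None, hw_serial=None):
    """Return the first available identifier, in an assumed order of preference:
    OS user information, captured user image, network IP address, hardware serial number."""
    for candidate in (os_user, user_photo, ip_address, hw_serial):
        if candidate is not None:
            return candidate
    raise ValueError("no identification source available")

# Examples matching the text above: a Windows user name, or a factory serial number.
print(pick_identification_code(os_user="Peter"))      # -> Peter
print(pick_identification_code(hw_serial="SN-0001"))  # -> SN-0001
```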
The method for identifying the source of a video signal and the wireless presentation system using the method according to the embodiments of the present invention enable the participants to easily understand which user is the source of the video signal shown in the first video area 362 of the divided frame of the display device 360, through the correspondence between (1) the first video area 362 in the display frame of the display device 360, (2) the first identification image in the first identification area 464 adjacent to the first video area 362 in the display frame of the display device 360, and (3) the first identification image correspondingly displayed by the first indication device 316 on the first transmitting-end device 310. On the other hand, the user input device 870 detects the position at which the user performs an input operation on the split screen of the display device 860; when the receiving-end control unit 824 determines that the position of the user operation is located in the first video area 862, the receiving-end control unit 824 sends a prompt command to the first transmitting-end device 810, so that the first transmitting-end device 810 emits a first acousto-optic signal, and the conference participants can easily understand which user is the source of the video signal currently being operated on. Therefore, the conventional problem that, in the split-screen display mode, a participant cannot identify which presenter's information processing device is the source of each video signal in the split screen of the display device can be avoided. The conference participants can quickly identify the presenter corresponding to the video signal source of a given picture among the divided pictures, and thus raise questions for discussion.
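A minimal sketch of the receiver-side hit test described above: map the user-input position on the split screen to the video area it falls in, then route the prompt command to the matching transmitting-end device. The region layout, coordinates, and names (route_prompt, tx_810, tx_830) are hypothetical and chosen only for illustration.

```python
def route_prompt(x: int, y: int, regions: dict):
    """regions maps a transmitter id to the (left, top, width, height) of its video area.
    Return the transmitter that should receive the prompt command, or None."""
    for tx_id, (left, top, width, height) in regions.items():
        if left <= x < left + width and top <= y < top + height:
            return tx_id
    return None

# Example: two side-by-side video areas on a 1920x1080 split screen.
regions = {"tx_810": (0, 0, 960, 1080), "tx_830": (960, 0, 960, 1080)}
print(route_prompt(1200, 500, regions))  # -> tx_830, which then emits its acousto-optic signal
```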
The invention has been described with reference to the above embodiments; however, these embodiments are merely examples of practicing the invention. It should be noted that the disclosed embodiments do not limit the scope of the invention. On the contrary, the invention is intended to cover all modifications, equivalents, and alternatives falling within its spirit and scope.

Claims (15)

1. A wireless presentation system, comprising:
a first transmitting device for transmitting first video data;
a receiving end device comprising a receiving end control unit, wherein the receiving end control unit distributes a first identification code to the first transmitting end device;
wherein the receiving end control unit synthesizes the first video data and a first identification image corresponding to the first identification code into first synthesized video data, and the receiving end control unit outputs the first synthesized video data to the display device; the first transmitting end device comprises a first transmitting end control unit and a first transmission module, wherein the first transmitting end control unit is used for receiving the first video data, and the first transmission module is used for transmitting the first video data; the receiving end device further comprises a second transmission module, which is used for selectively establishing a connection with the first transmission module and receiving the first video data; when the second transmission module establishes a connection with the first transmission module, the receiving end control unit distributes the first identification code to the first transmitting end device.
2. The wireless presentation system of claim 1, wherein the receiving end control unit issues a prompt to instruct a first indication device of the first transmitting end device to display the first identification image when the first transmission module transmits the first video data to the second transmission module.
3. The wireless presentation system of claim 1, further comprising a second transmitting device, the second transmitting device comprising a second transmitting control unit and a third transmission module, the second transmitting control unit being configured to receive second video data, the third transmission module being configured to transmit the second video data; when the second transmission module of the receiving end device and the third transmission module establish a connection, the receiving end control unit in the receiving end device distributes a second identification code to the second transmitting end device; when the third transmission module transmits the second video data to the second transmission module, the receiving end control unit synthesizes the second video data and a second identification image corresponding to the second identification code into second synthesized video data, and the receiving end control unit synthesizes the first synthesized video data and the second synthesized video data into split-picture video data and outputs the split-picture video data to the display device.
4. The wireless presentation system of claim 1, wherein the first synthesized video data corresponds to a first video region and a first recognition region when the first synthesized video data is displayed in the display device, the first video region being adjacent to the first recognition region, the first video region displaying the first video data, the first recognition region displaying the first recognition image.
5. The wireless presentation system of claim 1, wherein the receiving end control unit cancels the correspondence between the first identification code and the first transmitting end device when the first transmission module stops transmitting the first video data to the second transmission module.
6. The wireless presentation system of claim 5, wherein the first identification image is a first color, and a first indication device on the first transmitting device is caused to display the first color when the receiving end control unit requests the first transmitting device to display the first identification image.
7. A wireless presentation system, comprising:
a first transmitting device for transmitting first video data and a first identification code;
a receiving end device comprising a receiving end control unit, wherein the receiving end control unit synthesizes the first video data and a first identification image corresponding to the first identification code into first synthesized video data, and the receiving end control unit outputs the first synthesized video data to the display device;
the first transmitting end device comprises a first transmitting end control unit and a first transmission module, wherein the first transmitting end control unit is used for receiving the first video data, and the first transmission module is used for transmitting the first video data; the receiving end device also comprises a second transmission module which is used for selectively establishing connection with the first transmission module and receiving the first video data; when the second transmission module establishes a connection with the first transmission module, the first transmitting end device transmits the first identification code to the receiving end control unit.
8. The wireless presentation system of claim 7, further comprising a second transmitting device, the second transmitting device comprising a second transmitting control unit and a third transmission module, the second transmitting control unit being configured to receive second video data, the third transmission module being configured to transmit the second video data; when the second transmission module of the receiving end device and the third transmission module establish a connection, the receiving end control unit reads a second identification code of the second transmitting end device; when the third transmission module transmits the second video data to the second transmission module, the receiving end control unit synthesizes the second video data and a second identification image corresponding to the second identification code into second synthesized video data, the receiving end control unit synthesizes the first synthesized video data and the second synthesized video data into split-picture video data and outputs the split-picture video data to the display device, and when the second synthesized video data is displayed in the display device, the second synthesized video data corresponds to a second video area and a second identification area, the second video area is adjacent to the second identification area, the second video area displays the second video data, and the second identification area displays the second identification image.
9. The wireless presentation system of claim 7, wherein the first identification code is selected from the group consisting of a hardware serial number, user input information, and a network address.
10. The wireless presentation system of claim 7, wherein the source of the first video data is an information processing device that executes an operating system, the first identification code being user information provided by the operating system.
11. The wireless presentation system of claim 7, wherein the first synthesized video data corresponds to a first video region and a first recognition region when the first synthesized video data is displayed in the display device, the first video region being adjacent to the first recognition region, the first video region displaying the first video data, the first recognition region displaying the first recognition image.
12. A wireless presentation system, comprising:
a first transmitting end device comprising a first transmitting end control unit and a first transmission module, wherein the first transmitting end control unit synthesizes first video data with a first identification image corresponding to a first identification code into first synthesized video data, and the first transmission module of the first transmitting end device transmits the first synthesized video data; a second transmitting end device comprising a second transmitting end control unit and a third transmission module, wherein the second transmitting end control unit synthesizes second video data with a second identification image corresponding to a second identification code into second synthesized video data, and the third transmission module of the second transmitting end device transmits the second synthesized video data; and
a receiving end device comprising a receiving end control unit and a second transmission module, wherein the second transmission module is used for selectively establishing connections with the first transmission module and the third transmission module and receiving the first synthesized video data and the second synthesized video data, and the receiving end control unit receives the first synthesized video data and the second synthesized video data, synthesizes the first synthesized video data and the second synthesized video data into split-picture video data, and outputs the split-picture video data to the display device.
13. The wireless presentation system of claim 12, wherein the first synthesized video data corresponds to a first video region and a first recognition region when the first synthesized video data is displayed in the display device, the first video region being adjacent to the first recognition region, the first video region displaying the first video data, the first recognition region displaying the first recognition image.
14. A wireless presentation system, comprising:
a first transmitting end device comprising:
a first transmitting end control unit for receiving first video data; and
a first transmission module for transmitting the first video data; and
a receiving end device for being coupled with a display device, the receiving end device comprising:
a second transmission module for selectively establishing a connection with the first transmission module and receiving the first video data; and
a receiving end control unit for outputting the first video data to the display device, wherein when the first video data is displayed in the display device, the first video data corresponds to a first video area, and the receiving end control unit receives a position of a user operation provided by a user input device;
wherein when the receiving end control unit determines that the position of the user operation is located in the first video area, the receiving end control unit sends a prompt instruction to the first transmitting end device, so that the first transmitting end device emits a first acousto-optic signal.
15. The wireless presentation system of claim 14, further comprising a second transmitting device, the second transmitting device comprising a second transmitting control unit and a third transmission module, the second transmitting control unit being configured to receive second video data, the third transmission module being configured to transmit the second video data; the second transmission module is further used for selectively establishing a connection with the third transmission module and receiving the second video data; the receiving end control unit is further used for outputting the second video data to the display device, wherein when the second video data is displayed in the display device, the second video data corresponds to a second video area, and the first video area and the second video area are displayed in a split-picture mode; when the receiving end control unit determines that the position of the user operation is located in the second video area, the receiving end control unit sends the prompt instruction to the second transmitting end device, so that the second transmitting end device emits a second acousto-optic signal different from the first acousto-optic signal.
CN202010014552.3A 2020-01-07 2020-01-07 Wireless Presentation System Active CN113163144B (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
CN202010014552.3A CN113163144B (en) 2020-01-07 2020-01-07 Wireless Presentation System
EP21150540.9A EP3849175A1 (en) 2020-01-07 2021-01-07 Video conference system
US17/143,221 US11245867B2 (en) 2020-01-07 2021-01-07 Video conference system
EP21150536.7A EP3849174A1 (en) 2020-01-07 2021-01-07 Method for identifying video signal source
US17/143,214 US11956563B2 (en) 2020-01-07 2021-01-07 Method for identifying video signal source
US18/203,108 US20230300287A1 (en) 2020-01-07 2023-05-30 Method for identifying video signal source

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010014552.3A CN113163144B (en) 2020-01-07 2020-01-07 Wireless Presentation System

Publications (2)

Publication Number Publication Date
CN113163144A CN113163144A (en) 2021-07-23
CN113163144B true CN113163144B (en) 2024-04-09

Family

ID=76881733

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010014552.3A Active CN113163144B (en) 2020-01-07 2020-01-07 Wireless Presentation System

Country Status (1)

Country Link
CN (1) CN113163144B (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5120136B2 (en) * 2008-08-05 2013-01-16 ブラザー工業株式会社 Display control apparatus, remote controller used for the display control apparatus, and video conference system
GB2501471A (en) * 2012-04-18 2013-10-30 Barco Nv Electronic conference arrangement
JP6384095B2 (en) * 2013-06-06 2018-09-05 株式会社リコー Transmission terminal, program, image display method, transmission system
US10560499B2 (en) * 2015-12-31 2020-02-11 Screenbeam Inc. Displaying content from multiple devices
US20180077207A1 (en) * 2016-09-15 2018-03-15 Takeru Inoue Information processing terminal, communication system, information processing method, and recording medium

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002111663A (en) * 2000-09-28 2002-04-12 Nippon Telegraph & Telephone West Corp Electronic conference system and information distribution method therein
JP2009065696A (en) * 2008-10-27 2009-03-26 Toshiba Corp Device, method and program for synthesizing video image
TW201121324A (en) * 2009-12-02 2011-06-16 Avermedia Information Inc Method, system and device of idntification tag representation
CN103795965A (en) * 2012-10-29 2014-05-14 株式会社理光 Communication terminal, teleconference system, and recording medium
CN105324989A (en) * 2013-06-28 2016-02-10 株式会社理光 Transmission terminal, program, image display method and transmission system
TW201528155A (en) * 2013-07-09 2015-07-16 3M Innovative Properties Co Systems and methods for note content extraction and management by segmenting notes
CN105356915A (en) * 2015-10-27 2016-02-24 明基电通有限公司 Wireless projected brief report receiver and operation method for wireless projected brief report receiver
US10264213B1 (en) * 2016-12-15 2019-04-16 Steelcase Inc. Content amplification system and method
CN110460890A (en) * 2018-05-07 2019-11-15 青岛海尔多媒体有限公司 A kind of method, apparatus and computer readable storage medium controlling multi-channel video

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"交互式电子白板系统软件的设计与实现";吴敏;《中国优秀硕士学位论文全文数据库(电子期刊)》;全文 *
"基于一体化标识网络的多宿终端关键技术研究";江海昇;《中国学位论文全文数据库》;全文 *

Also Published As

Publication number Publication date
CN113163144A (en) 2021-07-23

Similar Documents

Publication Publication Date Title
US11457177B2 (en) Video conferencing system and transmitter thereof
US7682028B2 (en) Image transmission system and image transmission method
KR101510723B1 (en) Mobile terminal having projector and method for displaying data thereof
US20080170058A1 (en) Display apparatus and method for implementing screen saver for the same
US20230300287A1 (en) Method for identifying video signal source
US5812785A (en) Method and apparatus for monitoring video signals in a computer
US20120066643A1 (en) System, method and apparatus for presenting a user interface
US7849410B2 (en) Pointing-control system for multipoint conferences
CN109753259B (en) Screen projection system and control method
US20160205348A1 (en) Video conferencing system and associated interation display method
CN113163144B (en) Wireless Presentation System
TWI724746B (en) Method for identifying video signal source
CN114915833B (en) Display control method, display device and terminal device
CN113163151A (en) Method for identifying video signal source
TWI767176B (en) Video conference system can identify video signal source
TWI693834B (en) Video conferencing system and transmitter thereof
CN111885344A (en) Data transmission method, equipment and system
US11849253B2 (en) Transmitter device applied to video conference system
JP2018194584A (en) Display, method for setting information terminal units in display, and display system
US20080174746A1 (en) Method for projecting an image from one of a plurality of hosts and projection device thereof
JP2017011610A (en) Image projection system and image projection device
CN109660776B (en) Projection system, intermediary device, communication device and conference control method
CN115390732A (en) Data transmission method and system
KR20070055709A (en) Beam projector and remote announce system and method having that
JP2022099487A (en) Image display system, method for controlling image display system, and method for controlling display unit

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant