CN113163151A - Method for identifying video signal source - Google Patents


Publication number
CN113163151A
Authority
CN
China
Prior art keywords
video data
control unit
transmitting
receiving end
identification
Prior art date
Legal status
Pending
Application number
CN202010014560.8A
Other languages
Chinese (zh)
Inventor
施嘉南
吴镇吉
游琳源
江进富
曾荣堃
吴壮为
Current Assignee
Benq Dentsu Co ltd
Mingji Intelligent Technology Shanghai Co ltd
Original Assignee
Benq Dentsu Co ltd
Mingji Intelligent Technology Shanghai Co ltd
Priority date
Filing date
Publication date
Application filed by Benq Dentsu Co ltd, Mingji Intelligent Technology Shanghai Co ltd
Priority to CN202410554544.6A (CN118233593A)
Priority to CN202010014560.8A (CN113163151A)
Priority to US17/143,214 (US11956563B2)
Priority to EP21150536.7A (EP3849174A1)
Priority to US17/143,221 (US11245867B2)
Priority to EP21150540.9A (EP3849175A1)
Publication of CN113163151A
Priority to US18/203,108 (US20230300287A1)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/14 Systems for two-way working
    • H04N 7/15 Conference systems
    • H04N 5/00 Details of television systems
    • H04N 5/76 Television signal recording
    • H04N 5/765 Interface circuits between an apparatus for recording and another apparatus
    • H04N 5/91 Television signal processing therefor
    • H04N 5/92 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N 5/9201 Transformation of the television signal for recording involving the multiplexing of an additional signal and the video signal
    • H04N 5/9206 Transformation of the television signal for recording involving the multiplexing of an additional signal and the video signal, the additional signal being a character code signal

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention provides a method for identifying a video signal source, comprising the following steps: a receiving-end control unit in a receiving-end device assigns a first identification code to a first transmitting-end device; first video data is transmitted through the first transmitting-end device; and the receiving-end control unit combines the first video data and a first identification image corresponding to the first identification code into first composite video data and outputs the first composite video data to a display device. The invention helps conference participants quickly identify the presenter whose video signal source corresponds to a given frame of a split-screen display, so that questions can be raised for discussion.

Description

Method for identifying video signal source
Technical Field
The invention relates to a method for identifying a video signal source.
Background
Conventionally, when a presenter wants to use an information processing device (e.g., a notebook computer) to give a presentation through a display device, a video signal cable (e.g., an HDMI cable) is used to connect the information processing device to the display device (e.g., a projector or a large-screen television), so that the presentation image data output by the information processing device can be transmitted to and displayed by the display device. Connecting the information processing device and the display device with a wired video cable, however, has its limitations. For example, when a conference needs to switch between presentation files stored on different information processing devices, the presenter must first unplug the video cable from the original device and then plug it into another device, which often interrupts the flow of the meeting.
To address this inconvenience, multi-user wireless presentation system products have appeared on the market. Such products generally include a plurality of transmitting-end devices and a single receiving-end device. Each participant connects a transmitting-end device to the video output port of his or her own information processing device, while the receiving-end device is connected to the video input port of the display device. When only one presenter presses the projection button on a transmitting-end device, that transmitting-end device wirelessly transmits image data to the receiving-end device, and the display device shows the image data output by that single information processing device. When several presenters press the projection buttons on their transmitting-end devices, multiple transmitting-end devices wirelessly transmit image data to the receiving-end device, and the display device shows the image data output by the multiple information processing devices in a split-screen manner. In the split-screen display mode, however, the participants are easily confused: they cannot tell which presenter's information processing device each video signal in the split screen of the display device comes from. Therefore, when a participant has a question about a particular frame among the divided frames, the participant must first ask for additional information to confirm whose frame it is before raising the question with the relevant presenter.
Disclosure of Invention
Accordingly, the present invention is directed to a method for identifying a video signal source, so as to effectively solve the above-mentioned problems encountered in the prior art.
According to a first aspect of the present invention, a method for identifying a video signal source is provided, which includes: a receiving-end control unit in a receiving-end device assigns a first identification code to a first transmitting-end device; first video data is transmitted through the first transmitting-end device; and the receiving-end control unit combines the first video data and a first identification image corresponding to the first identification code into first composite video data and outputs the first composite video data to a display device.
According to a second aspect of the present invention, a method for identifying a video signal source is provided, which includes: first video data and a first identification code are transmitted to a receiving-end control unit in a receiving-end device through a first transmitting-end device; and the receiving-end control unit combines the first video data and a first identification image corresponding to the first identification code into first composite video data and outputs the first composite video data to a display device.
According to a third aspect of the present invention, a method for identifying a video signal source is provided, which comprises the following steps: a first transmitting-end control unit in a first transmitting-end device combines first video data and a first identification image into first composite video data, wherein the first identification image corresponds to a first identification code; the first composite video data is transmitted to a receiving-end control unit in a receiving-end device through the first transmitting-end device; and the receiving-end control unit outputs the first composite video data to a display device.
According to a fourth aspect of the present invention, a method for identifying a video signal source is provided, comprising: a receiving-end control unit in a receiving-end device assigns a first identification code to a first transmitting-end device and a second identification code to a second transmitting-end device. When the receiving-end control unit simultaneously receives first video data transmitted by the first transmitting-end device and second video data transmitted by the second transmitting-end device, the receiving-end control unit performs the following steps: combining the first video data and a first identification image corresponding to the first identification code into first composite video data; combining the second video data and a second identification image corresponding to the second identification code into second composite video data; combining the first composite video data and the second composite video data into split-screen video data; and outputting the split-screen video data to a display device.
According to a fifth aspect of the present invention, a method for identifying a video signal source is provided, comprising: when a receiving-end control unit in a receiving-end device receives first video data and a first identification code transmitted by a first transmitting-end device and simultaneously receives second video data and a second identification code transmitted by a second transmitting-end device, the receiving-end control unit performs the following steps: combining the first video data and a first identification image corresponding to the first identification code into first composite video data; combining the second video data and a second identification image corresponding to the second identification code into second composite video data; combining the first composite video data and the second composite video data into split-screen video data; and outputting the split-screen video data to a display device.
According to a sixth aspect of the present invention, a method for identifying a video signal source is provided, which includes: when a receiving-end control unit in a receiving-end device simultaneously receives first video data transmitted by a first transmitting-end device and second video data transmitted by a second transmitting-end device, the following steps are performed: the receiving-end control unit notifies a first transmitting-end control unit to combine the first video data and a first identification image into first composite video data, wherein the first identification image corresponds to a first identification code; the receiving-end control unit notifies a second transmitting-end control unit to combine the second video data and a second identification image into second composite video data, wherein the second identification image corresponds to a second identification code; the first composite video data is transmitted to the receiving-end control unit through the first transmitting-end device; the second composite video data is transmitted to the receiving-end control unit through the second transmitting-end device; and the receiving-end control unit combines the first composite video data and the second composite video data into split-screen video data and outputs the split-screen video data to the display device.
Compared with the prior art, the embodiments of the invention avoid the problem of the conventional approach in which, under the split-screen display mode, a participant cannot tell which presenter's information processing device each video signal source in the split screen of the display device comes from. They help participants quickly identify the presenter corresponding to the video signal source of a given frame among the divided frames, so that questions can be raised for discussion.
For a better understanding of the above and other aspects of the invention, reference is made to the following detailed description of the embodiments taken in conjunction with the accompanying drawings.
Drawings
FIG. 1 is a flowchart illustrating a method for identifying a video signal source according to a first embodiment of the invention.
FIG. 2 is a detailed flowchart of the method for identifying the source of the video signal shown in FIG. 1.
Fig. 3 is a block diagram of a wireless presentation system according to a first embodiment of the invention.
Fig. 4A is a diagram illustrating an application scenario of the wireless presentation system of fig. 3.
FIG. 4B is a schematic diagram illustrating the image area and the identification area in the embodiment of FIG. 4A.
FIG. 4C is a schematic diagram illustrating the video area and the identification area according to another arrangement of the embodiment shown in FIG. 4A.
FIG. 5 is a flowchart illustrating a method for identifying a video signal source according to a second embodiment of the invention.
FIG. 6 is a detailed flowchart of the method for identifying the source of the video signal shown in FIG. 5.
FIG. 7 is a flowchart illustrating a method for identifying a video signal source according to a third embodiment of the invention.
Fig. 8 is a block diagram of a wireless presentation system according to a third embodiment of the invention.
Fig. 9 is a diagram illustrating an application scenario of the wireless presentation system of fig. 8.
FIG. 10 is a flowchart illustrating a method for identifying a video signal source according to a fourth embodiment of the invention.
FIG. 11 is a detailed flowchart of the method for identifying the source of the video signal shown in FIG. 10.
Fig. 12 is a block diagram of a wireless presentation system according to a fourth embodiment of the invention.
Detailed Description
In order to further understand the objects, structures, features and functions of the present invention, the following embodiments are described in detail.
Referring to fig. 1, a flowchart of a method for identifying a video signal source according to the first embodiment of the invention is shown. Referring to fig. 3, a block diagram of a wireless presentation system 300 according to the first embodiment of the invention is shown. As shown in FIG. 1, the method for identifying the video signal source comprises the following steps. Step 102: the receiving-end control unit 324 in the receiving-end device 320 assigns the first identification code to the first transmitting-end device 310. Step 104: the first video data is transmitted through the first transmitting-end device 310. Step 106: the receiving-end control unit 324 combines the first video data and the first identification image corresponding to the first identification code into first composite video data. Step 108: the receiving-end control unit 324 outputs the first composite video data to the display device 360.
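The four steps above can be read as a simple receiver-side control flow. The following sketch illustrates that flow in Python; it is not the patent's implementation, and all names (ReceiverControlUnit, compose, ID_COLOR_TABLE, the display object with a show method) are assumptions made for illustration.

```python
# A minimal sketch of steps 102-108 on the receiving end; names are illustrative.

ID_COLOR_TABLE = {1: "red", 2: "green"}   # assumed preset code-to-color list

def compose(video_frame, id_image):
    """Placeholder for the video composition circuit: attach the ID image."""
    return {"video": video_frame, "id_image": id_image}

class ReceiverControlUnit:
    def __init__(self, display):
        self.display = display
        self.id_codes = {}                 # transmitter id -> identification code

    def assign_identification_code(self, transmitter_id, code):
        # Step 102: allocate an identification code to the transmitting-end device.
        self.id_codes[transmitter_id] = code

    def on_video_received(self, transmitter_id, video_frame):
        # Step 104: video data arrives from the transmitting-end device.
        id_image = ID_COLOR_TABLE[self.id_codes[transmitter_id]]
        # Step 106: combine the video data with the identification image.
        composite = compose(video_frame, id_image)
        # Step 108: output the composite video data to the display device.
        self.display.show(composite)
```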
As shown in fig. 3, the wireless presentation system includes a first transmitting-end device 310 and a receiving-end device 320, and optionally includes a second transmitting-end device 330. The first transmitting-end device 310 includes a first transmitting-end control unit 312 and a first transmission module 314; the first transmitting-end control unit 312 receives the first video data Vid1 from the first video source 350, and the first transmission module 314 transmits the first video data Vid1. Similarly, the second transmitting-end device 330 includes a second transmitting-end control unit 332 and a third transmission module 334; the second transmitting-end control unit 332 receives the second video data Vid2 from the second video source 352, and the third transmission module 334 transmits the second video data Vid2. The receiving-end device 320 includes a second transmission module 322 and a receiving-end control unit 324. The second transmission module 322 selectively establishes a wireless transmission WL1 connection with the first transmission module 314 and receives the first video data Vid1, and selectively establishes a wireless transmission WL2 connection with the third transmission module 334 and receives the second video data Vid2.
Referring to fig. 2, a detailed flowchart of the method for identifying the video signal source of fig. 1 is shown. Corresponding to step 102, when the receiving-end device 320 establishes a connection with the first transmitting-end device 310, the receiving-end control unit 324 assigns the first identification code to the first transmitting-end device 310; the detailed steps include steps 204-206 shown in fig. 2. In step 204, it is determined whether the receiving-end device 320 has established a connection with a transmitting-end device. If a connection has been established, step 206 is performed. In step 206, the receiving-end device 320 records the order in which the transmitting-end device established its connection and assigns the identification code, and the color corresponding to that order, to the transmitting-end device. In step 204, after the receiving-end device 320 and the first transmitting-end device 310 are powered on, the user can press the pairing buttons on the receiving-end device 320 and the first transmitting-end device 310, after which the two devices try to establish a connection. When the first transmission module 314 of the first transmitting-end device 310 establishes a connection with the second transmission module 322 of the receiving-end device 320, step 206 is executed: the receiving-end device 320 and the first transmitting-end device 310 exchange and update internal information, the receiving-end device 320 records the connection order of the first transmitting-end device 310, and it assigns the identification code and color corresponding to that order to the transmitting-end device. For example, the first transmitting-end device that successfully establishes a connection with the receiving-end device 320 is by default assigned identification code #1, whose corresponding first identification image is red; the second transmitting-end device that successfully establishes a connection is by default assigned identification code #2, whose corresponding second identification image is green. Thus, when the first transmitting-end device 310 is the first to establish a connection, the receiving-end control unit 324 records that the first transmitting-end device 310 connected first and assigns identification code #1 to the first transmitting-end device 310. Similarly, when the second transmitting-end device 330 is powered on later and becomes the second device to successfully establish a connection with the receiving-end device 320, the receiving-end control unit 324 records that the second transmitting-end device 330 connected second and assigns identification code #2 to the second transmitting-end device 330. By looking up the preset list of identification codes and corresponding indicator colors stored inside it, the receiving-end control unit 324 determines that the first identification image corresponding to identification code #1 is red and the second identification image corresponding to identification code #2 is green. In this embodiment, the first identification image and the second identification image are illustrated as red and green, respectively, but the invention is not limited thereto; the first identification image and the second identification image may also be, for example, the Arabic numerals "1" and "2" or the English letters "A" and "B".
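The order-based allocation of steps 204-206 can be sketched as a small pool of preset codes handed out in connection order. This is a hedged illustration only: the table below extends the two colors named in the text with assumed extra entries, and the pool bookkeeping is a guess at one possible firmware behavior.

```python
# Hypothetical order-based allocation for steps 204-206; table and bookkeeping
# are assumptions for illustration.
PRESET_ID_IMAGES = {1: "red", 2: "green", 3: "blue", 4: "yellow"}  # 3/4 assumed

class IdAllocator:
    def __init__(self):
        self.free_codes = sorted(PRESET_ID_IMAGES)   # codes not yet handed out
        self.assigned = {}                           # transmitter id -> code

    def on_connection_established(self, transmitter_id):
        # Record the connection order and hand out the lowest free code:
        # the first device to connect gets #1 (red), the second gets #2 (green).
        code = self.free_codes.pop(0)
        self.assigned[transmitter_id] = code
        return code, PRESET_ID_IMAGES[code]
```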
Corresponding to step 104, the detailed steps include step 208 shown in FIG. 2. In step 208, it is determined whether a transmitting-end device is transmitting video data to the receiving-end device 320; if so, step 210 is performed. When the first transmission module 314 of the first transmitting-end device 310 is connected to the second transmission module 322 of the receiving-end device 320 but the first user has not yet pressed the projection button on the first transmitting-end device 310, the first transmitting-end device 310 does not transmit the first video data Vid1 to the receiving-end device 320. When the first user presses the projection button on the first transmitting-end device 310, the first transmitting-end device 310 starts to transmit the first video data Vid1 to the receiving-end device 320, so the condition of step 208 is satisfied and step 210 is executed. Likewise, when the third transmission module 334 of the second transmitting-end device 330 is connected to the second transmission module 322 of the receiving-end device 320, the second transmitting-end device 330 starts to transmit the second video data Vid2 to the receiving-end device 320 after the second user presses the projection button on the second transmitting-end device 330.
As shown in fig. 2, the method for identifying the video signal source of this embodiment optionally includes step 209, which determines whether more than one transmitting-end device is transmitting video data to the receiving-end device 320; if so, step 210 is performed, otherwise step 210 is skipped. Step 209 allows the receiving-end control unit 324 to dynamically turn off its internal video composition circuit, thereby dynamically reducing the power consumption of the receiving-end control unit 324 and saving power. The scenario addressed by step 209 is as follows. Although both the first transmitting-end device 310 and the second transmitting-end device 330 are already connected to the receiving-end device 320: (1) when only the first transmitting-end device 310 is transmitting the first video data Vid1 to the receiving-end device 320 and the entire screen of the display device 360 is displaying the first video data Vid1, the participants can readily see that the presenter currently operating the first transmitting-end device 310 is the one outputting the first video data Vid1, so the receiving-end control unit 324 can omit displaying the color corresponding to the identification code on the display device 360; for example, the receiving-end control unit 324 can turn off its video composition circuit and skip step 210 to reduce its power consumption. (2) When the first transmitting-end device 310 transmits the first video data Vid1 to the receiving-end device 320 and the second transmitting-end device 330 transmits the second video data Vid2 to the receiving-end device 320, the display device 360 displays the first video data Vid1 and the second video data Vid2 simultaneously in a split-screen manner and the participants cannot tell the video signals apart; only then does the receiving-end control unit 324 need to display the colors corresponding to the identification codes on the display device 360, for example by enabling its video composition circuit, and step 210 is executed.
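The decision in step 209 reduces to a single predicate: composite only when more than one transmitter is actively sending. A minimal sketch follows; set_composition_enabled stands in for whatever control the receiving-end control unit actually exposes over its composition circuit, which the patent does not specify.

```python
# Sketch of the step 209 decision; the callback is an assumed interface.
def update_composition_circuit(active_senders, set_composition_enabled):
    """active_senders: set of transmitter ids currently sending video data."""
    if len(active_senders) > 1:
        set_composition_enabled(True)    # split screen: identification images needed
    else:
        set_composition_enabled(False)   # single full-screen source: save power
```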
Corresponding to steps 106-108, the detailed steps include step 210 shown in FIG. 2: the receiving-end device 320 displays, on the display device 360, the color corresponding to the identification code assigned to a transmitting-end device, and the indicator light of that transmitting-end device displays the same color. When the first transmitting-end device 310 transmits the first video data Vid1 to the second transmission module 322, the receiving-end control unit 324 combines the first video data Vid1 and the first identification image (e.g., a red identification image) into the first composite video data Vid_C1 and outputs the first composite video data Vid_C1 to the display device 360. When the first composite video data Vid_C1 is displayed on the display device 360, as shown in fig. 4A, it corresponds to the first video area 462 and the first identification area 464; the first video area 462 is adjacent to the first identification area 464, the first video area 462 displays the first video data Vid1, and the first identification area 464 displays the first identification image. The receiving-end control unit 324 also sends a prompt requesting the first indicating device 316 of the first transmitting-end device 310 to display the first identification image. In this embodiment, the receiving-end control unit 324 combines the first video data Vid1 and the red identification image into the first composite video data Vid_C1, so that the corresponding red identification image is displayed on the display device 360 and the indicator light on the first transmitting-end device 310 shows the red color corresponding to identification code #1. By analogy, the receiving-end control unit 324 combines the second video data Vid2 and the green identification image into the second composite video data Vid_C2 and outputs it to the display device 360, so that the corresponding green identification image is displayed on the display device 360 and the second indicating device 336 (e.g., an indicator light) on the second transmitting-end device 330 shows the green color corresponding to identification code #2.
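One way to realize the composition of step 210 is to overlay a colored block in the identification area of each frame (the top-left corner, as in FIG. 4B). The sketch below uses NumPy purely for illustration; the block size, its position, and the RGB values are assumptions, not details from the patent.

```python
import numpy as np

# Assumed RGB values for the identification images; not specified in the patent.
ID_RGB = {"red": (255, 0, 0), "green": (0, 255, 0)}

def composite_with_id_block(frame, id_color, block=(48, 48)):
    """Overlay an identification block at the top-left corner of a frame.

    frame: H x W x 3 uint8 array of video data (e.g. one frame of Vid1).
    id_color: name of the identification image, e.g. "red" for code #1.
    block: (height, width) of the identification area; size is an assumption.
    """
    out = frame.copy()
    h, w = block
    out[:h, :w, :] = ID_RGB[id_color]      # identification area, FIG. 4B style
    return out
```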
Fig. 4A and 4B are schematic diagrams respectively illustrating an application scenario of the wireless presentation system of the invention and the video areas and identification areas of this embodiment. A first video source 350 (e.g., a notebook computer) operated by a first user P1 outputs the first video data Vid1 to the first transmitting-end device 310, and a second video source 352 (e.g., a notebook computer) operated by a second user P2 outputs the second video data Vid2 to the second transmitting-end device 330. Both the first transmitting-end device 310 and the second transmitting-end device 330 establish connections with the receiving-end device 320. Because the first transmitting-end device 310 is the first to establish a connection with the receiving-end device 320, it is assigned identification code #1. When the first composite video data Vid_C1 is displayed on the display device 360, it corresponds to the first video area 462 and the first identification area 464 located on the left half of the display device 360; the first video area 462 is adjacent to the first identification area 464, the first video area 462 displays the first video data Vid1, and the first identification area 464 displays the red identification image corresponding to identification code #1. As shown in FIG. 4B, the first identification area 464 is located at the top left corner of the first video area 462, where a red block is displayed. Similarly, the second transmitting-end device 330 is the second to establish a connection with the receiving-end device 320 and is assigned identification code #2. When the second composite video data Vid_C2 is displayed on the display device 360, it corresponds to the second video area 466 and the second identification area 468 located on the right half of the display device 360; the second video area 466 is adjacent to the second identification area 468, the second video area 466 displays the second video data Vid2, and the second identification area 468 displays the green identification image corresponding to identification code #2. The second identification area 468 is located at the top left corner of the second video area 466, where a green block is displayed, as shown in fig. 4B. Thus, the receiving-end control unit 324 assigns the second identification code (e.g., identification code #2) to the second transmitting-end device 330, the second video data Vid2 is transmitted through the second transmitting-end device 330, the receiving-end control unit 324 combines the second video data Vid2 and the second identification image (e.g., the green identification image) corresponding to the second identification code into the second composite video data Vid_C2, and the receiving-end control unit 324 outputs the first composite video data Vid_C1 and the second composite video data Vid_C2 to the display device 360, where they are displayed in a split-screen manner. For example, the receiving-end control unit 324 combines the first composite video data Vid_C1 and the second composite video data Vid_C2 into split-screen video data and outputs the split-screen video data to the display device 360.
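The split-screen step at the end of the paragraph amounts to placing the two composite frames side by side. A minimal NumPy sketch, under the assumption that both composite frames share the same height and pixel format:

```python
import numpy as np

def compose_split_screen(vid_c1_frame, vid_c2_frame):
    """Place the first composite frame on the left half and the second on the
    right half, matching the layout of FIG. 4A. Equal frame heights assumed."""
    return np.concatenate([vid_c1_frame, vid_c2_frame], axis=1)
```

For example, two 1080 x 960 x 3 composite frames would yield one 1080 x 1920 x 3 split-screen frame under this assumption.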
Referring to fig. 4C, a schematic diagram of the video areas and identification areas according to another embodiment is shown, in which the first identification area 464 surrounds the outer periphery of the first video area 462, where a red outer frame is displayed, and the second identification area 468 surrounds the outer periphery of the second video area 466, where a green outer frame is displayed.
The first transmitting-end device 310 has a first indicator light 316, and the receiving-end control unit 324 sends a prompt requesting the first indicator light of the first transmitting-end device 310 to display the red identification image. The participants can therefore clearly see the correspondence between the red identification image in the first identification area 464 next to the first video area 462 and the red shown by the first indicator light, and can easily understand that the video signal source of the first video area 462 on the left half of the display device 360 is the first video source 350 operated by the first user. In the same way, the second transmitting-end device 330 has a second indicator light 336, and the receiving-end control unit 324 sends a prompt requesting the second indicator light of the second transmitting-end device 330 to display the green identification image. The participants can therefore clearly see the correspondence between the green identification image in the second identification area 468 next to the second video area 466 and the green shown by the second indicator light, and can easily understand that the video signal source of the second video area 466 on the right half of the display device 360 is the second video source 352 operated by the second user. In this embodiment, the relationship between a video area and its identification area is illustrated as the identification area being located at a corner of the video area or surrounding the outer periphery of the video area, but the invention is not limited thereto.
In the first embodiment, the wireless presentation system of the invention can optionally perform a step in which, when the first transmitting-end device 310 stops transmitting the first video data Vid1 to the second transmission module 322, the receiving-end control unit 324 cancels the correspondence between the first identification code and the first transmitting-end device 310. For this step of reclaiming the first identification code from the first transmitting-end device 310, the detailed steps include steps 212 and 216 shown in fig. 2. Step 212: determine whether a transmitting-end device has stopped transmitting video data to the receiving-end device 320; if so, perform step 216: the receiving-end device 320 takes back the identification code, and its corresponding color, previously assigned to that transmitting-end device. For example, the receiving-end control unit 324 reclaims identification code #1 and the corresponding red identification image originally assigned to the first transmitting-end device 310; when another transmitting-end device later establishes a connection with the receiving-end device 320, identification code #1 and the corresponding red identification image can be assigned to that newly connected transmitting-end device.
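Reclaiming a code in steps 212 and 216 can be sketched as a release operation that returns the code (and its color) to the pool for the next device that connects. The function and data structures below are illustrative assumptions, not the patent's firmware.

```python
# Sketch of steps 212/216: when a transmitting-end device stops sending,
# return its identification code to the pool so it can be reused.
def release_identification_code(assigned, free_codes, transmitter_id):
    """assigned: transmitter id -> code; free_codes: list of unused codes."""
    code = assigned.pop(transmitter_id, None)
    if code is not None:
        free_codes.append(code)
        free_codes.sort()        # hand the lowest code to the next new connection
    return code
```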
In the first embodiment, the wireless presentation system of the invention can optionally perform step 214 shown in fig. 2: determine whether the transmitting-end device has been powered off. Thus, when the first transmitting-end device 310 continues to transmit the first video data Vid1 to the second transmission module 322 and has not been powered off, the receiving-end control unit 324 continues to output the first composite video data Vid_C1 to the display device 360.
Referring to fig. 5, a flowchart of a method for identifying a video signal source according to the second embodiment of the invention is shown. The method of the second embodiment can use the same hardware architecture as the wireless presentation system shown in fig. 3; only the firmware in the first transmitting-end control unit 312, the second transmitting-end control unit 332 and the receiving-end control unit 324 needs to be modified so that these units jointly execute the method steps shown in fig. 5 to fig. 6. As shown in FIG. 5, the method for identifying the video signal source comprises the following steps. Step 502: the first identification code is transmitted to the receiving-end control unit 324 in the receiving-end device 320 through the first transmitting-end device 310. Step 504: the first video data Vid1 is transmitted to the receiving-end control unit 324 in the receiving-end device 320 through the first transmitting-end device 310. Step 506: the receiving-end control unit 324 combines the first video data Vid1 and the first identification image corresponding to the first identification code into the first composite video data Vid_C1. Step 508: the receiving-end control unit 324 outputs the first composite video data Vid_C1 to the display device 360.
Referring to FIG. 6, a detailed flowchart of the method for identifying the video signal source of FIG. 5 is shown. Corresponding to step 502, when the receiving-end device 320 establishes a connection with the first transmitting-end device 310, the first transmitting-end device 310 transmits the first identification code to the receiving-end control unit 324 in the receiving-end device 320; the detailed steps include steps 604-606 shown in fig. 6. Step 604: determine whether the receiving-end device 320 has established a connection with a transmitting-end device; if a connection has been established, go to step 606. Step 606: the first transmitting-end device 310 transmits its built-in identification code, which corresponds to a color, to the receiving-end control unit 324 in the receiving-end device 320. In step 604, after the receiving-end device 320 and the first transmitting-end device 310 are powered on, the user can press the pairing buttons on the receiving-end device 320 and the first transmitting-end device 310, after which the two devices try to establish a connection. When the first transmission module 314 of the first transmitting-end device 310 establishes a connection with the second transmission module 322 of the receiving-end device 320, step 606 is executed: the receiving-end device 320 and the first transmitting-end device 310 exchange and update internal information, and the first transmitting-end device 310 transmits the first identification code to the receiving-end control unit 324 in the receiving-end device 320. The first transmitting-end control unit 312 and the receiving-end control unit 324 both store a preset list of identification codes and corresponding indicator colors; for example, the first transmitting-end device 310 is assigned identification code #1 at the factory, with a red first identification image, and the second transmitting-end device 330 is assigned identification code #2 at the factory, with a green second identification image. Thus, when the first transmitting-end device 310 and the receiving-end device 320 establish a connection, the receiving-end control unit 324 records that the first transmitting-end device 310 has identification code #1. Similarly, when the second transmitting-end device 330 later establishes a connection with the receiving-end device 320, the receiving-end control unit 324 records that the second transmitting-end device 330 has identification code #2. By looking up the preset list of identification codes and corresponding colors stored inside it, the receiving-end control unit 324 determines that the first identification image corresponding to identification code #1 is red and the second identification image corresponding to identification code #2 is green. In this embodiment, the first identification image and the second identification image are illustrated as red and green, respectively, but the invention is not limited thereto; they may also be, for example, the Arabic numerals "1" and "2" or the English letters "A" and "B".
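In this second embodiment the identification code travels with the transmitter rather than being allocated by the receiver, so steps 604-606 reduce to a small handshake. The sketch below is a hedged illustration: the message format and function names are invented, and FACTORY_ID stands for whatever code is set at the factory.

```python
# Hypothetical handshake for steps 604-606 of the second embodiment.
FACTORY_ID = 1    # e.g. the first transmitting-end device ships as code #1 (red)

def transmitter_on_connected(send):
    # Step 606: the transmitting-end device reports its built-in code.
    send({"type": "id_report", "code": FACTORY_ID})

def receiver_on_id_report(message, known_senders, transmitter_id):
    # The receiving-end control unit records which code each transmitter carries.
    known_senders[transmitter_id] = message["code"]
```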
In this embodiment, the first identification code in the first transmitting-end device and the second identification code in the second transmitting-end device may be, for example, color identification codes assigned to the transmitting-end devices at the factory, but the invention is not limited thereto. The first identification code may also be selected from the group consisting of a hardware serial number, user input information, and a network address. For example, the first video source 350 providing the first video data Vid1 may be an information processing device (e.g., a notebook computer) used by the first user that runs an operating system (e.g., the Microsoft Windows operating system), and the first identification code may be user information provided by the operating system (e.g., the first user's personal information set in the Windows control panel), such as "Peter"; the information processing device then transmits "Peter" to the first transmitting-end device 310 as the first identification code. As another example, the source of the first video data Vid1 may be an information processing device (e.g., a notebook computer) with a camera that captures a portrait of the first user; the information processing device transmits the portrait of the first user to the first transmitting-end device 310 as the first identification code. As another example, the source of the first video data Vid1 may be an information processing device (e.g., a notebook computer) connected to the Internet and having a network IP address; the information processing device transmits the IP address to the first transmitting-end device 310 as the first identification code. As another example, the first transmitting-end device 310 has a factory hardware serial number, and the first identification code is that serial number. As yet another example, the information processing device may run communication software (such as Line or Skype) and obtain the logged-in user's personal information, such as "Peter"; the information processing device transmits "Peter" to the first transmitting-end device 310 as the first identification code. Any information that can identify the information processing device can be used as the first identification code. When the first identification code is "Peter", a factory hardware serial number, or an IP address, the first identification image is an image of those characters. When the first identification code is the portrait of the first user, the first identification image is that portrait.
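Because the identification code may be a text string (user name, hardware serial number, IP address) or a captured portrait, the display side must choose a matching identification image. A small dispatch sketch follows; the rule that a portrait arrives as raw bytes, and the returned dictionary format, are assumptions made for illustration.

```python
# Sketch: turn whatever identification code arrived into an identification image.
def identification_image_for(code):
    if isinstance(code, (bytes, bytearray)):
        # Assumed encoding: a captured portrait arrives as raw image bytes,
        # and the portrait itself is used as the identification image.
        return {"kind": "portrait", "data": bytes(code)}
    # Text cases: a user name such as "Peter", a factory hardware serial
    # number, or an IP address; the characters are rendered as the image.
    return {"kind": "text", "data": str(code)}
```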
Corresponding to step 504, the detailed steps include step 608 shown in FIG. 6: determine whether a transmitting-end device is transmitting video data to the receiving-end device 320; if so, go to step 610. The details of step 608 are the same as those of step 208.
As shown in fig. 6, the method for identifying the video signal source of this embodiment optionally includes step 609, which determines whether more than one transmitting-end device is transmitting video data to the receiving-end device; if so, go to step 610, otherwise skip step 610. The details of step 609 are the same as those of step 209.
Corresponding to steps 506-508, the detailed steps include step 610 shown in FIG. 6: the receiving-end device 320 displays, on the display device 360, the color corresponding to the identification code transmitted by a transmitting-end device, and the indicator light of that transmitting-end device displays the same color. The details of step 610 are the same as those of step 210.
In this second embodiment, the wireless presentation system of the invention further performs step 612 shown in fig. 6: determine whether the transmitting-end device has been powered off. Thus, when the first transmitting-end device 310 continues to transmit the first video data Vid1 to the receiving-end device 320 and has not been powered off, the receiving-end control unit 324 continues to output the first composite video data Vid_C1 to the display device 360.
Referring to fig. 7, a flowchart of a method for identifying a video signal source according to the third embodiment of the invention is shown. Referring to fig. 8, a block diagram of a wireless presentation system 800 according to the third embodiment of the invention is shown. As shown in FIG. 7, the method for identifying the video signal source comprises the following steps. Step 702: the first transmitting-end device 810 transmits the first video data Vid1 to the receiving-end device 820, and the receiving-end device 820 outputs the first video data Vid1 to the display device 860, where the first video data Vid1 corresponds to a first video area 862 in the display of the display device 860. Step 704: the user input device 870 provides the position of a user operation. Step 706: the receiving-end control unit 824 determines whether the position of the user operation lies within the first video area 862. Step 708: when the position of the user operation lies within the first video area 862, the receiving-end control unit 824 sends a prompt instruction to the first transmitting-end device 810 so that the first transmitting-end device 810 emits a first audio-visual signal.
As shown in fig. 8, the wireless presentation system includes a first transmitting-end device 810 and a receiving-end device 820, and optionally includes a second transmitting-end device 830. The first transmitting-end device 810 includes a first transmitting-end control unit 812, a first transmission module 814 and a first indicating device 816. The first transmitting-end control unit 812 receives the first video data Vid1 from the first video source 850, and the first transmission module 814 transmits the first video data Vid1. Similarly, the second transmitting-end device 830 includes a second transmitting-end control unit 832, a third transmission module 834 and a second indicating device 836. The second transmitting-end control unit 832 receives the second video data Vid2 from the second video source 852, and the third transmission module 834 transmits the second video data Vid2. The receiving-end device 820 includes a second transmission module 822 and a receiving-end control unit 824. The second transmission module 822 selectively connects to the first transmission module 814 and receives the first video data Vid1, and selectively connects to the third transmission module 834 and receives the second video data Vid2. The receiving-end control unit 824 outputs the first video data Vid1 and the second video data Vid2 to the display device 860; when displayed on the display device 860, the first video data Vid1 corresponds to the first video area 862, the second video data Vid2 corresponds to the second video area 866, and the first video area 862 and the second video area 866 are displayed in a split-screen manner. The receiving-end control unit 824 also receives the position of a user operation provided by the user input device 870.
Referring to fig. 9, a schematic diagram of an application scenario of the wireless presentation system of fig. 8 is shown. In this embodiment, the user input device 870 is the touch panel of a large-screen television, which lets the user press or touch the panel, but the invention is not limited thereto. The receiving-end control unit 824 outputs the first video data Vid1 and the second video data Vid2 to the display device 860; when displayed, the first video data Vid1 corresponds to the first video area 862 on the left half of the display device 860, the second video data Vid2 corresponds to the second video area 866 on the right half of the display device 860, and the two areas are displayed in a split-screen manner. When a user applies a first user operation Ipt1 (e.g., a press or touch) in the first video area 862 on the left half, the touch panel provides the position coordinates of that user operation to the receiving-end control unit 824. When the receiving-end control unit 824 receives the position of the user operation provided by the user input device 870 and determines that the position lies within the first video area 862, the receiving-end control unit 824 sends a prompt instruction to the first transmitting-end device 810, so that the first indicating device 816 of the first transmitting-end device 810 emits a first audio-visual signal; for example, the indicator light on the first transmitting-end device 810 emits a red light signal, or a buzzer emits one short beep. Similarly, when the user applies a second user operation Ipt2 (e.g., a press or touch) in the second video area 866 on the right half, the touch panel provides the position coordinates of that user operation to the receiving-end control unit 824. When the receiving-end control unit 824 receives the position of the user operation and determines that it lies within the second video area 866, the receiving-end control unit 824 sends a prompt instruction to the second transmitting-end device 830, so that the second indicating device 836 of the second transmitting-end device 830 emits a second audio-visual signal different from the first audio-visual signal; for example, the indicator light on the second transmitting-end device 830 emits a green light signal, or a buzzer emits two short beeps.
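This third embodiment reduces to a hit test: map the touch coordinates reported by the user input device onto the split-screen layout and prompt the transmitter that owns that region. A minimal sketch follows; the region table is an assumed data structure, not something defined in the patent.

```python
# Hypothetical hit test for the third embodiment.
def find_touched_sender(regions, x, y):
    """regions: transmitter id -> (rx, ry, rw, rh) rectangle on the display."""
    for sender_id, (rx, ry, rw, rh) in regions.items():
        if rx <= x < rx + rw and ry <= y < ry + rh:
            return sender_id
    return None

def on_user_input(regions, x, y, send_prompt):
    sender_id = find_touched_sender(regions, x, y)
    if sender_id is not None:
        # Prompt that transmitter to flash its indicator light or beep its buzzer.
        send_prompt(sender_id)
```

For a 1920 x 1080 display split in half, the region table under this assumption would be something like {810: (0, 0, 960, 1080), 830: (960, 0, 960, 1080)}.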
Referring to fig. 10, a flowchart of a method for identifying a video signal source according to the fourth embodiment of the invention is shown. Referring to fig. 12, a block diagram of a wireless presentation system 1200 according to the fourth embodiment of the invention is shown. As shown in FIG. 10, the method for identifying the video signal source comprises the following steps. In step 1002, a first transmitting-end control unit in a first transmitting-end device combines first video data and a first identification image into first composite video data, where the first identification image corresponds to a first identification code in the first transmitting-end device. In step 1004, the first composite video data is transmitted to a receiving-end control unit in the receiving-end device through the first transmitting-end device. In step 1006, the receiving-end control unit outputs the first composite video data to the display device.
As shown in fig. 12, the wireless presentation system includes a first transmitting-end device 1210, a second transmitting-end device 1230 and a receiving-end device 1220. The first transmitting-end device 1210 includes a first transmitting-end control unit 1212, a first indicating device 1216 and a first transmission module 1214; the first transmitting-end control unit 1212 receives the first video data Vid1 from the first video source 1250. The first transmitting-end control unit 1212 selectively combines the first video data Vid1 and the first identification image corresponding to the first identification code into the first composite video data Vid_C1, where the first identification image corresponds to the first identification code in the first transmitting-end device. The first transmission module 1214 transmits the first video data Vid1 or the first composite video data Vid_C1. Similarly, the second transmitting-end device 1230 includes a second transmitting-end control unit 1232, a second indicating device 1236 and a third transmission module 1234; the second transmitting-end control unit 1232 receives the second video data Vid2 from the second video source 1252. The second transmitting-end control unit 1232 selectively combines the second video data Vid2 and the second identification image corresponding to the second identification code in the second transmitting-end device into the second composite video data Vid_C2. The third transmission module 1234 transmits the second video data Vid2 or the second composite video data Vid_C2. The receiving-end device 1220 includes a second transmission module 1222 and a receiving-end control unit 1224. The second transmission module 1222 can selectively establish a wireless transmission WL1 connection with the first transmission module 1214 and receive the first video data Vid1 or the first composite video data Vid_C1, and can selectively establish a wireless transmission WL2 connection with the third transmission module 1234 and receive the second video data Vid2 or the second composite video data Vid_C2. When the receiving-end control unit 1224 simultaneously receives the first composite video data Vid_C1 and the second composite video data Vid_C2, it outputs them to the display device 1260 so that they are displayed in a split-screen manner. For example, the receiving-end control unit 1224 combines the first composite video data Vid_C1 and the second composite video data Vid_C2 into split-screen video data Vid_C3 and outputs the split-screen video data Vid_C3 to the display device 1260. In this embodiment, the first transmitting-end control unit 1212, the second transmitting-end control unit 1232 and the receiving-end control unit 1224 contain composition circuits for combining video data.
Referring to fig. 11, a detailed flowchart of the method for identifying the source of the video signal in fig. 10 is shown. Wherein step 1002 shown in FIG. 10 comprises step 1104 shown in FIG. 11, step 1004 shown in FIG. 10 comprises step 1108 shown in FIG. 11, and step 1006 shown in FIG. 10 comprises step 1112 shown in FIG. 11.
As shown in fig. 11, in step 1104, the receiving-end control unit 1224 notifies the first transmitting-end control unit 1212 to combine the first video data Vid1 and the first identification image into the first composite video data Vid_C1. When the first transmission module 1214 of the first transmitting-end device 1210 is connected to the second transmission module 1222 of the receiving-end device 1220, the receiving-end device 1220 and the first transmitting-end device 1210 exchange and update internal information, and the first transmitting-end device 1210 transmits the first identification code to the receiving-end control unit 1224 of the receiving-end device 1220. The first transmitting-end control unit 1212 and the receiving-end control unit 1224 each hold a preset list of identification codes and corresponding indicator colors; for example, the first transmitting-end device 1210 is assigned identification code #1 (red) at the factory, the second transmitting-end device 1230 is assigned identification code #2 (green) at the factory, and so on. Thus, when the first transmitting-end device 1210 and the receiving-end device 1220 establish a connection, the receiving-end control unit 1224 records that the first transmitting-end device 1210 has identification code #1. Similarly, when the second transmitting-end device 1230 later establishes a connection with the receiving-end device 1220, the receiving-end control unit 1224 records that the second transmitting-end device 1230 has identification code #2. By looking up the preset list of identification codes and corresponding colors stored inside it, the receiving-end control unit 1224 knows that the first identification image corresponding to identification code #1 is red and the second identification image corresponding to identification code #2 is green. In this embodiment, the first identification image and the second identification image are illustrated as red and green, respectively, but the invention is not limited thereto; they may also be, for example, the Arabic numerals "1" and "2" or the English letters "A" and "B".
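In this fourth embodiment the composition moves to the transmitter, so step 1104 is essentially a notification message plus a lookup in the transmitter's own preset code-to-color list. The sketch below shows both halves under stated assumptions: the message schema is invented for illustration, and the compose callback stands in for the transmitter's composition circuit.

```python
# Sketch of the step 1104 notification; schema and shared list are assumptions.
ID_COLOR_LIST = {1: "red", 2: "green"}    # preset table held on both ends

def receiver_request_local_composition(send_to_sender):
    # Receiving-end control unit: ask the transmitter to composite locally.
    send_to_sender({"type": "compose_locally", "enabled": True})

def sender_handle_request(message, my_code, compose, raw_frame):
    # First transmitting-end control unit: overlay its own identification
    # image (looked up from its code, e.g. #1 -> red) before sending Vid_C1.
    if message.get("type") == "compose_locally" and message.get("enabled"):
        return compose(raw_frame, ID_COLOR_LIST[my_code])
    return raw_frame
```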
In step 1106, the receiving-end control unit 1224 notifies the second transmitting-end control unit 1232 to combine the second video data Vid2 and the second identification image into the second composite video data Vid_C2.
In step 1108, the first composite video data Vid_C1 is transmitted to the receiving-end control unit 1224 through the first transmitting-end device 1210. In step 1110, the second composite video data Vid_C2 is transmitted to the receiving-end control unit 1224 through the second transmitting-end device 1230. In step 1112, the receiving-end control unit 1224 combines the first composite video data Vid_C1 and the second composite video data Vid_C2 into split-screen video data Vid_C3 and outputs the split-screen video data Vid_C3 to the display device 1260.
As shown in fig. 11, the method for identifying a video signal source of this embodiment optionally includes steps 1102, 1114 and 1116. In step 1102, the receiving-end control unit 1224 of the receiving-end device 1220 determines whether it has simultaneously received (a) the first video data Vid1 transmitted by the first transmitting-end device 1210 and (b) the second video data Vid2 transmitted by the second transmitting-end device 1230; if so, step 1104 is performed, otherwise step 1114 is performed. In other words, with step 1102 the receiving-end control unit 1224 determines whether more than one transmitting-end device is transmitting video data to the receiving-end device 1220. Steps 1114 to 1116 address the following scenarios. Although both the first transmitting-end device 1210 and the second transmitting-end device 1230 are already connected to the receiving-end device 1220: (1) when only the first transmitting-end device 1210 is transmitting the first video data Vid1 to the receiving-end device 1220 and the entire screen of the display device 1260 is displaying the first video data Vid1, the participants can readily see that the presenter currently operating the first transmitting-end device 1210 is the one outputting the first video data Vid1, so the first transmitting-end control unit 1212 can omit displaying the color corresponding to the identification code on the display device 1260; for example, it can turn off its internal video composition circuit to reduce its power consumption. (2) When the first transmitting-end device 1210 transmits the first video data Vid1 to the receiving-end device 1220 and the second transmitting-end device 1230 transmits the second video data Vid2 to the receiving-end device 1220, the display device 1260 displays the first video data Vid1 and the second video data Vid2 simultaneously in a split-screen manner and the participants cannot tell the video signals apart; only then does the first transmitting-end control unit 1212 need to display the color corresponding to the identification code on the display device 1260, for example by enabling its internal video composition circuit. Thus, for situations (1) and (2), the first transmitting-end control unit 1212 can dynamically turn its video composition circuit off and on, dynamically reducing its power consumption and saving power.
As mentioned above, step 1114 corresponds to the situation in which the receiving end control unit 1224 receives only one of (a) the first video data Vid1 transmitted by the first transmitting end device 1210 and (b) the second video data Vid2 transmitted by the second transmitting end device 1230; step 1114 is exemplified here by receiving the first video data Vid1 transmitted by the first transmitting end device 1210. In step 1114, the first video data Vid1 is transmitted to the receiving end control unit 1224 via the first transmitting end device 1210. In step 1116, the receiving end control unit 1224 displays the first video data Vid1 on the display device 1260. As shown in FIG. 11, steps 1114 to 1116 are exemplified by the receiving end control unit 1224 receiving the first video data Vid1 but not the second video data Vid2; however, the invention is not limited thereto. Similarly, when the receiving end control unit 1224 receives the second video data Vid2 but not the first video data Vid1, steps 1114 to 1116 are modified as follows: in step 1114, the second video data Vid2 is transmitted to the receiving end control unit 1224 via the second transmitting end device 1230; in step 1116, the receiving end control unit 1224 displays the second video data Vid2 on the display device.
In step 1104, the first transmitting end device 1210 synthesizes, according to the first identification code, the first video data Vid1 and the first identification image (e.g., a red identification image) into the first synthesized video data Vid_C1, and outputs the first synthesized video data Vid_C1 to the receiving end device 1220. When the first synthesized video data Vid_C1 is displayed on the display device 1260, the first synthesized video data Vid_C1 corresponds to a first video area displaying the first video data Vid1 and a first identification area displaying the first identification image, the first video area being adjacent to the first identification area; the first video area and the first identification area of the present embodiment are the same as the first video area 462 and the first identification area 464 of the first embodiment shown in FIGS. 4A to 4C. At the same time, the first indicating device 1216 of the first transmitting end device 1210 displays the first identification image (e.g., the red identification image). In this embodiment, the first transmitting end control unit 1212 synthesizes the first video data Vid1 and the red identification image into the first synthesized video data Vid_C1, so that the corresponding red identification image is displayed on the display device 1260, while the indicator light on the first transmitting end device 1210 displays the red color corresponding to identification code #1. Likewise, the second transmitting end control unit 1232 synthesizes the second video data Vid2 and the green identification image into the second synthesized video data Vid_C2 and outputs the second synthesized video data Vid_C2 to the receiving end device 1220, so that the corresponding green identification image is displayed on the display device 1260 and the second indicating device 1236 (e.g., an indicator light) on the second transmitting end device 1230 displays the green color corresponding to identification code #2.
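One simple way to picture the first synthesized video data Vid_C1 is a frame whose main region carries the first video data and whose adjacent strip carries the identification color. The NumPy sketch below paints such a strip; the strip position, its height, and the RGB values are illustrative assumptions and do not reflect the exact layout of the first video area 462 and the first identification area 464.

# Illustrative sketch: synthesize a video frame and an identification image
# (a solid color strip adjacent to the video area) into one synthesized frame.
import numpy as np

COLORS = {"red": (255, 0, 0), "green": (0, 255, 0)}  # hypothetical RGB values

def synthesize(frame: np.ndarray, color_name: str, strip_height: int = 40) -> np.ndarray:
    """Return a synthesized frame: an identification strip placed above the video area."""
    h, w, _ = frame.shape
    strip = np.zeros((strip_height, w, 3), dtype=np.uint8)
    strip[:, :] = COLORS[color_name]         # fill the identification area with the ID color
    return np.vstack([strip, frame])         # identification area adjacent to the video area

vid1 = np.zeros((720, 1280, 3), dtype=np.uint8)   # placeholder for Vid1
vid_c1 = synthesize(vid1, "red")                  # synthesized frame for identification code #1
print(vid_c1.shape)  # (760, 1280, 3)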
In this embodiment, the first identification code of the first transmitting end device and the second identification code of the second transmitting end device may be, for example, color identification codes assigned to the transmitting end devices when they leave the factory, but the invention is not limited thereto. The first identification code may also be selected from one of the group consisting of a hardware serial number, user input information, and a network address. For example, the first video source 1250 providing the first video data Vid1 may be an information processing device (e.g., a notebook computer) used by a first user, and the information processing device executes an operating system (e.g., the Microsoft Windows operating system); the first identification code is then user information provided by the operating system (e.g., the personal information of the first user set in the Microsoft Windows operating system console), such as "Peter", and the information processing device transmits "Peter" to the first transmitting end device 1210 as the first identification code. As another example, the source of the first video data Vid1 is an information processing device (e.g., a notebook computer) having a camera for capturing a portrait of the first user; the information processing device transmits the portrait of the first user to the first transmitting end device 1210 as the first identification code. As yet another example, the source of the first video data Vid1 is an information processing device (e.g., a notebook computer) connected to the Internet and having a network IP address; the information processing device transmits the network IP address to the first transmitting end device 1210 as the first identification code. As a further example, the first transmitting end device 1210 has a factory hardware serial number, and the first identification code is that factory hardware serial number. As still another example, the information processing device executes communication software (e.g., Line or Skype), from which it obtains the personal information of the logged-in user, such as "Peter", and transmits "Peter" to the first transmitting end device 1210 as the first identification code. Any of the above information capable of identifying the information processing device can be used as the first identification code. When the first identification code is "Peter", the factory hardware serial number, or the IP address, the first identification image is an image of the corresponding characters; when the first identification code is the portrait of the first user, the first identification image is the portrait of the first user.
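Since the first identification code may come from the operating system's user information, a camera portrait, a network address, or a factory hardware serial number, the source device in effect supplies whichever identifier is available. The Python sketch below shows one possible priority order; the fallback order, the use of uuid.getnode() as a stand-in for a factory serial number, and the function name are all assumptions for illustration.

# Illustrative sketch: choose a first identification code from the candidate
# sources mentioned above. The priority order and helpers are hypothetical.
import socket
import uuid
from typing import Optional

def pick_identification_code(user_info: Optional[str] = None,
                             portrait: Optional[bytes] = None) -> object:
    if user_info:                      # e.g. "Peter" provided by the operating system
        return user_info
    if portrait is not None:           # camera image of the first user
        return portrait
    try:                               # network IP address of the source device
        return socket.gethostbyname(socket.gethostname())
    except OSError:
        pass
    return hex(uuid.getnode())         # stand-in for a factory hardware serial number

print(pick_identification_code(user_info="Peter"))  # -> "Peter"
print(pick_identification_code())                   # -> IP address, or the serial stand-in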
With the method for identifying a video signal source and the wireless presentation system using the method according to the embodiments of the invention, the correspondence among (1) the first video area 362 in the display frame of the display device 360, (2) the first identification image in the first identification area 464 adjacent to the first video area 362 in the display frame of the display device 360, and (3) the first identification image correspondingly displayed by the first indicating device 316 on the first transmitting end device 310 enables the participants to easily determine whose device is the video signal source of the first video area 362 within the split screen of the display device 360. On the other hand, the user input device 870 detects the position at which a user performs an input operation within the split screen of the display device 860; when the receiving end control unit 824 determines that the operated position is located in the first video area 862, the receiving end control unit 824 sends a prompt instruction to the first transmitting end device 810, so that the first transmitting end device 810 emits a first audible and visual signal and the participants can easily determine whose device is the video signal source at the position currently being operated. This avoids the problem of the conventional approach in which, in the split-screen display state, the participants cannot identify which presenter's information processing device corresponds to each video signal source in the split screen of the display device. It thus helps the participants quickly identify the presenter corresponding to a particular picture among the divided pictures and facilitates subsequent questions and discussion.
The invention has been described above with reference to the foregoing embodiments, which are merely examples of how the invention may be implemented. It should be noted that the disclosed embodiments do not limit the scope of the invention; rather, all modifications and variations falling within the spirit and scope of the invention are intended to be included.

Claims (21)

1. A method for identifying a source of a video signal, comprising:
a receiving end control unit in a receiving end device assigns a first identification code to a first transmitting end device;
transmitting first video data through the first transmitting end device;
the receiving end control unit synthesizes the first video data and a first identification image corresponding to the first identification code into first synthesized video data; and
the receiving end control unit outputs the first synthesized video data to a display device.
2. The method of claim 1, wherein in the step of assigning the first identification code by the receiving end control unit in the receiving end device, the receiving end control unit assigns the first identification code to the first transmitting end device when the receiving end device is connected to the first transmitting end device.
3. The method of identifying a source of a video signal according to claim 1, further comprising: when the first synthesized video data is displayed on the display device, the first synthesized video data corresponds to a first video area displaying the first video data and a first identification area displaying the first identification image, the first video area being adjacent to the first identification area.
4. The method of identifying a source of a video signal according to claim 1, further comprising: the receiving end control unit assigns a second identification code to a second transmitting end device, second video data is transmitted through the second transmitting end device, and the receiving end control unit synthesizes the second video data and a second identification image corresponding to the second identification code into second synthesized video data, synthesizes the first synthesized video data and the second synthesized video data into divided-screen video data, and outputs the divided-screen video data to the display device.
5. The method of identifying a source of a video signal according to claim 1, further comprising:
when the first transmitting end device stops transmitting the first video data to a second transmission module of the receiving end device, the receiving end control unit cancels the correspondence between the first identification code and the first transmitting end device.
6. The method of claim 1, wherein when the first transmitting end device transmits the first video data to a second transmission module of the receiving end device, the receiving end control unit issues a prompt to request a first indicating device of the first transmitting end device to display the first identification image.
7. The method of claim 6, wherein the first identification image is a first color, and when the receiving end control unit requests the first transmitting end device to display the first identification image, the first indicating device on the first transmitting end device displays the first color.
8. A method for identifying a source of a video signal, comprising:
transmitting first video data and a first identification code to a receiving end control unit in a receiving end device through a first transmitting end device;
the receiving end control unit synthesizes the first video data and a first identification image corresponding to the first identification code into first synthesized video data; and
the receiving end control unit outputs the first synthesized video data to a display device.
9. The method of claim 8, wherein in the step of the first transmitting end device transmitting the first video data and the first identification code, the first transmitting end device transmits the first identification code when the receiving end device is connected to the first transmitting end device.
10. The method of identifying a source of a video signal according to claim 8, further comprising: the receiving end control unit synthesizes second video data and a second identification image corresponding to a second identification code into second synthesized video data, synthesizes the first synthesized video data and the second synthesized video data into divided-screen video data, and outputs the divided-screen video data to the display device.
11. A method for identifying a source of a video signal, comprising:
a first transmitting end control unit in a first transmitting end device synthesizes first video data and a first identification image into first synthesized video data, wherein the first identification image corresponds to a first identification code;
transmitting the first synthesized video data to a receiving end control unit in a receiving end device through the first transmitting end device; and
the receiving end control unit outputs the first synthesized video data to a display device.
12. The method of identifying a source of a video signal of claim 11, further comprising: a second transmitting end control unit in a second transmitting end device synthesizes second video data and a second identification image into second synthesized video data, wherein the second identification image corresponds to a second identification code, and the receiving end control unit synthesizes the first synthesized video data and the second synthesized video data into divided-screen video data and outputs the divided-screen video data to the display device.
13. A method for identifying a source of a video signal, comprising:
a receiving end control unit in a receiving end device assigns a first identification code to a first transmitting end device, and the receiving end control unit assigns a second identification code to a second transmitting end device; and
when the receiving end control unit simultaneously receives first video data transmitted by the first transmitting end device and second video data transmitted by the second transmitting end device, the receiving end control unit executes the following steps:
synthesizing the first video data and the first identification image corresponding to the first identification code into first synthesized video data;
synthesizing the second video data and a second identification image corresponding to the second identification code into second synthesized video data; and
synthesizing the first synthesized video data and the second synthesized video data into divided-screen video data, and outputting the divided-screen video data to a display device.
14. A method for identifying a source of a video signal, comprising:
when a receiving end control unit in a receiving end device receives first video data and a first identification code transmitted by a first transmitting end device and simultaneously receives second video data and a second identification code transmitted by a second transmitting end device, the receiving end control unit executes the following steps:
the receiving end control unit synthesizes the first video data and the first identification image corresponding to the first identification code into first synthesized video data;
the receiving end control unit synthesizes the second video data and a second identification image corresponding to the second identification code into second synthesized video data; and
the receiving end control unit synthesizes the first synthesized video data and the second synthesized video data into divided-screen video data and outputs the divided-screen video data to a display device.
15. The method for identifying a source of a video signal according to claim 13 or 14, further comprising: when the receiving end control unit receives the first video data transmitted by the first transmitting end device but does not receive the second video data transmitted by the second transmitting end device, the receiving end control unit does not execute the step of synthesizing the first video data and the first identification image, and outputs the first video data to the display device.
16. A method for identifying a source of a video signal, comprising:
when a receiving end control unit in a receiving end device simultaneously receives first video data transmitted by a first transmitting end device and second video data transmitted by a second transmitting end device, the following steps are executed:
the receiving end control unit informs a first transmitting end control unit of the first transmitting end device to synthesize the first video data and a first identification image into first synthesized video data, wherein the first identification image corresponds to a first identification code;
the receiving end control unit informs a second transmitting end control unit of the second transmitting end device to synthesize the second video data and a second identification image into second synthesized video data, wherein the second identification image corresponds to a second identification code;
transmitting the first synthesized video data to the receiving end control unit via the first transmitting end device;
transmitting the second synthesized video data to the receiving end control unit via the second transmitting end device; and
the receiving end control unit synthesizes the first synthesized video data and the second synthesized video data into divided-screen video data and outputs the divided-screen video data to a display device.
17. The method for identifying a source of a video signal according to claim 16, further comprising: when the receiving end control unit receives the first video data transmitted by the first transmitting end device but does not receive the second video data transmitted by the second transmitting end device, the following steps are executed:
the receiving end control unit informs the first transmitting end control unit to stop synthesizing the first video data and the first identification image into the first synthesized video data;
transmitting the first video data to the receiving end control unit via the first transmitting end device; and
the receiving end control unit displays the first video data in the display device.
18. The method for identifying a source of a video signal according to claim 8, 11, 13, 14 or 16, further comprising: when the first synthesized video data is displayed on the display device, the first synthesized video data corresponds to a first video area displaying the first video data and a first identification area displaying the first identification image, the first video area being adjacent to the first identification area.
19. The method of claim 8, 11, 13, 14 or 16, wherein when the first transmitting end device transmits the first video data to a second transmission module of the receiving end device, the receiving end control unit issues a prompt to request a first indicating device of the first transmitting end device to display the first identification image.
20. The method of claim 8, 11, 14 or 16, wherein the first identification code is selected from one of the group consisting of a hardware serial number, user input information, and a network address.
21. The method of claim 20, wherein the source of the first video data is an information processing apparatus executing an operating system, and the first identification code is user information provided by the operating system.
CN202010014560.8A 2020-01-07 2020-01-07 Method for identifying video signal source Pending CN113163151A (en)

Priority Applications (7)

Application Number Priority Date Filing Date Title
CN202410554544.6A CN118233593A (en) 2020-01-07 2020-01-07 Method for identifying video signal source
CN202010014560.8A CN113163151A (en) 2020-01-07 2020-01-07 Method for identifying video signal source
US17/143,214 US11956563B2 (en) 2020-01-07 2021-01-07 Method for identifying video signal source
EP21150536.7A EP3849174A1 (en) 2020-01-07 2021-01-07 Method for identifying video signal source
US17/143,221 US11245867B2 (en) 2020-01-07 2021-01-07 Video conference system
EP21150540.9A EP3849175A1 (en) 2020-01-07 2021-01-07 Video conference system
US18/203,108 US20230300287A1 (en) 2020-01-07 2023-05-30 Method for identifying video signal source

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010014560.8A CN113163151A (en) 2020-01-07 2020-01-07 Method for identifying video signal source

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202410554544.6A Division CN118233593A (en) 2020-01-07 2020-01-07 Method for identifying video signal source

Publications (1)

Publication Number Publication Date
CN113163151A true CN113163151A (en) 2021-07-23

Family

ID=76881734

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202410554544.6A Pending CN118233593A (en) 2020-01-07 2020-01-07 Method for identifying video signal source
CN202010014560.8A Pending CN113163151A (en) 2020-01-07 2020-01-07 Method for identifying video signal source

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202410554544.6A Pending CN118233593A (en) 2020-01-07 2020-01-07 Method for identifying video signal source

Country Status (1)

Country Link
CN (2) CN118233593A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101022529A (en) * 2006-02-15 2007-08-22 力新国际科技股份有限公司 Several to one transmission via local network and displaying information method and system
JP2008085677A (en) * 2006-09-27 2008-04-10 Toshiba Corp Information control device, information synthesizer and program
CN101150705A (en) * 2007-10-19 2008-03-26 中兴通讯股份有限公司 A method for realizing multi-point conference video control in initial session protocol
TW201123831A (en) * 2009-12-30 2011-07-01 Tung-Hsiao Chen Tele-communication link and conference system and method thereof
CN102638671A (en) * 2011-02-15 2012-08-15 华为终端有限公司 Method and device for processing conference information in video conference
JP2012205274A (en) * 2011-03-28 2012-10-22 Nec Corp Multipoint connection device, multipoint video conference system, display control method and display control program
JP2013005282A (en) * 2011-06-17 2013-01-07 Ntt Docomo Inc Radio communication system and message notification method
US20170195374A1 (en) * 2015-12-31 2017-07-06 Actiontec Electronics, Inc. Displaying content from multiple devices
JP2017130881A (en) * 2016-01-22 2017-07-27 株式会社ナカヨ Telephone controller with television conference function
US10264213B1 (en) * 2016-12-15 2019-04-16 Steelcase Inc. Content amplification system and method
JP2019049999A (en) * 2018-10-24 2019-03-28 株式会社スクウェア・エニックス Communication terminal, display method, and program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIE, Jingming: "K80 Video Terminal" (K80视频终端), Office Automation (办公自动化), no. 19, 1 October 2008 (2008-10-01) *

Also Published As

Publication number Publication date
CN118233593A (en) 2024-06-21

Similar Documents

Publication Publication Date Title
US11457177B2 (en) Video conferencing system and transmitter thereof
CN108293104B (en) Information processing system, wireless terminal, and information processing method
US20080170058A1 (en) Display apparatus and method for implementing screen saver for the same
US20230300287A1 (en) Method for identifying video signal source
US7849410B2 (en) Pointing-control system for multipoint conferences
CN109753259B (en) Screen projection system and control method
US20050185102A1 (en) Single touch launch of remote applications over video
WO2021242347A1 (en) Secure conferencing device
US9648276B2 (en) Transmission management apparatus, transmission system, transmission management method and recording medium
TWI724746B (en) Method for identifying video signal source
CN113163144B (en) Wireless Presentation System
CN113163151A (en) Method for identifying video signal source
TWI767176B (en) Video conference system can identify video signal source
TWI693834B (en) Video conferencing system and transmitter thereof
JP2022034506A (en) Av transmission device
TW202218404A (en) An audio-visual content sharing system and method thereof
EP3186957B1 (en) Transmission terminal, transmission method, and non-transitory storage medium storing program
TWI802258B (en) Video conference system
US20220075630A1 (en) Non-transitory recording medium, information processing device, and information processing system
WO2020220969A1 (en) Intelligent interactive device and control method therefor
CN115390732A (en) Data transmission method and system
US20080174746A1 (en) Method for projecting an image from one of a plurality of hosts and projection device thereof
CN111885344A (en) Data transmission method, equipment and system
CN116208732A (en) Transmitting end device applied to video conference system
US20190095088A1 (en) Electronic apparatus and method for controlling electronic apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination