CN101489140B - Video/audio reproducing apparatus - Google Patents


Info

Publication number
CN101489140B
CN101489140B · CN2008101809253A
Authority
CN
China
Prior art keywords
data
output
image
delay
signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2008101809253A
Other languages
Chinese (zh)
Other versions
CN101489140A (en)
Inventor
金丸隆
鹤贺贞雄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Maxell Holdings Ltd
Original Assignee
Hitachi Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2008022265A external-priority patent/JP2009182912A/en
Application filed by Hitachi Ltd filed Critical Hitachi Ltd
Publication of CN101489140A publication Critical patent/CN101489140A/en
Application granted granted Critical
Publication of CN101489140B publication Critical patent/CN101489140B/en

Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention provides a video/audio reproducing apparatus that easily achieves synchronization between video and audio. In particular, when a video display apparatus and an audio output apparatus are connected in series, the audio delay method selectively uses an offset adding unit and an offset amount supply method according to the transmission delay amount.

Description

Video/audio reproducing apparatus
Technical field
The present invention relates to an apparatus that reproduces video and audio.
Background technology
Regarding the above technical field, patent document 1 describes, as the technical problem to be solved, "providing an audio/video transfer system that can appropriately reduce the deviation between a reproduced audio signal and a reproduced video signal which are transmitted by different transmission schemes although their reproduction timings should match", and describes, as the technical means for solving it, "in an audio/video transfer apparatus in which the video signal is transmitted wirelessly by a video wireless transmission unit and the audio signal is transmitted by wire to an audio output unit, the phase difference between the transmitted audio signal and video signal is reduced, by delaying the audio signal with an audio delay unit provided in the audio/video transfer apparatus, to a degree that a viewer watching the image and listening to the sound reproduced from these signals cannot perceive".
Patent document 1: Japanese Unexamined Patent Application Publication No. 2004-88442
For example, in digital broadcasting using MPEG-2 transport streams, video and audio are encoded separately, packetized, multiplexed, and transmitted, and timestamp information for synchronizing video and audio on the receiver side (lip sync: audiovisual synchronization) is embedded in the stream. In the receiver, the decoding unit finds this information and synchronizes video and audio. In recent years, with the spread of flat-panel televisions using pixel-based display, various picture-quality-enhancing processes have been applied to the video, and as a result the time taken until the image is displayed has lengthened. Therefore, in order to delay the audio signal to match the delay time taken until video reproduction, the audio output timing is adjusted using a memory unit or the like.
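As a rough illustration of how a receiver might use the embedded timestamps for lip sync (the PTS/STC names follow MPEG-2 Systems; this simplified scheduler is an assumption for illustration, not the patent's implementation):

```python
def due_for_presentation(pts: int, stc: int, tolerance: int = 900) -> bool:
    """Return True once the receiver's system clock has reached the
    timestamp of a decoded unit (PTS/STC in 90 kHz ticks; the 900-tick
    tolerance is roughly 10 ms)."""
    return stc >= pts - tolerance

# A decoder would hold each decoded audio frame until its PTS is due,
# so video and audio units carrying matching timestamps leave together.
frame_pts = 90_000                                  # due 1 s after clock zero
assert not due_for_presentation(frame_pts, stc=0)
assert due_for_presentation(frame_pts, stc=90_000)
```

Matching PTS values on the video and audio sides then yield synchronized output, which is the mechanism the delay control below builds on.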
In patent document 1, synchronization deviation between video and audio caused by their different transmission schemes is resolved by delaying the audio with a memory unit. The delay amount is determined either by a user setting or as a set value matched to the transfer rate used when transmitting the video.
However, it is very difficult for a user to adjust, from scratch, an audio/video transfer system in which video and audio deviate significantly. Moreover, the delay amount cannot necessarily be determined uniquely from the video transfer rate. This is because, although the video delay arises as the result of performing multiple video processing steps, the video transfer rate does not correspond one-to-one to the video processing method.
Summary of the invention
The present invention therefore provides a system that can easily synchronize video and audio whose synchronization deviates for a variety of causes, with a view to improving operability.
Specifically, a video/audio reproducing system is provided in which, for example, a receiving apparatus receives digital content including at least video data, audio data, and first reproduction synchronizing information for synchronizing the reproduction of the video data and the audio data, and synchronizes the reproduction of the video data and the audio data of the digital content. A display output apparatus that displays and outputs the video data and the audio data is connected to the output of the receiving apparatus. The display output apparatus sends to the receiving apparatus second reproduction synchronizing information for synchronizing the reproduction of the video data and the audio data output from the display output apparatus. The receiving apparatus synchronizes the reproduction of the video data and the audio data of the digital content output by the display output apparatus, based on the first reproduction synchronizing information and the second reproduction synchronizing information received from the display output apparatus.
With the above means, the output of video and audio can be synchronized easily.
Description of drawings
Fig. 1 is a schematic diagram showing an example of the inputs and outputs of the receiving apparatus 3.
Fig. 2 is a schematic diagram showing a connection example of the receiving apparatus 3 and the display apparatus 4.
Fig. 3 is a block diagram showing a structural example of the receiving apparatus 3.
Fig. 4 is a block diagram showing a structural example of the display apparatus 4.
Fig. 5 is a block diagram showing a structural example of the wired signal transmitting unit 311.
Fig. 6 is a block diagram showing a structural example of the wired signal receiving unit 403.
Fig. 7 is a block diagram showing a structural example of the wireless signal transmitting unit 312.
Fig. 8 is a block diagram showing a structural example of the wireless transfer unit 9.
Fig. 9 is a block diagram showing a structural example of the wireless transfer unit 10.
Fig. 10 is a block diagram showing a structural example of the wireless signal receiving unit 404.
Fig. 11 is a block diagram showing a structural example of the audio signal processing unit 306.
Fig. 12 is a flowchart showing a processing example of the initial delay-time setting.
Fig. 13 is a flowchart showing a processing example performed when the delay time changes.
Fig. 14 is a flowchart showing a detailed processing example of the delay settings S1207 and S1302.
Fig. 15 is a flowchart showing a detailed processing example of the delay buffer setting S1401.
Fig. 16 is a flowchart showing a detailed processing example of the PTS offset setting S1402.
Fig. 17 is a flowchart showing a detailed processing example of the audio output setting S1403.
Fig. 18 is a flowchart showing a processing example of changing the connection method between the receiving apparatus 3 and the display apparatus 4 from wireless transmission.
Fig. 19 is a flowchart showing a processing example of changing the connection method between the receiving apparatus 3 and the display apparatus 4 to wireless transmission.
Fig. 20 is a schematic diagram showing an example of a setting menu display.
Fig. 21 is a schematic diagram showing an example of a setting menu display.
Fig. 22 is a schematic diagram showing an example of a setting menu display.
Fig. 23 is a schematic diagram showing an example of a setting menu display.
Fig. 24 is a schematic diagram showing an example of a setting menu display.
Fig. 25 is a schematic diagram showing an example of the information displayed when the receiving apparatus 3 is not connected to the display apparatus 4.
Fig. 26 is a schematic diagram showing a connection example of the receiving apparatus 3 and a video display 14.
Fig. 27 is a schematic diagram showing a connection example of the receiving apparatus 3 and the display apparatus 4.
Symbol description
3: receiving apparatus; 4: display apparatus; 9: wireless transfer unit; 10: wireless transfer unit.
Embodiment
An example (embodiment) suitable for carrying out the present invention will be described. However, the present invention is not limited to this embodiment.
Embodiment 1
<Explanation of the structure>
First, structural examples of the receiving apparatus and peripheral devices will be described with reference to Figs. 1 to 11.
Fig. 1 is a schematic diagram showing an example of data input/output of the receiving apparatus. Reference numeral 1 denotes a broadcasting station and 2 a broadcasting satellite. The receiving apparatus 3, which receives the signal sent via the broadcasting satellite, performs processing such as decompressing the compressed data and outputs the result to the display apparatus 4. Besides broadcast waves, various forms of input can be used; a signal sent by the broadcasting station as a terrestrial wave may also be received. Alternatively, information may be received from a distribution server 5 that sends data via a network such as the Internet, or from a recording device connected by a transmission scheme capable of high-speed data transfer over a local area network (home network), for example HDMI (High-Definition Multimedia Interface). Various output destinations of the receiving apparatus 3 other than the display apparatus 4 are also conceivable: for example, outputting only the audio data to an external audio amplifier 7, or outputting data to a reproducing device 8 connected by a LAN or by a cable capable of high-speed data transfer such as HDMI. As the output transmission scheme, for example the above-mentioned HDMI connection, or a protocol for control between connected devices (hereinafter referred to as CEC (Consumer Electronics Control)), can be used.
Fig. 2 shows examples of data transfer between the receiving apparatus 3 and the display apparatus 4. They are broadly classified into wireless data transfer and wired data transfer.
When wireless transfer is used, the receiving apparatus 3 and the display apparatus 4 are connected to a wireless data transfer unit 9 and a wireless data transfer unit 10, respectively. Each apparatus and each unit are connected by cables 11 and 12, which, like HDMI cables for example, can send and receive video and audio data as well as simple command codes. Between data transfer unit 9 and data transfer unit 10, the data are compressed and transmitted in a proprietary format that depends on, for example, the implementation of the wireless data transfer units. The data transfer unit 10 that has received the data decompresses the compressed signal and transmits it to the display apparatus. Since this transfer processing produces a delay, the delay time depends on the type of data transfer unit (compression format, etc.). The wireless data transfer units may also be built into the receiving apparatus 3 and the display apparatus 4 rather than connected via cables.
On the other hand, when wired transfer is used, the receiving apparatus 3 and the display apparatus 4 are connected directly by a transmission cable 13. This cable, like an HDMI cable for example, can send and receive video and audio data as well as simple command codes.
In this configuration, the receiving apparatus 3 and the display apparatus 4 can recognize that they are connected wirelessly via the wireless transfer units 9 and 10. After the recognition, the control unit of the receiving apparatus 3 records in volatile memory that the connection is wireless. Alternatively, which transmission scheme to use (wireless transmission or wired transmission) may be decided by the user after startup, for example from a menu screen.
In the following description, the case where the receiving apparatus 3, the display apparatus 4, and the wireless transfer units 9 and 10 are connected by HDMI is mainly described, but any communication scheme that allows control between connected devices may be used.
Fig. 3 shows a structural example of the inside of the receiving apparatus 3. Reference numerals 301 and 302 are input terminals; they send data to a digital receiving unit 303 and an analog receiving unit 304, respectively. For example, in the case of a broadcast wave, the digital receiving unit 303 extracts only the data selected by the user from the data input to the digital tuner, transmits the video data to a video signal processing unit 305, and transmits the audio data to an audio signal processing unit 306. The analog receiving unit 304 performs digital conversion on the received analog data, transmits the video data to the video signal processing unit 305, and transmits the audio data to the audio signal processing unit 306. The video signal processing unit 305 decodes the transmitted digital data. The audio signal processing unit 306 decodes the transmitted digital audio data and transmits the result to delay unit 307 and delay unit 308.
Delay unit 307 includes a buffer that compensates for the deviation between the processing times of the video data and the audio data. It transmits the data to transmitting unit 311 or 312 without altering them.
Delay unit 308, like delay unit 307, includes a buffer that compensates for the deviation between the processing times of the video data and the audio data. Unlike delay unit 307, however, delay unit 308 is a delay circuit provided separately for the optical output, and is effective, for example, when the data format of the optical output is decoded data (PCM (Pulse Code Modulation) format).
Switch 309 and switch 310 are used to select which transmitting unit the data are transmitted to. Transmitting unit (which also serves as a receiving unit; the same applies below) 311 and transmitting unit 312 transmit the video data and the audio data together in order to send them to the display apparatus. An optical output unit 313 performs optical output of the audio data, for example to an audio amplifier. Reference numerals 314, 315, and 316 are output terminals. 317 is a remote-control I/F through which the user operates by remote control; data input from input terminal 318 are sent to the control unit 320. 319 is an operation unit (buttons, etc.) that allows the user to operate the receiving apparatus 3 and the display apparatus 4 even without the remote control. The control unit 320 controls each part of the receiving apparatus 3 via a system bus 322. A nonvolatile memory 321 stores program data and set values configured by the user.
Each part may be configured in hardware, or may be realized by a CPU or the like executing a program (the same applies below).
In the present embodiment, the output methods are distributed to the wired signal transmitting unit 311 and the wireless signal transmitting unit 312 by switch 309 and switch 310, but there may instead be a single transmitting unit with, for example, a common output I/F, configured to determine automatically by communication whether a wireless transfer unit 9 or the display apparatus 4 is connected beyond it. The embodiments described later are also applicable to such a case.
Fig. 4 shows structural examples of the display apparatus 4. (a) shows an example of a configuration in which the delay due to image processing inside the display apparatus is not compensated inside the display apparatus, and (b) shows an example in which it is compensated inside the display apparatus.
Reference numerals 401 and 402 are input terminals. A wired signal receiving unit 403 receives the signal transmitted over the wired connection, and a wireless signal receiving unit 404 receives the wirelessly transmitted signal. Video data are transmitted from the receiving unit to a picture quality adjustment unit 405; audio data are transmitted from the receiving unit to an audio output unit 407 in the case of Fig. 4(a), and to a delay unit 410 in the case of Fig. 4(b). The picture quality adjustment unit 405 performs, for example, processing to adjust the size of the input picture to match the display format, and an image display unit 406 displays the image on a panel such as a PDP or LCD. The audio output unit 407 outputs sound through, for example, a loudspeaker. A control unit 408 controls each part of the display apparatus 4 via a system bus 409. The delay unit 410 compensates the output time according to the display control delay occurring in the picture quality adjustment unit 405. In the case of Fig. 4(a), on the other hand, the connected receiving apparatus 3 delays the output of the audio data to compensate for the processing time of the picture quality adjustment unit 405.
Fig. 5 shows a structural example of the wired signal transmitting unit 311 of Fig. 3, and Fig. 6 shows a structural example of the wired signal receiving unit 403 of Fig. 4. The data transmission and reception processing in the case of a wired connection is described together with them.
Reference numeral 501 is the input terminal for the video signal, and 502 is the input terminal for the audio signal. To reduce the data volume during transmission, the input video signal is encoded in a video signal compression unit 503. The audio data are delayed by a delay unit 506 by the processing time of this encoding. In a signal multiplexing unit 504, the video data and the audio data are multiplexed and output by a transmitting unit 505 to an output terminal 507.
These data are received at a receiving unit 602 via an input terminal 601 of the wired signal receiving unit 403. The compressed video data and the audio data are separated in a signal separation unit 603; the video data are decoded in a video signal decompression unit 604 and output to an output terminal 606. The audio data are delayed by a delay unit 605 by the processing time of the video signal decompression unit 604 and output to an output terminal 607.
Fig. 7 shows a structural example of the wireless signal transmitting unit 312 of Fig. 3; Fig. 8 shows a structural example of the wireless transfer unit 9 of Fig. 2; Fig. 9 shows a structural example of the wireless transfer unit 10 of Fig. 2; and Fig. 10 shows a structural example of the wireless signal receiving unit 404 of Fig. 4. The data transmission and reception processing in the case where the wireless transfer units 9 and 10 are connected wirelessly is described together with them.
Reference numeral 701 is the input terminal for the video signal, and 702 is the input terminal for the audio signal. These signals are multiplexed in a signal multiplexing unit 703 and output by a transmitting unit 704 via an output terminal 705 to the wireless transfer unit 9. In the wireless transfer unit 9, the video and audio signals are received at a receiving unit 802 via an input terminal 801. To reduce the data volume during wireless transmission, the video signal and the audio signal are separated in a signal separation unit 803, and the video signal is encoded in a video signal compression unit 804. The audio signal is delayed by a delay unit 805 by the processing time of the video signal compression unit 804. The compressed video signal and the delayed audio signal are multiplexed in a signal multiplexing unit 806 and sent from a wireless transmitting unit 807 via an output terminal 808.
In the wireless transfer unit 10 on the receiving side, a wireless receiving unit 902 receives the signal via an input terminal 901, and the video signal and the audio signal are separated in a signal separation unit 903. The video signal is decoded in a video signal decompression unit 904, and the audio signal is delayed by this decoding time in a delay unit 905. The video and audio signals are multiplexed by a signal multiplexing unit 906, and a transmitting unit 907 transmits them via an output terminal 908 to the display apparatus 4. In the display apparatus 4, the wireless signal receiving unit 404 receives the information at a receiving unit 1002 via an input terminal 1001, separates the video signal and the audio signal in a signal separation unit 1003, and outputs them to output terminals 1004 and 1005, respectively.
The wireless transfer units need not include the delay units; an equivalent delay may instead be performed by the receiving apparatus 3 or the display apparatus 4.
Fig. 11 shows a structural example of the audio signal processing unit 306 of Fig. 3.
After audio data are input at an input terminal 1101, an offset adding unit 1102 adds the offset amount specified by the control unit 320 to the PTS (Presentation Time Stamp). The data pass through a decoding buffer 1103 used for timing adjustment and are decoded in a decoding unit 1104. A post-processing unit 1105 performs processing such as downmixing 5.1ch sound to 2ch and muting the audio output. Data are output through output terminal 1106 to the loudspeaker of the display apparatus, and through output terminal 1107 to the audio amplifier.
In the decoding unit 1104, the decoding execution timing is calculated by comparing the PTS value read from the audio signal with the STC (System Time Clock) value managed by the control unit 320.
Alternatively, the offset adding unit 1102 may be omitted, and in the decoding unit 1104 the offset amount read by the control unit 320 may be added to the PTS value or the STC value, shifting the decoding execution timing by the offset amount.
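The two offset placements just described can be sketched as follows (a minimal model assuming 90 kHz timestamps; the constant and function names are illustrative):

```python
AUDIO_OFFSET_TICKS = 9_000  # e.g. 100 ms at 90 kHz, chosen by the control unit

def decode_due_offset_on_pts(pts: int, stc: int,
                             offset: int = AUDIO_OFFSET_TICKS) -> bool:
    """Variant 1: the offset is added to the PTS before the PTS/STC
    comparison, as done by the offset adding unit."""
    return stc >= pts + offset

def decode_due_offset_on_stc(pts: int, stc: int,
                             offset: int = AUDIO_OFFSET_TICKS) -> bool:
    """Variant 2: equivalently, the decoder subtracts the offset from
    the STC value it reads."""
    return stc - offset >= pts

# Both variants postpone the audio decode timing by the same amount.
for stc in (0, 9_999, 10_000, 50_000):
    assert decode_due_offset_on_pts(1_000, stc) == decode_due_offset_on_stc(1_000, stc)
```

Either placement shifts only the audio timing, leaving the video path untouched, which is why the offset can compensate arbitrary downstream video delays.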
<Explanation of the delay>
The delay is described below.
In the above structural example, there are two methods for synchronizing the video output and the audio output: delaying in delay units 307 and 308 of Fig. 3 inside the receiving apparatus 3, and controlling the decoding timing of the audio data by the offset adding unit 1102 of Fig. 11. With the former method, the delay of the data output to the display apparatus 4 and of the data output by the optical output unit 313 can be set individually, but memory corresponding to the delay amount must be provided (not shown), which incurs cost. With the latter method, every output is equally affected by the delay.
As a method for synchronizing video output and audio output, the delay unit 308 of Fig. 3 can be used when decoding involves conversion to PCM. On the other hand, when a bitstream in a data format such as MPEG-AAC (Advanced Audio Coding) is output as-is without conversion (for example, an output format also called AAC output, or ES (Elementary Stream) output), delay unit 308 cannot be used.
Here, considering that the delay amount varies with the wireless transfer unit in use, it is difficult to determine the buffer size of delay unit 307 uniquely. It is therefore very effective to use the setting of the buffer amounts of delay units 307 and 308 and the delaying method using the offset adding unit selectively.
The total delay time until video output has the following main factors: (1) the processing time of the recorded data inside the receiving apparatus 3 (decoding delay); (2) the processing time spent on compression and decompression of the video signal in the wired signal transmitting unit 311 and the wired signal receiving unit 403, or in the wireless transfer units 9 and 10 (transmission delay); (3) the processing time in the picture quality adjustment unit 405 inside the display apparatus 4 (display control delay). These processing delay amounts are described below.
(1) The video signal processing delay is an amount, ratio, value, etc. (hereinafter collectively referred to as the amount) determined by whether the input data output by the receiving apparatus are a digital broadcast or an analog captured video input from, for example, an external recording device. In the case of analog captured video input, further image processing is usually applied in the stage preceding the capture processing, so the delay amount tends to increase.
(2) The transmission delay between the wireless transfer units differs with the type of units used, the transmission scheme, and the compression/decompression scheme. To obtain a delay amount suited to a particular wireless transfer unit, the EDID (Extended Display Identification Data) information of the unit is obtained via, for example, the DDC (Display Data Channel), and the set delay amount is obtained as part of the EDID information. Alternatively, the names of a plurality of units (or device-specific values such as the manufacturer name or manufacturer ID) may be recorded in advance, paired with their suitable delay amounts, in the nonvolatile memory 321 inside the receiving apparatus 3; after the name of the unit in use (or a device-specific value such as the manufacturer name or manufacturer ID) is obtained via EDID, the control unit 320 refers to the information recorded in the nonvolatile memory 321 and extracts the suitable delay amount. The same effect can be obtained if the information is recorded elsewhere, for example in a built-in storage medium or removable memory.
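The per-unit delay lookup could be modeled as below (the table contents, identifiers, and fallback value are assumptions for illustration; real values would come from EDID or the nonvolatile memory of the receiving apparatus 3):

```python
# Hypothetical pairing of transfer-unit identifiers (model name, vendor
# name, or vendor ID, as read from EDID) with suitable delay amounts in
# milliseconds, mirroring the table held in nonvolatile memory 321.
UNIT_DELAY_TABLE_MS = {
    "WL-UNIT-A": 40,
    "WL-UNIT-B": 120,
}
DEFAULT_DELAY_MS = 80  # assumed fallback for unrecognized units

def transmission_delay_ms(unit_id: str) -> int:
    """Return the transmission delay suited to the connected unit."""
    return UNIT_DELAY_TABLE_MS.get(unit_id, DEFAULT_DELAY_MS)

assert transmission_delay_ms("WL-UNIT-B") == 120
assert transmission_delay_ms("UNKNOWN") == 80
```

Keeping the table in rewritable storage lets new transfer units be supported without changing the control logic.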
The wireless transfer units 9 and 10 may also notify the receiving apparatus 3 of the transmission delay amount by suitable CEC communication when multiple transmission schemes and compression/decompression schemes are used adaptively. As described above, the delay amount suited to the wireless transfer units is reflected in the audio delay time of the receiving apparatus 3.
(3) For the display control delay, for example in the case of Fig. 4(a), transfer unit 10 obtains the EDID information of the display apparatus 4, and transfer unit 9 generates the EDID information to be sent to the receiving apparatus 3 from the EDID information of the display apparatus 4 obtained from transfer unit 10 and the EDID information of transfer unit 9 itself. The receiving apparatus 3 obtains from transfer unit 9 the EDID information including the information of the display apparatus 4, the suitable delay amount is sent to the control unit 320, and the delay amount is set automatically by the receiving apparatus 3.
Although an example of using EDID information via the DDC has been described here, the above EDID information, or information similar to EDID information, may also be obtained using CEC communication.
Moreover, in the case where the display control delay amount changes according to a change in the displayed picture size, the delay amount can always be kept optimal by having the control unit 408 send information to the control unit 320 using CEC at each change, instead of using EDID information every time. Of course, EDID information via the DDC may also be used. In that case, for example, when the EDID information of the display apparatus 4 is updated because the display control delay amount changes with the displayed picture size, the display apparatus 4 notifies transfer unit 10 of this update via the DDC. Transfer unit 10 obtains the EDID information of the display apparatus 4, and transfer unit 9 generates the EDID information to be sent to the receiving apparatus 3 from the EDID information of the display apparatus 4 obtained from transfer unit 10 and the EDID information of transfer unit 9. The receiving apparatus 3 obtains via transfer unit 9 the EDID information including the information of the display apparatus 4. In the control unit 320, the display control delay information sent in a fixed format and the transmission delay information are distinguished from each other.
From the combination of the above factors, namely the decoding delay, the transmission delay, and the display control delay, the control unit 320 decides the audio delay amount to be set by the receiving apparatus 3.
For example, considering that the memory capacity prepared for the delay in delay units 307 and 308 has an upper limit for reasons such as cost, the control unit 320 may control the audio delay such that, when the decided audio delay amount is at or above a certain threshold, the delay is applied by the offset adding unit, and when it is below the threshold, it is set in delay units 307 and 308.
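Combining the three delay factors and choosing between the delay buffers and the PTS offset, as just described, might look like this (the threshold value and units are illustrative assumptions):

```python
THRESHOLD_MS = 100  # assumed buffer capacity limit of delay units 307/308

def plan_audio_delay(decode_ms: int, transmit_ms: int, display_ms: int):
    """Decide how the total video delay should be mirrored on the audio
    side: delays at or above the threshold go to the PTS offset unit,
    smaller ones to the delay buffers."""
    total = decode_ms + transmit_ms + display_ms
    if total >= THRESHOLD_MS:
        return ("pts_offset", total)
    return ("delay_buffer", total)

assert plan_audio_delay(20, 30, 40) == ("delay_buffer", 90)
assert plan_audio_delay(40, 120, 60) == ("pts_offset", 220)
```

Routing large delays to the offset unit keeps the buffer memory, and therefore cost, bounded regardless of how slow the video path is.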
When the wired transmitting unit 311 and the wired receiving unit 403 do not include delay units 506 and 605 internally, the processing times of the video signal compression unit 503 and the video signal decompression unit 604 are added to the above transmission delay.
Similarly, when the wireless transfer units 9 and 10 do not include delay units 805 and 905 internally, the processing times of the video signal compression unit 804 and the video signal decompression unit 904 are added to the above transmission delay.
When the optical output format is ES output, the above delay amount is set in the offset adding unit. In that case, as a result of setting the delay amount to synchronize the video output and the optical audio output, it may not be possible to synchronize the video output with the output from the audio output unit 407. In such a case, for example, the control unit 320 may set audio mute in the post-processing unit 1105 and automatically stop the output of the audio output unit 407.
On the other hand, when the output format is PCM, settings can be made in both the offset adding unit and delay unit 308; if the buffer amounts of delay units 307 and 308 are set identically, the output timings of the audio output unit 407 and the audio amplifier can be synchronized.
As described above, by selectively using the delay units and the offset adding method as the audio delay method according to the transmission delay amount, the synchronization of video and audio can be adjusted automatically, without the user having to adjust it from scratch. Furthermore, synchronization can also be established for video/audio synchronization deviations that arise from causes other than the video transfer rate.
<Description of Processing>
Processing examples for the automatic delay setting in the above configuration will now be described. In the following processing examples, it is assumed as a premise that wireless transmission is preferred as the data transfer mode from the receiving apparatus 3 to the display apparatus 4.
Fig. 12 shows an example of the delay-amount setting processing performed at start-up.
In step (hereinafter abbreviated to S) 1201, it is detected whether a wireless transmission unit is connected. If the detection succeeds, the processing advances to S1202; if not, to S1204. In S1202, the device ID of the detected wireless transmission unit is obtained; the wireless transmission delay amount is determined from this device ID. In S1203, if the ID is recognized normally, the processing advances to S1206; if recognition fails, to S1204. In S1204, a wired connection is detected; if the detection succeeds, the processing advances to S1206, otherwise to S1205. In S1205, since no connection between the receiving apparatus 3 and the display apparatus 4 could be established, a time-out or the like is set, a message asking the user to check the connection status is displayed on the display apparatus 4, for example, and the processing ends.
Since a connection has been established, video and audio data are input in S1206. At this stage, all the parameters that determine the audio delay amount, such as the display image size and the optical output setting, can be obtained. Since the elements that determine the delay amount are all available through the processing up to S1206, the delay amount is set using these values in S1207 and the processing advances to S1208. In S1208, audio decoding is started, the audio output mute is released to perform audio output, and the processing ends.
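The start-up flow of Fig. 12 can be sketched as follows (all names are illustrative; the real device also gathers display size and optical-output parameters at S1206, which is omitted here):

```python
def startup_delay_setup(wireless_detected, device_id, known_delays,
                        wired_detected):
    """Sketch of Fig. 12 (S1201-S1208).

    Returns the wireless transmission delay to use, 0 for a wired
    connection, or None when no connection can be established (the
    real device would then display a message, S1205)."""
    if wireless_detected:                        # S1201
        delay = known_delays.get(device_id)      # S1202/S1203
        if delay is not None:
            return delay                         # -> S1206
    if wired_detected:                           # S1204
        return 0                                 # -> S1206
    return None                                  # S1205
```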
The above embodiment gives priority to the wireless connection when both the wired and wireless connections are available; conversely, the wired connection may be given priority instead. The data transfer mode from the receiving apparatus 3 to the display apparatus 4 may also be selected by the user, for example via a setting menu, so that only the selected connection method is used rather than the preferred one.
Fig. 13 shows an example of the processing in which the user changes the delay setting, for example fine-tuning the delay time from an operation menu.
In S1301, the audio output is muted and the decoding processing is stopped, stopping audio output. Then, in S1302, the delay setting is performed in the same way as in S1207. In S1303, decoding is restarted and the audio mute is released, restarting audio output, and the processing ends.
Fig. 14 shows a detailed processing example of the delay setting in S1207 and S1302.
In S1401, delay sections 307 and 308 are set as required, and the processing advances to S1402. In S1402, the offset assigning section 1102 is set as required, and the processing advances to S1403. In S1403, the speaker output is set and the processing ends.
Fig. 15 shows a detailed processing example of S1401, i.e. the processing that determines the delay buffer amounts of delay sections 307 and 308.
In S1501, if the data transfer mode between the receiving apparatus 3 and the display apparatus 4 is wireless, the processing advances to S1502; if wired, to S1503. In S1502, the processing time of the wireless transmission mode is added to the delay amount. In S1503, if the optical output of the audio signal is enabled, the processing advances to S1505; if disabled, to S1504. The enabled/disabled setting of the optical output may simply be made by the user, for example via a menu. In S1504, the delay time (buffer amount) of delay section 307, which is connected to the display apparatus 4, is set and the processing ends; the set value is the sum of the delay times calculated for each processing section up to the video output. In S1505, if the input is a digital signal such as a broadcast reception, the processing advances to S1506; if it is an analog signal that is digitally converted, such as sampled data from external equipment, to S1507. In S1506, if the data format of the optical audio output is the AAC output format, the setting of delay section 308 is not applicable and the processing ends; if it is the PCM output format, the processing advances to S1507. In S1507, for the PCM output format, the delay time (buffer amount) of the enabled delay section 308 is set and the processing ends.
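A sketch of the Fig. 15 decision tree, under illustrative names and units (the actual buffer values are sums over the device's internal processing sections):

```python
def delay_buffer_settings(is_wireless, wireless_ms, optical_out,
                          digital_source, optical_format, video_path_ms):
    """Sketch of Fig. 15 (S1501-S1507).

    Returns which delay section to configure and its buffer time:
    section 307 feeds the set's own speakers (no optical output),
    section 308 the PCM optical output; digital AAC pass-through
    uses neither, since that stream cannot be re-buffered here."""
    base = wireless_ms if is_wireless else 0           # S1501/S1502
    if not optical_out:                                # S1503
        return {"delay_307_ms": base + video_path_ms}  # S1504
    if digital_source and optical_format == "AAC":     # S1505/S1506
        return {}                                      # 308 stays unset
    return {"delay_308_ms": base + video_path_ms}      # S1507 (PCM)
```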
Fig. 16 shows a detailed processing example of S1402, i.e. the processing that determines the offset in the offset assigning section 1102.
In S1601, if the data transfer mode between the receiving apparatus 3 and the display apparatus 4 is wireless, the processing advances to S1602; if wired, to S1603. In S1602, the processing time of the wireless transmission mode is added to the delay amount. In S1603, if the optical output of the audio signal is enabled, the processing advances to S1604; if disabled, the processing ends. In S1604, if the data format of the optical audio output is AAC, the processing advances to S1605; if PCM, the processing ends. In S1605, if the data format output from the receiving apparatus 3 is a digital signal, the processing advances to S1606; if it is a digitally converted analog signal, the processing ends. In S1606, the control section 320 calculates the offset value to be added to the PTS from the video signal processing time of each processing section, sets it in the offset assigning section 1102, and ends the processing.
Fig. 17 shows a detailed processing example of S1403, i.e. the processing that sets the output from the audio output section 407.
In S1701, if the optical output is enabled, the processing advances to S1702; if disabled, to S1704. In S1702, if the data format output from the receiving apparatus 3 is a digital signal, the processing advances to S1703; if it is a digitally converted analog signal, to S1704. In S1703, if the data format of the optical audio output is AAC, the processing advances to S1705; if PCM, to S1704. In S1704, the output of the audio output section 407 is enabled and the processing ends. In S1705, the output of the audio output section 407 is disabled and the processing ends.
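Collapsing the Fig. 17 branches into one condition gives this sketch (names ours):

```python
def speaker_output_enabled(optical_out, digital_source, optical_format):
    """Sketch of Fig. 17 (S1701-S1705).

    The set's own audio output 407 is disabled only when a digital
    AAC stream is passed through on the optical output, since that
    path cannot be delayed into sync with the picture; in all other
    cases the output stays enabled."""
    if optical_out and digital_source and optical_format == "AAC":
        return False    # S1705
    return True         # S1704
```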
The above described a processing example in which the output of the audio output section 407 is switched automatically; alternatively, the output of the audio output section 407 may normally be kept enabled and the user allowed to use an appropriate mute function, in which case the audio output setting processing of S1403 need not be performed.
Fig. 18 shows a processing example for maintaining the synchronization of video and audio output when the wireless connection between the receiving apparatus 3 and the display apparatus 4 is broken for some reason while in use.
In S1801, it is detected whether the wired connection is available; if so, the processing advances to S1803, otherwise to S1802. In S1802, the control section 408 of the display apparatus 4 detects that neither the wired nor the wireless connection has been established, a message urging the user to check the connection is displayed on the video display section 406, and the processing ends.
In S1803, the control section 320 mutes the audio output and stops the audio decoding processing, stopping audio output, and the processing advances to S1804. In S1804, the control section 320 performs the audio delay setting described above. In S1805, the control section 320 switches the connection path to the wired signal transmitting section 311 via switches 309 and 310, restarts the output of the video signal, and the processing advances to S1806. In S1806, the control section 320 starts audio output and ends the processing.
According to the processing example of Fig. 18, when the wireless transmission is interrupted, an appropriate audio delay can be set and the display restarted while maintaining the synchronization of video and audio output. Alternatively, the control section 408 may detect the interruption of the wireless connection itself and disable the output of the audio output section 407, which also realizes the stopping of audio output shown in S1802. The same processing steps apply when the user switches the transfer mode to wired, for example from a setting menu.
Fig. 19 shows a processing example for maintaining the synchronization of video and audio output when the user switches the connection between the receiving apparatus 3 and the display apparatus 4 to wireless while the wired connection is in use.
In S1901, the control section 320 mutes the audio output and stops the audio decoding processing, stopping audio output, and the processing advances to S1902. In S1902, the control section 320 detects the device ID of the detected wireless transmission unit and obtains the processing time taken by the wireless transmission. In S1903, the control section 320 performs the audio delay setting described above, and the processing advances to S1904. In S1904, the control section 320 switches the connection path to the wireless signal transmitting section 312 via switches 309 and 310 and restarts the output of the video signal, advancing to S1905. In S1905, as in S1806, audio output is started and the processing ends.
According to the processing example of Fig. 19, when switching to wireless transmission, an appropriate audio delay can be set and the display restarted while maintaining the synchronization of video and audio output. Alternatively, the control section 408 may detect the establishment of the wireless connection and disable the output of the audio output section 407, which also realizes the stopping of audio output shown in S1901. The same processing steps apply when the user switches the transfer mode to wireless, for example from a setting menu.
<Description of Screen Display>
Examples of screens displayed according to the processing examples described above will now be described.
Fig. 20 shows a display example of the menu from which the user selects the connection method between the receiving apparatus 3 and the display apparatus 4. Reference numeral 2001 denotes the video display section of the display apparatus 4. The user can select the desired connection mode from the "panel connection mode" item by using a remote control or the operation section 319. When "wired connection" is selected, the control section 320 applies the audio delay setting for the wired connection; when "wireless connection" is selected, it applies the audio delay setting for the wireless connection.
Fig. 21 shows a display example of the menu from which the user selects the data format of the optical output. The user can select the desired format for the optical output of the audio signal by using a remote control or the operation section 319. When "AAC" is selected, the optical audio output outputs the input audio signal without processing it; when "PCM" is selected, the input audio signal is pulse-code modulated before output. When AAC output is not possible for the input signal, the "optical digital audio output setting" item itself may be made unselectable, for example by graying it out (displaying it with a dot pattern).
Fig. 22 shows a display example of the menu in which the user fine-tunes the delay amounts of delay sections 307 and 308. The user can fine-tune the increase or decrease of the audio delay amount, for example in millisecond steps, by using a remote control or the operation section 319, and bring the synchronization of video and audio output to the desired setting. Fig. 22(a) shows the case where the user specifies "auto" in order to use the delay amount automatically set by the receiving apparatus 3. Fig. 22(b) shows the case where the user specifies adding a 35 ms delay to the delay amount automatically set by the receiving apparatus 3. These 35 ms can be set in millisecond steps, for example by pressing the up/down operation keys of the remote control.
When the user selects something other than "auto", the control section 320 can guide the user, from the video processing time and the audio delay amount, as to whether the fine-tuned delay amount should be adjusted up or down. Examples include the method of guiding with an arrow shown in Fig. 23 and the method of guiding with a character string shown in Fig. 24.
In addition, an audio delay amount that the user has judged appropriate for a given connection method may be recorded in the nonvolatile memory 321 and offered, from the second use onward, as an easily selectable option among the delay fine-tuning items, such as "user setting 1" or "user setting 2".
Fig. 25 shows a display example of the message displayed on the display apparatus 4 in S1205 or S1802 when the connection between the receiving apparatus 3 and the display apparatus 4 has not been established normally. Reference numeral 2501 is a message urging the user to check the connection, and 2502 is the button the user selects. After it is selected, the control section 320 clears the message. Alternatively, the button may be omitted and the message cleared automatically after being displayed for a specified time.
Embodiment 2
Consider the case where delay sections 506 and 605, or delay sections 805 and 905, are absent and the transfer times of video and audio differ in the data transmission between the receiving apparatus 3 and the display apparatus 4. In this case, if the time difference is agreed in advance for each transmission unit, the control section 320 can correct it via delay section 307 or the offset assigning section 1102. A method of obtaining the transfer-time difference automatically is shown below, for example.
In the wireless signal transmitting section 312, timestamps are added to the transmitted video and audio data at a certain timing. In the wireless signal receiving section 404, the values of the added timestamps are obtained, and their difference is calculated by the control section 408. In the case of Fig. 4(b), the calculated time difference can be corrected by setting it in delay section 410, or it can be sent to the control section 320 and corrected via delay section 307 or the offset assigning section 1102.
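The measurement in Embodiment 2 amounts to comparing per-stream transit times; a sketch under assumed names (the patent does not fix a sign convention, so we return the amount by which the audio path should be held back):

```python
def audio_correction_ms(video_ts_ms, audio_ts_ms, video_rx_ms, audio_rx_ms):
    """Sketch of Embodiment 2's automatic time-difference measurement.

    The sender stamps video and audio at a known timing; the receiver
    subtracts each stamp from its arrival time and returns how much
    longer the video took (positive: delay audio by this amount via
    delay section 410, 307, or offset assigning section 1102)."""
    video_transit = video_rx_ms - video_ts_ms
    audio_transit = audio_rx_ms - audio_ts_ms
    return video_transit - audio_transit
```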
Embodiment 3
Embodiment 1 described the method of delaying the audio output, but it is also possible, for example, to adjust the timing of the decoding processing of the video data forward or backward by assigning an offset to the DTS (Decoding Time Stamp) and the PTS before the video data is decoded. For example, when the audio output is delayed in the audio amplifier 7 at the destination of the optical output, synchronization with the video data can be achieved by setting an appropriate offset to the DTS and PTS.
Embodiment 4
Embodiment 1 used a configuration in which the video output and the audio data are handled in the display apparatus 4; now consider the case where, for example, a video output device and an audio output device exist separately. In the same way that the display control delay time is obtained from the video output device, the reproducing apparatus 3 obtains the display control delay time of the audio data from the audio output device. The control section 320 calculates the delay time by subtracting the processing delay amount taken by the audio output from the processing delay amount taken by the video output. Synchronization can thus be achieved in the same way as in Embodiment 1. The connections may be any combination of wired and wireless, all wired, or all wireless.
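The subtraction in Embodiment 4 is a one-liner; a sketch with assumed names (the clamp at zero is our addition for the case where the audio device is already the slower path, which the patent does not address here):

```python
def audio_delay_setting_ms(video_device_delay_ms: int,
                           audio_device_delay_ms: int) -> int:
    """Sketch of Embodiment 4's calculation: the control section 320
    subtracts the audio device's reported output delay from the video
    device's, and holds the audio back by the difference."""
    return max(0, video_device_delay_ms - audio_device_delay_ms)
```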
Embodiment 5
Embodiment 1 used a configuration in which the video output and the audio data are handled in the display apparatus 4; now consider the case where, for example, a video output device and audio output devices are connected in series with the reproducing apparatus 3.
Fig. 26 shows an outline of the case where the reproducing apparatus 3 is connected to a video display 14, and the video display 14 to audio outputs 15a and 15b. For the connections, a method capable of transmitting and receiving video and audio data as well as simple command codes, such as an HDMI cable, may be used. In the same way that the display control delay time is obtained from the video display 14, the reproducing apparatus 3 obtains the display control delay time of the audio data from the audio outputs 15a and 15b. Since the audio outputs 15a and 15b perform, for example, stereo sound output, they can be assumed to have the same structure, so their output delay times are equal. The control section 320 calculates the delay time by subtracting the processing delay time taken by the audio output from the video display control delay time. When no audio display control delay exists in the audio outputs 15a and 15b, the output delay time is not obtained and the same processing as in Embodiment 1 may be performed. In the case of wireless connections, the transmission processing delay time between the devices can further be added when calculating the delay time.
Fig. 27 shows the case where the reproducing apparatus 3 is connected to the display apparatus 4 via the audio amplifier 7. For the connections, a method capable of transmitting and receiving video and audio data as well as simple command codes, such as an HDMI cable, may be used. In this case, the reproducing apparatus 3 obtains the video display control delay time from the display apparatus 4 and the output delay time of the audio data from the audio amplifier 7. The control section 320 calculates the delay time by subtracting the output delay time of the audio data from the video display control delay time. When no audio display control delay exists in the audio amplifier 7, the output delay time is not obtained and the same processing as in Embodiment 1 may be performed.
The following are examples of the information exchange for obtaining the display control delay time of the audio data or of the video. In Fig. 27, the reproducing apparatus 3 first queries the audio amplifier 7 for its processing delay time, for example by CEC communication. The display apparatus 4 notifies the audio amplifier 7 of the video display control delay time, likewise by CEC communication. The audio amplifier 7 combines the information on the display control delay time of its own audio data with the information on the video display control delay time obtained from the display apparatus 4 in a specified format, and sends it to the reproducing apparatus 3. The reproducing apparatus 3 determines the set value of the audio delay amount from the processing delay times of the two devices. At this time, the display apparatus 4 may send not only the video display control delay time but also the output delay time of its audio data. In that case, the reproducing apparatus 3 can obtain the audio data output delay time of the display apparatus 4 via the audio amplifier 7, and can change the audio delay setting according to whether the audio amplifier 7 or the display apparatus 4 is selected to perform the audio output.
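The exchange above can be sketched as follows; the message format and field names are assumptions (the patent only says the delays are "combined in a specified format"), and the final subtraction mirrors the calculation of the earlier embodiments:

```python
def amplifier_delay_report(amp_audio_ms, display_video_ms,
                           display_audio_ms=None):
    """Sketch of the Fig. 27 exchange: the amplifier combines its own
    audio delay with the values obtained from the display into one
    report for the reproducing apparatus."""
    report = {"audio_ms": amp_audio_ms, "video_ms": display_video_ms}
    if display_audio_ms is not None:  # display also reported its audio delay
        report["display_audio_ms"] = display_audio_ms
    return report

def player_delay_from_report(report, use_display_audio=False):
    """The reproducing apparatus derives the audio delay setting for
    whichever device will perform the audio output."""
    audio = (report["display_audio_ms"]
             if use_display_audio and "display_audio_ms" in report
             else report["audio_ms"])
    return max(0, report["video_ms"] - audio)
```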
In another example, the audio amplifier 7 first queries the display apparatus 4 for its processing delay amount. The display apparatus 4 notifies the audio amplifier 7 of the information on the video display control delay time (combined, in a specified format, with the information on the output delay time of its audio data). The audio amplifier 7 records the obtained information in its internal memory. When the reproducing apparatus 3 then queries the audio amplifier 7 for the delay amounts, the audio amplifier 7 combines the information on the display control delay time of its own audio data with the video display control delay time (and the audio data display control delay time) obtained from the display apparatus 4 in a specified format, and sends it to the reproducing apparatus 3. The reproducing apparatus 3 determines the set value of the audio delay amount from the obtained information.
In yet another example, the display apparatus 4 notifies the audio amplifier 7 of the information on the video display control delay time (combined, in a specified format, with the information on the output delay time of its audio data) without being queried, and the audio amplifier 7 combines the information on the display control delay time of its own audio data with the video display control delay time (and the audio data display control delay time) obtained from the display apparatus 4 in a specified format and sends it to the reproducing apparatus 3. The reproducing apparatus 3 determines the set value of the audio delay amount from the obtained information.
As described above, in Fig. 27 the reproducing apparatus 3 can obtain the information of the audio amplifier 7 and the display apparatus 4 in various forms and set the audio delay amount; other similar methods are also conceivable. Likewise, in Fig. 26 the reproducing apparatus 3 can obtain the video display control delay time of the video display 14 and the audio data display control delay times of the audio outputs 15a and 15b in various forms and set the audio delay amount. Accordingly, this embodiment can achieve the synchronization of video output and audio output in the same way as Embodiment 1. The connections may be any combination of wired and wireless, all wired, or all wireless.
Embodiment 6
Combining the above embodiments: for example, even when the receiving apparatus 3 receives video and audio data from a recording device or an in-home information distribution server via a wireless transmission unit connection and a deviation occurs in the arrival timing of the video data and the audio data, the audio delay amount can be set by obtaining the transmission processing delay time incurred by the wireless transmission unit in the control section 320, achieving the synchronization of video output and audio output in the same way as Embodiment 1.

Claims (3)

1. A video/audio reproducing apparatus connected to a display apparatus via a wireless transmission unit, characterized by comprising:
a first receiving section that receives video data, audio data, and first reproduction synchronization information for reproducing the video data and the audio data synchronously;
a second receiving section that receives, from said wireless transmission unit, second reproduction synchronization information for correcting, in the synchronized reproduction, a deviation between the video data and the audio data caused by transmission processing, and receives, from said display apparatus, third reproduction synchronization information for correcting, in the synchronized reproduction, a deviation between the video data and the audio data caused by processing in the display apparatus; and
a control section that reproduces the video data and the audio data received by said first receiving section, based on the first, second, and third reproduction synchronization information received by said first receiving section and said second receiving section.
2. The video/audio reproducing apparatus according to claim 1, characterized in that:
the first reproduction synchronization information is time information included in the received data,
the second reproduction synchronization information includes data processing time information within said wireless transmission unit, and
the third reproduction synchronization information includes data processing time information within said display apparatus.
3. The video/audio reproducing apparatus according to claim 1, characterized in that:
the first reproduction synchronization information is time information included in the received data,
the second reproduction synchronization information is time information based on data processing time information within said wireless transmission unit, and
the third reproduction synchronization information is time information based on data processing time information within said display apparatus.
CN2008101809253A 2008-01-15 2008-11-18 Video/audio reproducing apparatus Expired - Fee Related CN101489140B (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP2008005167 2008-01-15
JP2008005167A JP2009171079A (en) 2008-01-15 2008-01-15 Video/audio reproducing apparatus
JP2008-005167 2008-01-15
JP2008022265 2008-02-01
JP2008-022265 2008-02-01
JP2008022265A JP2009182912A (en) 2008-02-01 2008-02-01 Video/audio reproducing apparatus

Publications (2)

Publication Number Publication Date
CN101489140A CN101489140A (en) 2009-07-22
CN101489140B true CN101489140B (en) 2011-07-06

Family

ID=40891780

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008101809253A Expired - Fee Related CN101489140B (en) 2008-01-15 2008-11-18 Video/audio reproducing apparatus

Country Status (2)

Country Link
JP (1) JP2009171079A (en)
CN (1) CN101489140B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8674935B2 (en) * 2009-10-21 2014-03-18 Qualcomm Incorporated System delay mitigation in interactive systems
JP5075279B2 (en) 2010-04-20 2012-11-21 パナソニック株式会社 I / O switching device and I / O switching method
EP2533547A1 (en) * 2011-06-10 2012-12-12 Koninklijke KPN N.V. Method and system for providing a synchronised user experience from multiple modules
JP2014127213A (en) * 2012-12-25 2014-07-07 Pioneer Electronic Corp Synchronous reproduction control device, and synchronous reproduction control method

Also Published As

Publication number Publication date
JP2009171079A (en) 2009-07-30
CN101489140A (en) 2009-07-22

Similar Documents

Publication Publication Date Title
EP2081373A1 (en) Video/audio reproducing apparatus
CN101448118B (en) Audiovisual (av) device and control method thereof
US9826264B2 (en) Apparatus, systems and methods to synchronize communication of content to a presentation device and a mobile device
EP1936990A2 (en) Digital broadcast receiving apparatus and synchronization method
EP1956848A2 (en) Image information transmission system, image information transmitting apparatus, image information receiving apparatus, image information transmission method, image information transmitting method, and image information receiving method
EP2434757B1 (en) Video playback system and video playback method
JP2006019890A (en) Video signal receiver and receiving method
JP2006345169A (en) Digital television receiving terminal device
WO2015173975A1 (en) Reception apparatus, transmission apparatus, and data processing method
CN101489140B (en) Video/audio reproducing apparatus
JP4488045B2 (en) Receiving apparatus and receiving method
JP2009182912A (en) Video/audio reproducing apparatus
JP5060649B1 (en) Information reproducing apparatus and information reproducing method
JP4837018B2 (en) AV equipment
JP2007235519A (en) Method and system for video sound synchronization
US20070035668A1 (en) Method of routing an audio/video signal from a television's internal tuner to a remote device
JP4738251B2 (en) Synchronous automatic adjustment device
JP4270136B2 (en) Video / audio receiver and television receiver
JP4403667B2 (en) Program output method when using network and receiving apparatus having network connection function
JP5794847B2 (en) Recording apparatus and recording system
JP2003153124A (en) Video/audio receiver and television receiver
JP4551724B2 (en) Broadcast receiving apparatus, broadcast receiving system, broadcast receiving apparatus control method, and broadcast receiving method
JP2020145682A (en) Signal processing device
JP2008042516A (en) Video content display system and video display device
JP5248663B2 (en) Interface method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: HITACHI LTD.

Free format text: FORMER OWNER: HITACHI,LTD.

Effective date: 20130816

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20130816

Address after: Tokyo, Japan

Patentee after: Hitachi Consumer Electronics Co.,Ltd.

Address before: Tokyo, Japan

Patentee before: Hitachi, Ltd.

ASS Succession or assignment of patent right

Owner name: HITACHI MAXELL LTD.

Free format text: FORMER OWNER: HITACHI LTD.

Effective date: 20150302

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20150302

Address after: Osaka Japan

Patentee after: Hitachi Maxell, Ltd.

Address before: Tokyo, Japan

Patentee before: Hitachi Consumer Electronics Co.,Ltd.

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20110706

Termination date: 20171118

CF01 Termination of patent right due to non-payment of annual fee