US20160100156A1 - Smart Audio and Video Capture Systems for Data Processing Systems - Google Patents
- Publication number
- US20160100156A1 (application US14/968,225)
- Authority
- US
- United States
- Prior art keywords
- portable device
- video
- image
- orientation
- cameras
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H04N13/0296—
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/296—Synchronisation thereof; Control thereof
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04B—TRANSMISSION
- H04B3/00—Line transmission systems
- H04B3/02—Details
- H04B3/20—Reducing echo effects or singing; Opening or closing transmitting path; Conditioning for transmission in one direction or the other
-
- H04N13/0242—
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/239—Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/243—Image signal generators using stereoscopic image cameras using three or more 2D image sensors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/141—Systems for two-way working between two video terminals, e.g. videophone
- H04N7/142—Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/15—Conference systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R3/005—Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/141—Systems for two-way working between two video terminals, e.g. videophone
- H04N7/142—Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
- H04N2007/145—Handheld terminals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2499/00—Aspects covered by H04R or H04S not otherwise provided for in their subgroups
- H04R2499/10—General applications
- H04R2499/11—Transducers incorporated or for use in hand-held devices, e.g. mobile phones, PDA's, camera's
Definitions
- a tablet has a sound recording system which enables the tablet to record sound, for example to enable voice communications or media applications.
- the digital data converted by a microphone in this recording system is used for various purposes, such as recognition, coding, and transmission. Since the sound environment includes noise, the recorded target sound in the microphone is enhanced or separated from the noise in order to obtain clean sound.
- Some tablets may also have a three dimensional (3D) video camera feature, which can be used to implement 3D video conferencing with other tablet or device users.
- the disclosure includes a computation system comprising an orientation detection device configured to detect position information comprising a position and an orientation of the computation system, a multi-sensor system coupled to the orientation detection device, wherein the multi-sensor system is configured to capture environmental input data, wherein the multi-sensor system comprises at least one of an audio capturing system and a three-dimensional (3D) image capturing system, and wherein the environmental input data comprises at least one of audio and an image, and at least one signal processing component coupled to the orientation detection device and to the multi-sensor system, wherein the at least one signal processing component is configured to modify the captured environmental input data based on the position information.
- a computation system comprising an orientation detection device configured to detect position information comprising a position and an orientation of the computation system, a multi-sensor system coupled to the orientation detection device, wherein the multi-sensor system is configured to capture environmental input data, wherein the multi-sensor system comprises at least one of an audio capturing system and a three-dimensional (3D) image capturing system, and wherein the environmental input data comprises at least one of audio and an image,
- the disclosure includes a sound recording system comprising a direction of arrival (DOA) estimation component coupled to one or more microphones and configured to estimate DOA for a detected sound signal using received orientation information, a noise reduction component coupled to the DOA estimation component and configured to reduce noise in the detected sound signal using the DOA estimation, and a de-reverberation component coupled to the noise reduction component and the DOA estimation component and configured to remove reverberation effects in the detected sound signal using the DOA estimation.
- DOA direction of arrival
- the disclosure includes a three-dimensional (3D) video capturing system comprising a camera configuration device coupled to at least two cameras and configured to arrange at least some of the cameras to properly capture one of a 3D video and a 3D image based on detected orientation information for the 3D video capturing system, and an orientation detection device coupled to the camera configuration device and configured to detect the orientation information.
- 3D three-dimensional
- the disclosure includes a sound recording method implemented on a portable device, comprising detecting an orientation of the portable device, adjusting a microphone array device based on the detected orientation, recording a sound signal using the adjusted microphone array device, and estimating a direction of arrival (DOA) for the sound signal based on the detected orientation.
- a sound recording method implemented on a portable device, comprising detecting an orientation of the portable device, adjusting a microphone array device based on the detected orientation, recording a sound signal using the adjusted microphone array device, and estimating a direction of arrival (DOA) for the sound signal based on the detected orientation.
- DOA direction of arrival
- the disclosure includes a three-dimensional (3D) video capturing method implemented on a portable device, comprising detecting an orientation of the portable device, configuring a plurality of cameras based on the detected orientation, and capturing a video or image using the configured cameras.
- 3D three-dimensional
- FIG. 1 is a schematic diagram of a tablet design.
- FIG. 2 is a schematic diagram of a sound recording system.
- FIG. 3 is a schematic diagram of a signal processing component.
- FIG. 4 is a schematic diagram of an embodiment of an improved tablet design.
- FIG. 5 is a schematic diagram of an embodiment of an improved sound recording system.
- FIG. 6 is a schematic diagram of an embodiment of an improved signal processing component.
- FIG. 7 is a schematic diagram of an embodiment of an improved 3D video capturing system.
- FIG. 8 is a flowchart of an embodiment of an improved sound recording method.
- FIG. 9 is a flowchart of an embodiment of an improved 3D video capturing method.
- FIG. 10 is a schematic diagram of an embodiment of a general-purpose computer system.
- Emerging and future tablets may include advanced microphone arrays that may be integrated into the tablets to provide better recorded sound quality, e.g., with higher signal to noise ratio (SNR).
- the advanced microphone array devices may be used instead of currently used omni-directional microphones for detecting target sounds.
- the microphone array may be more adaptable to the direction of the incoming sound, and hence may have better noise cancellation properties.
- One approach to implement the microphone array may be to emphasize a target sound by using a phase difference of sound signals received by the microphones in the array based on a direction of a sound source and a distance between the microphones, and hence suppress noise. Different algorithms may be used to achieve this.
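The phase-difference relationship described here is simple to state. As a minimal sketch (the function name and the broadside-zero angle convention are illustrative assumptions, not part of the disclosure):

```python
import math

def intermic_delay(spacing_m, doa_deg, c=343.0):
    """Time difference of arrival between adjacent microphones of a
    uniform linear array for a plane wave arriving from doa_deg
    (0 degrees = broadside). A beamformer compensates this delay to
    emphasize the target sound and suppress noise from other
    directions."""
    return spacing_m * math.sin(math.radians(doa_deg)) / c
```

With c ≈ 343 m/s, microphones 0.343 m apart see a full 1 ms delay for end-fire arrival (90 degrees) and no delay for broadside arrival.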
- a Coherent Signal Subspace process which may implement a Multiple Signal Classification (MUSIC) algorithm may be used.
- This algorithm may require pre-estimating the signal direction, where the estimation error in the signal direction may substantially affect the final estimation of the process.
- Estimating the sound signal's DOA with sufficient accuracy may be needed for some applications, such as teleconferencing systems, human-computer interfaces, and hearing aids. Such applications may involve DOA estimation of a sound source in a closed room. Hence, the presence of a significant amount of reverberation from different directions may substantially degrade the performance of the DOA estimation algorithm. There may be a need to obtain a more reliable pre-estimated DOA that locates a speaker in a reverberant room. Further, an improved estimated DOA may improve noise cancellation, since the noise source may have a different direction than the target sound.
- MUSIC Multiple Signal Classification
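As one concrete illustration of the kind of subspace DOA estimator referenced above, the following is a compact MUSIC sketch for a narrowband uniform linear array; all names, the snapshot model, and the angle grid are assumptions of this sketch rather than the patented method:

```python
import numpy as np

def music_doa(snapshots, n_sources, mic_spacing, freq, c=343.0,
              angles=np.linspace(-90, 90, 181)):
    """Estimate a DOA via the MUSIC pseudospectrum for a uniform linear
    microphone array. `snapshots` is (n_mics, n_snapshots) complex data."""
    n_mics = snapshots.shape[0]
    # Sample covariance of the array snapshots.
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    # Eigenvectors come back sorted by ascending eigenvalue; the smallest
    # (n_mics - n_sources) of them span the noise subspace.
    _, vecs = np.linalg.eigh(R)
    En = vecs[:, : n_mics - n_sources]
    spectrum = []
    for theta in np.deg2rad(angles):
        # Steering vector for a plane wave arriving from angle theta.
        a = np.exp(-2j * np.pi * freq * mic_spacing *
                   np.arange(n_mics) * np.sin(theta) / c)
        # MUSIC pseudospectrum: large where a(theta) is nearly
        # orthogonal to the noise subspace.
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return angles[np.argmax(spectrum)]
```

In practice, a pre-estimate of the signal direction (e.g., from the tablet's orientation) can narrow the scanned angle grid, reducing both cost and the impact of spurious reverberant peaks.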
- Another important scenario that may need attention is estimating or identifying the user's face position with respect to a tablet's 3D video camera system. For example, when the user participates in a 3D video conferencing with another user using the tablet, the user may not hold the tablet in a designated proper position or the orientation of the tablet may be unknown to the 3D video camera system. Current 3D video camera enabled tablets in the market may not have the ability to capture a correct 3D video or image when the tablet is not held in the proper position.
- a position aware system and a camera configuration system which uses position or orientation information to adaptively configure the 3D cameras of the system to capture correct 3D video/images may be needed.
- the systems may be configured to detect and obtain the tablet's orientation or position information and use this information to enhance the performance of a sound recording sub-system and/or a 3D video capture sub-system in the tablet.
- position information and orientation information are used herein interchangeably to indicate the orientation and/or tilting (e.g., in degrees) of the tablet, for instance with respect to a designated position, such as a horizontal alignment of the tablet.
- the systems may comprise an orientation detection device, a microphone adjusting device, a camera configuration device, a sub-system of sound recording, a sub-system of 3D video capturing, or combinations thereof.
- the orientation detection device may be used to generate position/orientation of the tablet, which may be used by the microphone adjusting device and/or the camera configuration device.
- the microphone adjusting device may use this information to adjust the sensing angle in the microphone(s) and align the angle to the direction of the target sound.
- the position/orientation information may also be used to implement signal processing schemes in the sound recording sub-system.
- the video configuration device may use this information to re-arrange the cameras for capturing video/image.
- the information may also be used to implement corresponding processes in the 3D video capturing sub-system to obtain the correct 3D video or image.
- FIG. 1 illustrates an embodiment of a tablet design 100 for a tablet 101 .
- the tablet 101 may be any portable computation device characterized by a flat screen on one side of the tablet's housing.
- the display screen may be used for viewing and may also be a touch screen used for typing.
- the tablet 101 may not require connecting separate interface devices for basic operations, which may not be the case for a desktop computer.
- the tablet 101 may be a fixed device that is not foldable or that does not require mechanical operation, such as in the case of a laptop.
- the tablet 101 may offer fewer features/functions than other types of computation devices (e.g., laptops) and have lower pricing and cost.
- the tablet 101 may also have lighter weight and may be more portable friendly.
- the tablet 101 may be different than other communication devices, such as smartphones, in that the tablet 101 may be larger in size, offer more computation power and functions, and/or may not necessarily be equipped with a cellular interface.
- the tablet 101 may have similar features to at least some currently available tablets, also referred to as pads, in the market, such as the Apple iPad, the Hewlett-Packard (HP) Slate tablet, the Samsung Galaxy tablet, the Lenovo IdeaPad, the Dell Latitude tablet, and other tablets or pads.
- the tablet design 100 may have a relatively small thickness with respect to its width or length and a flat display screen (e.g., touch screen) on one side of the tablet 101 .
- the top and bottom edges of the tablet 101 may be wider than the remaining (side) edges of the tablet 101.
- the length of the top and bottom edges may correspond to the length of the tablet 101, and the length of the side edges may correspond to the width of the tablet 101.
- the display screen may comprise a substantial area of the total surface of the tablet 101 .
- the tablet design 100 may also comprise a microphone 102, e.g., on one edge of the tablet 101 around the screen, and typically one or two cameras 104, e.g., on another edge of the tablet 101, as shown in FIG. 1.
- the microphone 102 may be an omni-directional microphone or a microphone array device that is part of an audio recording system of the tablet 101 for receiving user's voice, enabling voice communications, sound recording, communications, or combinations thereof.
- the cameras 104 may be part of a video capturing system of the tablet 101 for shooting images or video, enabling video conferencing or calling, or both.
- the cameras 104 may be 3D cameras and the video capturing system may be a 3D video capturing system that captures 3D images or video.
- a 3D camera is a single device that is capable of capturing both “RGB” information and 3D information.
- at least two cameras 104 may be needed to capture two frames (at about the same time) for the same image from different perspectives. The two frames may then be processed according to a 3D processing scheme to render a 3D like image. The same concept may be applied for 3D video capturing.
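One simple instance of such a 3D processing scheme is a red-cyan anaglyph; the sketch below (names illustrative, not from the disclosure) shows the idea of merging two frames captured from offset perspectives into one 3D-like image:

```python
import numpy as np

def anaglyph(left, right):
    """Merge two RGB frames from horizontally offset cameras into a
    red-cyan anaglyph: red from the left view, green/blue from the
    right view. Both inputs are (H, W, 3) uint8 arrays."""
    out = np.empty_like(left)
    out[..., 0] = left[..., 0]     # red channel from the left frame
    out[..., 1:] = right[..., 1:]  # green and blue from the right frame
    return out
```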
- the audio recording system may be optimized according to one designated orientation of the tablet 101 .
- the audio recording system may be optimized for an upright position of the tablet 101, as shown in FIG. 1(a).
- the microphone 102 may be positioned at the bottom edge of the tablet 101 (e.g., around the center of the bottom edge).
- the target sound or user's voice detected by the microphone 102 may be properly processed by the audio recording system to remove any noise.
- the microphone 102 may receive the user's voice or any target sound in addition to noise, e.g., from other sources around the user or the target sound.
- the audio recording system may then account for the noise assuming that the tablet 101 is held or positioned in the proper orientation (upright position) and that the microphone 102 is located in the proper location accordingly (at the bottom edge).
- the microphone 102 may not be located anymore in the proper location (e.g., with respect to the sound target) and hence the audio recording system (that assumes an upright positioning of the tablet 101 ) may not properly process the detected sound/voice and accompanying noise.
- the output of the audio recording system may not be optimized. For example, in a voice calling scenario, the communicated user voice may still include substantial noise or may not be clear to the receiver on the other side.
- the 3D video capturing system may be optimized according to a selected orientation of the tablet 101, such as the upright position of FIG. 1(a), where the two cameras 104 may be positioned at the top edge of the tablet 101 (e.g., around the center of the top edge).
- the video or image captured by the cameras 104 may be properly processed by the 3D video capturing system to properly generate 3D like scenes.
- the 3D video capturing system may process the captured frames by accounting for the corresponding positioning of the cameras 104 (at the top edge), assuming that the tablet 101 is held or positioned in the proper orientation (upright position).
- the cameras 104 may not be located anymore in the proper location (e.g., with respect to the target image/video), and hence the 3D video recording system (that assumes an upright positioning of the tablet 101 ) may not properly process the captured video/image.
- the output of the 3D video capturing system may not be optimized. For example, in a video conferencing scenario, the communicated user 3D video may not be clear to the viewer on the other side.
- FIG. 2 illustrates an embodiment of a sound recording system 200 , which may be used in the tablet 101 based on the tablet design 100 .
- the sound recording system 200 may comprise a microphone 201 , a signal processing device 202 coupled to the microphone 201 , and at least one additional processing component 203 for further signal processing coupled to the signal processing device 202 .
- the components of the sound recording system 200 may be arranged as shown in FIG. 2 , and may be implemented using hardware, software, or combinations of both.
- the microphone 201 may correspond to the microphone 102 .
- the signal processing device 202 may be configured to receive the detected sound/audio from the microphone 201 as input, process the sound/audio, e.g., to cancel or suppress noise, and send a processed (clean) sound as output to the additional processing component(s) 203 .
- the processes of the signal processing device 202 may include but are not limited to noise reduction and de-reverberation.
- the additional processing component(s) 203 may be configured to receive the clean sound as input, further process the clean sound, e.g., to implement sound recognition, encoding, and/or transmission, and accordingly provide digital sound data as output.
- FIG. 3 illustrates an embodiment of a signal processing component 300 , which may be used in the tablet 101 based on the tablet design 100 .
- the signal processing component 300 may correspond to the signal processing device 202 of the sound recording system 200.
- the signal processing component 300 may comprise a noise reduction block 301 and a de-reverberation block 302 coupled to the noise reduction block 301 .
- the components of the signal processing component 300 may be arranged as shown in FIG. 3 , and may be implemented using hardware, software, or combinations of both.
- the noise reduction block 301 may be configured to receive the collected sound (e.g., from the microphone 201 ) signal possibly with noise and/or reverberation effect, process the sound signal to reduce or eliminate noise, and then forward the processed signal to the de-reverberation block 302 .
- the de-reverberation block 302 may be configured to receive the processed signal from the noise reduction block 301 , further process the sound signal to cancel or reduce any reverberation effect in the sound, and then forward a clean sound as output.
- FIG. 4 illustrates an embodiment of an improved tablet design 400 for a tablet 401 .
- the tablet 401 may be any portable computation device characterized by a flat screen on one side of the tablet's housing.
- the components of the tablet 401 may be configured similar to the corresponding components of the tablet 101 , including a screen that may be a touch screen.
- the tablet 401 may also comprise a microphone 402 , e.g., on one edge of the tablet 401 around the screen.
- the microphone 402 may be a microphone array device, which may comprise a plurality of microphones arranged in an array configuration.
- the tablet 401 may also comprise at least two cameras 404 , which may be 3D cameras for capturing 3D video/image(s).
- the cameras 404 may be positioned on one or different edges of the tablet 401 .
- the tablet 401 may comprise about four cameras 404, which may each be located on one of the four edges of the tablet 401. Distributing the cameras 404 along different edges of the tablet 401 may allow considering different positioning/orientation of the tablet 401 when capturing video/images and hence better 3D video/image processing according to positioning/orientation.
- the components of the tablet 401 may be arranged as shown in FIG. 4(a), which may correspond to one possible position (e.g., upright position) for holding and operating the tablet 401.
- FIGS. 4(b), (c), and (d) show other possible orientations for holding or operating the tablet 401, at 90 degrees, 180 degrees, and 270 degrees, respectively, from the orientation of FIG. 4(a).
- the positions of the microphone 402 and the cameras 404 relative to a fixed target, such as the user's face, may then differ. If typical sound/video processing schemes are used that assume a determined direction of the target with respect to one designated proper orientation of the tablet, then processing the sound/video for a fixed target at different orientations of the tablet may lead to processing errors (degraded sound/video quality).
- the tablet 401 may comprise improved sound recording and/or 3D video capturing systems (not shown).
- the improved sound recording/3D video capturing systems may process the sound/video appropriately at any orientation or positioning (tilting) of the tablet 401 based on position/orientation information of the tablet 401 while recording sound and/or capturing 3D video.
- the tablet 401 may comprise an orientation detection device (not shown) that is configured to detect the position information.
- the position information may be used by a sound recording system to estimate DOA for the signal and accordingly process the sound recorded by the microphone 402. For example, the sound detected by only some of the microphones in the array, selected based on the position information, may be considered.
- the position information may be used by a 3D video capturing system to filter and process the video/image captured by the cameras 404 . For example, the video/image captured by only some of the cameras 404 selected based on the position information may be considered.
- the orientation detection device may be configured to generate orientation information, position data, and/or angle data that may be used by a microphone adjusting device (not shown) and/or a video configuration device (not shown).
- the microphone adjusting device may be configured to select the microphones in the array, or steer the sensors in the microphones, that are considered for sound processing based on the orientation information, and may be part of the sound recording system.
- the video configuration device may be configured to select or arrange the cameras 404 (e.g., direct the sensors in the cameras) for video processing consideration based on the orientation information and may be part of the 3D video capturing system.
- a position detector in the orientation detection device may detect the relative position or tilt of the tablet 401 to the ground and generate the position information data accordingly.
- the position information data may be used in the microphone adjustment device.
- the microphone adjustment device may steer accordingly a maximum sensitivity angle of the microphone array, e.g., with respect to the face or mouth of the user and/or may pass this information to a signal processing device (not shown) to conduct the signal processing process on the collected sound signals by the microphone array.
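Steering a maximum sensitivity angle can be sketched as frequency-domain delay-and-sum beamforming; this toy version (all names and the uniform-linear-array geometry are assumptions of the sketch) aligns and averages the channels:

```python
import numpy as np

def delay_and_sum(frames, steer_deg, mic_spacing, fs, c=343.0):
    """Steer a uniform linear microphone array toward steer_deg by
    delaying and summing the per-microphone signals; fractional delays
    are applied in the frequency domain. `frames` is (n_mics, n_samples)."""
    n_mics, n = frames.shape
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    theta = np.deg2rad(steer_deg)
    out = np.zeros(n)
    for m in range(n_mics):
        # Advance that aligns microphone m with microphone 0 for a
        # plane wave arriving from the steering direction.
        tau = m * mic_spacing * np.sin(theta) / c
        spec = np.fft.rfft(frames[m]) * np.exp(2j * np.pi * freqs * tau)
        out += np.fft.irfft(spec, n)
    return out / n_mics
```

Steering toward the user's mouth adds the target sound coherently while sound from other directions averages down, which is the noise suppression effect the text describes.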
- the signal processing device may be part of the sound recording system.
- the signal processing process may include noise reduction, de-reverberation, speech enhancement, and/or other sound enhancement processes.
- the position information data may also be used in a 3D video configuration device/system to select and configure at least a pair of cameras 404 for capturing 3D videos and images.
- FIG. 5 illustrates an embodiment of an improved sound recording system 500 , which may be used in the tablet 401 based on the tablet design 400 .
- the sound recording system 500 may comprise at least two microphones 501 , a signal processing device 502 coupled to the microphones 501 , and at least one additional processing component(s) 503 for further signal processing coupled to the signal processing device 502 .
- the sound recording system 500 may comprise a microphone adjustment device 505 coupled to the signal processing device 502 , and an orientation detection device 504 coupled to the microphone adjustment device 505 .
- the components of the sound recording system 500 may be arranged as shown in FIG. 5 , and may be implemented using hardware, software, or combinations of both.
- the microphones 501 may be two separate omni-directional microphones, two separate microphone arrays, or two microphones (sensors) in a microphone array. In other embodiments, the sound recording system 500 may comprise more than two separate microphones 501 , e.g., on one or different edges of the tablet.
- the input to the signal processing device 502 may comprise collected sound signals from each of the microphones 501 and position information data from the microphone adjustment device 505 .
- the orientation detection device 504 may comprise an accelerometer and/or orientation/rotation detection device configured to provide orientation/rotation information. The orientation/rotation information may be detected with respect to a designated position or orientation of the tablet, such as with respect to the horizontal plane. Additionally or alternatively, the orientation detection device 504 may comprise face/mouth recognition devices that may be used to estimate position/orientation information of the tablet with respect to the user.
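A minimal sketch of turning accelerometer readings into the quarter-turn orientation used downstream (the axis convention and the function name are assumptions of this sketch):

```python
import math

def orientation_from_accel(ax, ay):
    """Map the gravity components measured along the tablet's
    screen-plane x/y axes to the nearest screen orientation
    (0, 90, 180, or 270 degrees)."""
    angle = math.degrees(math.atan2(ax, ay)) % 360.0
    # Snap to the nearest quarter turn.
    return int(round(angle / 90.0)) % 4 * 90
```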
- the position information data from the orientation detection device 504 may be sent to the microphone adjustment device 505 , which may be configured to steer a maximum sensitivity angle of the microphones 501 (or microphone arrays).
- the microphones 501 may be steered so that the mouth of the user is aligned within the maximum sensitivity angle, and thus better align detection with the direction of incoming sound signal and away from noise sources.
- the microphone adjustment device 505 may send the position information data to the signal processing device 502 .
- the signal processing device 502 may implement noise reduction/de-reverberation processes using the position information data to obtain clean sound. Additionally, the signal processing device 502 may implement DOA estimation for sound, as described further below.
- the clean sound may then be sent to the additional processing component(s) 503 , which may be configured to implement signal recognition, encoding, and/or transmission.
- FIG. 6 illustrates an embodiment of an improved signal processing component 600 , which may be used in the tablet 401 based on the tablet design 400 .
- the signal processing component 600 may correspond to the signal processing device 502 of the sound recording system 500 .
- the signal processing component 600 may comprise a noise reduction block 601 , a de-reverberation block 602 coupled to the noise reduction block 601 , and a DOA estimation block 603 coupled to both the noise reduction block 601 and the de-reverberation block 602 .
- the components of the signal processing component 600 may be arranged as shown in FIG. 6 , and may be implemented using hardware, software, or combinations of both.
- the DOA estimation block 603 may be configured to receive the collected sound possibly with noise from each microphone (e.g., microphones 501 ) and implement DOA based on received position information (e.g., from the orientation detection device 504 and/or the microphone adjustment device 505 ).
- the position information data may be used by the DOA estimation block 603 to estimate a DOA for the incoming sound signal.
- the DOA estimation may be achieved using DOA estimation algorithms, such as the MUSIC algorithm.
- the output of the DOA estimation block 603 (DOA estimation information) may be sent as input to each of the noise reduction block 601 and the de-reverberation block 602 to achieve improved noise reduction and de-reverberation, respectively, based on the DOA information.
- the collected signal from each of the microphones may also be sent to the noise reduction block 601 , where the noise reduction process may be performed using the DOA information.
- the noise reduction block 601 may forward the processed signal to the de-reverberation block 602 , which may further process the sound signal to cancel or reduce any reverberation effect in the sound using the DOA information, and then forward a clean sound as output.
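The dataflow through blocks 601-603 can be sketched as follows; the three stage functions are deliberately trivial placeholders (an implementation would substitute real beamforming, spectral suppression, and de-reverberation algorithms):

```python
import numpy as np

def estimate_doa(frames, tilt_deg):
    # Block 603 placeholder: offset a broadside assumption by the
    # detected tilt supplied as position information.
    return -float(tilt_deg)

def reduce_noise(frames, doa_deg):
    # Block 601 placeholder: average the channels; a real stage would
    # beamform toward doa_deg before suppressing residual noise.
    return frames.mean(axis=0)

def dereverberate(signal, doa_deg):
    # Block 602 placeholder: pass-through; a real stage would apply a
    # DOA-informed de-reverberation filter.
    return signal

def process(frames, tilt_deg):
    """Wiring of signal processing component 600: the DOA estimate
    feeds both the noise reduction and de-reverberation stages."""
    doa = estimate_doa(frames, tilt_deg)
    return dereverberate(reduce_noise(frames, doa), doa)
```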
- FIG. 7 illustrates an embodiment of a 3D video capturing system 700 , which may be used in the tablet 401 based on the tablet design 400 .
- the 3D video capturing system 700 may comprise an orientation detection device 701 , a camera configuration device 702 coupled to the orientation detection device 701 , and a plurality of cameras 703 - 706 coupled to the camera configuration device 702 .
- the cameras 703 - 706 may be, for example, 3D cameras that correspond to the cameras 404 .
- the orientation detection device 701 may be configured to provide orientation/rotation information, e.g., similar to the orientation detection device 504 .
- the orientation detection device 701 may comprise an accelerometer, another orientation/rotation detection device, a face/mouth recognition device, or combinations thereof, which may be used to estimate position/orientation information of the tablet with respect to the user.
- the orientation detection device 701 may send the estimated position information data to the camera configuration device 702, which may be configured to select a correct or appropriate pair of cameras from the cameras 703-706, e.g., according to the position information.
- the cameras may be selected with the assumption that the user is sitting in front of the camera, which may be the typical scenario or most general case for tablet users. For example, if the tablet is rotated at about 90 degrees (as shown in FIG. 4(d)) with respect to the user's face, the correct pair of selected cameras may be the cameras on the top and bottom edges (in the initial (upright) position of FIG. 4(a)).
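- the selection rule described above can be sketched as follows (a hypothetical mapping, assuming a user facing the screen and four edge cameras named after their positions in the upright orientation of FIG. 4(a)):

```python
def select_camera_pair(rotation_degrees):
    """Map a coarse rotation of the tablet (0/90/180/270 degrees from the
    upright position) to the pair of edge cameras that remains horizontally
    separated with respect to the viewer."""
    quadrant = round(rotation_degrees / 90.0) % 4
    if quadrant in (1, 3):           # rotated about 90 or 270 degrees
        return ("top", "bottom")     # top/bottom edges are now horizontal
    return ("left", "right")         # upright or upside down
```

For the FIG. 4(d) case of an approximately 90 degree rotation, this rule selects the cameras on the top and bottom edges, matching the example above.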
- FIG. 8 illustrates a flowchart of an embodiment of a sound recording method 800, which may be implemented in the tablet 401.
- the sound recording method 800 may be implemented using the sound recording system 500.
- the method 800 may begin at block 810, where a position of the tablet may be detected. The position/orientation may be detected by the orientation detection device 504.
- a microphone of the tablet may be adjusted based on the position information. For instance, the microphone adjustment device 505 may steer a maximum sensitivity angle of the microphones 501 (or microphone arrays).
- a sound signal may be recorded, e.g., by at least two microphones 501 .
- a DOA may be estimated for the signal based on the position information.
- the DOA estimation block 603 may implement an algorithm to obtain the DOA based on the position information.
- the noise in the signal may be reduced based on the DOA estimation.
- the DOA estimation may be used by the noise reduction block 601 to reduce or eliminate the noise in the signal.
- a reverberation effect in the signal may be canceled based on the DOA estimation.
- the de-reverberation block 602 may use the DOA estimation to remove the reverberation effect in the signal.
- a clean sound may be transmitted. The clean sound may result from removing noise, reverberation effect, and/or other errors in the detected sound signal.
- the method 800 may then end.
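- the flow above may be summarized by the following sketch, in which every stage is a placeholder callable (hypothetical names; a real implementation would wrap the devices of FIG. 5 and the blocks of FIG. 6):

```python
def record_clean_sound(detect_orientation, adjust_microphones, record,
                       estimate_doa, reduce_noise, dereverberate, transmit):
    """Chain the stages of method 800 in order, threading the position
    information and DOA estimate through the signal path."""
    position = detect_orientation()        # block 810: detect tablet position
    adjust_microphones(position)           # steer the microphone array
    signal = record()                      # record the sound signal
    doa = estimate_doa(signal, position)   # DOA from position information
    signal = reduce_noise(signal, doa)     # noise reduction using the DOA
    signal = dereverberate(signal, doa)    # de-reverberation using the DOA
    return transmit(signal)                # transmit the clean sound
```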
- FIG. 9 illustrates an embodiment of a 3D video capturing method 900, which may be implemented in the tablet 401.
- the 3D video capturing method 900 may be implemented using the 3D video capturing system 700.
- the method 900 may begin at block 910, where a position of the tablet may be detected. The position/orientation may be detected by the orientation detection device 701.
- a plurality of cameras may be configured based on the position information. For instance, the camera configuration device 702 may select an appropriate pair of cameras from the cameras 703-706 according to the position information.
- a video/image may be captured, e.g., by the selected cameras.
- the captured video/image may be processed using a 3D video/image processing scheme.
- a 3D video/image may be transmitted. The method 900 may then end.
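- method 900 admits the same kind of sketch (placeholder callables with hypothetical names, not an implementation taken from this disclosure):

```python
def capture_3d_video(detect_orientation, configure_cameras, capture,
                     process_3d, transmit):
    """Chain the stages of method 900: orientation first, then camera
    selection, capture, 3D processing, and transmission."""
    position = detect_orientation()        # block 910: detect tablet position
    cameras = configure_cameras(position)  # pick an appropriate camera pair
    frames = capture(cameras)              # capture raw frames
    video = process_3d(frames)             # apply a 3D video/image scheme
    return transmit(video)                 # transmit the 3D result
```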
- FIG. 10 illustrates a typical, general-purpose computer system 1000 suitable for implementing one or more embodiments of the components disclosed herein.
- the computer system 1000 includes a processor 1002 (which may be referred to as a central processor unit or CPU) that is in communication with memory devices including secondary storage 1004, read only memory (ROM) 1006, random access memory (RAM) 1008, input/output (I/O) devices 1010, and network connectivity devices 1012.
- the processor 1002 may be implemented as one or more CPU chips, or may be part of one or more application specific integrated circuits (ASICs).
- the secondary storage 1004 typically comprises one or more disk drives or tape drives and is used for non-volatile storage of data and as an overflow data storage device if RAM 1008 is not large enough to hold all working data. Secondary storage 1004 may be used to store programs that are loaded into RAM 1008 when such programs are selected for execution.
- the ROM 1006 is used to store instructions and perhaps data that are read during program execution. ROM 1006 is a non-volatile memory device that typically has a small memory capacity relative to the larger memory capacity of secondary storage 1004 .
- the RAM 1008 is used to store volatile data and perhaps to store instructions. Access to both ROM 1006 and RAM 1008 is typically faster than to secondary storage 1004 .
- R=Rl+k*(Ru-Rl), wherein k is a variable ranging from 1 percent to 100 percent with a 1 percent increment, i.e., k is 1 percent, 2 percent, 3 percent, 4 percent, 5 percent, . . . , 70 percent, 71 percent, 72 percent, . . . , 95 percent, 96 percent, 97 percent, 98 percent, 99 percent, or 100 percent.
- any numerical range defined by two R numbers as defined in the above is also specifically disclosed.
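- numerically, the increment convention above enumerates one hundred disclosed values per range; a short illustrative restatement:

```python
def disclosed_values(r_l, r_u):
    """Values R = Rl + k*(Ru - Rl) for k = 1%, 2%, ..., 100%."""
    return [r_l + (k / 100.0) * (r_u - r_l) for k in range(1, 101)]
```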
Abstract
A computation system comprising an orientation detection device configured to detect position information comprising a position and an orientation of the computation system, a multi-sensor system coupled to the orientation detection device, wherein the multi-sensor system is configured to capture environmental input data, wherein the multi-sensor system comprises at least one of an audio capturing system and a three-dimensional (3D) image capturing system, and wherein the environmental input data comprises at least one of audio and an image, and at least one signal processing component coupled to the orientation detection device and to the multi-sensor system, wherein the at least one signal processing component is configured to modify the captured environmental input data based on the position information.
Description
- The present application is a continuation of U.S. patent application Ser. No. 13/323,157, filed Dec. 12, 2011 by Jiong Zhou, et al., and entitled "Smart Audio and Video Capture Systems for Data Processing Systems," which is incorporated herein by reference as if reproduced in its entirety.
- Not applicable.
- Not applicable.
- Different manufacturers have introduced various tablets into the consumer market, such as the products released since 2010. Tablets, also referred to as personal tablets, computer tablets, or pads, such as the iPad from Apple, are portable devices that offer several advantages in documentation, email, web surfing, social activities, and personal entertainment over other types of computing devices. Generally, a tablet has a sound recording system which enables the tablet to record sound, for example to enable voice communications or media applications. The digital data converted by a microphone in this recording system is used for various purposes, such as recognition, coding, and transmission. Since the sound environment includes noise, the recorded target sound in the microphone is enhanced or separated from noise in order to obtain clean sound. Some tablets may also have a three dimensional (3D) video camera feature, which can be used to implement 3D video conferencing with other tablet or device users.
- In one embodiment, the disclosure includes a computation system comprising an orientation detection device configured to detect position information comprising a position and an orientation of the computation system, a multi-sensor system coupled to the orientation detection device, wherein the multi-sensor system is configured to capture environmental input data, wherein the multi-sensor system comprises at least one of an audio capturing system and a three-dimensional (3D) image capturing system, and wherein the environmental input data comprises at least one of audio and an image, and at least one signal processing component coupled to the orientation detection device and to the multi-sensor system, wherein the at least one signal processing component is configured to modify the captured environmental input data based on the position information.
- In another embodiment, the disclosure includes a sound recording system comprising a direction of arrival (DOA) estimation component coupled to one or more microphones and configured to estimate DOA for a detected sound signal using received orientation information, a noise reduction component coupled to the DOA estimation component and configured to reduce noise in the detected sound signal using the DOA estimation, and a de-reverberation component coupled to the noise reduction component and the DOA estimation component and configured to remove reverberation effects in the detected sound signal using the DOA estimation.
- In another embodiment, the disclosure includes a three-dimensional (3D) video capturing system comprising a camera configuration device coupled to at least two cameras and configured to arrange at least some of the cameras to properly capture one of a 3D video and a 3D image based on detected orientation information for the 3D video capturing system, and an orientation detection device coupled to the camera configuration device and configured to detect the orientation information.
- In another embodiment, the disclosure includes a sound recording method implemented on a portable device, comprising detecting an orientation of the portable device, adjusting a microphone array device based on the detected orientation, recording a sound signal using the adjusted microphone array device, and estimating a direction of arrival (DOA) for the sound signal based on the detected orientation.
- In another embodiment, the disclosure includes a three-dimensional (3D) video capturing method implemented on a portable device, comprising detecting an orientation of the portable device, configuring a plurality of cameras based on the detected orientation, and capturing a video or image using the configured cameras.
- These and other features will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings and claims.
- For a more complete understanding of this disclosure, reference is now made to the following brief description, taken in connection with the accompanying drawings and detailed description, wherein like reference numerals represent like parts.
- FIG. 1 is a schematic diagram of a tablet design.
- FIG. 2 is a schematic diagram of a sound recording system.
- FIG. 3 is a schematic diagram of a signal processing component.
- FIG. 4 is a schematic diagram of an embodiment of an improved tablet design.
- FIG. 5 is a schematic diagram of an embodiment of an improved sound recording system.
- FIG. 6 is a schematic diagram of an embodiment of an improved signal processing component.
- FIG. 7 is a schematic diagram of an embodiment of an improved 3D video capturing system.
- FIG. 8 is a flowchart of an embodiment of an improved sound recording method.
- FIG. 9 is a flowchart of an embodiment of an improved 3D video capturing method.
- FIG. 10 is a schematic diagram of an embodiment of a general-purpose computer system.
- It should be understood at the outset that although an illustrative implementation of one or more embodiments is provided below, the disclosed systems and/or methods may be implemented using any number of techniques, whether currently known or in existence. The disclosure should in no way be limited to the illustrative implementations, drawings, and techniques illustrated below, including the exemplary designs and implementations illustrated and described herein, but may be modified within the scope of the appended claims along with their full scope of equivalents.
- Emerging and future tablets may include advanced microphone arrays that may be integrated into the tablets to provide better recorded sound quality, e.g., with higher signal to noise ratio (SNR). The advanced microphone array devices may be used instead of currently used omni-directional (uni-directional) microphones for detecting target sounds. The microphone array may be more adaptable to the direction of the incoming sound, and hence may have better noise cancellation property. One approach to implement the microphone array may be to emphasize a target sound by using a phase difference of sound signals received by the microphones in the array based on a direction of a sound source and a distance between the microphones, and hence suppress noise. Different algorithms may be used to achieve this.
- For example, to enhance the received sound signal, a Coherent Signal Subspace process, which may implement a Multiple Signal Classification (MUSIC) algorithm, may be used. This algorithm may require pre-estimating the signal direction, and the estimation error in the signal direction may substantially affect the final estimation of the process. Estimating the sound signal's DOA with sufficient accuracy may be needed for some applications, such as teleconferencing systems, human-computer interfaces, and hearing aids. Such applications may involve DOA estimation of a sound source in a closed room, where the presence of a significant amount of reverberation from different directions may substantially degrade the performance of the DOA estimation algorithm. There may be a need to obtain a more reliable pre-estimated DOA that locates a speaker in a reverberant room. Further, an improved estimated DOA may improve noise cancellation, since the noise source may have a different direction than the target sound.
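- for illustration, a compact MUSIC pseudospectrum computation for a uniform linear array is sketched below; the array geometry, snapshot model, and all names are assumptions for the demonstration, not an implementation prescribed by this disclosure:

```python
import numpy as np

def music_spectrum(snapshots, n_sources, spacing_wavelengths, angles):
    """MUSIC pseudospectrum for a uniform linear array.

    snapshots: complex array of shape (n_mics, n_snapshots); peaks of the
    returned spectrum indicate likely arrival angles (radians)."""
    n_mics = snapshots.shape[0]
    cov = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
    _, eigvecs = np.linalg.eigh(cov)                            # eigenvalues ascending
    noise_sub = eigvecs[:, : n_mics - n_sources]                # noise subspace
    spectrum = []
    for theta in angles:
        steer = np.exp(-2j * np.pi * spacing_wavelengths
                       * np.arange(n_mics) * np.sin(theta))     # steering vector
        proj = noise_sub.conj().T @ steer
        spectrum.append(1.0 / np.real(proj.conj() @ proj))      # large when steer is orthogonal to noise subspace
    return np.array(spectrum)

# Simulated check: one narrowband source at 20 degrees, 8 microphones at
# half-wavelength spacing, light sensor noise.
rng = np.random.default_rng(0)
mics, spacing = 8, 0.5
truth = np.deg2rad(20.0)
steering = np.exp(-2j * np.pi * spacing * np.arange(mics) * np.sin(truth))
amplitudes = rng.standard_normal(200) + 1j * rng.standard_normal(200)
snapshots = np.outer(steering, amplitudes) + 0.01 * (
    rng.standard_normal((mics, 200)) + 1j * rng.standard_normal((mics, 200)))
grid = np.deg2rad(np.arange(-90.0, 91.0))
estimate = np.rad2deg(grid[int(np.argmax(music_spectrum(snapshots, 1, spacing, grid)))])
```

In the simulation, the pseudospectrum peaks near the simulated 20 degree source, illustrating why a reliable pre-estimate matters: the grid search is only as good as the assumed signal model.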
- Another important scenario that may need attention is estimating or identifying the user's face position with respect to a tablet's 3D video camera system. For example, when the user participates in 3D video conferencing with another user using the tablet, the user may not hold the tablet in a designated proper position, or the orientation of the tablet may be unknown to the 3D video camera system. Current 3D video camera enabled tablets in the market may not have the ability to capture a correct 3D video or image when the tablet is not held in the proper position. A position aware system and a camera configuration system which uses position or orientation information to adaptively configure the 3D cameras of the system to capture correct 3D video/images may be needed.
- Disclosed herein are systems and methods for allowing improved sound recording and 3D video/image capturing using tablets. The systems may be configured to detect and obtain the tablet's orientation or position information and use this information to enhance the performance of a sound recording sub-system and/or a 3D video capture sub-system in the tablet. The terms position information and orientation information are used herein interchangeably to indicate the orientation and/or tilting (e.g., in degrees) of the tablet, for instance with respect to a designated position, such as a horizontal alignment of the tablet. The systems may comprise an orientation detection device, a microphone adjusting device, a camera configuration device, a sub-system of sound recording, a sub-system of 3D video capturing, or combinations thereof. The orientation detection device may be used to generate position/orientation of the tablet, which may be used by the microphone adjusting device and/or the camera configuration device. The microphone adjusting device may use this information to adjust the sensing angle in the microphone(s) and align the angle to the direction of the target sound. The position/orientation information may also be used to implement signal processing schemes in the sound recording sub-system. The video configuration device may use this information to re-arrange the cameras for capturing video/image. The information may also be used to implement corresponding processes in the 3D video capturing sub-system to obtain the correct 3D video or image.
- FIG. 1 illustrates an embodiment of a tablet design 100 for a tablet 101. The tablet 101 may be any portable computation device characterized by a flat screen on one side of the tablet's housing. The display screen may be used for viewing and may also be a touch screen used for typing. The tablet 101 may not require connecting separate interface devices for basic operations, which may not be the case for a desktop computer. The tablet 101 may be a fixed device that is not foldable or that does not require mechanical operation, such as in the case of a laptop. The tablet 101 may offer fewer features/functions than other types of computation devices (e.g., laptops) and have lower pricing and cost. The tablet 101 may also have lighter weight and may be more portable friendly. The tablet 101 may be different than other communication devices, such as smartphones, in that the tablet 101 may be larger in size, offer more computation power and functions, and/or may not necessarily be equipped with a cellular interface. The tablet 101 may have similar features to at least some currently available tablets, also referred to as pads, in the market, such as the Apple iPad, the Hewlett-Packard (HP) Slate tablet, the Samsung Galaxy tablet, the Lenovo IdeaPad, the Dell Latitude tablet, and other tablets or pads.
- The tablet design 100 may have a relatively small thickness with respect to its width or length and a flat display screen (e.g., touch screen) on one side of the tablet 101. The top and bottom edges of the tablet 101 may be wider than the remaining (side) edges of the tablet 101. As such, the length of the top and bottom edges may correspond to the length of the tablet 101 and the length of the side edges may correspond to the width of the tablet 101. The display screen may comprise a substantial area of the total surface of the tablet 101. The tablet design 100 may also comprise a microphone 102, e.g., on one edge of the tablet 101 around the screen, and typically one or two cameras 104, e.g., on another edge of the tablet 101, as shown in FIG. 1(a). The microphone 102 may be an omni-directional microphone or a microphone array device that is part of an audio recording system of the tablet 101 for receiving a user's voice, enabling voice communications, sound recording, communications, or combinations thereof. The cameras 104 may be part of a video capturing system of the tablet 101 for shooting images or video, enabling video conferencing or calling, or both. The cameras 104 may be 3D cameras and the video capturing system may be a 3D video capturing system that captures 3D images or video. A 3D camera is a single device that is capable of capturing both "RGB" information and 3D information. In some embodiments, at least two cameras 104 may be needed to capture two frames (at about the same time) for the same image from different perspectives. The two frames may then be processed according to a 3D processing scheme to render a 3D like image. The same concept may be applied for 3D video capturing.
- Typically, the audio recording system may be optimized according to one designated orientation of the tablet 101. For instance, the audio recording system may be optimized for an upright position of the tablet 101, as shown in FIG. 1(a). In this position, the microphone 102 may be positioned at the bottom edge of the tablet 101 (e.g., around the center of the bottom edge). As such, the target sound or user's voice detected by the microphone 102 may be properly processed by the audio recording system to remove any noise. The microphone 102 may receive the user's voice or any target sound in addition to noise, e.g., from other sources around the user or the target sound. The audio recording system may then account for the noise assuming that the tablet 101 is held or positioned in the proper orientation (upright position) and that the microphone 102 is located in the proper location accordingly (at the bottom edge). However, when the position/orientation of the tablet 101 is changed or rotated, e.g., by about 180 degrees as shown in FIG. 1(b), the microphone 102 may no longer be located in the proper location (e.g., with respect to the sound target), and hence the audio recording system (which assumes an upright positioning of the tablet 101) may not properly process the detected sound/voice and accompanying noise. As a result, the output of the audio recording system may not be optimized. For example, in a voice calling scenario, the communicated user voice may still include substantial noise or may not be clear to the receiver on the other side.
- Similarly, the 3D video capturing system may be optimized according to a selected orientation of the tablet 101, such as the upright position of FIG. 1(a), where the two cameras 104 may be positioned at the top edge of the tablet 101 (e.g., around the center of the top edge). In this case, the video or image captured by the cameras 104 may be properly processed by the 3D video capturing system to properly generate 3D like scenes. When the cameras 104 capture the image/video frames (e.g., of the user's face or any target scene), the 3D video capturing system may process the captured frames by accounting for the corresponding positioning of the cameras 104 (at the top edge), assuming that the tablet 101 is held or positioned in the proper orientation (upright position). However, when the position/orientation of the tablet 101 is changed or rotated, e.g., by about 180 degrees as shown in FIG. 1(b), the cameras 104 may no longer be located in the proper location (e.g., with respect to the target image/video), and hence the 3D video recording system (which assumes an upright positioning of the tablet 101) may not properly process the captured video/image. As a result, the output of the 3D video capturing system may not be optimized. For example, in a video conferencing scenario, the communicated user 3D video may not be clear to the viewer on the other side.
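- the two-frame scheme mentioned above is commonly realized as stereo triangulation; as an illustrative assumption (not a scheme specified by this description), rectified cameras with focal length f in pixels and baseline b in meters relate a pixel disparity d between the two frames to depth by z = f*b/d:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth z = f * b / d for a rectified stereo pair (illustrative only)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

Nearer objects shift more between the two frames (larger disparity), which is why swapping or misidentifying the camera pair after a rotation corrupts the recovered 3D scene.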
- FIG. 2 illustrates an embodiment of a sound recording system 200, which may be used in the tablet 101 based on the tablet design 100. The sound recording system 200 may comprise a microphone 201, a signal processing device 202 coupled to the microphone 201, and at least one additional processing component 203 for further signal processing coupled to the signal processing device 202. The components of the sound recording system 200 may be arranged as shown in FIG. 2, and may be implemented using hardware, software, or combinations of both. The microphone 201 may correspond to the microphone 102. The signal processing device 202 may be configured to receive the detected sound/audio from the microphone 201 as input, process the sound/audio, e.g., to cancel or suppress noise, and send a processed (clean) sound as output to the additional processing component(s) 203. The processes of the signal processing device 202 may include but are not limited to noise reduction and de-reverberation. The additional processing component(s) 203 may be configured to receive the clean sound as input, further process the clean sound, e.g., to implement sound recognition, encoding, and/or transmission, and accordingly provide digital sound data as output.
- FIG. 3 illustrates an embodiment of a signal processing component 300, which may be used in the tablet 101 based on the tablet design 100. The signal processing component 300 may correspond to the signal processing device 202 of the sound recording system 200. The signal processing component 300 may comprise a noise reduction block 301 and a de-reverberation block 302 coupled to the noise reduction block 301. The components of the signal processing component 300 may be arranged as shown in FIG. 3, and may be implemented using hardware, software, or combinations of both. The noise reduction block 301 may be configured to receive the collected sound signal (e.g., from the microphone 201), possibly with noise and/or reverberation effect, process the sound signal to reduce or eliminate noise, and then forward the processed signal to the de-reverberation block 302. The de-reverberation block 302 may be configured to receive the processed signal from the noise reduction block 301, further process the sound signal to cancel or reduce any reverberation effect in the sound, and then forward a clean sound as output.
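- one common way to realize a noise reduction block of this kind (an illustrative technique chosen here for exposition, not one mandated by this description) is spectral subtraction: subtract an estimated noise magnitude spectrum from each frame and resynthesize with the noisy phase:

```python
import numpy as np

def spectral_subtract(noisy, noise_estimate, n_fft=256):
    """Frame-wise magnitude spectral subtraction with noisy-phase resynthesis."""
    out = np.zeros_like(noisy)
    noise_mag = np.abs(np.fft.rfft(noise_estimate[:n_fft]))
    for start in range(0, len(noisy) - n_fft + 1, n_fft):
        spec = np.fft.rfft(noisy[start:start + n_fft])
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)   # floor magnitudes at zero
        out[start:start + n_fft] = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n_fft)
    return out

# Toy demonstration: a 10-cycle tone corrupted by a 40-cycle interferer
# whose spectrum is known exactly to the subtractor.
t = np.arange(256)
tone = np.sin(2 * np.pi * 10 * t / 256)
interferer = 0.5 * np.sin(2 * np.pi * 40 * t / 256)
cleaned = spectral_subtract(tone + interferer, interferer)
```

In practice the noise spectrum must itself be estimated (e.g., from speech pauses), which is where an accurate DOA pre-estimate helps, as discussed below.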
- FIG. 4 illustrates an embodiment of an improved tablet design 400 for a tablet 401. The tablet 401 may be any portable computation device characterized by a flat screen on one side of the tablet's housing. The components of the tablet 401 may be configured similar to the corresponding components of the tablet 101, including a screen that may be a touch screen. The tablet 401 may also comprise a microphone 402, e.g., on one edge of the tablet 401 around the screen. The microphone 402 may be a microphone array device, which may comprise a plurality of microphones arranged in an array configuration. The tablet 401 may also comprise at least two cameras 404, which may be 3D cameras for capturing 3D video/image(s). The cameras 404 may be positioned on one or different edges of the tablet 401. For instance, the tablet 401 may comprise about four cameras 404, which may each be located on one of the four edges of the tablet 401. Distributing the cameras 404 along different edges of the tablet 401 may allow considering different positioning/orientation of the tablet 401 when capturing video/images and hence better 3D video/image processing according to positioning/orientation. The components of the tablet 401 may be arranged as shown in FIG. 4(a), which may correspond to one possible position (e.g., upright position) for holding and operating the tablet 401.
- FIGS. 4(b), (c), and (d) show other possible orientations for holding or operating the tablet 401, at 90 degrees, 180 degrees, and 270 degrees, respectively, from the orientation of FIG. 4(a). At the different orientations, the positions of the microphone 402 and the cameras 404 from a fixed target, such as the user's face, may be different. If typical sound/video processing schemes that assume a determined direction of the target with respect to one designated proper orientation of the tablet are used, then the outcome of processing the sound/video for a fixed target at different orientations of the tablet may lead to processing errors (degraded sound/video quality).
- Instead, to allow holding and operating the tablet 401 at different orientations, the tablet 401 may comprise improved sound recording and/or 3D video capturing systems (not shown). The improved sound recording/3D video capturing systems may process the sound/video appropriately at any orientation or positioning (tilting) of the tablet 401 based on position/orientation information of the tablet 401 while recording sound and/or capturing 3D video. The tablet 401 may comprise an orientation detection device (not shown) that is configured to detect the position information. The position information may be used by a sound recording system to estimate DOA for the signal and process accordingly the sound recorded by the microphone 402. For example, the sound detected by only some of the microphones in the array, selected based on the position information, may be considered. Similarly, the position information may be used by a 3D video capturing system to filter and process the video/image captured by the cameras 404. For example, the video/image captured by only some of the cameras 404, selected based on the position information, may be considered.
- The orientation detection device may be configured to generate orientation information, position data, and/or angle data that may be used by a microphone adjusting device (not shown) and/or a video configuration device (not shown). The microphone adjusting device may be configured to select the microphones or steer the sensors in the microphone for sound processing consideration in the array based on the orientation information and may be part of the sound recording system. The video configuration device may be configured to select or arrange the cameras 404 (e.g., direct the sensors in the cameras) for video processing consideration based on the orientation information and may be part of the 3D video capturing system.
- For example, when the tablet is rotated relative to the horizontal plane, a position detector in the orientation detection device may detect the relative position or tilt of the tablet 401 to the ground and generate the position information data accordingly. The position information data may be used in the microphone adjustment device. For instance, the microphone adjustment device may accordingly steer a maximum sensitivity angle of the microphone array, e.g., with respect to the face or mouth of the user, and/or may pass this information to a signal processing device (not shown) to conduct the signal processing process on the sound signals collected by the microphone array. The signal processing device may be part of the sound recording system. The signal processing process may include noise reduction, de-reverberation, speech enhancement, and/or other sound enhancement processes. The position information data may also be used in a 3D video configuration device/system to conduct and configure at least a pair of cameras 404 for capturing 3D videos and images.
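- under the assumption that the tablet is roughly at rest, so that the accelerometer output is dominated by gravity, the rotation of the screen plane relative to the ground can be sketched as below; the axis convention and all names are assumptions for illustration, not taken from this description:

```python
import math

def screen_rotation_degrees(accel_x, accel_y):
    """Rotation of the tablet about the axis normal to the screen, derived
    from the gravity components measured along the screen's x and y axes
    (convention: an upright tablet reads accel_x = 0, accel_y = +g)."""
    return math.degrees(math.atan2(accel_x, accel_y)) % 360.0
```

A coarse quantization of this angle (0/90/180/270 degrees) is enough to drive the microphone adjustment and camera configuration decisions described above.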
- FIG. 5 illustrates an embodiment of an improved sound recording system 500, which may be used in the tablet 401 based on the tablet design 400. The sound recording system 500 may comprise at least two microphones 501, a signal processing device 502 coupled to the microphones 501, and at least one additional processing component 503 for further signal processing coupled to the signal processing device 502. Additionally, the sound recording system 500 may comprise a microphone adjustment device 505 coupled to the signal processing device 502, and an orientation detection device 504 coupled to the microphone adjustment device 505. The components of the sound recording system 500 may be arranged as shown in FIG. 5, and may be implemented using hardware, software, or combinations of both.
- The microphones 501 may be two separate omni-directional microphones, two separate microphone arrays, or two microphones (sensors) in a microphone array. In other embodiments, the sound recording system 500 may comprise more than two separate microphones 501, e.g., on one or different edges of the tablet. The input to the signal processing device 502 may comprise collected sound signals from each of the microphones 501 and position information data from the microphone adjustment device 505. The orientation detection device 504 may comprise an accelerometer and/or orientation/rotation detection device configured to provide orientation/rotation information. The orientation/rotation information may be detected with respect to a designated position or orientation of the tablet, such as with respect to the horizontal plane. Additionally or alternatively, the orientation detection device 504 may comprise face/mouth recognition devices that may be used to estimate position/orientation information of the tablet with respect to the user.
- The position information data from the orientation detection device 504 may be sent to the microphone adjustment device 505, which may be configured to steer a maximum sensitivity angle of the microphones 501 (or microphone arrays). The microphones 501 may be steered so that the mouth of the user is aligned within the maximum sensitivity angle, and thus better align detection with the direction of the incoming sound signal and away from noise sources. Alternatively or additionally, the microphone adjustment device 505 may send the position information data to the signal processing device 502. The signal processing device 502 may implement noise reduction/de-reverberation processes using the position information data to obtain clean sound. Additionally, the signal processing device 502 may implement DOA estimation for sound, as described further below. The clean sound may then be sent to the additional processing component(s) 503, which may be configured to implement signal recognition, encoding, and/or transmission.
FIG. 6 illustrates an embodiment of an improved signal processing component 600, which may be used in the tablet 401 based on the tablet design 400. The signal processing component 600 may correspond to the signal processing device 502 of the sound recording system 500. The signal processing component 600 may comprise a noise reduction block 601, a de-reverberation block 602 coupled to the noise reduction block 601, and a DOA estimation block 603 coupled to both the noise reduction block 601 and the de-reverberation block 602. The components of the signal processing component 600 may be arranged as shown in FIG. 6, and may be implemented using hardware, software, or combinations of both. - The
DOA estimation block 603 may be configured to receive the collected sound, possibly with noise, from each microphone (e.g., microphones 501) and implement DOA estimation based on received position information (e.g., from the orientation detection device 504 and/or the microphone adjustment device 505). The position information data may be used by the DOA estimation block 603 to estimate a DOA for the incoming sound signal. The DOA estimation may be achieved using DOA estimation algorithms, such as the MUSIC algorithm. The output of the DOA estimation block 603 (DOA estimation information) may be sent as input to each of the noise reduction block 601 and the de-reverberation block 602 to achieve improved noise reduction and de-reverberation, respectively, based on the DOA information. The collected signal from each of the microphones may also be sent to the noise reduction block 601, where the noise reduction process may be performed using the DOA information. The noise reduction block 601 may forward the processed signal to the de-reverberation block 602, which may further process the sound signal to cancel or reduce any reverberation effect in the sound using the DOA information, and then forward a clean sound as output. -
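The MUSIC algorithm mentioned above estimates DOAs by splitting the array covariance matrix into signal and noise subspaces and scanning for steering vectors orthogonal to the noise subspace. Below is a minimal numpy sketch; the uniform-linear-array geometry, spacing, and simulated narrowband source are illustrative assumptions and not the implementation described in the disclosure.

```python
import numpy as np

def music_doa(snapshots, n_sources, mic_spacing, wavelength, angles_deg):
    """MUSIC pseudospectrum for a uniform linear array.

    snapshots: (n_mics, n_samples) complex baseband samples.
    Returns the pseudospectrum evaluated at angles_deg; its peaks
    indicate the estimated directions of arrival.
    """
    n_mics = snapshots.shape[0]
    # Sample covariance matrix of the array output.
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    # np.linalg.eigh returns eigenvalues in ascending order, so the
    # first (n_mics - n_sources) eigenvectors span the noise subspace.
    _, vecs = np.linalg.eigh(R)
    En = vecs[:, : n_mics - n_sources]
    spectrum = []
    for theta in np.deg2rad(angles_deg):
        # Steering vector for a plane wave arriving from angle theta.
        phase = 2j * np.pi * mic_spacing * np.arange(n_mics) * np.sin(theta) / wavelength
        a = np.exp(phase)
        # Pseudospectrum is large where a(theta) is (nearly) orthogonal
        # to the noise subspace, i.e. at the true DOA.
        spectrum.append(1.0 / (np.linalg.norm(En.conj().T @ a) ** 2))
    return np.array(spectrum)

# Simulate a 4-element array observing one source at +20 degrees.
rng = np.random.default_rng(0)
n_mics, n_snap = 4, 200
wavelength, spacing = 0.08, 0.04  # half-wavelength spacing
a_true = np.exp(2j * np.pi * spacing * np.arange(n_mics)
                * np.sin(np.deg2rad(20.0)) / wavelength)
sig = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
noise = 0.1 * (rng.standard_normal((n_mics, n_snap))
               + 1j * rng.standard_normal((n_mics, n_snap)))
X = np.outer(a_true, sig) + noise

angles = np.arange(-90, 91)
P = music_doa(X, n_sources=1, mic_spacing=spacing,
              wavelength=wavelength, angles_deg=angles)
print(angles[np.argmax(P)])  # peak near 20 degrees
```

In the system described here, device orientation data could seed or constrain this scan (e.g., restrict the angular search region toward the user's expected position).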
FIG. 7 illustrates an embodiment of a 3D video capturing system 700, which may be used in the tablet 401 based on the tablet design 400. The 3D video capturing system 700 may comprise an orientation detection device 701, a camera configuration device 702 coupled to the orientation detection device 701, and a plurality of cameras 703-706 coupled to the camera configuration device 702. The cameras 703-706 may be, for example, 3D cameras that correspond to the cameras 404. The orientation detection device 701 may be configured to provide orientation/rotation information, e.g., similar to the orientation detection device 504. For instance, the orientation detection device 701 may comprise an accelerometer, other orientation/rotation detection device, a face/mouth recognition device, or combinations thereof, which may be used to estimate position/orientation information of the tablet with respect to the user. - The
orientation detection device 701 may send the estimated position information data to the camera configuration device 702, which may be configured to select a correct or appropriate pair of cameras from the cameras 703-706, e.g., according to the position information. The cameras may be selected under the assumption that the user is sitting in front of the device, which may be the typical scenario for tablet users. For example, if the tablet is rotated about 90 degrees (as shown in FIG. 4 (d)) with respect to the user's face, the correct pair of selected cameras may be the cameras on the top and bottom edges (in the initial (upright) position of FIG. 4 (a)). -
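The selection logic above amounts to choosing the camera pair whose baseline remains roughly horizontal relative to the user's face. A minimal sketch follows; the edge labels and snapping of the reported roll angle to the nearest quarter turn are illustrative assumptions, not the camera configuration device 702's actual rule.

```python
def select_camera_pair(roll_deg):
    """Pick the camera pair whose stereo baseline stays horizontal with
    respect to a user assumed to be sitting in front of the device.

    Hypothetical layout: the left/right-edge cameras form the pair used
    in the upright position; the top/bottom-edge cameras are used when
    the device is rotated by about 90 degrees.
    """
    # Snap the reported roll angle to the nearest quarter turn; an even
    # number of quarter turns keeps the original baseline horizontal.
    quarter = round(roll_deg / 90.0) % 2
    return ("left", "right") if quarter == 0 else ("top", "bottom")

print(select_camera_pair(0))   # upright: left/right pair
print(select_camera_pair(92))  # rotated ~90 degrees: top/bottom pair
```

Note that a 180-degree rotation maps back to the left/right pair, since the baseline is horizontal again (possibly requiring an image flip downstream).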
FIG. 8 illustrates a flowchart of an embodiment of a sound recording method 800, which may be implemented in the tablet 401. For instance, the sound recording method 800 may be implemented using the sound recording system 500. The method 800 may begin at block 810, where a position of the tablet may be detected. The position/orientation may be detected by the orientation detection device 504. At block 820, a microphone of the tablet may be adjusted based on the position information. For instance, the microphone adjustment device 505 may steer a maximum sensitivity angle of the microphones 501 (or microphone arrays). At block 830, a sound signal may be recorded, e.g., by at least two microphones 501. At block 840, a DOA may be estimated for the signal based on the position information. For instance, the DOA estimation block 603 may implement an algorithm to obtain the DOA based on the position information. At block 850, the noise in the signal may be reduced based on the DOA estimation. The DOA estimation may be used by the noise reduction block 601 to reduce or eliminate the noise in the signal. At block 860, a reverberation effect in the signal may be canceled based on the DOA estimation. For instance, the de-reverberation block 602 may use the DOA estimation to remove the reverberation effect in the signal. At block 870, a clean sound may be transmitted. The clean sound may result from removing noise, reverberation effects, and/or other errors in the detected sound signal. The method 800 may then end. -
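The flow of method 800 can be sketched as a simple pipeline; every helper below is a hypothetical stand-in for the corresponding block (810-870) and not an API from the disclosure.

```python
def record_clean_sound(device):
    """Sketch of the method-800 flow: detect orientation, steer the
    microphones, record, estimate DOA, denoise, de-reverberate, transmit."""
    position = device.detect_orientation()       # block 810
    device.steer_microphones(position)           # block 820
    signal = device.record()                     # block 830
    doa = device.estimate_doa(signal, position)  # block 840
    signal = device.reduce_noise(signal, doa)    # block 850
    signal = device.dereverberate(signal, doa)   # block 860
    return device.transmit(signal)               # block 870

class StubDevice:
    """Trivial stand-in used only to exercise the pipeline order."""
    def detect_orientation(self): return 90.0
    def steer_microphones(self, pos): self.steered_to = pos
    def record(self): return [1.0, 2.0]
    def estimate_doa(self, sig, pos): return pos
    def reduce_noise(self, sig, doa): return sig
    def dereverberate(self, sig, doa): return sig
    def transmit(self, sig): return sig

dev = StubDevice()
out = record_clean_sound(dev)
```

The key structural point is that the orientation estimate feeds both the physical steering step and the downstream DOA-aware noise reduction and de-reverberation.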
FIG. 9 illustrates an embodiment of a 3D video capturing method 900, which may be implemented in the tablet 401. For instance, the 3D video capturing method 900 may be implemented using the 3D video capturing system 700. The method 900 may begin at block 910, where a position of the tablet may be detected. The position/orientation may be detected by the orientation detection device 701. At block 920, a plurality of cameras may be configured based on the position information. For instance, the camera configuration device 702 may select an appropriate pair of cameras from the cameras 703-706 according to the position information. At block 930, a video/image may be captured, e.g., by the selected cameras. At block 940, the captured video/image may be processed using a 3D video/image processing scheme. At block 950, a 3D video/image may be transmitted. The method 900 may then end. - In some embodiments, the components described above may be implemented on any general-purpose computer system or smart device component with sufficient processing power, memory resources, and throughput capability to handle the necessary workload placed upon it.
FIG. 10 illustrates a typical, general-purpose computer system 1000 suitable for implementing one or more embodiments of the components disclosed herein. The computer system 1000 includes a processor 1002 (which may be referred to as a central processor unit or CPU) that is in communication with memory devices including secondary storage 1004, read only memory (ROM) 1006, random access memory (RAM) 1008, input/output (I/O) devices 1010, and network connectivity devices 1012. The processor 1002 may be implemented as one or more CPU chips, or may be part of one or more application specific integrated circuits (ASICs). - The
secondary storage 1004 typically comprises one or more disk drives or tape drives and is used for non-volatile storage of data and as an overflow data storage device if RAM 1008 is not large enough to hold all working data. Secondary storage 1004 may be used to store programs that are loaded into RAM 1008 when such programs are selected for execution. The ROM 1006 is used to store instructions and perhaps data that are read during program execution. ROM 1006 is a non-volatile memory device that typically has a small memory capacity relative to the larger memory capacity of secondary storage 1004. The RAM 1008 is used to store volatile data and perhaps to store instructions. Access to both ROM 1006 and RAM 1008 is typically faster than to secondary storage 1004. - At least one embodiment is disclosed and variations, combinations, and/or modifications of the embodiment(s) and/or features of the embodiment(s) made by a person having ordinary skill in the art are within the scope of the disclosure. Alternative embodiments that result from combining, integrating, and/or omitting features of the embodiment(s) are also within the scope of the disclosure. Where numerical ranges or limitations are expressly stated, such express ranges or limitations should be understood to include iterative ranges or limitations of like magnitude falling within the expressly stated ranges or limitations (e.g., from about 1 to about 10 includes 2, 3, 4, etc.; greater than 0.10 includes 0.11, 0.12, 0.13, etc.). For example, whenever a numerical range with a lower limit, Rl, and an upper limit, Ru, is disclosed, any number falling within the range is specifically disclosed. In particular, the following numbers within the range are specifically disclosed: R=Rl+k*(Ru−Rl), wherein k is a variable ranging from 1 percent to 100 percent with a 1 percent increment, i.e., k is 1 percent, 2 percent, 3 percent, 4 percent, 5 percent, . . . , 70 percent, 71 percent, 72 percent, . . . 
, 95 percent, 96 percent, 97 percent, 98 percent, 99 percent, or 100 percent. Moreover, any numerical range defined by two R numbers as defined in the above is also specifically disclosed. Use of the term “optionally” with respect to any element of a claim means that the element is required, or alternatively, the element is not required, both alternatives being within the scope of the claim. Use of broader terms such as comprises, includes, and having should be understood to provide support for narrower terms such as consisting of, consisting essentially of, and comprised substantially of. Accordingly, the scope of protection is not limited by the description set out above but is defined by the claims that follow, that scope including all equivalents of the subject matter of the claims. Each and every claim is incorporated as further disclosure into the specification and the claims are embodiment(s) of the present disclosure. The discussion of a reference in the disclosure is not an admission that it is prior art, especially any reference that has a publication date after the priority date of this application. The disclosures of all patents, patent applications, and publications cited in the disclosure are hereby incorporated by reference, to the extent that they provide exemplary, procedural, or other details supplementary to the disclosure.
- While several embodiments have been provided in the present disclosure, it should be understood that the disclosed systems and methods might be embodied in many other specific forms without departing from the spirit or scope of the present disclosure. The present examples are to be considered as illustrative and not restrictive, and the intention is not to be limited to the details given herein. For example, the various elements or components may be combined or integrated in another system or certain features may be omitted, or not implemented.
- In addition, techniques, systems, subsystems, and methods described and illustrated in the various embodiments as discrete or separate may be combined or integrated with other systems, modules, techniques, or methods without departing from the scope of the present disclosure. Other items shown or discussed as coupled or directly coupled or communicating with each other may be indirectly coupled or communicating through some interface, device, or intermediate component whether electrically, mechanically, or otherwise. Other examples of changes, substitutions, and alterations are ascertainable by one skilled in the art and could be made without departing from the spirit and scope disclosed herein.
Claims (20)
1. A method comprising:
detecting an orientation of a portable device based on an indication of a rotational orientation or a tilt orientation of the portable device relative to a horizontal plane, wherein the portable device comprises a camera group comprising a plurality of pairs of cameras such that each camera pair is selectable to obtain a three-dimensional (3D) image or a 3D video; and
selecting, by a processor of the portable device, a camera pair from the camera group to obtain the 3D image or the 3D video based on the detected orientation of the portable device.
2. The method of claim 1 , further comprising capturing the 3D image or the 3D video with the camera pair selected based on the detected orientation of the portable device.
3. The method of claim 1 , further comprising employing a signal processing component to modify the captured 3D image or the 3D video based on the detected orientation of the portable device.
4. The method of claim 1 , wherein each of the pairs of cameras are located proximate to different edges of the portable device.
5. The method of claim 1 , wherein the portable device is part of a smartphone, and wherein the smartphone is configured to enable at least one of video conferencing, voice calling, and a human computer interface.
6. The method of claim 1 , further comprising obtaining the 3D image or the 3D video by filtering out data from one or more pairs of cameras that are not selected to obtain the 3D image or the 3D video.
7. The method of claim 1 , further comprising obtaining the 3D image or the 3D video by capturing the 3D image or the 3D video with the selected camera pair.
8. The method of claim 7 , further comprising:
processing the obtained 3D image or the 3D video using a 3D image processing scheme; and
transmitting the 3D image or the 3D video.
9. A portable device comprising:
a camera group comprising a plurality of camera pairs such that each camera pair is selectable to capture a three-dimensional (3D) image or a 3D video; and
a processor coupled to the camera group and configured to:
determine an orientation of the portable device based on a rotational orientation or a tilt orientation of the portable device with respect to a horizontal plane; and
select a camera pair from the camera group to obtain the 3D image or the 3D video based on the determined orientation of the portable device.
10. The portable device of claim 9 , wherein the processor is further configured to cause the pair of cameras selected based on the determined orientation of the portable device to capture the 3D image or the 3D video.
11. The portable device of claim 10 , further comprising a signal processing component coupled to the processor and configured to modify the captured 3D image or the 3D video based on the determined orientation of the portable device.
12. The portable device of claim 9 , wherein the processor is further configured to obtain the 3D image or the 3D video by filtering out data from one or more pairs of cameras that are not selected to obtain the 3D image or the 3D video.
13. The portable device of claim 9 , wherein each of the pairs of cameras of the camera group are located proximate to different edges of the portable device.
14. The portable device of claim 9 , wherein the portable device is part of a smartphone, and wherein the smartphone is configured to enable at least one of video conferencing, voice calling, and a human computer interface.
15. A non-transitory computer readable medium comprising a computer program product for use by a portable device, the computer program product comprising computer executable instructions stored on the non-transitory computer readable medium that, when executed by a processor, cause the portable device to:
detect an orientation of the portable device based on an indication of a rotational orientation or a tilt orientation of the portable device relative to a horizontal plane, wherein the portable device comprises a camera group comprising a plurality of pairs of cameras such that each camera pair is selectable to obtain a three-dimensional (3D) image or a 3D video; and
select, by the processor of the portable device, a camera pair from the camera group to obtain the 3D image or the 3D video based on the detected orientation of the portable device.
16. The computer program product of claim 15 , wherein the instructions further cause the portable device to capture the 3D image or the 3D video with the camera pair selected based on the detected orientation of the portable device.
17. The computer program product of claim 15 , wherein the instructions further cause the portable device to modify the captured 3D image or the 3D video based on the detected orientation of the portable device.
18. The computer program product of claim 15 , wherein each of the pairs of cameras are located proximate to different edges of the portable device.
19. The computer program product of claim 15 , wherein the portable device is part of a smartphone, and wherein the smartphone is configured to enable at least one of video conferencing, voice calling, and a human computer interface.
20. The computer program product of claim 15 , wherein the instructions further cause the portable device to employ the pair of cameras selected based on the detected orientation of the portable device to capture the 3D image or the 3D video.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/968,225 US20160100156A1 (en) | 2011-12-12 | 2015-12-14 | Smart Audio and Video Capture Systems for Data Processing Systems |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/323,157 US9246543B2 (en) | 2011-12-12 | 2011-12-12 | Smart audio and video capture systems for data processing systems |
US14/968,225 US20160100156A1 (en) | 2011-12-12 | 2015-12-14 | Smart Audio and Video Capture Systems for Data Processing Systems |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/323,157 Continuation US9246543B2 (en) | 2011-12-12 | 2011-12-12 | Smart audio and video capture systems for data processing systems |
Publications (1)
Publication Number | Publication Date |
---|---|
US20160100156A1 true US20160100156A1 (en) | 2016-04-07 |
Family
ID=48571625
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/323,157 Active 2033-01-18 US9246543B2 (en) | 2011-12-12 | 2011-12-12 | Smart audio and video capture systems for data processing systems |
US14/968,225 Abandoned US20160100156A1 (en) | 2011-12-12 | 2015-12-14 | Smart Audio and Video Capture Systems for Data Processing Systems |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/323,157 Active 2033-01-18 US9246543B2 (en) | 2011-12-12 | 2011-12-12 | Smart audio and video capture systems for data processing systems |
Country Status (4)
Country | Link |
---|---|
US (2) | US9246543B2 (en) |
EP (2) | EP2781083A4 (en) |
CN (1) | CN104012074B (en) |
WO (1) | WO2013086979A1 (en) |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108696712A (en) * | 2017-03-03 | 2018-10-23 | 展讯通信(上海)有限公司 | 3D video call methods, device and terminal based on IMS |
US20190317178A1 (en) * | 2016-11-23 | 2019-10-17 | Hangzhou Hikvision Digital Technology Co., Ltd. | Device control method, apparatus and system |
US10521579B2 (en) * | 2017-09-09 | 2019-12-31 | Apple Inc. | Implementation of biometric authentication |
US10748153B2 (en) | 2014-05-29 | 2020-08-18 | Apple Inc. | User interface for payments |
US10749967B2 (en) | 2016-05-19 | 2020-08-18 | Apple Inc. | User interface for remote authorization |
US10783576B1 (en) | 2019-03-24 | 2020-09-22 | Apple Inc. | User interfaces for managing an account |
US10803281B2 (en) | 2013-09-09 | 2020-10-13 | Apple Inc. | Device, method, and graphical user interface for manipulating user interfaces based on fingerprint sensor inputs |
US10860096B2 (en) | 2018-09-28 | 2020-12-08 | Apple Inc. | Device control using gaze information |
US10872256B2 (en) | 2017-09-09 | 2020-12-22 | Apple Inc. | Implementation of biometric authentication |
US10956550B2 (en) | 2007-09-24 | 2021-03-23 | Apple Inc. | Embedded authentication systems in an electronic device |
US11037150B2 (en) | 2016-06-12 | 2021-06-15 | Apple Inc. | User interfaces for transactions |
US11074572B2 (en) | 2016-09-06 | 2021-07-27 | Apple Inc. | User interfaces for stored-value accounts |
US11100349B2 (en) | 2018-09-28 | 2021-08-24 | Apple Inc. | Audio assisted enrollment |
US11170085B2 (en) | 2018-06-03 | 2021-11-09 | Apple Inc. | Implementation of biometric authentication |
US11200309B2 (en) | 2011-09-29 | 2021-12-14 | Apple Inc. | Authentication with secondary approver |
US11321731B2 (en) | 2015-06-05 | 2022-05-03 | Apple Inc. | User interface for loyalty accounts and private label accounts |
US11481769B2 (en) | 2016-06-11 | 2022-10-25 | Apple Inc. | User interface for transactions |
US11574041B2 (en) | 2016-10-25 | 2023-02-07 | Apple Inc. | User interface for managing access to credentials for use in an operation |
US11676373B2 (en) | 2008-01-03 | 2023-06-13 | Apple Inc. | Personal computing device control using face detection and recognition |
US11783305B2 (en) | 2015-06-05 | 2023-10-10 | Apple Inc. | User interface for loyalty accounts and private label accounts for a wearable device |
US11816194B2 (en) | 2020-06-21 | 2023-11-14 | Apple Inc. | User interfaces for managing secure operations |
Families Citing this family (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9223404B1 (en) * | 2012-01-27 | 2015-12-29 | Amazon Technologies, Inc. | Separating foreground and background objects in captured images |
US20130271579A1 (en) * | 2012-04-14 | 2013-10-17 | Younian Wang | Mobile Stereo Device: Stereo Imaging, Measurement and 3D Scene Reconstruction with Mobile Devices such as Tablet Computers and Smart Phones |
US9445174B2 (en) * | 2012-06-14 | 2016-09-13 | Nokia Technologies Oy | Audio capture apparatus |
WO2014053875A1 (en) | 2012-10-01 | 2014-04-10 | Nokia Corporation | An apparatus and method for reproducing recorded audio with correct spatial directionality |
US9426573B2 (en) * | 2013-01-29 | 2016-08-23 | 2236008 Ontario Inc. | Sound field encoder |
EP2962299B1 (en) * | 2013-02-28 | 2018-10-31 | Nokia Technologies OY | Audio signal analysis |
EP2819430A1 (en) * | 2013-06-27 | 2014-12-31 | Speech Processing Solutions GmbH | Handheld mobile recording device with microphone characteristic selection means |
US9544574B2 (en) * | 2013-12-06 | 2017-01-10 | Google Inc. | Selecting camera pairs for stereoscopic imaging |
US9565416B1 (en) | 2013-09-30 | 2017-02-07 | Google Inc. | Depth-assisted focus in multi-camera systems |
JP6148163B2 (en) * | 2013-11-29 | 2017-06-14 | 本田技研工業株式会社 | Conversation support device, method for controlling conversation support device, and program for conversation support device |
US11959749B2 (en) * | 2014-06-20 | 2024-04-16 | Profound Positioning Inc. | Mobile mapping system |
US9710724B2 (en) | 2014-09-05 | 2017-07-18 | Intel Corporation | Multi-camera device |
CN105812969A (en) * | 2014-12-31 | 2016-07-27 | 展讯通信(上海)有限公司 | Method, system and device for picking up sound signal |
JP6592940B2 (en) * | 2015-04-07 | 2019-10-23 | ソニー株式会社 | Information processing apparatus, information processing method, and program |
CN107534725B (en) * | 2015-05-19 | 2020-06-16 | 华为技术有限公司 | Voice signal processing method and device |
CN104967717B (en) * | 2015-05-26 | 2016-09-28 | 努比亚技术有限公司 | Noise-reduction method under terminal speech interactive mode and device |
KR101910383B1 (en) * | 2015-08-05 | 2018-10-22 | 엘지전자 주식회사 | Driver assistance apparatus and vehicle including the same |
KR102339798B1 (en) * | 2015-08-21 | 2021-12-15 | 삼성전자주식회사 | Method for processing sound of electronic device and electronic device thereof |
US10021339B2 (en) * | 2015-12-01 | 2018-07-10 | Qualcomm Incorporated | Electronic device for generating video data |
FR3046014A1 (en) * | 2015-12-21 | 2017-06-23 | Orange | METHOD FOR MANAGING RESOURCES ON A TERMINAL |
CN106328156B (en) * | 2016-08-22 | 2020-02-18 | 华南理工大学 | Audio and video information fusion microphone array voice enhancement system and method |
CN106303357B (en) * | 2016-08-30 | 2019-11-08 | 福州瑞芯微电子股份有限公司 | A kind of video call method and system of far field speech enhan-cement |
US10362270B2 (en) | 2016-12-12 | 2019-07-23 | Dolby Laboratories Licensing Corporation | Multimodal spatial registration of devices for congruent multimedia communications |
CN106898348B (en) * | 2016-12-29 | 2020-02-07 | 北京小鸟听听科技有限公司 | Dereverberation control method and device for sound production equipment |
WO2018140253A1 (en) * | 2017-01-24 | 2018-08-02 | Commscope Technologies Llc | Alignment apparatus using a mobile terminal and methods of operating the same |
US10462370B2 (en) | 2017-10-03 | 2019-10-29 | Google Llc | Video stabilization |
CN110069123B (en) * | 2018-01-22 | 2022-02-18 | 腾讯科技(深圳)有限公司 | Method and device for checking information point collection validity |
US11022511B2 (en) | 2018-04-18 | 2021-06-01 | Aron Kain | Sensor commonality platform using multi-discipline adaptable sensors for customizable applications |
US10171738B1 (en) | 2018-05-04 | 2019-01-01 | Google Llc | Stabilizing video to reduce camera and face movement |
WO2021061112A1 (en) | 2019-09-25 | 2021-04-01 | Google Llc | Gain control for face authentication |
CN111551921A (en) * | 2020-05-19 | 2020-08-18 | 北京中电慧声科技有限公司 | Sound source orientation system and method based on sound image linkage |
CN111883186B (en) * | 2020-07-10 | 2022-12-23 | 上海明略人工智能(集团)有限公司 | Recording device, voice acquisition method and device, storage medium and electronic device |
US11190689B1 (en) | 2020-07-29 | 2021-11-30 | Google Llc | Multi-camera video stabilization |
EP4047939A1 (en) | 2021-02-19 | 2022-08-24 | Nokia Technologies Oy | Audio capture in presence of noise |
TWI799165B (en) * | 2022-03-04 | 2023-04-11 | 圓展科技股份有限公司 | System and method for capturing sounding target |
US20240077868A1 (en) * | 2022-09-07 | 2024-03-07 | Schweitzer Engineering Laboratories, Inc. | Configurable multi-sensor input |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030203747A1 (en) * | 2002-04-26 | 2003-10-30 | Nec Corporation | Foldable portable telephone having a display portion selectively put into a lengthwise state or an oblong state and a pair of front camera portions |
US20100238263A1 (en) * | 2009-01-28 | 2010-09-23 | Robinson Ian N | Systems for performing visual collaboration between remotely situated participants |
US20110249073A1 (en) * | 2010-04-07 | 2011-10-13 | Cranfill Elizabeth C | Establishing a Video Conference During a Phone Call |
US8937646B1 (en) * | 2011-10-05 | 2015-01-20 | Amazon Technologies, Inc. | Stereo imaging using disparate imaging devices |
Family Cites Families (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7015954B1 (en) * | 1999-08-09 | 2006-03-21 | Fuji Xerox Co., Ltd. | Automatic video system using multiple cameras |
US7688306B2 (en) * | 2000-10-02 | 2010-03-30 | Apple Inc. | Methods and apparatuses for operating a portable device based on an accelerometer |
JP2006509439A (en) * | 2002-12-06 | 2006-03-16 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | Personalized surround sound headphone system |
JP4266148B2 (en) | 2003-09-30 | 2009-05-20 | 株式会社東芝 | Electronics |
US7817805B1 (en) | 2005-01-12 | 2010-10-19 | Motion Computing, Inc. | System and method for steering the directional response of a microphone to a moving acoustic source |
TWI294585B (en) | 2005-10-28 | 2008-03-11 | Quanta Comp Inc | Audio system of a tablet personal computer and the speaker orientating method thereof |
JP2009514312A (en) * | 2005-11-01 | 2009-04-02 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | Hearing aid with acoustic tracking means |
US7565288B2 (en) * | 2005-12-22 | 2009-07-21 | Microsoft Corporation | Spatial noise suppression for a microphone array |
US20070237339A1 (en) * | 2006-04-11 | 2007-10-11 | Alon Konchitsky | Environmental noise reduction and cancellation for a voice over internet packets (VOIP) communication device |
JP5537044B2 (en) * | 2008-05-30 | 2014-07-02 | キヤノン株式会社 | Image display apparatus, control method therefor, and computer program |
KR101500741B1 (en) | 2008-09-12 | 2015-03-09 | 옵티스 셀룰러 테크놀로지, 엘엘씨 | Mobile terminal having a camera and method for photographing picture thereof |
JP4643698B2 (en) | 2008-09-16 | 2011-03-02 | レノボ・シンガポール・プライベート・リミテッド | Tablet computer with microphone and control method |
US8401178B2 (en) | 2008-09-30 | 2013-03-19 | Apple Inc. | Multiple microphone switching and configuration |
JP5229053B2 (en) | 2009-03-30 | 2013-07-03 | ソニー株式会社 | Signal processing apparatus, signal processing method, and program |
JP5299054B2 (en) * | 2009-04-21 | 2013-09-25 | ソニー株式会社 | Electronic device, display control method and program |
DK2262285T3 (en) * | 2009-06-02 | 2017-02-27 | Oticon As | Listening device providing improved location ready signals, its use and method |
US8599238B2 (en) * | 2009-10-16 | 2013-12-03 | Apple Inc. | Facial pose improvement with perspective distortion correction |
JP5407848B2 (en) | 2009-12-25 | 2014-02-05 | 富士通株式会社 | Microphone directivity control device |
US9008686B2 (en) * | 2010-01-12 | 2015-04-14 | Nokia Corporation | Collaborative location/orientation estimation |
US20110298887A1 (en) * | 2010-06-02 | 2011-12-08 | Maglaque Chad L | Apparatus Using an Accelerometer to Capture Photographic Images |
KR101685980B1 (en) * | 2010-07-30 | 2016-12-13 | 엘지전자 주식회사 | Mobile terminal and method for controlling the same |
US9274744B2 (en) * | 2010-09-10 | 2016-03-01 | Amazon Technologies, Inc. | Relative position-inclusive device interfaces |
US10726861B2 (en) * | 2010-11-15 | 2020-07-28 | Microsoft Technology Licensing, Llc | Semi-private communication in open environments |
- 2011
  - 2011-12-12 US US13/323,157 patent/US9246543B2/en active Active
- 2012
  - 2012-12-12 CN CN201280061091.9A patent/CN104012074B/en active Active
  - 2012-12-12 EP EP12856814.4A patent/EP2781083A4/en not_active Ceased
  - 2012-12-12 WO PCT/CN2012/086425 patent/WO2013086979A1/en active Application Filing
  - 2012-12-12 EP EP18163954.3A patent/EP3376763A1/en not_active Ceased
- 2015
  - 2015-12-14 US US14/968,225 patent/US20160100156A1/en not_active Abandoned
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030203747A1 (en) * | 2002-04-26 | 2003-10-30 | Nec Corporation | Foldable portable telephone having a display portion selectively put into a lengthwise state or an oblong state and a pair of front camera portions |
US20100238263A1 (en) * | 2009-01-28 | 2010-09-23 | Robinson Ian N | Systems for performing visual collaboration between remotely situated participants |
US20110249073A1 (en) * | 2010-04-07 | 2011-10-13 | Cranfill Elizabeth C | Establishing a Video Conference During a Phone Call |
US8937646B1 (en) * | 2011-10-05 | 2015-01-20 | Amazon Technologies, Inc. | Stereo imaging using disparate imaging devices |
Cited By (45)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10956550B2 (en) | 2007-09-24 | 2021-03-23 | Apple Inc. | Embedded authentication systems in an electronic device |
US11468155B2 (en) | 2007-09-24 | 2022-10-11 | Apple Inc. | Embedded authentication systems in an electronic device |
US11676373B2 (en) | 2008-01-03 | 2023-06-13 | Apple Inc. | Personal computing device control using face detection and recognition |
US11200309B2 (en) | 2011-09-29 | 2021-12-14 | Apple Inc. | Authentication with secondary approver |
US11755712B2 (en) | 2011-09-29 | 2023-09-12 | Apple Inc. | Authentication with secondary approver |
US11494046B2 (en) | 2013-09-09 | 2022-11-08 | Apple Inc. | Device, method, and graphical user interface for manipulating user interfaces based on unlock inputs |
US10803281B2 (en) | 2013-09-09 | 2020-10-13 | Apple Inc. | Device, method, and graphical user interface for manipulating user interfaces based on fingerprint sensor inputs |
US11287942B2 (en) | 2013-09-09 | 2022-03-29 | Apple Inc. | Device, method, and graphical user interface for manipulating user interfaces |
US11768575B2 (en) | 2013-09-09 | 2023-09-26 | Apple Inc. | Device, method, and graphical user interface for manipulating user interfaces based on unlock inputs |
US10977651B2 (en) | 2014-05-29 | 2021-04-13 | Apple Inc. | User interface for payments |
US10796309B2 (en) | 2014-05-29 | 2020-10-06 | Apple Inc. | User interface for payments |
US10748153B2 (en) | 2014-05-29 | 2020-08-18 | Apple Inc. | User interface for payments |
US10902424B2 (en) | 2014-05-29 | 2021-01-26 | Apple Inc. | User interface for payments |
US11836725B2 (en) | 2014-05-29 | 2023-12-05 | Apple Inc. | User interface for payments |
US11321731B2 (en) | 2015-06-05 | 2022-05-03 | Apple Inc. | User interface for loyalty accounts and private label accounts |
US11783305B2 (en) | 2015-06-05 | 2023-10-10 | Apple Inc. | User interface for loyalty accounts and private label accounts for a wearable device |
US11734708B2 (en) | 2015-06-05 | 2023-08-22 | Apple Inc. | User interface for loyalty accounts and private label accounts |
US10749967B2 (en) | 2016-05-19 | 2020-08-18 | Apple Inc. | User interface for remote authorization |
US11206309B2 (en) | 2016-05-19 | 2021-12-21 | Apple Inc. | User interface for remote authorization |
US11481769B2 (en) | 2016-06-11 | 2022-10-25 | Apple Inc. | User interface for transactions |
US11900372B2 (en) | 2016-06-12 | 2024-02-13 | Apple Inc. | User interfaces for transactions |
US11037150B2 (en) | 2016-06-12 | 2021-06-15 | Apple Inc. | User interfaces for transactions |
US11074572B2 (en) | 2016-09-06 | 2021-07-27 | Apple Inc. | User interfaces for stored-value accounts |
US11574041B2 (en) | 2016-10-25 | 2023-02-07 | Apple Inc. | User interface for managing access to credentials for use in an operation |
US10816633B2 (en) * | 2016-11-23 | 2020-10-27 | Hangzhou Hikvision Digital Technology Co., Ltd. | Device control method, apparatus and system |
US20190317178A1 (en) * | 2016-11-23 | 2019-10-17 | Hangzhou Hikvision Digital Technology Co., Ltd. | Device control method, apparatus and system |
CN108696712A (en) * | 2017-03-03 | 2018-10-23 | 展讯通信(上海)有限公司 | 3D video call methods, device and terminal based on IMS |
US10872256B2 (en) | 2017-09-09 | 2020-12-22 | Apple Inc. | Implementation of biometric authentication |
US11393258B2 (en) | 2017-09-09 | 2022-07-19 | Apple Inc. | Implementation of biometric authentication |
US10783227B2 (en) | 2017-09-09 | 2020-09-22 | Apple Inc. | Implementation of biometric authentication |
US11386189B2 (en) | 2017-09-09 | 2022-07-12 | Apple Inc. | Implementation of biometric authentication |
US10521579B2 (en) * | 2017-09-09 | 2019-12-31 | Apple Inc. | Implementation of biometric authentication |
US11765163B2 (en) | 2017-09-09 | 2023-09-19 | Apple Inc. | Implementation of biometric authentication |
US11170085B2 (en) | 2018-06-03 | 2021-11-09 | Apple Inc. | Implementation of biometric authentication |
US11928200B2 (en) | 2018-06-03 | 2024-03-12 | Apple Inc. | Implementation of biometric authentication |
US10860096B2 (en) | 2018-09-28 | 2020-12-08 | Apple Inc. | Device control using gaze information |
US11619991B2 (en) | 2018-09-28 | 2023-04-04 | Apple Inc. | Device control using gaze information |
US11809784B2 (en) | 2018-09-28 | 2023-11-07 | Apple Inc. | Audio assisted enrollment |
US11100349B2 (en) | 2018-09-28 | 2021-08-24 | Apple Inc. | Audio assisted enrollment |
US11688001B2 (en) | 2019-03-24 | 2023-06-27 | Apple Inc. | User interfaces for managing an account |
US10783576B1 (en) | 2019-03-24 | 2020-09-22 | Apple Inc. | User interfaces for managing an account |
US11669896B2 (en) | 2019-03-24 | 2023-06-06 | Apple Inc. | User interfaces for managing an account |
US11610259B2 (en) | 2019-03-24 | 2023-03-21 | Apple Inc. | User interfaces for managing an account |
US11328352B2 (en) | 2019-03-24 | 2022-05-10 | Apple Inc. | User interfaces for managing an account |
US11816194B2 (en) | 2020-06-21 | 2023-11-14 | Apple Inc. | User interfaces for managing secure operations |
Also Published As
Publication number | Publication date |
---|---|
US20130147923A1 (en) | 2013-06-13 |
CN104012074A (en) | 2014-08-27 |
EP3376763A1 (en) | 2018-09-19 |
CN104012074B (en) | 2017-07-21 |
EP2781083A1 (en) | 2014-09-24 |
WO2013086979A1 (en) | 2013-06-20 |
EP2781083A4 (en) | 2015-06-10 |
US9246543B2 (en) | 2016-01-26 |
Similar Documents
Publication | Title
---|---
US9246543B2 (en) | Smart audio and video capture systems for data processing systems
EP2882170B1 (en) | Audio information processing method and apparatus
US9491553B2 (en) | Method of audio signal processing and hearing aid system for implementing the same
US9516241B2 (en) | Beamforming method and apparatus for sound signal
US8433076B2 (en) | Electronic apparatus for generating beamformed audio signals with steerable nulls
US9196238B2 (en) | Audio processing based on changed position or orientation of a portable mobile electronic apparatus
US10880519B2 (en) | Panoramic streaming of video with user selected audio
US20150022636A1 (en) | Method and system for voice capture using face detection in noisy environments
WO2017113937A1 (en) | Mobile terminal and noise reduction method
US20130106997A1 (en) | Apparatus and method for generating three-dimension data in portable terminal
US10186278B2 (en) | Microphone array noise suppression using noise field isotropy estimation
CN113192527A (en) | Method, apparatus, electronic device and storage medium for cancelling echo
US20170188140A1 (en) | Controlling audio beam forming with video stream data
CN107113496B (en) | Surround sound recording for mobile devices
US10097747B2 (en) | Multiple camera autofocus synchronization
CN106205630A (en) | System for reducing motor vibration noise in a video recording system
US20220095074A1 (en) | Method to adapt audio processing based on user attention sensing and system therefor
WO2017071045A1 (en) | Recording method and device
US10873806B2 (en) | Acoustic processing apparatus, acoustic processing system, acoustic processing method, and storage medium
US20240015433A1 (en) | Wind noise reduction, flexible beamforming, and direction of arrival estimation by microphone placement
WO2019072222A1 (en) | Image processing method, device, and apparatus
JP2013070235A (en) | Imaging device
Legal Events

Date | Code | Title | Description
---|---|---|---
| AS | Assignment | Owner name: FUTUREWEI TECHNOLOGIES, INC., TEXAS. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: ZHOU, JIONG; KALKER, TON. REEL/FRAME: 038090/0399. Effective date: 20111212
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION