WO2021187645A1 - Mobile terminal - Google Patents

Mobile terminal

Info

Publication number
WO2021187645A1
Authority
WO
WIPO (PCT)
Prior art keywords
noise
mixing level
mobile terminal
signal
mixing
Prior art date
Application number
PCT/KR2020/003862
Other languages
English (en)
Korean (ko)
Inventor
유주현
조현학
김정곤
이건섭
송호성
Original Assignee
엘지전자 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 엘지전자 주식회사 filed Critical 엘지전자 주식회사
Priority to PCT/KR2020/003862 priority Critical patent/WO2021187645A1/fr
Publication of WO2021187645A1 publication Critical patent/WO2021187645A1/fr

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/725 Cordless telephones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 Circuits for transducers, loudspeakers or microphones

Definitions

  • the present invention relates to a mobile terminal, and more particularly, to a mobile terminal capable of controlling the inflow of ambient noise.
  • the terminal may be divided into a mobile/portable terminal and a stationary terminal according to whether the terminal can be moved.
  • the mobile terminal can be divided into a handheld terminal and a vehicle mounted terminal depending on whether the user can carry it directly.
  • the functions of mobile terminals are diversifying. For example, there are functions for data and voice communication, photography and video recording through a camera, voice recording, music file playback through a speaker system, and outputting an image or video to the display unit. Some terminals add an electronic game play function or perform a multimedia player function. In particular, recent mobile terminals can receive multicast signals that provide broadcast and visual content such as video or television programs.
  • as its functions become diversified, such a terminal is implemented in the form of a multimedia player equipped with complex functions such as taking pictures or videos, playing music or video files, playing games, and receiving broadcasts.
  • a video shot by an individual through a terminal is uploaded to a content providing server or a server providing a social network service and shared with other users.
  • the conventional terminal is equipped with only a noise canceling function that removes ambient noise when shooting a video. Accordingly, all sounds other than the voice output by the desired object (a person or a thing) are removed, and there is a problem in that the original sound output by the object is distorted.
  • An object of the present disclosure is to provide a mobile terminal that allows a user to introduce as much ambient noise as desired without distorting the original sound when shooting a video.
  • An object of the present disclosure is to provide a mobile terminal capable of producing content having a quality suitable for sharing through personal broadcasting and social network service (SNS) without separate voice editing.
  • a mobile terminal according to an embodiment of the present disclosure may include one or more microphones for receiving a voice signal including an original sound signal and a noise signal, a camera for acquiring an image, a display for displaying a preview screen including the image acquired by the camera and a mixing level adjustment menu for controlling the inflow of ambient noise, and a processor that receives a request for adjusting the ambient noise through the mixing level adjustment menu, determines a noise mixing level according to the received request, and adjusts the amount of ambient noise introduced according to the determined noise mixing level.
  • the processor may mix the original sound signal and the voice signal according to the determined noise mixing level to adjust the amount of ambient noise introduced.
  • the processor may remove the noise signal from the voice signal to obtain an estimated original sound signal that approximates the original sound signal, and may mix the estimated original sound signal and the voice signal according to the determined noise mixing level.
  • the user may control the amount of ambient noise inflow with only a simple touch input when shooting a video. Accordingly, a video can be captured regardless of the surrounding environment.
  • FIG. 1 shows a mobile terminal according to an embodiment of the present disclosure.
  • FIG. 2 is a view for explaining a noise removal method according to the prior art.
  • FIG. 3 is a view for explaining an example of adjusting the amount of ambient noise inflow according to an embodiment of the present disclosure.
  • FIG. 4 is a view for explaining in detail a process in which a removal rate of a noise signal from an original sound signal input through a microphone is adjusted according to an embodiment of the present disclosure.
  • FIG. 5 is a flowchart illustrating a method of operating a mobile terminal according to an embodiment of the present disclosure.
  • FIG. 6 is a diagram illustrating an example of a preview screen according to an embodiment of the present disclosure.
  • FIG. 7 is a table illustrating a relationship between a scaling factor and an ambient noise mixing level according to an embodiment of the present disclosure.
  • FIG. 1 shows a mobile terminal 100 according to an embodiment of the present disclosure.
  • the mobile terminal 100 may be implemented as a stationary or movable device such as a TV, a projector, a mobile phone, a smart phone, a desktop computer, a notebook computer, a digital broadcasting terminal, a personal digital assistant (PDA), a portable multimedia player (PMP), a navigation system, a tablet PC, a wearable device, a set-top box (STB), a DMB receiver, a radio, a washing machine, a refrigerator, a digital signage, a robot, or a vehicle.
  • the mobile terminal 100 may include a communication unit 110, an input unit 120, a learning processor 130, a sensing unit 140, an output unit 150, a memory 170, and a processor 180.
  • the communication unit 110 may transmit/receive data to and from external devices such as another mobile terminal or an external server using wired/wireless communication technology.
  • the communication unit 110 may transmit/receive sensor information, a user input, a learning model, a control signal, and the like with external devices.
  • the communication technology used by the communication unit 110 includes GSM (Global System for Mobile communication), CDMA (Code Division Multi Access), LTE (Long Term Evolution), 5G, WLAN (Wireless LAN), Wi-Fi (Wireless-Fidelity), Bluetooth, RFID (Radio Frequency Identification), Infrared Data Association (IrDA), ZigBee, NFC (Near Field Communication), and the like.
  • the input unit 120 may acquire various types of data.
  • the input unit 120 may include a camera for inputting an image signal, a microphone for receiving an audio signal, a user input unit for receiving information from a user, and the like.
  • the camera or microphone may be treated as a sensor, and a signal obtained from the camera or microphone may be referred to as sensing data or sensor information.
  • the input unit 120 may acquire training data for model training and input data to be used when acquiring an output using the training model.
  • the input unit 120 may acquire raw input data, and in this case, the processor 180 or the learning processor 130 may extract an input feature as a preprocessing for the input data.
  • the input unit 120 may include a camera 121 for inputting an image signal, a microphone 122 for receiving an audio signal, and a user input unit 123 for receiving information from a user.
  • the voice data or image data collected by the input unit 120 may be analyzed and processed as a user's control command.
  • the input unit 120 is for inputting image information (or signal), audio information (or signal), data, or information input from a user.
  • the mobile terminal 100 may be provided with one or more cameras 121.
  • the camera 121 processes an image frame such as a still image or a moving image obtained by an image sensor in a video call mode or a photographing mode.
  • the processed image frame may be displayed on the display unit 151 or stored in the memory 170 .
  • the microphone 122 processes an external sound signal into electrical voice data.
  • the processed voice data may be utilized in various ways according to a function (or a running application program) being performed by the mobile terminal 100 . Meanwhile, various noise removal algorithms for removing noise generated in the process of receiving an external sound signal may be applied to the microphone 122 .
  • the user input unit 123 is for receiving information from a user, and when information is input through the user input unit 123, the processor 180 may control the operation of the mobile terminal 100 to correspond to the input information.
  • the user input unit 123 may include a mechanical input means (or a mechanical key, for example, a button located on the front/rear or side of the mobile terminal 100, a dome switch, a jog wheel, a jog switch, etc.) and a touch input means.
  • the touch input means may consist of a virtual key, a soft key, or a visual key displayed on the touch screen through software processing, or of a touch key disposed on a part other than the touch screen.
  • the learning processor 130 may train a model composed of an artificial neural network by using the training data.
  • the learned artificial neural network may be referred to as a learning model.
  • the learning model may be used to infer a result value with respect to new input data other than the training data, and the inferred value may be used as a basis for a decision to perform a certain operation.
  • the learning processor 130 may include a memory integrated or implemented in the mobile terminal 100 .
  • the learning processor 130 may be implemented using the memory 170 , an external memory directly coupled to the mobile terminal 100 , or a memory maintained in an external device.
  • the sensing unit 140 may acquire at least one of internal information of the mobile terminal 100 , surrounding environment information of the mobile terminal 100 , and user information by using various sensors.
  • sensors included in the sensing unit 140 include a proximity sensor, an illuminance sensor, an acceleration sensor, a magnetic sensor, a gyro sensor, an inertial sensor, an RGB sensor, an IR sensor, a fingerprint recognition sensor, an ultrasonic sensor, an optical sensor, a microphone, a lidar, a radar, and the like.
  • the output unit 150 may generate an output related to visual, auditory or tactile sense.
  • the output unit 150 may include a display unit that outputs visual information, a speaker that outputs auditory information, and a haptic module that outputs tactile information.
  • the output unit 150 may include at least one of a display unit 151, a sound output unit 152, a haptic module 153, and an optical output unit 154.
  • the display unit 151 displays (outputs) information processed by the mobile terminal 100 .
  • the display unit 151 may display execution screen information of an application program driven in the mobile terminal 100 or UI (User Interface) and GUI (Graphic User Interface) information according to the execution screen information.
  • the display unit 151 may implement a touch screen by forming a layer structure with the touch sensor or being formed integrally with the touch sensor.
  • a touch screen may function as the user input unit 123 providing an input interface between the mobile terminal 100 and the user, and may provide an output interface between the terminal 100 and the user.
  • the sound output unit 152 may output audio data received from the communication unit 110 or stored in the memory 170 in a call signal reception, a call mode or a recording mode, a voice recognition mode, a broadcast reception mode, and the like.
  • the sound output unit 152 may include at least one of a receiver, a speaker, and a buzzer.
  • the haptic module 153 generates various tactile effects that the user can feel.
  • a representative example of the tactile effect generated by the haptic module 153 may be vibration.
  • the light output unit 154 outputs a signal for notifying the occurrence of an event by using the light of the light source of the mobile terminal 100 .
  • Examples of the event generated in the mobile terminal 100 may be message reception, call signal reception, missed call, alarm, schedule notification, email reception, information reception through an application, and the like.
  • the memory 170 may store data supporting various functions of the mobile terminal 100 .
  • the memory 170 may store input data obtained from the input unit 120 , learning data, a learning model, a learning history, and the like.
  • the processor 180 may determine at least one executable operation of the mobile terminal 100 based on information determined or generated using a data analysis algorithm or a machine learning algorithm. Then, the processor 180 may control the components of the mobile terminal 100 to perform the determined operation.
  • the processor 180 may request, search, receive, or utilize the data of the learning processor 130 or the memory 170, and may control the components of the mobile terminal 100 to execute a predicted operation or an operation determined to be desirable among the at least one executable operation.
  • the processor 180 may generate a control signal for controlling the corresponding external device and transmit the generated control signal to the corresponding external device.
  • the processor 180 may obtain intention information with respect to a user input and determine a user's requirement based on the obtained intention information.
  • the processor 180 may obtain intention information corresponding to a user input by using at least one of a speech-to-text (STT) engine for converting a voice input into a character string and a natural language processing (NLP) engine for obtaining intention information of a natural language.
  • At this time, at least one of the STT engine and the NLP engine may be configured as an artificial neural network, at least a part of which is trained according to a machine learning algorithm. At least one of the STT engine and the NLP engine may be trained by the learning processor 130, trained by an external server, or trained by distributed processing thereof.
  • the processor 180 may collect history information including user feedback on the operation contents or operation of the mobile terminal 100 and store it in the memory 170 or the learning processor 130, or transmit it to an external device such as an external server. The collected history information may be used to update the learning model.
  • the processor 180 may control at least some of the components of the mobile terminal 100 in order to drive an application program stored in the memory 170 . Furthermore, in order to drive the application program, the processor 180 may operate two or more of the components included in the mobile terminal 100 in combination with each other.
  • FIG. 2 is a view for explaining a noise removal method according to the prior art.
  • FIG. 3 is a view for explaining an example of adjusting the amount of ambient noise inflow according to an embodiment of the present disclosure.
  • the noise removal module 200 removes the noise signal n from the voice signal y including the original sound signal s0 and the noise signal n.
  • the noise removal module 200 may output an estimated original sound signal s1 similar to the input original sound signal s0.
  • the noise removal module 200 may identify the noise signal n, generate a signal having a waveform opposite to that of the identified noise signal n, and cancel the noise signal n.
  • Although the noise signal n can be effectively removed in this way, there is a problem in that the noise signal n corresponding to the ambient noise is always removed.
  • According to the present disclosure, the inflow amount of ambient noise is adjustable.
  • the mobile terminal 100 may include a noise removal module 310 and a mixer 330 .
  • the noise removal module 310 and the mixer 330 may be included in the processor 180 of FIG. 1 or may exist separately from the processor 180 .
  • the microphone 122 may receive a voice signal y from the outside.
  • the voice signal y may include an original sound signal s0 corresponding to the voice output by the target object and a noise signal n corresponding to ambient noise.
  • the noise removal module 310 may output the estimated original sound signal s1 obtained by removing the noise signal n from the voice signal y.
  • the noise removal module 310 may separate the original sound signal s0 and the noise signal n from the voice signal y.
  • the noise removal module 310 may generate an opposite signal having a waveform opposite to that of the noise signal n, and cancel the noise signal n by using the generated opposite signal. Accordingly, an estimated original sound signal s1 similar to the original sound signal s0 may be obtained.
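  • As a minimal illustration of the cancellation described above (assuming the noise waveform n has already been identified; the function and variable names below are hypothetical and NumPy is used for convenience), the opposite-phase signal can be generated and added to the voice signal y as follows:

```python
import numpy as np

def cancel_noise(voice: np.ndarray, noise_estimate: np.ndarray) -> np.ndarray:
    """Remove an identified noise waveform n from the voice signal y = s0 + n.

    Generating the waveform opposite to the identified noise and adding it to y
    cancels the noise, leaving the estimated original sound signal s1 ~ s0.
    """
    opposite = -noise_estimate        # signal with a waveform opposite to n
    return voice + opposite           # s1 = y - n

# Toy example: a 100 Hz "original sound" plus white noise.
fs = 16000
t = np.arange(fs) / fs
s0 = np.sin(2 * np.pi * 100.0 * t)    # original sound signal
n = 0.3 * np.random.randn(fs)         # ambient noise
y = s0 + n                            # voice signal picked up by the microphone
s1 = cancel_noise(y, n)               # equals s0 only because n is known exactly here
```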
  • the mixer 330 may mix the estimated original sound signal s1 and the voice signal y, and output the mixed result.
  • the mixer 330 may mix the estimated original sound signal s1 and the voice signal y using the scaling factor α.
  • the mixing voice signal, which is the mixing result of the mixer 330, may be expressed as [Equation 1] below.
  • [Equation 1] s_mix = α·s1 + (1 - α)·y, where s_mix denotes the mixed voice signal, s1 the estimated original sound signal, and y the voice signal input through the microphone.
  • the scaling factor α is a factor used to adjust the amount of ambient noise, and may have any value between 0 and 1, inclusive.
  • the reason that the voice signal y is used instead of the noise signal n in the (1 - α)·y term corresponding to the amount of ambient noise is that the estimated original sound signal s1 may be distorted in the process of removing the noise signal n.
  • Because the voice signal y includes the original sound signal s0, using y instead of n compensates for the distortion of the estimated original sound signal s1.
  • When the value of the scaling factor α is 1, the amount of ambient noise introduced may be 0, that is, no ambient noise is mixed in.
  • When the value of the scaling factor α is 0, the weight (1 - α) applied to the voice signal y becomes 1, so the ambient noise is introduced in full.
  • the value of the scaling factor α may be set as a default or may be set according to a user input.
  • the value of the scaling factor α may be associated with an ambient noise mixing level determined through manipulation of a mixing level adjustment menu, which will be described later.
  • the amount of ambient noise can be adjusted, so that the user can remove the ambient noise to a desired degree.
  • a sense of presence appropriate to the recording environment of the video may be delivered to the viewer of the video.
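  • A minimal sketch of the mixing of [Equation 1], assuming the estimated original sound signal s1 and the voice signal y are time-aligned sample arrays of equal length (the function name is illustrative and not taken from the disclosure):

```python
import numpy as np

def mix_ambient_noise(s1: np.ndarray, y: np.ndarray, alpha: float) -> np.ndarray:
    """Mixed voice signal of [Equation 1]: s_mix = alpha * s1 + (1 - alpha) * y.

    alpha = 1 keeps only the noise-removed estimate (no ambient noise introduced);
    alpha = 0 passes the voice signal y through with its ambient noise intact.
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("scaling factor alpha must be between 0 and 1")
    return alpha * s1 + (1.0 - alpha) * y
```

  • For example, mix_ambient_noise(s1, y, 0.5) weights the noise-removed estimate and the raw microphone signal equally, letting roughly half of the ambient noise through.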
  • FIG. 4 is a view for explaining in detail a process in which a removal rate of a noise signal from an original sound signal input through a microphone is adjusted according to an embodiment of the present disclosure.
  • the mobile terminal 100 may include a plurality of microphones.
  • two microphones 122a and 122b are used as an example.
  • the processor 180 may include a noise removal module 310 , a preprocessor 320 , a mixer 330 , and a postprocessor 350 .
  • the noise removal module 310 may remove a noise signal from a voice signal input through the first microphone 122a or the second microphone 122b.
  • the preprocessor 320 may preprocess the voice signal input through the first microphone 122a or the second microphone 122b.
  • the mixer 330 may mix the original sound signal from which the noise signal is removed and the audio signal.
  • the mixer 330 may mix an original sound signal and an audio signal based on the ambient noise mixing level.
  • the post-processing unit 350 may post-process the mixed voice signal representing the output result of the mixer 330 .
  • the noise removal module 310 may include a first amplifier 311 , a first digital filter 313 , a signal separator 315 , and a first dynamic range compressor 317 .
  • the first amplifier 311 may amplify a voice signal input through the first microphone 122a or the second microphone 122b.
  • the first digital filter 313 may filter the amplified voice signal.
  • the first digital filter 313 may correct the tone characteristics of the voice signal.
  • the signal separator 315 may separate the filtered voice signal into an original sound signal and a noise signal.
  • the signal separation unit 315 may separate a voice signal into an original sound signal and a noise signal by using a well-known deep learning algorithm or machine learning algorithm for noise cancellation.
  • the noise signal may be a signal corresponding to the surrounding voice signal.
  • the signal separator 315 may obtain an estimated original sound signal, which approximates the original sound signal, by removing the separated noise signal.
  • the first dynamic range compressor 317 may compress the dynamic range of the estimated original sound signal.
  • the dynamic range of the estimated original sound signal may be a range between the largest magnitude and the smallest magnitude of the estimated original sound signal.
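  • The disclosure does not specify the compressor's transfer curve; the sketch below is one conventional static compressor (the threshold and ratio parameters are assumptions, not disclosed values) that narrows the range between the largest and smallest magnitudes of a signal:

```python
import numpy as np

def compress_dynamic_range(x: np.ndarray, threshold: float = 0.5, ratio: float = 4.0) -> np.ndarray:
    """Static downward compressor: the part of each sample's magnitude exceeding
    `threshold` is divided by `ratio`, which narrows the range between the largest
    and smallest magnitudes of the signal."""
    magnitude = np.abs(x)
    compressed = magnitude.copy()
    over = magnitude > threshold
    compressed[over] = threshold + (magnitude[over] - threshold) / ratio
    return np.sign(x) * compressed
```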
  • the preprocessor 320 may include a delay time compensator 321 , a second amplifier 323 , and a second digital filter 325 .
  • the delay time compensator 321 may compensate for the difference between the time it takes for the voice signal to reach the mixer 330 through the noise removal module 310 and the time it takes for the voice signal to reach the mixer 330 through the preprocessor 320.
  • the delay time compensator 321 may compensate for the delay time through phase shifting of the voice signal.
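  • A minimal sketch of delay compensation as a whole-sample shift; the disclosure mentions phase shifting but gives no algorithm, and the delay value (in practice the latency of the noise removal path) is assumed here to be known and smaller than the buffer length:

```python
import numpy as np

def compensate_delay(x: np.ndarray, delay_samples: int) -> np.ndarray:
    """Delay the pre-processed voice signal by `delay_samples` so it reaches the
    mixer 330 at the same time as the signal that went through the slower noise
    removal path. Zero-padding at the front keeps the output length unchanged."""
    if delay_samples <= 0:
        return x.copy()
    return np.concatenate([np.zeros(delay_samples, dtype=x.dtype), x[:-delay_samples]])
```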
  • the second amplifier 323 may amplify the audio signal.
  • the second digital filter 325 may filter the amplified voice signal.
  • the second digital filter 325 may correct distortion of the amplified voice signal.
  • the mixer 330 may mix the estimated original sound signal output from the noise removal module 310 and the filtered audio signal output from the preprocessor 320 .
  • the mixer 330 may mix the estimated original sound signal and the audio signal based on the ambient noise mixing level, and may output a mixed audio signal indicating the mixing result.
  • the post-processing unit 350 may include a second dynamic range compressor 351 , a third amplifier 353 , and an encoder 355 .
  • the second dynamic range compressor 351 may compress the dynamic range of the mixed voice signal output from the mixer 330 .
  • the third amplifier 353 may amplify a mixed voice signal having a compressed dynamic range.
  • the encoder 355 may encode the amplified speech signal.
  • the encoded mixed voice signal may be matched with a moving picture and stored in the memory 170 .
  • the encoded mixed voice signal, the moving picture, and the ambient noise mixing level may be stored together in the memory 170 .
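  • Putting the blocks of FIG. 4 together, a simplified single-channel chain could look like the sketch below, reusing the helper functions sketched earlier. The signal separator is stubbed out because the disclosure only states that a known deep learning or machine learning algorithm is used; the amplifier gains are unity placeholders, and the digital filters 313 and 325, the third amplifier 353, and the encoder 355 are omitted:

```python
import numpy as np

def process_microphone_signal(y: np.ndarray, alpha: float, delay_samples: int,
                              separate=lambda y: (y, np.zeros_like(y))) -> np.ndarray:
    """Simplified FIG. 4 chain for one microphone buffer y.

    `separate` stands in for the signal separator 315 and must return a tuple
    (estimated_original_sound, noise); the default assumes no noise is present.
    """
    # Noise removal path (310): amplify -> separate -> compress dynamic range.
    s = 1.0 * y                                  # first amplifier 311 (unity-gain placeholder)
    s_est, _noise = separate(s)                  # signal separator 315
    s_est = compress_dynamic_range(s_est)        # first dynamic range compressor 317

    # Pre-processing path (320): delay compensation -> amplification.
    y_pre = compensate_delay(y, delay_samples)   # delay time compensator 321
    y_pre = 1.0 * y_pre                          # second amplifier 323 (unity-gain placeholder)

    # Mixer 330 applies [Equation 1]; post-processing 350 compresses again.
    mixed = mix_ambient_noise(s_est, y_pre, alpha)
    return compress_dynamic_range(mixed)         # second dynamic range compressor 351
```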
  • FIG. 5 is a flowchart illustrating a method of operating a mobile terminal according to an embodiment of the present disclosure.
  • the processor 180 of the mobile terminal 100 displays a preview screen on the display unit 151 ( S501 ).
  • the processor 180 may display a preview screen on the display unit 151 according to the execution of the camera application installed in the mobile terminal 100 .
  • the preview screen may include an image capturing button for capturing an image and a video capturing button for capturing a moving picture.
  • the processor 180 may start recording a video when a video recording button is selected.
  • the preview screen will be described with reference to FIG. 6 .
  • FIG. 6 is a diagram illustrating an example of a preview screen according to an embodiment of the present disclosure.
  • the preview screen 600 may include a preview image 610 acquired through the camera 121 , a mixing level adjustment menu 630 , and a video recording button 601 .
  • the mixing level adjustment menu 630 may be a menu for adjusting the amount of ambient noise introduced through one or more microphones while shooting a video.
  • the mixing level adjustment menu 630 will be described later in detail.
  • the video recording button 601 may be a button for starting or ending recording of a video.
  • FIG. 5 will be described.
  • While displaying the preview screen, the processor 180 of the mobile terminal 100 determines whether a request for adjusting the ambient noise has been received (S503), and if the request for adjusting the ambient noise is received, determines the ambient noise mixing level according to the received request (S505).
  • the processor 180 may receive a request for adjusting the ambient noise while shooting a video.
  • the processor 180 may receive a request for adjusting ambient noise even before shooting a video. That is, the processor 180 may receive a request for adjusting the ambient noise even when the preview image 610 of FIG. 6 is displayed and the video capture button 601 is not selected.
  • the ambient noise control request may be received through manipulation of the mixing level control menu 630 on the preview screen 600 of FIG. 6 . This will be described later.
  • the processor 180 may determine the mixing level of the ambient noise through manipulation of the mixing level adjustment menu 630 .
  • the preview screen 600 may include a mixing level adjustment menu 630 .
  • the mixing level adjustment menu 630 may be displayed when a video recording command is received.
  • the mixing level adjustment menu 630 may include one or more of a minimum level icon 631, a maximum level icon 633, a mixing level adjustment guide bar 635, a mixing level adjustment button 637, and a mixing level indicator 639.
  • the minimum level icon 631 may be an icon for maximally reducing ambient noise. When the minimum level icon 631 is selected, the ambient noise mixing level may be set to the minimum.
  • the minimum value of the ambient noise mixing level may be 0, and the maximum value of the ambient noise mixing level may be 100. However, this is only an example and may vary according to user settings.
  • the maximum level icon 633 may be an icon for maximally increasing ambient noise. When the maximum level icon 633 is selected, the ambient noise mixing level may be set to the maximum.
  • the mixing level adjustment guide bar 635 may guide selection of a mixing level of ambient noise.
  • the mixing level adjusting guide bar 635 may be divided into a plurality of levels.
  • the mixing level adjustment button 637 may move on the mixing level adjustment guide bar 635 and may be a button for selecting a specific mixing level.
  • the mixing level adjustment button 637 may be located at any one of a plurality of levels partitioned on the mixing level adjustment guide bar 635 .
  • a user may select a mixing level of ambient noise through a touch input to the mixing level adjustment button 637 .
  • the mixing level indicator 639 may be an indicator indicating the value of the mixing level selected through the mixing level control button 637 . The user may check how much of the ambient noise is introduced through the mixing level indicator 639 .
  • As the value of the mixing level indicator 639 increases, the amount of ambient noise introduced may increase, and as the value of the mixing level indicator 639 decreases, the amount of ambient noise introduced may decrease.
  • When the ambient noise mixing level is set to the minimum, the value of the scaling factor α may be 1.
  • When the ambient noise mixing level is set to the maximum, the value of the scaling factor α may be 0.
  • FIG. 5 will be described.
  • the processor 180 of the mobile terminal 100 separates the voice signal input through the microphone 122 into an original sound signal and an ambient noise signal (S507).
  • the processor 180 may separate the voice signal into an original sound signal and an ambient noise signal.
  • the ambient noise signal may be a noise signal.
  • the noise removal module 310 of the processor 180 may separate the voice signal into an original sound signal and an ambient noise signal and remove the ambient noise signal. Accordingly, the processor 180 may obtain an estimated original sound signal similar to the original sound signal.
  • the processor 180 may use a well-known deep learning algorithm or machine learning algorithm for noise cancellation to separate an original sound signal and an ambient noise signal from a voice signal, and may remove the ambient noise signal.
  • Based on the determined ambient noise mixing level, the processor 180 of the mobile terminal 100 mixes the separated original sound signal and the voice signal input through the microphone 122 (S509).
  • the mixer 330 of the processor 180 may generate a mixed voice signal by mixing the separated original sound signal and the voice signal.
  • the accurately separated original sound signal may be the estimated original sound signal s1.
  • the mixed voice signal representing the mixing result may be expressed as in [Equation 1] above, that is, s_mix = α·s1 + (1 - α)·y.
  • the scaling factor α is a factor used to adjust the amount of ambient noise, and may have any value between 0 and 1, inclusive.
  • the scaling factor α may be a value corresponding to the ambient noise mixing level. As the value of the ambient noise mixing level increases, the value of the scaling factor α may decrease. As the value of the ambient noise mixing level decreases, the value of the scaling factor α may increase.
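  • The disclosure specifies only that the scaling factor α decreases as the ambient noise mixing level increases, with the exact correspondence given by the table of FIG. 7. One simple mapping consistent with that relationship (linear over a 0 to 100 level range, which is an assumption for illustration, not the disclosed table) is sketched below:

```python
def scaling_factor_from_level(level: float, max_level: float = 100.0) -> float:
    """Map an ambient noise mixing level in [0, max_level] to the scaling factor alpha.

    A higher mixing level lets in more ambient noise, so alpha decreases as the
    level increases; a linear mapping is only one possible choice.
    """
    if not 0.0 <= level <= max_level:
        raise ValueError("ambient noise mixing level out of range")
    return 1.0 - level / max_level
```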
  • the processor 180 of the mobile terminal 100 determines whether a video recording end command has been received (S511), and upon receiving the video recording end command, stores the captured video, the mixed voice signal representing the mixing result, and the ambient noise mixing level in the memory 170 (S513).
  • the processor 180 may output a mixed voice signal reflecting the mixing result through a speaker provided in the mobile terminal 100 when playing the video.
  • If the processor 180 of the mobile terminal 100 does not receive the ambient noise adjustment request, a preset ambient noise mixing level is acquired (S515).
  • the processor 180 may determine the amount of ambient noise introduced by using a preset ambient noise mixing level.
  • the preset ambient noise mixing level may be the most recently stored ambient noise mixing level before shooting a video.
  • the preset ambient noise mixing level may be a default level.
  • the level set by default may be 50, but this is only an example.
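  • The flow of steps S501 to S515 can be summarized as in the sketch below. The terminal object and its method names are purely illustrative and are not defined by the disclosure; the mixing helpers sketched earlier are reused:

```python
def record_video(terminal, default_level: float = 50.0) -> None:
    """Illustrative control flow corresponding to FIG. 5 (S501 to S515)."""
    terminal.display_preview_screen()                          # S501
    if terminal.ambient_noise_request_received():              # S503
        level = terminal.read_mixing_level_menu()              # S505
    else:
        level = default_level                                  # S515: preset mixing level
    alpha = scaling_factor_from_level(level)

    while not terminal.recording_end_command_received():       # S511
        y = terminal.read_microphone_buffer()
        s_est = terminal.separate_original_sound(y)            # S507
        terminal.buffer_audio(mix_ambient_noise(s_est, y, alpha))  # S509

    # S513: store the captured video, the mixed voice signal, and the mixing level.
    terminal.store(video=terminal.captured_video(),
                   mixed_audio=terminal.flush_audio(),
                   mixing_level=level)
```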
  • FIG. 7 is a table illustrating a relationship between a scaling factor and an ambient noise mixing level according to an embodiment of the present disclosure.
  • the scaling factor ⁇ is a factor described in [Equation 1], and the ambient noise mixing level is a level selected from the mixing level adjustment menu 630 of FIG. 6 .
  • For example, according to the table of FIG. 7, the value of the scaling factor α may be set to 1, 0.8, 0.5, 0.2, or 0, with larger values corresponding to lower ambient noise mixing levels.
  • the processor 180 may obtain the ambient noise mixing level selected from the mixing level adjustment menu 630 and determine a scaling factor α corresponding to the obtained ambient noise mixing level.
  • the processor 180 may obtain a mixed voice signal as in [Equation 1] by using the determined value of the scaling factor α.
  • the present disclosure described above can be implemented as computer-readable code on a medium in which a program is recorded.
  • the computer-readable medium includes all kinds of recording devices in which data readable by a computer system is stored. Examples of computer-readable media include a hard disk drive (HDD), a solid state disk (SSD), a silicon disk drive (SDD), a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, and an optical data storage device.
  • the computer may include the processor 180 of the artificial intelligence device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Telephone Function (AREA)

Abstract

The present disclosure relates to a mobile terminal capable of adjusting the amount of ambient noise inflow. The mobile terminal may comprise: one or more microphones for receiving a voice signal including an original sound signal and a noise signal; a camera for acquiring a video; a display unit for displaying a preview screen including the image acquired by the camera and a mixing level adjustment menu for adjusting the amount of ambient noise inflow; and a processor for receiving an ambient noise adjustment request through the mixing level adjustment menu, determining a noise mixing level according to the received request, and adjusting the amount of ambient noise inflow according to the determined noise mixing level.
PCT/KR2020/003862 2020-03-20 2020-03-20 Mobile terminal WO2021187645A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/KR2020/003862 WO2021187645A1 (fr) 2020-03-20 2020-03-20 Mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/KR2020/003862 WO2021187645A1 (fr) 2020-03-20 2020-03-20 Mobile terminal

Publications (1)

Publication Number Publication Date
WO2021187645A1 true WO2021187645A1 (fr) 2021-09-23

Family

Family ID: 77768208

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/003862 WO2021187645A1 (fr) 2020-03-20 2020-03-20 Mobile terminal

Country Status (1)

Country Link
WO (1) WO2021187645A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1093454A (ja) * 1996-08-08 1998-04-10 Motorola Inc Device and method for generating noise in a digital receiver
KR20120034863A (ko) * 2010-10-04 2012-04-13 삼성전자주식회사 Method and apparatus for processing an audio signal in a mobile communication terminal
KR101516589B1 (ko) * 2008-03-25 2015-05-06 에스케이텔레콤 주식회사 Mobile communication terminal and voice signal processing method thereof
KR20160000345A (ko) * 2014-06-24 2016-01-04 엘지전자 주식회사 Mobile terminal and control method thereof
KR20160055023A (ko) * 2014-11-07 2016-05-17 엘지전자 주식회사 Mobile terminal and method for controlling the mobile terminal



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20926058

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20926058

Country of ref document: EP

Kind code of ref document: A1