CN112565973A - Terminal, terminal control method, terminal control device and storage medium - Google Patents


Info

Publication number
CN112565973A
CN112565973A (application CN202011521562.2A; granted as CN112565973B)
Authority
CN
China
Prior art keywords
terminal
target object
audio processing
component
millimeter wave
Prior art date
Legal status: Granted
Application number
CN202011521562.2A
Other languages
Chinese (zh)
Other versions
CN112565973B (en)
Inventor
何昱滨
王杰
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202011521562.2A priority Critical patent/CN112565973B/en
Publication of CN112565973A publication Critical patent/CN112565973A/en
Priority to PCT/CN2021/129846 priority patent/WO2022134910A1/en
Application granted granted Critical
Publication of CN112565973B publication Critical patent/CN112565973B/en
Current legal status: Active

Classifications

    • H04R 1/323 — Details of transducers; arrangements for obtaining a desired directional characteristic only, for loudspeakers
    • H04R 1/326 — Details of transducers; arrangements for obtaining a desired directional characteristic only, for microphones
    • Y02D 30/70 — Reducing energy consumption in wireless communication networks

Landscapes

  • Health & Medical Sciences (AREA)
  • Otolaryngology (AREA)
  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

The embodiment of the application provides a terminal, a terminal control method, a terminal control device, and a storage medium, relating to the technical field of terminals. The terminal comprises a positioning component, a motor control component, and an audio processing component. The positioning component is used for determining related information of a target object, where the related information comprises the contour shape of the target object and/or the positional relationship between the target object and the terminal; the motor control component is used for controlling the rotation of the audio processing component based on the related information; and the audio processing component is used for outputting or collecting audio. The positioning component is coupled with the motor control component, and the motor control component is coupled with the audio processing component. The embodiment of the application improves the effectiveness of audio output and collection.

Description

Terminal, terminal control method, terminal control device and storage medium
Technical Field
The embodiment of the application relates to the technical field of terminals, in particular to a terminal, a terminal control method, a terminal control device and a storage medium.
Background
The terminal has audio output and audio acquisition functions.
In the related art, a speaker for outputting audio is provided at the bottom of the terminal; the bottom of the terminal is also provided with a microphone for collecting audio.
Disclosure of Invention
The embodiment of the application provides a terminal, a terminal control method, a terminal control device, and a storage medium. The technical solutions are as follows:
in one aspect, an embodiment of the present application provides a terminal, where the terminal includes a positioning component, a motor control component, and an audio processing component; the positioning component is used for determining related information of a target object, the related information comprises a contour shape of the target object and/or a positional relationship between the target object and the terminal, the motor control component is used for controlling the rotation of the audio processing component based on the related information, and the audio processing component is used for outputting or collecting audio;
the positioning component is coupled with the motor control component;
the motor control component and the audio processing component are coupled.
In another aspect, an embodiment of the present application provides a terminal control method, which is applied to the terminal described in the above aspect, and the method includes:
transmitting a first millimeter wave signal through the positioning component;
receiving, by the positioning component, a second millimeter wave signal reflected by the target object from the first millimeter wave signal;
determining related information of the target object based on the first millimeter wave signal and the second millimeter wave signal, wherein the related information comprises the contour shape of the target object and/or the position relation between the target object and the terminal;
and controlling the audio processing component to rotate through the motor control component based on the related information, wherein the rotated audio processing component corresponds to the target object.
In another aspect, an embodiment of the present application provides a terminal control apparatus, which is applied to the terminal described in the above aspect, and the apparatus includes:
the signal sending module is used for sending a first millimeter wave signal through the positioning component;
the signal receiving module is used for receiving a second millimeter wave signal reflected by the target object from the first millimeter wave signal through the positioning component;
an information determination module, configured to determine, based on the first millimeter wave signal and the second millimeter wave signal, related information of the target object, where the related information includes a contour shape of the target object and/or a positional relationship between the target object and the terminal;
and the component rotating module is used for controlling the audio processing component to rotate through the motor control component based on the related information, and the rotated audio processing component corresponds to the target object.
In still another aspect, an embodiment of the present application provides a computer-readable storage medium, in which a computer program is stored, and the computer program is loaded and executed by a processor to implement the terminal control method according to the above aspect.
In yet another aspect, embodiments of the present application provide a computer program product including computer instructions stored in a computer-readable storage medium. The processor of the terminal reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the terminal executes the terminal control method.
The technical scheme provided by the embodiment of the application can bring the following beneficial effects:
the audio processing component is controlled to rotate based on the contour shape of the target object and/or the positional relationship between the target object and the terminal, so that the audio processing component can more accurately output audio to the target object or collect audio from the target object, which reduces interference to non-target audiences and improves the effectiveness of audio output and collection.
Drawings
FIG. 1 is a schematic diagram of an audio processing system provided by one embodiment of the present application;
fig. 2 is a schematic diagram of a terminal provided in an embodiment of the present application;
fig. 3 is a schematic diagram of a terminal provided in another embodiment of the present application;
fig. 4 is a schematic diagram of a terminal provided in another embodiment of the present application;
fig. 5 is a flowchart of a terminal control method according to an embodiment of the present application;
fig. 6 is a flowchart of a terminal control method according to another embodiment of the present application;
fig. 7 is a flowchart of a terminal control method according to another embodiment of the present application;
fig. 8 is a block diagram of a terminal control apparatus according to an embodiment of the present application;
fig. 9 is a block diagram of a terminal according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
First, terms related to embodiments of the present application are explained:
Electric motor: commonly referred to simply as a motor, an electromagnetic device that converts or transmits electric energy based on the law of electromagnetic induction.
Stepping Motor: also referred to as a stepper motor, one type of electric motor. A stepping motor is a discretely controlled motor that converts an electrical pulse excitation signal into a corresponding angular or linear displacement; because it advances one step for each input pulse, it is also called a pulse motor. Stepping motors are classified into three basic types: electromechanical, magnetoelectric, and linear.
Millimeter Waves: electromagnetic waves in the frequency range of 30-300 GHz (gigahertz). Under vacuum or free-space conditions, the corresponding wavelength ranges from 1 to 10 mm.
Transceiver: a radio transmitter and receiver mounted in a single unit and sharing part of the same circuitry.
LPAMiD: an integrated radio-frequency front-end module that combines a power amplifier, a duplexer, filters, a switch module, and a low-noise amplifier. Illustratively, an LPAMiD includes an LNA (Low-Noise Amplifier), an integrated multi-mode multi-band PA (Power Amplifier), and a FEMiD.
LNA: an amplifier with a very low noise figure, generally used as the high-frequency or intermediate-frequency preamplifier of radio receivers and in the amplifying circuits of high-sensitivity electronic detection equipment. The LNA largely determines the noise performance of the receiver as a whole. Its role is to pick up extremely weak, uncertain signals from the antenna, typically on the order of microvolts or below -100 dBm, and amplify them to a more useful level, typically about 0.5 to 1 V. The main parameters of an LNA are its noise figure (NF), gain, and linearity; the noise figure measures the magnitude of the noise generated inside the LNA.
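The signal levels quoted above (microvolt-order signals at or below -100 dBm) can be related with a short sketch. A 50-ohm system impedance is assumed here for illustration; the description does not state one.

```python
import math

def dbm_to_vrms(p_dbm: float, impedance_ohms: float = 50.0) -> float:
    """Convert a power level in dBm to RMS voltage across the given impedance."""
    p_watts = 10 ** (p_dbm / 10) / 1000   # dBm -> watts
    return math.sqrt(p_watts * impedance_ohms)  # from P = V^2 / R

# A -100 dBm antenna signal is indeed on the order of microvolts:
v = dbm_to_vrms(-100.0)
print(f"{v * 1e6:.2f} uV")  # prints 2.24 uV
```

This confirms that the two magnitudes given in the text are consistent with each other under the assumed impedance.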
Integrated multimode-multiband PA: refers to a PA that supports multiple modes and frequency bands.
FEMiD (front-end module): an integrated module of the radio-frequency front end that combines the RF switch, filters, and duplexer. The performance of the RF switch (insertion loss, return loss, isolation, harmonic rejection, and power-handling capability) is critical to the RF front-end link. The duplexer isolates the transmitted and received signals so that transmission and reception can both operate normally; it is composed of two groups of band-pass filters with different passband frequencies, which prevents the transmitted signal from leaking into the receiver.
DSP (Digital Signal Processor): a special-purpose integrated processor chip that implements digital signal processing techniques.
Referring to fig. 1, a schematic diagram of an audio processing system provided by an embodiment of the present application is shown, which may include a terminal 100 and a target object 200.
In the embodiment of the present application, the terminal 100 refers to an electronic device having an audio output and/or audio capture function. Illustratively, the terminal 100 may be a mobile phone, a tablet computer, an electronic book reader, a multimedia playing device, a wearable device, or the like.
In the embodiment of the present application, the target object 200 refers to an object with an audio output or audio collection requirement. Illustratively, the target object 200 may include at least one of: persons and articles. For example, the target object 200 may include at least one person, or at least one article, or at least one person and at least one article, which is not limited in this embodiment. The target object 200 may be in a stationary state or a moving state, which is likewise not limited in the embodiment of the application.
For example, the terminal 100 provided in the embodiment of the present application may output audio to the target object 200, or collect audio from the target object 200, based on the related information of the target object 200. The related information of the target object 200 includes the contour shape of the target object and/or the positional relationship between the target object and the terminal; the terminal 100 outputs or collects audio based on this information, thereby realizing directional output or collection of audio. For audio output, less sound is diffused toward non-target audiences, which reduces interference to other people and reduces wasted power. For audio collection, the sound source to be captured usually lies within a narrow range of directions, so the probability of picking up sound sources in other directions is reduced, lowering the possibility of noise input.
Several embodiments of the present application will be described below.
Referring to fig. 2, a schematic diagram of a terminal according to an embodiment of the present application is shown. The terminal 100 may include: a positioning component 110, a motor control component 120, and an audio processing component 130.
In an embodiment of the present application, the positioning component 110 is used to determine related information of the target object. The related information includes the contour shape of the target object and/or the positional relationship between the target object and the terminal. The contour shape of the target object indicates the appearance of the target object, and the portion toward which audio should be output, or from which audio should be collected, can be determined based on this contour shape. The positional relationship between the target object and the terminal includes at least one of: the distance between the target object and the terminal, and the orientation of the target object relative to the terminal. The distance may refer to the perpendicular distance between the target object and the terminal, for example 50 cm or 10 cm. The orientation indicates where the target object lies when the terminal is taken as the base point; for example, the target object may be located to the south of the terminal, to the north of the terminal, to the northwest of the terminal, and so on.
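The positional relationship described above (a distance plus an orientation with the terminal as the base point) can be sketched as follows. The coordinate convention (+x east, +y north, terminal at the origin) and the function name are illustrative assumptions, not part of the application.

```python
import math

def positional_relationship(target_x_cm: float, target_y_cm: float):
    """Distance (cm) and compass orientation of a target, terminal at the origin.

    Assumed convention for illustration: +x is east, +y is north.
    """
    distance = math.hypot(target_x_cm, target_y_cm)
    # atan2(x, y) gives the angle measured clockwise from north, 0..360 degrees.
    bearing = math.degrees(math.atan2(target_x_cm, target_y_cm)) % 360
    names = ["north", "northeast", "east", "southeast",
             "south", "southwest", "west", "northwest"]
    orientation = names[int((bearing + 22.5) // 45) % 8]
    return distance, orientation

# A target 50 cm due south of the terminal:
print(positional_relationship(0.0, -50.0))  # prints (50.0, 'south')
```

The 8-way compass naming mirrors the examples in the text (south, north, northwest of the terminal).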
The motor control assembly 120 is used to control the rotation of the audio processing assembly 130 based on the relevant information. Illustratively, when the related information includes the contour shape of the target object, the motor control component 120 may control the rotation of the audio processing component 130 based on the contour shape of the target object; when the related information includes the positional relationship between the target object and the terminal, the motor control component 120 may control the rotation of the audio processing component 130 based on the positional relationship between the target object and the terminal.
The audio processing component 130 is used to output or capture audio. The audio processing component 130 is used to output audio to the target object or the audio processing component 130 is used to capture audio from the target object. The audio processing component 130 provided by the embodiment of the present application is rotatable. In a possible implementation manner, the audio processing component may include one or more components, for example, the audio processing component may include one or more microphones, the audio processing component may include one or more earphones, the audio processing component may include one or more speakers, and the like, which is not limited in this embodiment.
The positioning component 110 and the motor control component 120 are coupled; the motor control component 120 and the audio processing component 130 are coupled. In a possible implementation manner, after the positioning component 110 determines the related information of the target object, the related information is sent to the motor control component 120, and the motor control component 120 controls the rotation of the audio processing component 130 based on the related information of the target object.
In summary, in the technical solution provided by the embodiment of the present application, the audio processing component is controlled to rotate based on the contour shape of the target object and/or the positional relationship between the target object and the terminal, so that the audio processing component can more accurately output audio to the target object or collect audio from the target object, which reduces interference to non-target audiences and improves the effectiveness of audio output and collection.
Please refer to fig. 3, which illustrates a schematic diagram of a terminal according to another embodiment of the present application.
In a possible implementation manner, the positioning component 110 includes a millimeter wave transceiver 111, a radio frequency integration unit 112, and a digital signal processing unit 113; the millimeter wave transceiver 111 is coupled with the radio frequency integrated unit 112; the millimeter wave transceiver 111 is coupled with the digital signal processing unit 113.
The millimeter wave transceiver 111 refers to a device for transmitting and receiving millimeter waves. Illustratively, the millimeter wave transceiver 111 includes a transceiver. The embodiment of the application adopts millimeter waves because they can operate in all weather conditions, have a short wavelength and a wide frequency band, and propagate well in the atmosphere. Millimeter waves can also easily penetrate a mobile phone housing, unaffected by the structural appearance. Millimeter wave systems generally adopt a working frequency of 24 GHz or 77 GHz; 77 GHz offers higher ranging accuracy, better horizontal angular resolution, a smaller antenna volume, and less signal interference.
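The two working frequencies mentioned above map to free-space wavelengths via lambda = c / f, which also illustrates why a 77 GHz antenna can be smaller:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_mm(freq_ghz: float) -> float:
    """Free-space wavelength in millimeters for a frequency given in GHz."""
    return C / (freq_ghz * 1e9) * 1e3

print(round(wavelength_mm(77.0), 2))  # prints 3.89 (mm), inside the 1-10 mm band
print(round(wavelength_mm(24.0), 2))  # prints 12.49 (mm), just above the band edge
```

The shorter wavelength at 77 GHz is what permits the smaller antenna volume claimed in the description.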
The radio frequency integrated unit 112 refers to a device that converts a signal into a radio waveform and transmits it through a resonant antenna. Illustratively, the radio frequency integrated unit 112 includes an LPAMiD.
The digital signal processing unit 113 refers to a device for processing a transmission signal and a reception signal. Illustratively, the digital signal processing unit 113 includes a DSP.
In a possible implementation, the motor control assembly 120 includes a stepper motor 121.
In a possible implementation, the audio processing component 130 includes at least one of: microphone, speaker, earphone. It should be noted that fig. 3 only illustrates the audio processing component 130 as a microphone, and in other possible implementations, the audio processing component 130 may also be a component in other forms, which is not limited in this embodiment of the application.
Illustratively, the terminal 100 may further include a body 140, also referred to as a housing, which is the main structural frame of the terminal 100. The body 140 is generally hexahedral in shape, and some edges or corners of the hexahedron may be formed with arc-shaped chamfers. The front surface of the body 140 is generally a rounded or right-angled rectangle.
The body 140 includes a middle frame 141, which is the frame running around the body 140. In a possible implementation, at least one hole 150 is formed in the body 140, and the audio processing component 130 corresponds to the position of the at least one hole 150; illustratively, the at least one hole 150 is formed in the middle frame 141. (The audio processing component "corresponding to" the position of a hole means that the component can output or collect audio through that hole.) Where the audio processing component 130 collects audio, the hole may be called a sound pickup hole, and the audio of the target object can be collected by the audio processing component 130 through the at least one hole 150; where the audio processing component 130 outputs audio, the hole may be called a sound output hole, and the audio output by the component can be transmitted to the target object through the at least one hole 150. Of course, in other possible implementations the hole 150 may have other names, which is not limited in the embodiment of the present application.
Illustratively, the terminal 100 may further include a display screen (not shown) disposed on the body 140, for example, the display screen may be disposed on a front surface, a back surface or a periphery of the body 140, which is not limited in this embodiment. The display screen is used for displaying images and colors. The display screen is illustratively a touch display screen, and the touch display screen has a function of receiving touch operations (such as clicking, sliding, pressing and the like) of a user in addition to a display function. The display screen may be a rigid screen or a flexible screen, which is not limited in the embodiment of the present application.
In a possible implementation, as shown in fig. 4, the terminal 100 further includes a camera assembly 160, which is used to acquire an initial positional relationship of the target object. The camera assembly 160 is coupled to the positioning component 110, the positioning component 110 is coupled to the motor control component 120, and the motor control component 120 is coupled to the audio processing component 130. In a possible implementation, the camera assembly 160 is coupled to the digital signal processing unit 113 in the positioning component 110. For descriptions of the positioning component 110, the motor control component 120, and the audio processing component 130, reference is made to the above; details are not repeated here. When the target object includes an article, for example an instrument, the embodiment of the present application uses the camera assembly for a preliminary determination of its position, after which the positioning component refines the position and tracks it continuously.
To sum up, in the technical scheme provided by the embodiment of the application, millimeter wave signals are sent and received to determine the related information of the target object; because millimeter waves have strong anti-interference capability and good penetrability, the determined related information of the target object is more accurate.
When the target object comprises an article, the camera assembly is firstly used for preliminary positioning, and then the positioning assembly is used for realizing accurate positioning, so that the audio working effect is better.
Referring to fig. 5, a flowchart of a terminal control method according to an embodiment of the present application is shown. The method may be applied to the terminals shown in fig. 2 to 4, and may include the following steps.
Step 501, a first millimeter wave signal is sent through a positioning component.
In a possible implementation, the positioning component transmits the first millimeter wave signal through the array antenna. Illustratively, the positioning component transmits the first millimeter wave signal in various directions.
In a possible implementation manner, the first millimeter wave signal is an FMCW (Frequency Modulated Continuous Wave) signal. An FMCW signal is a high-frequency continuous wave whose frequency varies with time following a triangular-wave law.
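The triangular frequency law described above can be sketched as follows; the carrier frequency, bandwidth, and sweep period are illustrative values (only the 77 GHz carrier is mentioned in the description), not parameters from the application.

```python
import numpy as np

def fmcw_instantaneous_freq(t, f0=77e9, bandwidth=1e9, period=1e-3):
    """Instantaneous frequency of a triangular FMCW sweep at times t (seconds).

    The frequency ramps linearly from f0 up to f0 + bandwidth over the first
    half of each period, then back down: the triangular-wave law described above.
    """
    phase = np.mod(t, period) / period   # position within the period, 0..1
    tri = 1 - np.abs(2 * phase - 1)      # triangle wave: 0 -> 1 -> 0
    return f0 + bandwidth * tri

t = np.linspace(0, 1e-3, 5)              # one period, 5 sample points
print(fmcw_instantaneous_freq(t) / 1e9)  # frequencies in GHz: 77, 77.5, 78, 77.5, 77
```

Because the reflected (second) signal follows the same law delayed in time, mixing the two yields a constant beat frequency over most of each sweep, which is what the later steps exploit.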
Step 502, receiving, by the positioning component, a second millimeter wave signal of the first millimeter wave signal reflected by the target object.
The first millimeter wave signal is reflected off the surface of an obstacle (for example, the surface of the target object); the reflection is received by the receiving antenna and then amplified and demodulated, yielding the second millimeter wave signal. When the first millimeter wave signal is an FMCW signal, the second millimeter wave signal is also an FMCW signal.
In a possible implementation, when the positioning component transmits the first millimeter wave signal in each direction, the positioning component receives the second millimeter wave signal from each direction.
In step 503, the relevant information of the target object is determined based on the first millimeter wave signal and the second millimeter wave signal.
In the embodiment of the present application, the related information includes a contour shape of the target object and/or a positional relationship between the target object and the terminal.
When the first and second millimeter wave signals are both FMCW signals, their frequencies follow the same triangular-wave law, but with a time difference between them; this time difference can be used to determine the distance between the target object and the terminal.
In a possible implementation manner, the first millimeter wave signal and the second millimeter wave signal are mixed by a mixer to form an intermediate-frequency signal. Let f denote the frequency of the intermediate-frequency signal (obtained by performing a fast Fourier transform on it), d the distance between the terminal and the target object, s the frequency-domain slope of the FMCW signal, and c the speed of light. The distance between the terminal and the target object can then be determined by the following formula:

d = (c · f) / (2s)
Based on d-value samples at multiple points in the scanning area, obtained from the millimeter wave signal processing, a distribution map of d values over the scanning area can further be obtained, and the contour shape of the target object can be identified in this distribution map by a software algorithm.
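The ranging principle described above (mix to an intermediate-frequency signal, recover its frequency f by fast Fourier transform, then apply d = c·f/(2s)) can be exercised end-to-end in a short sketch. The chirp slope, sample rate, and FFT length below are illustrative assumptions, not values from the application.

```python
import numpy as np

C = 3e8     # speed of light, m/s (approximation)
S = 30e12   # FMCW frequency-domain slope s, Hz per second (illustrative)
FS = 10e6   # sampling rate of the intermediate-frequency signal, Hz (illustrative)
N = 4096    # number of samples / FFT length

def simulate_beat(distance_m: float) -> np.ndarray:
    """Ideal intermediate-frequency signal for a target at distance_m: f = 2*d*s/c."""
    f_beat = 2.0 * distance_m * S / C
    t = np.arange(N) / FS
    return np.cos(2.0 * np.pi * f_beat * t)

def estimate_distance(if_signal: np.ndarray) -> float:
    """Recover f with a fast Fourier transform, then apply d = c*f / (2*s)."""
    spectrum = np.abs(np.fft.rfft(if_signal))
    f = np.argmax(spectrum) * FS / N   # frequency of the strongest bin
    return C * f / (2.0 * S)

print(estimate_distance(simulate_beat(0.5)))  # ~0.5 m, within one FFT bin of truth
```

The residual error comes only from FFT bin quantization (FS/N here), which is why a longer transform or interpolation between bins improves the d-value samples used for the distribution map.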
And step 504, controlling the audio processing assembly to rotate through the motor control assembly based on the related information, wherein the rotated audio processing assembly corresponds to the target object.
The rotated audio processing component "corresponding to" the target object means that the rotated audio processing component faces the target object; that is, the rotated component points at the target object in order to output audio to it, or to collect audio from it.
In the case where the target object includes a person, the position of the person's head is taken as the target part for audio output or audio collection. Illustratively, the audio processing component is controlled to rotate by the motor control component, so that the rotated audio processing component corresponds to the target part; the audio processing component then outputs audio to, or collects audio from, the target part.
In the case where the target object includes an article, the sound-emitting position of the article is taken as the target part for audio collection. Illustratively, the audio processing component is controlled to rotate by the motor control component, so that the rotated audio processing component corresponds to the target part and collects audio from it.
The motor control component converts the signal from the positioning component into a control signal for the stepping motor. Taking a loudspeaker as an example of the audio processing component: the stepping motor can rotate the loudspeaker slightly about the z axis, so that the audio direction of the loudspeaker is adjusted within the x-y plane; alternatively, the stepping motor can rotate the loudspeaker slightly about the y axis, so that the audio direction is adjusted within the x-z plane. With this adjustment, the audio direction of the loudspeaker can point outward in various directions starting from the hole, and the final direction is determined by the target located by the positioning component.
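The conversion described above, from a target direction reported by the positioning component to a stepping-motor control signal, can be sketched as a step-count calculation. The 1.8-degree step angle and the function name are illustrative assumptions (1.8 degrees is a common full-step angle, not a value stated in the application).

```python
def steps_for_rotation(current_deg: float, target_deg: float,
                       step_angle_deg: float = 1.8) -> int:
    """Signed number of stepper pulses to turn the audio component toward the target.

    Positive means counter-clockwise about the chosen axis (e.g. the z axis, to
    steer the audio direction within the x-y plane, as in the description).
    """
    # Take the shortest rotation by wrapping the difference into [-180, 180).
    delta = (target_deg - current_deg + 180.0) % 360.0 - 180.0
    return round(delta / step_angle_deg)

# Target located 27 degrees clockwise from the current audio direction:
print(steps_for_rotation(90.0, 63.0))  # prints -15
```

Because the stepper advances one fixed step per pulse, emitting this signed pulse count (with a direction signal) is sufficient to re-aim the component; no position feedback is needed for small adjustments.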
According to the method and the device, the audio processing component of the terminal is dynamically adjusted based on the related information of the target object, so that the terminal can output or capture audio accurately; accurate audio output reduces interference to others, and accurate audio capture greatly reduces noise sources and improves recording quality.
In summary, in the technical solution provided in the embodiment of the present application, the audio processing component is controlled to rotate based on the contour shape of the target object and/or the positional relationship between the target object and the terminal, so that the audio processing component can output audio to, or capture audio from, the target object more accurately; interference to non-target listeners is reduced, and the effectiveness of audio output or capture is improved.
In an exemplary embodiment, the target object includes a person, and the related information includes a human-shaped outline of the person and/or a positional relationship between the person and the terminal. Referring to fig. 6, a flowchart of a terminal control method according to another embodiment of the present application is shown. The method may be applied to the terminals shown in fig. 2 to 4, and may include the following steps.
Step 601, displaying indication information, wherein the indication information is used for indicating whether to start an audio pointing function.
The audio pointing function refers to a function in which an audio processing component outputs audio to or captures audio from a specific region. The specific region is a region where the target object is located.
In a possible implementation, the indication information is displayed in the system settings interface; if the user confirms enabling the audio pointing function in the system settings interface, the following terminal control method is executed automatically whenever the terminal subsequently outputs or captures audio.
In a possible implementation, the indication information is displayed in the audio capture interface; if the user confirms enabling the audio pointing function in the audio capture interface, the following terminal control method is executed automatically whenever the terminal subsequently captures audio. The audio capture interface includes, for example, a call interface, a voice chat interface, a video chat interface, a web conference interface, and the like; the type of the audio capture interface is not limited in the embodiments of the present application.
In a possible implementation, the indication information is displayed in the audio output interface; if the user confirms enabling the audio pointing function in the audio output interface, the following terminal control method is executed automatically whenever the terminal subsequently outputs audio. The audio output interface includes, for example, a music playing interface, a video playing interface, a call interface, and the like; the type of the audio output interface is not limited in the embodiments of the present application.
Step 602, in response to receiving the confirmation instruction of the indication information, sending a first millimeter wave signal through the positioning component.
The confirmation instruction is used to instruct the audio pointing function to be turned on.
In a possible implementation manner, the user triggers a confirmation instruction of the indication information through gestures, touch, voice and the like.
For the description about the transmission of the first millimeter wave signal by the positioning component, reference may be made to the above embodiments, and details are not described here.
It should be noted that in a possible implementation, the terminal does not display the indication information, but turns on the audio pointing function by default.
Step 603, receiving, by the positioning component, a second millimeter wave signal reflected by the target object from the first millimeter wave signal.
In step 604, the relevant information of the target object is determined based on the first millimeter wave signal and the second millimeter wave signal.
In the embodiment of the present application, the related information includes a contour shape of the target object and/or a positional relationship between the target object and the terminal.
For the description of steps 603 to 604, refer to the above embodiments, and are not repeated herein.
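Although the details are not repeated here, the geometry behind step 604 can be sketched under standard radar assumptions that this application does not spell out: the distance follows from the round-trip delay between the first (transmitted) and second (reflected) millimeter wave signals, and the bearing can be estimated from the phase difference between two receiving antennas. The antenna spacing and wavelength values below are hypothetical.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def target_distance_m(round_trip_s: float) -> float:
    """One-way distance is half the round-trip path of the reflected signal."""
    return C * round_trip_s / 2.0

def angle_of_arrival_deg(phase_diff_rad: float,
                         wavelength_m: float,
                         spacing_m: float) -> float:
    """Classic two-antenna estimate: sin(theta) = lambda * dphi / (2*pi*d)."""
    return math.degrees(
        math.asin(wavelength_m * phase_diff_rad / (2 * math.pi * spacing_m))
    )
```

With half-wavelength antenna spacing, a phase difference of pi corresponds to a target at 90° off boresight; a 1 µs round trip corresponds to a target roughly 150 m away (far beyond a handheld use case, but the arithmetic is the same at short range).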
Step 605, determining the head position of the person based on the related information and the head features.
The head features are used to indicate head contour features of the person.
The head position of the person is used to indicate the positional relationship between the head of the person and the terminal. For example, the head position of the person may be that the head is located on the upper side of the terminal.
In a possible implementation, the terminal determines the head area of the person from the human-shaped outline based on the head features; the terminal then determines the positional relationship between the head and the terminal based on the positional relationship between the person and the terminal and the head area of the person.
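One crude way to realize the head-area determination described above — purely illustrative, with the 2-D silhouette representation and the 15% head ratio being assumptions rather than anything stated in this application — is to take the topmost fraction of the person's contour and use its centroid as the head position:

```python
def head_region(silhouette_points, head_ratio=0.15):
    """Estimate a head position from a 2-D person silhouette.

    silhouette_points: iterable of (x, y) contour points, y increasing upward.
    head_ratio: assumed fraction of body height occupied by the head.
    Returns the centroid of the topmost slice of the contour.
    """
    ys = [p[1] for p in silhouette_points]
    top, bottom = max(ys), min(ys)
    cutoff = top - head_ratio * (top - bottom)
    head = [p for p in silhouette_points if p[1] >= cutoff]
    cx = sum(p[0] for p in head) / len(head)
    cy = sum(p[1] for p in head) / len(head)
    return (cx, cy)
```

A real implementation would match the contour against the stored head features rather than a fixed ratio; the sketch only shows how a contour plus a head model yields a target point for the motor control component.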
In a possible implementation, steps 602 to 605 may be performed by a positioning component. Illustratively, steps 602 to 603 may be performed by a millimeter wave transceiver and radio frequency integrated unit in the positioning component, and steps 604 to 605 may be performed by a digital signal processing unit in the positioning component.
Step 606, controlling the audio processing component to rotate through the motor control component, where the rotated audio processing component faces the head position of the person.
After the terminal determines the head position of the person, the terminal may determine rotation information for the audio processing component based on the current orientation of the audio processing component and the head position of the person, where the rotation information indicates how the audio processing component is to rotate. Illustratively, the rotation information can be determined by the digital signal processing unit and then sent to the motor control component; the motor control component controls the audio processing component to rotate based on the rotation information.
Step 607, determining an updated positional relationship between the person and the terminal, where the updated positional relationship is obtained by measuring the position between the person and the terminal again after the time when the related information was acquired.
In a possible implementation, after the audio processing component completes its rotation, the terminal performs the step of determining the updated positional relationship between the person and the terminal.
The updated positional relationship is the positional relationship between the person and the terminal obtained by remeasurement. In a possible implementation, a millimeter wave signal is sent through the positioning component, and the millimeter wave signal reflected by the person is received; the updated positional relationship between the person and the terminal is then determined based on the transmitted millimeter wave signal and the reflected millimeter wave signal.
Step 608, determining whether the change between the updated positional relationship and the previous positional relationship satisfies a preset condition. In response to the change satisfying the preset condition, execution starts again from step 602; in response to the change not satisfying the preset condition, execution starts again from step 607.
In a possible implementation, after the terminal determines the related information of the target object, the related information is stored in a register; after the terminal determines the updated positional relationship, the updated positional relationship is compared with the previous positional relationship to determine the change between them. The terminal also stores the updated positional relationship in the register.
In a possible implementation, the updated positional relationship includes an updated distance between the target object and the terminal, and/or an updated orientation between the target object and the terminal.
The change between the updated positional relationship and the previous positional relationship may include at least one of: a change in distance, a change in orientation.
In a possible implementation, the preset condition is that the change between the updated positional relationship and the previous positional relationship is greater than a preset threshold, where the preset threshold may be set by a technician. Illustratively, when the change is greater than the preset threshold, it is determined that the change satisfies the preset condition; when the change is smaller than the preset threshold, it is determined that the change does not satisfy the preset condition. The preset threshold corresponding to the change in distance and the preset threshold corresponding to the change in orientation may differ: the change in distance may be measured in centimeters (cm), and the change in orientation may be measured in degrees.
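The preset-condition check of step 608 might be sketched as follows; the concrete thresholds (10 cm, 5°) are invented for illustration and would in practice be set by a technician:

```python
DIST_THRESHOLD_CM = 10.0   # assumed re-aim threshold for distance change
ANGLE_THRESHOLD_DEG = 5.0  # assumed re-aim threshold for orientation change

def needs_reaim(prev, curr) -> bool:
    """prev/curr: (distance_cm, azimuth_deg) tuples for the previous and
    updated positional relationship. Returns True when either change
    exceeds its threshold, i.e. the preset condition is satisfied."""
    distance_change = abs(curr[0] - prev[0])
    # Wrap the orientation change into (-180, 180] before comparing.
    orientation_change = abs((curr[1] - prev[1] + 180.0) % 360.0 - 180.0)
    return (distance_change > DIST_THRESHOLD_CM
            or orientation_change > ANGLE_THRESHOLD_DEG)
```

A True result corresponds to restarting from step 602 (relocate the head and re-steer); False corresponds to looping back to step 607 (keep remeasuring).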
In a possible implementation, in response to the change between the updated positional relationship and the previous positional relationship satisfying the preset condition, execution starts again from the step of sending the first millimeter wave signal through the positioning component. When the change satisfies the preset condition, the position of the person has changed significantly; the terminal needs to determine the head position of the person again and then adjust the orientation of the audio processing component.
In a possible implementation, in response to the change between the updated positional relationship and the previous positional relationship not satisfying the preset condition, execution starts again from the step of determining the updated positional relationship between the person and the terminal. When the change does not satisfy the preset condition, the position of the person has not changed significantly; the terminal can still output or capture audio through the previously rotated audio processing component, without adjusting its orientation. The terminal may continue to determine updated positional relationships and decide, based on them, whether the audio processing component needs to be reoriented.
Illustratively, when the relative movement between the person's head and the terminal is small, such as nodding or raising the head, it is determined that the audio direction does not need to be adjusted; when the relative movement is large, or the target object has changed, it is determined that the person's head needs to be repositioned.
In the embodiment of the application, even if the target object moves, the orientation of the audio processing component can be dynamically adjusted, and the best audio working effect is ensured.
In a possible implementation, the target object includes a first object and a second object; a first one of the audio processing components corresponds to a first object; a second one of the audio processing components corresponds to the second object. A first one of the audio processing components serves a first object (i.e., the first audio processing component outputs audio for the first object or the first audio processing component captures audio from the first object) and a second one of the audio processing components serves a second object (the second audio processing component outputs audio for the second object or the second audio processing component captures audio from the second object). The first audio processing component and the second audio processing component are different components. In a possible implementation manner, the second audio processing component is all components except the first audio processing component in the audio processing component, or the second audio processing component is a part of components except the first audio processing component in the audio processing component. In one example, the number of first audio processing components and second audio processing components is the same; in another example, the number of the first audio processing components and the number of the second audio processing components are different, and the embodiment of the present application is not limited thereto.
In a possible implementation, the component closest to the first object among the audio processing components is determined as the first audio processing component, and the component closest to the second object among the audio processing components is determined as the second audio processing component.
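The closest-component selection above can be sketched as a greedy nearest-neighbor pairing; the dictionary representation and all coordinates are hypothetical, chosen only to make the rule concrete:

```python
def assign_components(components, objects):
    """Pair each object with the nearest still-unassigned audio component.

    components/objects: dicts mapping a name to an (x, y) position.
    Returns a dict: object name -> assigned component name.
    """
    assignment = {}
    used = set()
    for obj_name, obj_pos in objects.items():
        best = min(
            (c for c in components if c not in used),
            key=lambda c: (components[c][0] - obj_pos[0]) ** 2
                          + (components[c][1] - obj_pos[1]) ** 2,
        )
        assignment[obj_name] = best
        used.add(best)
    return assignment
```

With a loudspeaker at each end of the terminal and one listener near each end, each listener is served by the loudspeaker on their side.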
In a possible implementation, in the case where the target object includes a plurality of objects, by default the object closest to the terminal corresponds to the audio processing component. That is, in the case where the target object includes a plurality of objects, by default the audio processing component serves the object closest to the terminal (i.e., the audio processing component outputs audio for that object, or captures audio from it).
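The default policy for multiple detected objects — serving whichever is closest to the terminal — reduces to a minimum over the measured distances; the mapping below is illustrative:

```python
def default_target(distances):
    """distances: dict mapping an object name to its distance from the
    terminal (any consistent unit). Returns the name of the closest
    object, which the audio processing component serves by default."""
    return min(distances, key=distances.get)
```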
To sum up, in the technical solution provided in the embodiment of the present application, the indication information is displayed, and the millimeter wave signal is sent through the positioning component only after the confirmation instruction of the indication information is received; the user can thus flexibly choose whether to enable the audio pointing function, which improves flexibility.
In addition, the embodiment of the present application determines whether the orientation of the audio processing component needs to be adjusted again based on the change between the updated positional relationship and the previous positional relationship, so that the orientation of the audio processing component can be adjusted dynamically even if the target object moves, ensuring the best audio working effect.
In an exemplary embodiment, the target object includes an item. Referring to fig. 7, a flowchart of a terminal control method according to another embodiment of the present application is shown. The method may be applied to the terminals shown in fig. 2 to 4, and may include the following steps.
In step 701, an initial positional relationship between an article and a terminal is determined by a camera assembly.
In a possible implementation manner, the terminal displays indication information, wherein the indication information is used for indicating whether an audio pointing function is started or not; in response to receiving a confirmation instruction of the indication information, an initial positional relationship between the article and the terminal is determined by the camera assembly.
In a possible implementation, the user focuses the item through the camera assembly so that the terminal determines an initial positional relationship between the item and the terminal. The initial positional relationship is used to indicate an approximate position between the article and the terminal.
At step 702, a first millimeter wave signal is transmitted to the article based on the initial positional relationship by the positioning component.
In a possible implementation manner, after the terminal determines the initial position relationship, the positioning component may determine an approximate position of the article, and at this time, the positioning component may transmit the first millimeter wave signal to the article based on the initial position relationship.
And 703, receiving a second millimeter wave signal, which is reflected by the article, of the first millimeter wave signal through the positioning component.
Step 704, determining related information of the article based on the first millimeter wave signal and the second millimeter wave signal, wherein the related information comprises the outline shape of the article and/or the position relation between the article and the terminal.
For the description of step 702 to step 704, reference may be made to the above embodiments, which are not repeated herein.
Step 705, the motor control component controls the audio processing component to rotate based on the related information, and the rotated audio processing component corresponds to the article.
In a possible implementation, when the target object includes an item, the audio processing component is primarily for capturing audio of the item. Therefore, the fact that the rotated audio processing component corresponds to the article means that the rotated audio processing component collects the audio of the article.
In a possible implementation, different articles may have different sound-emitting positions. The terminal can determine the sound-emitting part of the article based on the contour shape of the article, and determine the positional relationship between the sound-emitting part and the terminal based on the positional relationship between the article and the terminal; the motor control component then controls the audio processing component to rotate so that the rotated audio processing component faces the sound-emitting part.
Step 706, determining a new position relationship between the article and the terminal, wherein the new position relationship is obtained by measuring the position between the article and the terminal again after the time of obtaining the relevant information.
In step 707, it is determined whether the change between the new positional relationship and the previous positional relationship satisfies a preset condition. In response to the change satisfying the preset condition, execution starts again from step 701; in response to the change not satisfying the preset condition, execution starts again from step 706.
In a possible implementation, in response to the change between the new positional relationship and the previous positional relationship satisfying the preset condition, execution starts again from the step of determining the initial positional relationship between the article and the terminal through the camera assembly.
In a possible implementation, in response to the change between the new positional relationship and the previous positional relationship not satisfying the preset condition, execution starts again from the step of determining the new positional relationship between the article and the terminal.
For the description of step 706 to step 707, reference may be made to the above embodiments, which are not repeated herein.
To sum up, in the technical solution provided in the embodiment of the present application, the positional relationship between the article and the terminal is determined so that the audio processing component faces the article; the audio processing component can therefore capture the audio of the article more accurately, reducing interference from other noise.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Referring to fig. 8, a block diagram of a terminal control apparatus provided in an embodiment of the present application is shown, where the apparatus may be applied to the terminals shown in fig. 2 to fig. 4, and the apparatus has a function of implementing the above terminal control method, where the function may be implemented by hardware or by hardware executing corresponding software. The apparatus 800 may include:
a signal sending module 810, configured to send a first millimeter wave signal through the positioning component;
a signal receiving module 820, configured to receive, by the positioning component, a second millimeter wave signal reflected by a target object from the first millimeter wave signal;
an information determining module 830, configured to determine relevant information of the target object based on the first millimeter wave signal and the second millimeter wave signal, where the relevant information includes a contour shape of the target object and/or a position relationship between the target object and the terminal;
a component rotation module 840, configured to control, by the motor control component, rotation of the audio processing component based on the relevant information, where the rotated audio processing component corresponds to the target object.
In summary, in the technical solution provided in the embodiment of the present application, the audio processing component is controlled to rotate based on the contour shape of the target object and/or the positional relationship between the target object and the terminal, so that the audio processing component can output audio to, or capture audio from, the target object more accurately; interference to non-target listeners is reduced, and the effectiveness of audio output or capture is improved.
In an exemplary embodiment, the target object includes a person, and the related information includes a human-shaped outline of the person and/or a positional relationship between the person and the terminal;
the component rotation module 840 is configured to:
determining a head position of the person based on the related information and the head features;
and controlling the audio processing assembly to rotate through the motor control assembly, wherein the rotated audio processing assembly faces to the head position of the person.
In an exemplary embodiment, the apparatus further comprises: a position determination module (not shown).
The position determining module is used for determining an updated position relationship between the person and the terminal, and the updated position relationship is obtained by measuring the position between the person and the terminal again after the relevant information is acquired;
the signal sending module 810 is further configured to, in response to that the change between the updated positional relationship and the positional relationship satisfies a preset condition, start execution from the step of sending the first millimeter wave signal by the positioning component again;
the position determining module is further configured to, in response to that the change between the updated position relationship and the position relationship does not satisfy a preset condition, execute again from the step of determining the updated position relationship between the person and the terminal.
In an exemplary embodiment, the target object comprises an article, the terminal further comprising a camera assembly;
the device, still include: a position determination module (not shown).
The position determining module is used for determining an initial position relation between the article and the terminal through the camera assembly;
the signal sending module 810 is configured to:
transmitting, by the positioning component, the first millimeter wave signal to the item based on the initial positional relationship.
In an exemplary embodiment, the position determining module is further configured to:
determining a new position relation between the article and the terminal, wherein the new position relation is obtained by measuring the position between the article and the terminal again after the time of acquiring the relevant information;
in response to the change between the new position relation and the position relation meeting a preset condition, starting to execute the step of determining the initial position relation between the article and the terminal through the camera again;
and in response to the change between the new position relation and the position relation not meeting a preset condition, executing from the step of determining the new position relation between the article and the terminal again.
In an exemplary embodiment, the target object includes a first object and a second object;
a first one of the audio processing components corresponds to the first object;
a second one of the audio processing components corresponds to the second object.
In an exemplary embodiment, the apparatus further comprises:
an information display module (not shown in the figure) for displaying indication information, wherein the indication information is used for indicating whether to start an audio pointing function;
the signal sending module 810 is further configured to, in response to receiving a confirmation instruction of the indication information, start execution from the step of sending the first millimeter wave signal by the positioning component.
It should be noted that, when the apparatus provided in the foregoing embodiment implements the functions thereof, only the division of the functional modules is illustrated, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure of the apparatus may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus and method embodiments provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments for details, which are not described herein again.
Referring to fig. 9, a block diagram of a terminal according to an embodiment of the present application is shown.
The terminal in the embodiment of the present application may include one or more of the following components: a processor 910 and a memory 920.
Processor 910 may include one or more processing cores. The processor 910 connects various parts within the entire terminal using various interfaces and lines, and performs the various functions of the terminal and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 920 and by calling data stored in the memory 920. Optionally, the processor 910 may be implemented in hardware in at least one of the forms of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 910 may integrate one or more of a Central Processing Unit (CPU), a modem, and the like, where the CPU mainly handles the operating system, application programs, and the like, and the modem handles wireless communication. It is understood that the modem may alternatively not be integrated into the processor 910 and may instead be implemented by a separate chip.
Optionally, the processor 910, when executing the program instructions in the memory 920, implements the methods provided by the various method embodiments described above.
The Memory 920 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 920 includes a non-transitory computer-readable medium. The memory 920 may be used to store instructions, programs, code sets, or instruction sets. The memory 920 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for at least one function, instructions for implementing the various method embodiments described above, and the like; the storage data area may store data created according to the use of the terminal, and the like.
The structure of the terminal described above is only illustrative; in actual implementation, the terminal may include more or fewer components, such as a display screen, which is not limited in this embodiment.
Those skilled in the art will appreciate that the configuration shown in fig. 9 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
In an exemplary embodiment, there is also provided a computer-readable storage medium having a computer program stored therein, the computer program being loaded and executed by a processor of a terminal to implement the steps in the above-described terminal control method embodiments.
In an exemplary embodiment, a computer program product is provided that includes computer instructions stored in a computer readable storage medium. The processor of the terminal reads the computer instructions from the computer-readable storage medium, and executes the computer instructions, so that the terminal executes the terminal control method.
It should be understood that reference to "a plurality" herein means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. In addition, the step numbers described herein only exemplarily show one possible execution sequence among the steps, and in some other embodiments, the steps may also be executed out of the numbering sequence, for example, two steps with different numbers are executed simultaneously, or two steps with different numbers are executed in a reverse order to the order shown in the figure, which is not limited by the embodiment of the present application.
The above description is only exemplary of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements and the like that are made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (12)

1. A terminal is characterized by comprising a positioning component, a motor control component and an audio processing component; the positioning component is used for determining related information of a target object, the related information comprises a contour shape of the target object and/or a position relation between the target object and the terminal, the motor control component is used for controlling the rotation of the audio processing component based on the related information, and the audio processing component is used for outputting or acquiring audio;
the positioning component is coupled with the motor control component;
the motor control component and the audio processing component are coupled.
2. The terminal of claim 1, wherein the positioning component comprises a millimeter wave transceiver, a radio frequency integrated unit, a digital signal processing unit;
the millimeter wave transceiver is coupled with the radio frequency integrated unit;
the millimeter wave transceiver is coupled with the digital signal processing unit.
3. The terminal according to claim 1 or 2, wherein the motor control component comprises a stepper motor.
4. A terminal control method, applied to the terminal according to any one of claims 1 to 3, the method comprising:
transmitting a first millimeter wave signal through the positioning component;
receiving, through the positioning component, a second millimeter wave signal formed by reflection of the first millimeter wave signal off a target object;
determining related information of the target object based on the first millimeter wave signal and the second millimeter wave signal, wherein the related information comprises the contour shape of the target object and/or the position relation between the target object and the terminal;
and controlling the audio processing component to rotate through the motor control component based on the related information, wherein the rotated audio processing component corresponds to the target object.
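For illustration only, and not part of the claims: the four steps of claim 4 amount to one sense-and-point control cycle. The class and method names below (the radar, motor, and audio objects, `rotate_to`, `resolve`, and so on) are interfaces invented for this sketch, not interfaces disclosed in the specification.

```python
from dataclasses import dataclass, field

@dataclass
class RelatedInfo:
    azimuth_deg: float            # bearing of the target relative to the terminal
    distance_m: float             # range to the target
    contour: list = field(default_factory=list)  # sampled outline points

class TerminalController:
    """Illustrative controller tying the claimed components together."""

    def __init__(self, radar, motor, audio):
        self.radar = radar  # positioning component (millimeter wave)
        self.motor = motor  # motor control component (e.g. stepper motor)
        self.audio = audio  # audio processing component mounted on the motor

    def run_cycle(self):
        self.radar.transmit()                   # step 1: send first mm-wave signal
        echo = self.radar.receive()             # step 2: receive reflected signal
        info = self.radar.resolve(echo)         # step 3: contour / position info
        self.motor.rotate_to(info.azimuth_deg)  # step 4: point audio at target
        return info
```

Because the audio processing component is mounted on the motor, rotating the motor to the resolved azimuth leaves the component "corresponding to" the target in the sense of claim 4.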
5. The method according to claim 4, wherein the target object comprises a person, and the related information comprises a body contour of the person and/or a positional relationship between the person and the terminal;
wherein the controlling, through the motor control component, the audio processing component to rotate based on the related information comprises:
determining a head position of the person based on the related information and head features; and
controlling, through the motor control component, the audio processing component to rotate, wherein the rotated audio processing component faces the head position of the person.
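For illustration only, and not part of the claims: one plausible reading of the head-location step in claim 5 is to take the topmost band of the measured body contour and use its centroid as the head position. The contour representation (a list of (x, y) points in metres, with y pointing up) and the 0.25 m band height are assumptions made for the sketch.

```python
def head_position(contour_points, head_band_m=0.25):
    """Return an (x, y) estimate of the head centre from a body contour.

    Takes the centroid of all contour points within `head_band_m` of the
    topmost point. Both the representation and the band height are
    illustrative assumptions, not values from the specification.
    """
    top_y = max(y for _, y in contour_points)
    band = [(x, y) for x, y in contour_points if y >= top_y - head_band_m]
    cx = sum(x for x, _ in band) / len(band)
    cy = sum(y for _, y in band) / len(band)
    return cx, cy
```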
6. The method according to claim 5, further comprising, after the controlling the audio processing component to rotate through the motor control component:
determining an updated positional relationship between the person and the terminal, the updated positional relationship being obtained by measuring the position between the person and the terminal again after the time at which the related information was acquired;
in response to a change between the updated positional relationship and the positional relationship satisfying a preset condition, executing again from the step of transmitting the first millimeter wave signal through the positioning component; and
in response to the change between the updated positional relationship and the positional relationship not satisfying the preset condition, executing again from the step of determining the updated positional relationship between the person and the terminal.
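For illustration only, and not part of the claims: claims 6 and 8 describe the same re-measurement loop, in which the terminal keeps polling the target's position and triggers a full rescan only when the movement satisfies a preset condition. The 0.10 m threshold and all function names below are assumptions made for the sketch.

```python
import math

def has_moved(old_pos, new_pos, threshold_m=0.10):
    """Assumed preset condition of claims 6/8: has the target moved far
    enough that the audio processing component should be re-aimed?"""
    dx, dy = new_pos[0] - old_pos[0], new_pos[1] - old_pos[1]
    return math.hypot(dx, dy) > threshold_m

def track(measure_position, rescan, initial_pos, max_polls=100):
    """Poll the target position; trigger a full rescan once it moves.

    `measure_position` returns the latest (x, y) position relative to the
    terminal; `rescan` restarts from transmitting the first mm-wave signal.
    """
    pos = initial_pos
    for _ in range(max_polls):
        updated = measure_position()
        if has_moved(pos, updated):
            rescan()          # condition satisfied: redo the full scan
            return updated
        # condition not satisfied: keep re-measuring (last branch of claim 6)
    return pos
```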
7. The method of claim 4, wherein the target object comprises an article, the terminal further comprising a camera assembly;
before the transmitting of the first millimeter wave signal through the positioning component, the method further comprises:
determining, by the camera assembly, an initial positional relationship between the item and the terminal;
wherein the sending of the first millimeter wave signal through the positioning component comprises:
transmitting, by the positioning component, the first millimeter wave signal to the item based on the initial positional relationship.
8. The method according to claim 7, further comprising, after the controlling, through the motor control component, the audio processing component to rotate based on the related information:
determining a new positional relationship between the article and the terminal, the new positional relationship being obtained by measuring the position between the article and the terminal again after the time at which the related information was acquired;
in response to a change between the new positional relationship and the positional relationship satisfying a preset condition, executing again from the step of determining the initial positional relationship between the article and the terminal through the camera assembly; and
in response to the change between the new positional relationship and the positional relationship not satisfying the preset condition, executing again from the step of determining the new positional relationship between the article and the terminal.
9. The method of any one of claims 4 to 8, wherein the target object comprises a first object and a second object;
a first one of the audio processing components corresponds to the first object;
a second one of the audio processing components corresponds to the second object.
10. The method according to any one of claims 4 to 8, further comprising:
displaying indication information, wherein the indication information is used for indicating whether to enable an audio pointing function; and
in response to receiving a confirmation instruction for the indication information, executing from the step of sending the first millimeter wave signal through the positioning component.
11. A terminal control apparatus, for use in a terminal according to any one of claims 1 to 3, the apparatus comprising:
a signal sending module, configured to send a first millimeter wave signal through the positioning component;
a signal receiving module, configured to receive, through the positioning component, a second millimeter wave signal formed by reflection of the first millimeter wave signal off a target object;
an information determination module, configured to determine, based on the first millimeter wave signal and the second millimeter wave signal, related information of the target object, wherein the related information comprises a contour shape of the target object and/or a positional relationship between the target object and the terminal; and
a component rotation module, configured to control, through the motor control component, the audio processing component to rotate based on the related information, wherein the rotated audio processing component corresponds to the target object.
12. A computer-readable storage medium, characterized in that a computer program is stored therein, the computer program being loaded and executed by a processor to implement the terminal control method according to any one of claims 4 to 10.
CN202011521562.2A 2020-12-21 2020-12-21 Terminal, terminal control method, device and storage medium Active CN112565973B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011521562.2A CN112565973B (en) 2020-12-21 2020-12-21 Terminal, terminal control method, device and storage medium
PCT/CN2021/129846 WO2022134910A1 (en) 2020-12-21 2021-11-10 Terminal, terminal control method and apparatus, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011521562.2A CN112565973B (en) 2020-12-21 2020-12-21 Terminal, terminal control method, device and storage medium

Publications (2)

Publication Number Publication Date
CN112565973A true CN112565973A (en) 2021-03-26
CN112565973B CN112565973B (en) 2023-08-01

Family

ID=75031199

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011521562.2A Active CN112565973B (en) 2020-12-21 2020-12-21 Terminal, terminal control method, device and storage medium

Country Status (2)

Country Link
CN (1) CN112565973B (en)
WO (1) WO2022134910A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022134910A1 (en) * 2020-12-21 2022-06-30 Oppo广东移动通信有限公司 Terminal, terminal control method and apparatus, and storage medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07159190A (en) * 1993-12-09 1995-06-23 Zanabui Informatics:Kk Sound device totallizing system on vehicle
US20080037803A1 (en) * 1994-05-09 2008-02-14 Automotive Technologies International, Inc. Sound Management Techniques for Vehicles
WO2013093187A2 (en) * 2011-12-21 2013-06-27 Nokia Corporation An audio lens
US20140233764A1 (en) * 2013-02-21 2014-08-21 Stuart Mathis Microphone Positioning System
CN105721645A (en) * 2016-02-22 2016-06-29 梁天柱 Voice peripheral of mobile phone
CN107182011A (en) * 2017-07-21 2017-09-19 深圳市泰衡诺科技有限公司上海分公司 Audio frequency playing method and system, mobile terminal, WiFi earphones
WO2018041359A1 (en) * 2016-09-01 2018-03-08 Universiteit Antwerpen Method of determining a personalized head-related transfer function and interaural time difference function, and computer program product for performing same
CN109284081A (en) * 2018-09-20 2019-01-29 维沃移动通信有限公司 A kind of output method of audio, device and audio frequency apparatus
CN109343902A (en) * 2018-09-26 2019-02-15 Oppo广东移动通信有限公司 Operation method, device, terminal and the storage medium of audio processing components
CN111161768A (en) * 2019-12-23 2020-05-15 秒针信息技术有限公司 Recording apparatus
US20200217930A1 (en) * 2018-08-30 2020-07-09 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method for gesture recognition, terminal, and storage medium
CN111724823A (en) * 2016-03-29 2020-09-29 联想(北京)有限公司 Information processing method and device and electronic equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107656718A (en) * 2017-08-02 2018-02-02 宇龙计算机通信科技(深圳)有限公司 A kind of audio signal direction propagation method, apparatus, terminal and storage medium
CN108805086B (en) * 2018-06-14 2021-09-14 联想(北京)有限公司 Media control method and audio processing device
CN111050269B (en) * 2018-10-15 2021-11-19 华为技术有限公司 Audio processing method and electronic equipment
CN110493690B (en) * 2019-08-29 2021-08-13 北京搜狗科技发展有限公司 Sound collection method and device
CN111060874B (en) * 2019-12-10 2021-10-29 深圳市优必选科技股份有限公司 Sound source positioning method and device, storage medium and terminal equipment
CN112565973B (en) * 2020-12-21 2023-08-01 Oppo广东移动通信有限公司 Terminal, terminal control method, device and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
KIYOTSUGU KAKIHARA et al.: "Facial movement synthesis by HMM from audio speech", Electronics and Communications in Japan (Part II: Electronics), Vol. 85, Issue 4, pages 1-10 *
ZHU, CHAOWEN; ZENG, SHUIPING: "Hardware design and implementation of a DSP-based high-speed audio acquisition system", Techniques of Automation and Applications, No. 04 *
LI, JUNFENG; XU, HUAXING; XIA, RISHENG; YAN, YONGHONG: "Binaural audio processing technology based on auditory perception characteristics", Applied Acoustics, No. 05 *
LEI, WEI; ZHANG, XIAOBING; WANG, BAOPING; ZHU, ZHUOYA: "Multi-Camera Collaborative Target Detection and Tracking Technology", Southeast University Press *


Also Published As

Publication number Publication date
WO2022134910A1 (en) 2022-06-30
CN112565973B (en) 2023-08-01

Similar Documents

Publication Publication Date Title
US20220382381A1 (en) Multifunctional Radar Systems and Methods of Operation Thereof
EP3829149B1 (en) Audio control method and device, and terminal
US20140199950A1 (en) Sleeve with electronic extensions for a cell phone
CN103988366A (en) Sleeve with electronic extensions for cell phone
CN114188707B (en) Terminal antenna and method for controlling antenna beam direction
CN108366207A (en) Control method, apparatus, electronic equipment and the computer readable storage medium of shooting
CN110519450B (en) Ultrasonic processing method, ultrasonic processing device, electronic device, and computer-readable medium
US20220278704A1 (en) Apparatus and method for adjusting level of low-noise amplifier (lna), and terminal device
JP2008518504A (en) Composite wireless communication device and radio receiver
WO2011150618A1 (en) Radio terminal and signal scanning method thereof
CN106231559A (en) Network access method, device and terminal
CN112565973B (en) Terminal, terminal control method, device and storage medium
EP4068740A1 (en) Volume adjustment method and system, and device
CN108540243A (en) A kind of detection method and mobile terminal of radio frequency path
US20120094601A1 (en) Bluetooth headset with camera function and rear view monitor system employing the same
US9332331B2 (en) Data processing method and an electronic apparatus
CN111399011A (en) Position information determining method and electronic equipment
KR101031601B1 (en) Detection of lightning
CN115616298A (en) Ultra-wideband mobile phone circuit board magnetic field detection system and method
CN111669249B (en) Cellular network electromagnetic interference method and system based on environment recognition
CN114943242A (en) Event detection method and device, electronic equipment and storage medium
CN109644456B (en) Beam scanning range determining method, device, equipment and storage medium
CN108370476A (en) The method and device of microphone, audio frequency process
CN205986897U (en) Radio monitoring equipment
CN114966262B (en) Anti-interference testing device and anti-interference testing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant