CN112565973B - Terminal, terminal control method, device and storage medium


Info

Publication number
CN112565973B
CN112565973B
Authority
CN
China
Prior art keywords
terminal
millimeter wave
person
target object
audio processing
Prior art date
Legal status
Active
Application number
CN202011521562.2A
Other languages
Chinese (zh)
Other versions
CN112565973A (en)
Inventor
何昱滨
王杰
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202011521562.2A priority Critical patent/CN112565973B/en
Publication of CN112565973A publication Critical patent/CN112565973A/en
Priority to PCT/CN2021/129846 priority patent/WO2022134910A1/en
Application granted granted Critical
Publication of CN112565973B publication Critical patent/CN112565973B/en
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00: Details of transducers, loudspeakers or microphones
    • H04R1/20: Arrangements for obtaining desired frequency or directional characteristics
    • H04R1/32: Arrangements for obtaining desired directional characteristic only
    • H04R1/323: Arrangements for obtaining desired directional characteristic only, for loudspeakers
    • H04R1/326: Arrangements for obtaining desired directional characteristic only, for microphones
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/70: Reducing energy consumption in communication networks in wireless communication networks

Abstract

The embodiment of the application provides a terminal, a terminal control method, a terminal control device, and a storage medium, relating to the technical field of terminals. The terminal comprises a positioning component, a motor control component, and an audio processing component. The positioning component is used to determine related information of a target object, the related information including the contour shape of the target object and/or the positional relationship between the target object and the terminal; the motor control component is used to control rotation of the audio processing component based on the related information; and the audio processing component is used to output or collect audio. The positioning component is coupled with the motor control component, and the motor control component is coupled with the audio processing component. The scheme improves the effectiveness of audio output and collection.

Description

Terminal, terminal control method, device and storage medium
Technical Field
The embodiment of the application relates to the technical field of terminals, in particular to a terminal, a terminal control method, a terminal control device and a storage medium.
Background
The terminal has audio output and audio acquisition functions.
In the related art, a speaker for outputting audio is provided at the bottom of the terminal; a microphone for collecting audio is also provided at the bottom of the terminal.
Disclosure of Invention
The embodiment of the application provides a terminal, a terminal control method, a terminal control device and a storage medium. The technical scheme is as follows:
in one aspect, an embodiment of the present application provides a terminal, where the terminal includes a positioning component, a motor control component, and an audio processing component; the positioning component is used for determining relevant information of a target object, the relevant information comprises the outline shape of the target object and/or the position relation between the target object and the terminal, the motor control component is used for controlling rotation of the audio processing component based on the relevant information, and the audio processing component is used for outputting or collecting audio;
the positioning assembly is coupled with the motor control assembly;
the motor control assembly is coupled to the audio processing assembly.
On the other hand, an embodiment of the present application provides a terminal control method, which is applied to the terminal described in the above aspect, where the method includes:
transmitting a first millimeter wave signal through the positioning assembly;
receiving, through the positioning component, a second millimeter wave signal formed by the first millimeter wave signal reflecting off the target object;
determining related information of the target object based on the first millimeter wave signal and the second millimeter wave signal, wherein the related information comprises the outline shape of the target object and/or the position relation between the target object and the terminal;
And controlling the rotation of the audio processing component based on the related information through the motor control component, wherein the rotated audio processing component corresponds to the target object.
On the other hand, an embodiment of the present application provides a terminal control device, which is applied to the terminal described in the above aspect, and the device includes:
the signal sending module is used for sending a first millimeter wave signal through the positioning component;
the signal receiving module is used for receiving, through the positioning component, a second millimeter wave signal formed by the first millimeter wave signal reflecting off the target object;
an information determining module, configured to determine, based on the first millimeter wave signal and the second millimeter wave signal, related information of the target object, where the related information includes a contour shape of the target object and/or a positional relationship between the target object and the terminal;
and the assembly rotating module is used for controlling the rotation of the audio processing assembly based on the related information through the motor control assembly, and the audio processing assembly after rotation corresponds to the target object.
In yet another aspect, embodiments of the present application provide a computer-readable storage medium having a computer program stored therein, the computer program being loaded and executed by a processor to implement the terminal control method as described in the above aspect.
In yet another aspect, embodiments of the present application provide a computer program product comprising computer instructions stored in a computer-readable storage medium. The processor of the terminal reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions so that the terminal performs the terminal control method described above.
The technical scheme provided by the embodiment of the application can bring the following beneficial effects:
the audio processing component is controlled to rotate based on the outline shape of the target object and/or the position relationship between the target object and the terminal, so that the audio processing component can more accurately output audio to the target object or collect audio from the target object, interference to non-target audiences is reduced, and the effectiveness of audio output or collection is improved.
Drawings
FIG. 1 is a schematic diagram of an audio processing system provided in one embodiment of the present application;
FIG. 2 is a schematic diagram of a terminal provided in one embodiment of the present application;
fig. 3 is a schematic diagram of a terminal according to another embodiment of the present application;
fig. 4 is a schematic diagram of a terminal provided in another embodiment of the present application;
fig. 5 is a flowchart of a terminal control method provided in an embodiment of the present application;
Fig. 6 is a flowchart of a terminal control method according to another embodiment of the present application;
fig. 7 is a flowchart of a terminal control method according to another embodiment of the present application;
fig. 8 is a block diagram of a terminal control apparatus provided in one embodiment of the present application;
fig. 9 is a block diagram of a terminal according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
First, terms related to the embodiments of the present application will be explained:
Motor (Electric Machinery): commonly called a motor, an electromagnetic device that converts or transfers electric energy based on the law of electromagnetic induction.
Stepper Motor (Stepping Motor): a type of motor that performs discrete-value control, converting an electric pulse excitation signal into a corresponding angular or linear displacement. The stepping motor moves one step for every electric pulse input, so it is also called a pulse motor. Stepper motors are classified into three basic types: electromechanical, magneto-electric, and linear.
Millimeter Wave (Millimeter Waves): electromagnetic waves with a frequency in the range of 30-300 GHz (gigahertz). In vacuum or free-space conditions, the corresponding wavelength ranges from 1 to 10 mm.
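The quoted 1-10 mm range follows from the free-space relation λ = c/f. A minimal sketch (not part of the patent) checking the band edges and the 77 GHz working frequency discussed later:

```python
# Free-space wavelength check for the millimeter wave band: lambda = c / f.
# Illustrative helper, not defined in the patent.
C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_mm(freq_hz: float) -> float:
    """Free-space wavelength in millimeters for a frequency in hertz."""
    return C / freq_hz * 1000.0

print(wavelength_mm(30e9))   # ~10 mm at the low edge of the band
print(wavelength_mm(300e9))  # ~1 mm at the high edge
print(wavelength_mm(77e9))   # ~3.9 mm at the common radar frequency
```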
Transceiver: a radio transmitter and receiver combined in a single component, sharing part of the same circuitry.
LPAMiD (power amplifier module): an integrated radio-frequency front-end module combining a power amplifier, a duplexer, a filter, a switching module, and a low-noise amplifier. Illustratively, the LPAMiD includes an LNA (Low Noise Amplifier), an integrated multimode, multiband PA (Power Amplifier), and a FEMiD.
LNA: an amplifier with a low noise figure, generally used as the high-frequency or intermediate-frequency preamplifier of radio receivers and in the amplifying circuits of high-sensitivity electronic detection equipment. The LNA largely determines the overall performance of the receiver. Its purpose is to take an extremely weak signal from the antenna, typically on the order of microvolts or below -100 dBm, and amplify it to a more useful level, typically about 0.5 to 1 V. The main parameters of an LNA are noise figure (NF), gain, and linearity; the noise figure measures the noise level the LNA itself introduces.
Integrated multimode, multiband PA: refers to a PA that supports multiple modes and bands.
FEMiD (front-end module): an integrated module combining the radio-frequency switch, filters, and duplexer at the RF front end. The performance of the RF switch, such as insertion loss, return loss, isolation, harmonic rejection, and power capacity, is critical to the RF front-end link. The duplexer isolates the transmit and receive signals so that transmission and reception can work normally at the same time; it consists of two groups of band-pass filters with different frequencies, preventing the local transmit signal from reaching the receiver.
DSP (Digital Signal Processor): an integrated special-purpose processor chip that implements digital signal processing techniques.
Referring to fig. 1, a schematic diagram of an audio processing system provided in one embodiment of the present application is shown, which may include a terminal 100 and a target object 200.
In the embodiment of the present application, the terminal 100 refers to an electronic device having an audio output and/or audio collection function. By way of example, the terminal 100 may be a cell phone, tablet computer, electronic book reader, multimedia playing device, wearable device, etc.
In the present embodiment, the target object 200 refers to an object with audio output or audio collection requirements. Illustratively, the target object 200 may include at least one of: a person, an item. For example, the target object 200 may include at least one person, or at least one item, or at least one person and at least one item, which is not limited in this embodiment of the present application. The target object 200 may be in a stationary state or a moving state, which is likewise not limited.
Illustratively, the terminal 100 provided in the embodiments of the present application may output audio to, or collect audio from, the target object 200 based on the related information of the target object 200. The related information includes the contour shape of the target object and/or the positional relationship between the target object and the terminal; outputting or collecting audio based on this information realizes directional audio output or collection. For audio output, keeping the diffusion direction away from non-target audiences reduces interference with other people and wastes less power. For audio collection, the desired sound source usually lies within a narrow range of directions, so restricting reception reduces the probability of picking up sound from other directions and thereby the possibility of noise input.
Several embodiments of the present application are described below.
Referring to fig. 2, a schematic diagram of a terminal according to an embodiment of the present application is shown. The terminal 100 may include: a positioning assembly 110, a motor control assembly 120, and an audio processing assembly 130.
In the embodiment of the present application, the positioning component 110 is used to determine related information of the target object. The related information includes the contour shape of the target object and/or the positional relationship between the target object and the terminal. The contour shape indicates the external shape of the target object, from which the part corresponding to audio output or audio collection can be determined. The positional relationship between the target object and the terminal includes at least one of: the distance between them, and the azimuth between them. The distance may refer to the perpendicular distance between the target object and the terminal, for example 50 cm or 10 cm. The azimuth indicates where the target object lies when the terminal is taken as the base point, for example, the target object is located to the south of the terminal, or to the north of the terminal. These examples are illustrative only; in other possible implementations the azimuth may take other forms, which is not limited in this embodiment of the application.
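As a hedged illustration of how such related information might be represented in software, the field names, units, and the bearing convention below are assumptions, not defined by the patent:

```python
import math
from dataclasses import dataclass
from typing import Optional

@dataclass
class RelatedInfo:
    """Related information of the target object (illustrative field names)."""
    contour: Optional[list] = None       # sampled outline points of the target
    distance_cm: Optional[float] = None  # perpendicular distance to the terminal
    azimuth_deg: Optional[float] = None  # bearing with the terminal as base point

def azimuth_from_offset(dx: float, dy: float) -> float:
    """Bearing in degrees (0 = north, clockwise) of a target offset (dx, dy)
    measured with the terminal as the base point."""
    return math.degrees(math.atan2(dx, dy)) % 360.0

# A target 50 cm due south of the terminal has azimuth 180 degrees.
info = RelatedInfo(distance_cm=50.0, azimuth_deg=azimuth_from_offset(0.0, -1.0))
```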
The motor control assembly 120 is used to control the rotation of the audio processing assembly 130 based on the related information. Illustratively, when the related information includes a contour shape of the target object, the motor control component 120 may control the rotation of the audio processing component 130 based on the contour shape of the target object; when the related information includes a positional relationship between the target object and the terminal, the motor control component 120 may control the rotation of the audio processing component 130 based on the positional relationship between the target object and the terminal.
The audio processing component 130 is used to output or capture audio. The audio processing component 130 is used to output audio to a target object, or the audio processing component 130 is used to collect audio from a target object. The audio processing component 130 provided in the embodiments of the present application is rotatable. In a possible implementation, one or more components may be included in the audio processing component, for example, the audio processing component may include one or more microphones, the audio processing component may include one or more headphones, the audio processing component may include one or more speakers, etc., which are not limited in this embodiment.
The positioning assembly 110 and the motor control assembly 120 are coupled; the motor control component 120 and the audio processing component 130 are coupled. In a possible implementation manner, after the positioning component 110 determines the relevant information of the target object, the relevant information is sent to the motor control component 120, and the motor control component 120 controls the rotation of the audio processing component 130 based on the relevant information of the target object.
In summary, in the technical solution provided in the embodiments of the present application, the rotation of the audio processing component is controlled based on the outline shape of the target object and/or the positional relationship between the target object and the terminal, so that the audio processing component may output audio to the target object more accurately, or collect audio from the target object, thereby reducing interference to non-target audience and improving the effectiveness of audio output or collection.
Referring to fig. 3, a schematic diagram of a terminal according to another embodiment of the present application is shown.
In a possible implementation, the positioning component 110 includes a millimeter wave transceiver 111, a radio frequency integrated unit 112, a digital signal processing unit 113; millimeter wave transceiver 111 is coupled to radio frequency integrated unit 112; millimeter wave transceiver 111 is coupled to digital signal processing unit 113.
Millimeter wave transceiver 111 refers to a device for transmitting and receiving millimeter waves. Illustratively, millimeter wave transceiver 111 comprises a Transceiver. Millimeter waves are adopted in the embodiment of the application because they have short wavelengths, wide bandwidth, good propagation characteristics in the atmosphere, and all-weather working capability. Millimeter waves can easily penetrate the phone housing without being affected by the structural profile. Millimeter wave radar generally adopts a 24 GHz or 77 GHz working frequency; 77 GHz offers higher ranging accuracy, better horizontal angular resolution, a smaller antenna volume, and less signal interference.
The radio frequency integrated unit 112 refers to a device for converting a radio signal into a certain radio signal waveform and transmitting it through antenna resonance. Illustratively, the radio frequency integrated unit 112 includes an LPAMiD.
The digital signal processing unit 113 refers to a device for processing a transmission signal and a reception signal. The digital signal processing unit 113 includes a DSP, for example.
In a possible implementation, the motor control assembly 120 includes a stepper motor 121.
In a possible implementation, the audio processing component 130 includes at least one of: microphones, speakers, handsets. It should be noted that, in fig. 3, only the audio processing component 130 is illustrated as a microphone, and in other possible implementations, the audio processing component 130 may also be another type of component, which is not limited in this embodiment of the present application.
Illustratively, the terminal 100 may further include a body 140, which is the main frame of the terminal 100. The body 140 is generally hexahedral, and some of the edges or corners of the hexahedron may be formed with arc-shaped chamfers. The front surface of the body 140 is generally a rounded rectangle or a rectangle.
The body 140 includes a middle frame 141, which is the frame around the body 140. In a possible implementation, the body 140 has at least one hole 150 formed therein, and the audio processing component 130 corresponds to the position of the at least one hole 150 (meaning the audio processing component can output or collect audio through the hole). Illustratively, the middle frame 141 has at least one hole 150 formed in it. In the case where the audio processing component 130 captures audio, the hole may be referred to as a sound pickup hole, and the audio of the target object is captured by the audio processing component 130 through the at least one hole 150; in the case where the audio processing component 130 outputs audio, the hole may be referred to as a sound outlet hole, and the audio output from the audio processing component 130 is transmitted to the target object through the at least one hole 150. Of course, in other possible implementations, the hole 150 may have other names, which is not limited by the embodiment of the present application.
The terminal 100 may further include a display (not shown) disposed on the body 140, for example, the display may be disposed on the front, the back, or around the body 140, which is not limited in the embodiment of the present application. The display screen is used for displaying images and colors. The display screen is illustratively a touch display screen, and the touch display screen has a function of receiving a touch operation (such as clicking, sliding, pressing, etc.) of a user in addition to a display function. The display screen may be a rigid screen or a flexible screen, which is not limited in this embodiment of the present application.
In a possible implementation, as shown in fig. 4, the terminal 100 further includes a camera component 160, where the camera component 160 is configured to obtain an initial positional relationship of the target object. The camera assembly 160 is coupled to the positioning assembly 110, the positioning assembly 110 is coupled to the motor control assembly 120, and the motor control assembly 120 is coupled to the audio processing assembly 130. In a possible implementation, the camera assembly 160 is coupled with a digital signal processing unit 113 in the positioning assembly 110. The relevant descriptions of the positioning component 110, the motor control component 120, and the audio processing component 130 are referred to above, and are not repeated here. When the target object includes an item, for example, a musical instrument or other item, embodiments of the present application utilize the camera assembly to achieve preliminary localization, repositioning and continued localization by the localization assembly.
In summary, in the technical solution provided in the embodiments of the present application, by sending and receiving the millimeter wave signal, the related information of the target object is determined, and the millimeter wave has strong anti-interference capability and good penetrability, so that the determined related information of the target object is more accurate.
When the target object comprises an article, preliminary positioning is performed through the camera component, and accurate positioning is realized through the positioning component, so that the audio working effect is good.
Referring to fig. 5, a flowchart of a terminal control method according to an embodiment of the present application is shown. The method may be applied to a terminal as shown in fig. 2 to 4, and the method may include the following steps.
Step 501, a first millimeter wave signal is transmitted by a positioning component.
In a possible implementation, the positioning component transmits the first millimeter wave signal through the array antenna. Illustratively, the positioning component transmits the first millimeter wave signal in each direction.
In a possible implementation, the first millimeter wave signal is an FMCW (Frequency Modulated Continuous Wave) signal, a high-frequency continuous wave whose frequency varies with time according to a triangular wave law.
Step 502, receiving, by a positioning component, a second millimeter wave signal reflected by the target object from the first millimeter wave signal.
After being reflected off the surface of an obstacle (for example, the surface of the target object), the first millimeter wave signal is received by a receiving antenna and then amplified and demodulated to obtain the second millimeter wave signal. When the first millimeter wave signal is an FMCW signal, the second millimeter wave signal is also an FMCW signal.
In a possible implementation, when the positioning component transmits the first millimeter wave signal in each direction, the positioning component receives the second millimeter wave signal from each direction.
In step 503, based on the first millimeter wave signal and the second millimeter wave signal, relevant information of the target object is determined.
In the embodiment of the application, the related information includes a contour shape of the target object and/or a positional relationship between the target object and the terminal.
When the first millimeter wave signal and the second millimeter wave signal are both FMCW signals, their frequencies follow the same triangular wave law, but with a time difference between them; this time difference can be used to determine the distance between the target object and the terminal.
In a possible implementation manner, the first millimeter wave signal and the second millimeter wave signal are mixed to form an intermediate frequency signal. Let f be the frequency of the intermediate frequency signal (obtained by performing a fast Fourier transform on the intermediate frequency signal), d the distance between the terminal and the target object, s the frequency-domain slope of the FMCW signal, and c the speed of light; then the distance between the terminal and the target object can be determined by the following formula: d = (c × f) / (2s).
A distribution map of d values over the scanning area is obtained from the multi-point d-value samples produced by the millimeter wave signal processing, and a software algorithm identifies the contour shape of the target object in this distribution map.
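The pipeline of steps 501-503 — mix to an intermediate frequency signal, take an FFT, read off the beat frequency f, and apply d = c·f/(2s) — can be sketched as follows. The chirp slope, sampling rate, and sample count are illustrative assumptions; the patent fixes no such values:

```python
import numpy as np

C = 3e8          # speed of light, m/s
SLOPE = 30e12    # assumed chirp slope s, Hz/s (30 MHz per microsecond)
FS = 10e6        # assumed IF sampling rate, Hz
N = 1024         # samples per chirp

def range_from_if(if_signal: np.ndarray) -> float:
    """Estimate the distance d = c * f / (2 * s), taking f as the dominant
    FFT bin of the intermediate frequency (beat) signal, as in step 503."""
    spectrum = np.abs(np.fft.rfft(if_signal))
    spectrum[0] = 0.0  # ignore the DC bin
    f_beat = np.fft.rfftfreq(len(if_signal), d=1.0 / FS)[np.argmax(spectrum)]
    return C * f_beat / (2.0 * SLOPE)

# Simulate the beat tone a target at 0.5 m would produce: f = 2 * s * d / c
d_true = 0.5
t = np.arange(N) / FS
if_signal = np.cos(2 * np.pi * (2 * SLOPE * d_true / C) * t)
estimate = range_from_if(if_signal)  # within one FFT bin of 0.5 m
```

Repeating this over many scan directions yields the multi-point d-value samples from which the distribution map and contour are derived.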
In step 504, the motor control component controls the rotation of the audio processing component based on the related information, and the rotated audio processing component corresponds to the target object.
The rotated audio processing component corresponding to the target object means that the rotated audio processing component is oriented toward the target object. That is, the rotated audio processing component is oriented to output audio to the target object, or the rotated audio processing component is oriented to capture audio from the target object.
In the case where the target object includes a person, the head position of the person is taken as a target portion of audio output or audio collection. Illustratively, the audio processing assembly is controlled to rotate by the motor control assembly, the rotated audio processing assembly corresponding to the target portion. The audio processing component outputs audio to the target portion or the audio processing component collects audio from the target portion.
In the case where the target object includes an article, the sound-producing position of the article is taken as a target portion of the audio collection. Illustratively, the audio processing assembly is controlled to rotate by the motor control assembly, the rotated audio processing assembly corresponding to the target portion. The audio processing component collects audio from the target portion.
The motor control component converts the signal from the positioning component into a control signal for the stepping motor. Taking a speaker as the audio processing component as an example, the stepping motor can rotate the speaker in small increments about the z-axis to adjust its audio direction within the x-y plane; alternatively, it can rotate the speaker in small increments about the y-axis to adjust its audio direction within the x-z plane. With this adjustment the speaker's audio direction points outward through the hole, and the final direction follows the target located by the positioning component.
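A minimal sketch of that conversion, assuming the positioning output is expressed as a target azimuth and the drive uses a common 1.8-degree stepper with 16x microstepping — both illustrative values not specified in the patent:

```python
STEP_ANGLE_DEG = 1.8   # assumed full-step angle of the stepping motor
MICROSTEPS = 16        # assumed microstepping factor of the driver

def steps_to_target(current_deg: float, target_deg: float) -> int:
    """Signed number of microsteps that rotate the audio processing component
    about its axis from the current heading to the target azimuth, taking
    the shorter way around."""
    delta = (target_deg - current_deg + 180.0) % 360.0 - 180.0
    return round(delta / (STEP_ANGLE_DEG / MICROSTEPS))
```

A positive result would drive the motor one way around the z-axis (adjustment in the x-y plane) and a negative result the other way; the same logic applies to rotation about the y-axis.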
According to the embodiment of the application, the terminal can accurately output or collect audio by dynamically adjusting the audio processing component of the terminal based on the related information of the target object. Accurate audio output can reduce the interference to other people, and accurate audio collection can greatly reduce noise sources and improve recording quality.
In summary, in the technical solution provided in the embodiments of the present application, the rotation of the audio processing component is controlled based on the outline shape of the target object and/or the positional relationship between the target object and the terminal, so that the audio processing component may output audio to the target object more accurately, or collect audio from the target object, thereby reducing interference to non-target audience and improving the effectiveness of audio output or collection.
In an exemplary embodiment, the target object includes a person, and the related information includes a human outline of the person and/or a positional relationship between the person and the terminal. Referring to fig. 6, a flowchart of a terminal control method according to another embodiment of the present application is shown. The method may be applied to a terminal as shown in fig. 2 to 4, and the method may include the following steps.
In step 601, indication information is displayed, where the indication information is used to indicate whether to turn on the audio pointing function.
The audio pointing function refers to a function in which an audio processing component outputs audio to or collects audio from a specific area. The specific region refers to a region where the target object is located.
In a possible implementation manner, the indication information is displayed in the system setting interface. If the user confirms in the system setting interface that the audio pointing function is turned on, the terminal subsequently executes the terminal control method automatically when outputting or collecting audio.

In a possible implementation manner, the indication information is displayed in the audio collection interface. If the user confirms in the audio collection interface that the audio pointing function is turned on, the terminal subsequently executes the terminal control method automatically when collecting audio. Illustratively, the audio collection interface includes a call interface, a voice chat interface, a video chat interface, a web conference interface, and the like; the embodiment of the present application does not limit the type of the audio collection interface.

In a possible implementation manner, the indication information is displayed in the audio output interface. If the user confirms in the audio output interface that the audio pointing function is turned on, the terminal subsequently executes the terminal control method automatically when outputting audio. Illustratively, the audio output interface includes a music playing interface, a video playing interface, a call interface, and the like; the embodiment of the present application does not limit the type of the audio output interface.
In step 602, in response to receiving a confirmation instruction for the indication information, a first millimeter wave signal is transmitted by the positioning component.
The confirmation instruction is used for indicating to turn on the audio pointing function.
In a possible implementation manner, the user triggers the confirmation instruction of the indication information through a gesture, touch control, voice and the like.
For description of the first millimeter wave signal transmitted by the positioning component, reference may be made to the above embodiments, and details thereof are omitted herein.
It should be noted that, in a possible implementation manner, the terminal does not display the indication information, but turns on the audio pointing function by default.
Step 603, receiving, by the positioning component, a second millimeter wave signal reflected by the target object from the first millimeter wave signal.
In step 604, information about the target object is determined based on the first millimeter wave signal and the second millimeter wave signal.
In the embodiment of the application, the related information includes a contour shape of the target object and/or a positional relationship between the target object and the terminal.
For the description of steps 603 to 604, refer to the above embodiments, and are not repeated here.
Step 605, determining the head position of the person based on the related information and the head characteristics.
The head features are used to indicate head contour features of the character.
The head position of the person is used to indicate the positional relationship between the head of the person and the terminal. For example, the head position of the person may be such that the head is located on the upper side of the terminal.
In a possible implementation manner, the terminal determines the head region of the person from the outline of the person based on the outline of the person and the head features; the terminal then determines the positional relationship between the head and the terminal based on the positional relationship between the person and the terminal and the head region of the person.
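Illustratively, locating the head region within a person's outline can be sketched as follows. This is a minimal sketch, not the disclosed algorithm: it assumes the outline is represented as per-row widths sampled top-to-bottom, and uses a single head feature (the head being narrower than the shoulders by an assumed ratio).

```python
def find_head_region(row_widths, shoulder_ratio=0.6):
    """Given the widths of a person's outline sampled top-to-bottom,
    return the (start, end) row indices of the head region: the
    contiguous top rows whose width stays below a fraction of the
    maximum (shoulder-level) width."""
    max_width = max(row_widths)
    end = 0
    for i, width in enumerate(row_widths):
        if width >= shoulder_ratio * max_width:
            break  # shoulders reached; head region ends here
        end = i + 1
    return (0, end)
```

The positional relationship between the head and the terminal then follows by combining this region with the measured person–terminal distance and orientation.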
In a possible implementation, steps 602 to 605 may be performed by the positioning component. Illustratively, steps 602-603 may be performed by a millimeter wave transceiver and a radio frequency integrated unit in the positioning assembly, and steps 604-605 may be performed by a digital signal processing unit in the positioning assembly.
In step 606, the motor control assembly controls the audio processing assembly to rotate, and the rotated audio processing assembly faces the head of the person.
After the terminal determines the head position of the person, the terminal can determine rotation information corresponding to the audio processing component based on the current position of the audio processing component and the head position of the person, wherein the rotation information is used for indicating how the audio processing component rotates. Illustratively, the rotation information may be determined by a digital signal processing unit, which, upon determination, sends the rotation information to the motor control assembly; the motor control assembly controls the rotation of the audio processing assembly based on the rotation information.
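Illustratively, the rotation information the digital signal processing unit sends to the motor control assembly can be sketched as a pair of signed angular adjustments. The representation (azimuth/elevation pairs in degrees) and the function name are assumptions for this example, not part of the disclosure.

```python
def rotation_info(current, head):
    """current, head: (azimuth_deg, elevation_deg) pairs describing the
    audio processing component's present orientation and the direction
    of the person's head. Returns the signed angular adjustments to
    apply."""
    def shortest(delta_deg):
        # Normalize to [-180, 180) so the component takes the shorter arc.
        return (delta_deg + 180.0) % 360.0 - 180.0
    return {
        "azimuth_delta": shortest(head[0] - current[0]),
        "elevation_delta": shortest(head[1] - current[1]),
    }
```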
In step 607, an updated positional relationship between the person and the terminal is determined, and the updated positional relationship is obtained by measuring the position between the person and the terminal again after the time when the related information is acquired.
In a possible implementation, the terminal performs the step of determining an updated positional relationship between the person and the terminal after the audio processing component has completed the rotation.
The updated positional relationship is the positional relationship between the character and the terminal obtained after the terminal re-measures. In a possible implementation manner, millimeter wave signals are sent through the positioning component, and reflected millimeter wave signals reflected by the person are received; and determining the updated position relationship between the person and the terminal based on the transmitted millimeter wave signal and the reflected millimeter wave signal.
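Illustratively, the distance component of the re-measured positional relationship follows from the millimeter-wave round-trip time: the signal travels to the person and back, so the one-way distance is half of the speed of light multiplied by the elapsed time. A minimal sketch (the function name is an assumption for the example):

```python
C = 299_792_458.0  # speed of light, m/s

def round_trip_distance(t_transmit_s, t_receive_s):
    """One-way distance to the reflecting target, computed from the
    transmit and receive timestamps of a millimeter-wave signal."""
    dt = t_receive_s - t_transmit_s
    return C * dt / 2.0
```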
Step 608, determining whether the change between the updated positional relationship and the positional relationship satisfies the preset condition. In response to the change satisfying the preset condition, execution starts again from step 602; in response to the change not satisfying the preset condition, execution starts again from step 607.
In a possible implementation manner, after determining the related information of the target object, the terminal stores the positional relationship into a register. After determining the updated positional relationship, the terminal compares the updated positional relationship with the stored positional relationship to determine the change between them. The terminal also stores the updated positional relationship into the register.
In a possible implementation, the updated positional relationship comprises an updated distance between the target object and the terminal and/or an updated position between the target object and the terminal.
The change between the updated positional relationship and the positional relationship may include at least one of: a change in distance and a change in orientation.
In a possible implementation, the preset condition is that the change between the updated positional relationship and the positional relationship is greater than a preset threshold, which may be set by a technician. Illustratively, when the change between the updated positional relationship and the positional relationship is greater than the preset threshold, it is determined that the change satisfies the preset condition; when the change is smaller than the preset threshold, it is determined that the change does not satisfy the preset condition. Illustratively, the preset threshold corresponding to the change in distance and the preset threshold corresponding to the change in orientation may be different: the change in distance may be measured in centimeters (cm), and the change in orientation may be measured in degrees.
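Illustratively, the preset-condition check can be sketched as follows. The threshold values (20 cm for distance, 15 degrees for orientation) and the field names are assumptions chosen for the example; the disclosure only states that the thresholds may be set by a technician and may differ per quantity.

```python
def needs_relocation(old, new,
                     dist_threshold_cm=20.0, angle_threshold_deg=15.0):
    """old, new: dicts with 'distance_cm' and 'azimuth_deg'. Returns
    True when the change in distance or orientation exceeds its preset
    threshold, i.e. the head of the person must be located again."""
    dist_change = abs(new["distance_cm"] - old["distance_cm"])
    # Wrap the orientation difference into [-180, 180) before comparing.
    angle_change = abs(
        (new["azimuth_deg"] - old["azimuth_deg"] + 180.0) % 360.0 - 180.0)
    return dist_change > dist_threshold_cm or angle_change > angle_threshold_deg
```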
In a possible implementation, in response to the change between the updated positional relationship and the positional relationship satisfying the preset condition, the step of transmitting the first millimeter wave signal through the positioning component is performed again. When the change satisfies the preset condition, the position of the person has changed significantly; the terminal needs to determine the head position of the person again and then adjust the orientation of the audio processing component.
In a possible implementation, in response to the change between the updated positional relationship and the positional relationship not satisfying the preset condition, the step of determining the updated positional relationship between the person and the terminal is performed again. When the change does not satisfy the preset condition, the position of the person has not changed significantly; the terminal can still output or collect audio through the previously rotated audio processing component, and the orientation of the audio processing component does not need to be adjusted. The terminal may continually determine an updated positional relationship and decide, based on it, whether the orientation of the audio processing component needs to be readjusted.
Illustratively, when the head of the person moves only slightly relative to the terminal (for example, the head tilts up or down), it is determined that the audio pointing does not need to be adjusted; when the relative movement is large or the target object is replaced, it is determined that the head of the person needs to be located again.
In the embodiment of the application, even if the target object moves, the orientation of the audio processing assembly can be dynamically adjusted, so that the best audio working effect is ensured.
In a possible implementation, the target object includes a first object and a second object; a first audio processing component of the audio processing components corresponds to the first object; a second one of the audio processing components corresponds to a second object. A first one of the audio processing components serves a first object (i.e., the first audio processing component outputs audio for the first object or the first audio processing component captures audio from the first object), and a second one of the audio processing components serves a second object (the second audio processing component outputs audio for the second object or the second audio processing component captures audio from the second object). The first audio processing component and the second audio processing component are different components. In a possible implementation, the second audio processing component is all components of the audio processing components except the first audio processing component, or the second audio processing component is a part of the audio processing components except the first audio processing component. In one example, the number of first audio processing components and the number of second audio processing components are the same; in another example, the number of first audio processing components and the number of second audio processing components are different, which is not limited by the embodiments of the present application.
In a possible implementation, a component closest to the first object among the audio processing components is determined as a first audio processing component; the component closest to the second object among the audio processing components is determined to be the second audio processing component.
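Illustratively, the closest-component assignment can be sketched as follows; a minimal sketch assuming planar (x, y) coordinates for components and objects, with names chosen for the example only.

```python
def assign_components(component_positions, object_positions):
    """Assign to each target object the index of the audio processing
    component closest to it (squared Euclidean distance)."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    assignment = {}
    for obj_id, obj_pos in object_positions.items():
        best = min(range(len(component_positions)),
                   key=lambda i: dist2(component_positions[i], obj_pos))
        assignment[obj_id] = best
    return assignment
```

With two components and two objects this yields the first/second pairing described above; the same rule extends to more objects.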
In a possible implementation, in case the target object comprises a plurality of objects, the object closest to the terminal by default corresponds to the audio processing component. That is, in the case where the target object includes a plurality of objects, the default audio processing component serves the object closest to the terminal (i.e., the audio processing component outputs audio for the object closest to the terminal, or the audio processing component collects audio from the object closest to the terminal).
In summary, in the technical solution provided in the embodiments of the present application, by displaying the indication information, and then after receiving the confirmation instruction for the indication information, transmitting the millimeter wave signal through the positioning component, the user can flexibly select whether to start the audio pointing function, which is more flexible.
In addition, according to the embodiment of the application, whether the orientation of the audio processing component needs to be readjusted is determined based on the updated position relation and the position relation, and even if the target object moves, the orientation of the audio processing component can be dynamically adjusted, so that the best audio working effect is ensured.
In an exemplary embodiment, the target object comprises an item. Referring to fig. 7, a flowchart of a terminal control method according to another embodiment of the present application is shown. The method may be applied to a terminal as shown in fig. 2 to 4, and the method may include the following steps.
In step 701, an initial positional relationship between an item and a terminal is determined by a camera assembly.
In a possible implementation manner, the terminal displays indication information, wherein the indication information is used for indicating whether to start an audio pointing function; and in response to receiving the confirmation instruction of the indication information, determining the initial position relationship between the article and the terminal through the camera component.
In a possible implementation, the user focuses on the object through the camera assembly, so that the terminal determines the initial position relationship between the object and the terminal. The initial positional relationship is used to indicate the approximate position between the item and the terminal.
At step 702, a first millimeter wave signal is transmitted to an item based on an initial positional relationship by a positioning component.
In a possible implementation, after the terminal determines the initial positional relationship, the positioning component may determine the approximate location of the article, at which time the positioning component may send a first millimeter wave signal to the article based on the initial positional relationship.
Step 703, receiving, by the positioning component, a second millimeter wave signal reflected by the article from the first millimeter wave signal.
Step 704, determining relevant information of the article based on the first millimeter wave signal and the second millimeter wave signal, the relevant information including a contour shape of the article and/or a positional relationship between the article and the terminal.
The description of steps 702 to 704 can be found in the above embodiments, and will not be repeated here.
Step 705, controlling the rotation of the audio processing component based on the related information by the motor control component, wherein the rotated audio processing component corresponds to the article.
In a possible implementation, when the target object comprises an item, the audio processing component is primarily used to capture audio of the item. Thus, the rotated audio processing assembly corresponding to the item means that the rotated audio processing assembly captures audio of the item.
In a possible implementation manner, the sounding positions of different articles may be different, the terminal may determine the sounding part of the article based on the outline shape of the article, and determine the positional relationship between the sounding part and the terminal based on the positional relationship between the article and the terminal, so as to control the motor control assembly to control the audio processing assembly to rotate, and the audio processing assembly after rotation faces the sounding part.
In step 706, a new positional relationship between the article and the terminal is determined, and the new positional relationship is obtained by measuring the position between the article and the terminal again after the time when the related information is acquired.
Step 707, it is determined whether the change between the new positional relationship and the positional relationship satisfies a preset condition. In response to the change satisfying the preset condition, execution starts again from step 701; in response to the change not satisfying the preset condition, execution starts again from step 706.
In a possible implementation, in response to the change between the new positional relationship and the positional relationship satisfying the preset condition, the step of determining the initial positional relationship between the article and the terminal through the camera component is performed again.
In a possible implementation, in response to the change between the new positional relationship and the positional relationship not satisfying the preset condition, the step of determining the new positional relationship between the article and the terminal is performed again.
The description of steps 706 to 707 can be found in the above embodiments, and will not be repeated here.
In summary, in the technical scheme provided by the embodiment of the application, the position relationship between the object and the terminal is determined, so that the audio processing component faces the object, the audio processing component can collect the audio of the object more accurately, and the interference of other noise is reduced.
The following are device embodiments of the present application, which may be used to perform method embodiments of the present application. For details not disclosed in the device embodiments of the present application, please refer to the method embodiments of the present application.
Referring to fig. 8, a block diagram of a terminal control apparatus according to an embodiment of the present application is shown, where the apparatus may be applied to a terminal as shown in fig. 2 to 4, and the apparatus has a function of implementing an example of the terminal control method described above, where the function may be implemented by hardware or may be implemented by executing corresponding software by hardware. The apparatus 800 may include:
a signal transmitting module 810 for transmitting a first millimeter wave signal through the positioning component;
a signal receiving module 820, configured to receive, by using the positioning component, a second millimeter wave signal reflected by the target object from the first millimeter wave signal;
an information determining module 830, configured to determine, based on the first millimeter wave signal and the second millimeter wave signal, related information of the target object, where the related information includes a contour shape of the target object and/or a positional relationship between the target object and the terminal;
and the component rotating module 840 is configured to control, by the motor control component, the rotation of the audio processing component based on the related information, where the rotated audio processing component corresponds to the target object.
In summary, in the technical solution provided in the embodiments of the present application, the rotation of the audio processing component is controlled based on the outline shape of the target object and/or the positional relationship between the target object and the terminal, so that the audio processing component may output audio to the target object more accurately, or collect audio from the target object, thereby reducing interference to non-target audience and improving the effectiveness of audio output or collection.
In an exemplary embodiment, the target object includes a person, and the related information includes a human outline of the person and/or a positional relationship between the person and the terminal;
the assembly rotation module 840 is configured to:
determining a head position of the person based on the related information and the head features;
and the motor control assembly controls the audio processing assembly to rotate, and the rotated audio processing assembly faces the head position of the person.
In an exemplary embodiment, the apparatus further comprises: a position determination module (not shown).
The position determining module is used for determining an updated position relation between the person and the terminal, wherein the updated position relation is obtained by measuring the position between the person and the terminal again after the moment of acquiring the related information;
The signal sending module 810 is further configured to, in response to the change between the updated positional relationship and the positional relationship satisfying a preset condition, perform again from the step of transmitting the first millimeter wave signal through the positioning component;

the position determining module is further configured to, in response to the change between the updated positional relationship and the positional relationship not satisfying a preset condition, perform again from the step of determining the updated positional relationship between the person and the terminal.
In an exemplary embodiment, the target object comprises an article, and the terminal further comprises a camera assembly;
the device further comprises: a position determination module (not shown).
The position determining module is used for determining an initial position relation between the object and the terminal through the camera component;
the signal sending module 810 is configured to:
the first millimeter wave signal is transmitted to the article based on the initial positional relationship by the positioning component.
In an exemplary embodiment, the location determination module is further configured to:
determining a new position relation between the article and the terminal, wherein the new position relation is obtained by measuring the position between the article and the terminal again after the moment of acquiring the related information;
In response to the change between the new positional relationship and the positional relationship satisfying a preset condition, performing again from the step of determining an initial positional relationship between the article and the terminal through the camera component;
and in response to the change between the new positional relationship and the positional relationship not meeting a preset condition, performing again from the step of determining the new positional relationship between the item and the terminal.
In an exemplary embodiment, the target object includes a first object and a second object;
a first one of the audio processing components corresponds to the first object;
a second one of the audio processing components corresponds to the second object.
In an exemplary embodiment, the apparatus further comprises:
an information display module (not shown in the figure) for displaying indication information for indicating whether or not to turn on the audio pointing function;
the signal sending module 810 is further configured to, in response to receiving an acknowledgement instruction for the indication information, start to perform the step of sending, by the positioning component, the first millimeter wave signal.
It should be noted that, when the apparatus provided in the foregoing embodiment performs the functions thereof, only the division of the foregoing functional modules is used as an example, in practical application, the foregoing functional allocation may be performed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to perform all or part of the functions described above. In addition, the apparatus and the method embodiments provided in the foregoing embodiments belong to the same concept, and specific implementation processes of the apparatus and the method embodiments are detailed in the method embodiments and are not repeated herein.
Referring to fig. 9, a block diagram of a terminal according to an embodiment of the present application is shown.
The terminal in the embodiment of the application may include one or more of the following components: a processor 910 and a memory 920.
Processor 910 may include one or more processing cores. The processor 910 connects various parts within the terminal using various interfaces and lines, and performs various functions of the terminal and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 920 and invoking data stored in the memory 920. Optionally, the processor 910 may be implemented in hardware using at least one of digital signal processing (Digital Signal Processing, DSP), field programmable gate array (Field-Programmable Gate Array, FPGA), and programmable logic array (Programmable Logic Array, PLA). The processor 910 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a modem, and the like. The CPU mainly handles the operating system, application programs, and the like; the modem handles wireless communication. It will be appreciated that the modem may also not be integrated into the processor 910 and may instead be implemented by a separate chip.
Optionally, the processor 910 implements the methods provided by the various method embodiments described above when executing program instructions in the memory 920.
The Memory 920 may include a random access Memory (Random Access Memory, RAM) or a Read-Only Memory (ROM). Optionally, the memory 920 includes a non-transitory computer readable medium (non-transitory computer-readable storage medium). Memory 920 may be used to store instructions, programs, code, sets of codes, or instruction sets. The memory 920 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for at least one function, instructions for implementing the various method embodiments described above, and the like; the storage data area may store data created according to the use of the terminal, etc.
The structure of the terminal described above is merely illustrative, and in actual implementation, the terminal may include more or fewer components, such as: a display screen, etc., which is not limited in this embodiment.
It will be appreciated by those skilled in the art that the structure shown in fig. 9 is not limiting of the terminal and may include more or fewer components than shown, or may combine certain components, or may employ a different arrangement of components.
In an exemplary embodiment, there is also provided a computer-readable storage medium having stored therein a computer program that is loaded and executed by a processor of a terminal to implement the steps in the above-described terminal control method embodiments.
In an exemplary embodiment, a computer program product is provided that includes computer instructions stored in a computer-readable storage medium. The processor of the terminal reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions so that the terminal performs the terminal control method described above.
It should be understood that references herein to "a plurality" are to two or more. "and/or", describes an association relationship of an association object, and indicates that there may be three relationships, for example, a and/or B, and may indicate: a exists alone, A and B exist together, and B exists alone. The character "/" generally indicates that the context-dependent object is an "or" relationship. In addition, the step numbers described herein are merely exemplary of one possible execution sequence among steps, and in some other embodiments, the steps may be executed out of the order of numbers, such as two differently numbered steps being executed simultaneously, or two differently numbered steps being executed in an order opposite to that shown, which is not limited by the embodiments of the present application.
The foregoing description of the exemplary embodiments of the present application is not intended to limit the invention to the particular embodiments disclosed, but on the contrary, the intention is to cover all modifications, equivalents, alternatives, and alternatives falling within the spirit and scope of the invention.

Claims (10)

1. A terminal, characterized in that the terminal comprises a positioning component, a motor control component and an audio processing component; the positioning assembly is coupled with the motor control assembly; the motor control assembly is coupled with the audio processing assembly; the positioning component comprises a millimeter wave transceiver, a radio frequency integrated unit and a digital signal processing unit; the millimeter wave transceiver is used for transmitting and receiving millimeter waves; the millimeter wave transceiver is coupled with the radio frequency integrated unit; the millimeter wave transceiver is coupled with the digital signal processing unit;
the positioning component is configured to transmit first millimeter wave signals in a plurality of directions; receive second millimeter wave signals obtained by target objects reflecting the first millimeter wave signals in the respective directions; determine distances between the terminal and the target objects in the respective directions based on the first millimeter wave signals and the second millimeter wave signals in the respective directions; in a case where there are a plurality of target objects, determine the target object with the smallest distance to the terminal as the target object served by the audio processing component; and determine an outline shape of the served target object according to the distances between the terminal and the served target object in the respective directions;
in a case where the served target object is a person, the motor control component is configured to determine a head region of the person based on the outline shape of the person;
the positioning component is further configured to determine a positional relationship between the person and the terminal based on the first millimeter wave signal and the second millimeter wave signal in respective directions, the positional relationship between the person and the terminal including: a distance between the person and the terminal, and an orientation between the person and the terminal;
the motor control component is further configured to determine a head position of the person based on a positional relationship between the person and the terminal and a head region of the person, the head position of the person being used to indicate the positional relationship between the person's head and the terminal;
the motor control component is further configured to control the audio processing component to rotate such that, after rotation, the audio processing component faces the head position of the person;
the audio processing component is configured to output or collect audio toward the head position of the person.
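As an illustrative sketch only (not part of the claims), the positioning logic of claim 1 can be modeled as follows. All names, and the round-trip time-of-flight distance formula, are assumptions for illustration; the claim itself does not prescribe how the distances are computed:

```python
# Illustrative sketch of the claim-1 positioning logic (hypothetical names).
C = 299_792_458.0  # speed of light in m/s

def distance_m(round_trip_s: float) -> float:
    """Distance to the reflecting target in one direction, from the
    round-trip delay between the transmitted first millimeter wave
    signal and the reflected second millimeter wave signal: d = c*t/2."""
    return C * round_trip_s / 2.0

def served_target(distances: dict[str, float]) -> str:
    """When a plurality of target objects is detected, the one with the
    smallest distance to the terminal is served by the audio component."""
    return min(distances, key=distances.get)

# Example: two targets, round-trip delays of ~14 ns and ~23 ns.
targets = {"person_a": distance_m(14e-9), "person_b": distance_m(23e-9)}
print(served_target(targets))  # person_a (about 2.1 m away)
```

Repeating the distance measurement over the plurality of directions yields a per-direction range profile, from which the outline shape of the served target can be estimated.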
2. The terminal of claim 1, wherein the motor control assembly comprises a stepper motor.
3. A terminal control method, characterized in that the method is performed by a terminal comprising a positioning component, a motor control component and an audio processing component; the positioning component is coupled with the motor control component; the motor control component is coupled with the audio processing component; the positioning component comprises a millimeter wave transceiver, a radio frequency integrated unit and a digital signal processing unit; the millimeter wave transceiver is used for transmitting and receiving millimeter waves; the millimeter wave transceiver is coupled with the radio frequency integrated unit; the millimeter wave transceiver is coupled with the digital signal processing unit;
the method comprises the following steps:
transmitting, by the positioning component, first millimeter wave signals in a plurality of directions;
receiving, by the positioning component, second millimeter wave signals obtained by a target object reflecting the first millimeter wave signals in the respective directions;
determining distances between the terminal and the target objects in the respective directions based on the first millimeter wave signals and the second millimeter wave signals in the respective directions; in a case where there are a plurality of target objects, determining the target object with the smallest distance to the terminal as the target object served by the audio processing component;
determining an outline shape of the served target object according to the distances between the terminal and the served target object in the respective directions;
in a case where the served target object is a person, determining, by the motor control component, a head region of the person based on the outline shape of the person;
determining, by the positioning component, a positional relationship between the person and the terminal based on the first millimeter wave signal and the second millimeter wave signal in respective directions, the positional relationship between the person and the terminal including: a distance between the person and the terminal, and an orientation between the person and the terminal;
determining, by the motor control component, a head position of the person based on a positional relationship between the person and the terminal and a head region of the person, the head position of the person being used to indicate the positional relationship between the person's head and the terminal;
and controlling, by the motor control component, the audio processing component to rotate such that, after rotation, the audio processing component faces the head position of the person.
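A minimal sketch, outside the claims, of how the final rotation step of claim 3 could be realized. The azimuth computation and the 1.8-degree step angle are assumptions; the claims only require that the rotated audio processing component face the head position:

```python
import math

STEP_ANGLE_DEG = 1.8  # assumed step angle of a stepper motor (cf. claim 2)

def head_azimuth_deg(head_x: float, head_y: float) -> float:
    """Bearing of the person's head relative to the terminal, derived
    from the positional relationship (distance and orientation)."""
    return math.degrees(math.atan2(head_y, head_x))

def rotation_steps(current_deg: float, target_deg: float) -> int:
    """Signed number of motor steps that turns the audio processing
    component to face the head position, taking the shortest arc."""
    delta = (target_deg - current_deg + 180.0) % 360.0 - 180.0
    return round(delta / STEP_ANGLE_DEG)
```

For example, with the component at 0 degrees and the head at 45 degrees, `rotation_steps(0.0, 45.0)` commands 25 steps of 1.8 degrees each.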
4. The method of claim 3, wherein after the controlling, by the motor control component, the audio processing component to rotate such that the rotated audio processing component faces the head position of the person, the method further comprises:
determining an updated positional relationship between the person and the terminal, the updated positional relationship being obtained by measuring the position between the person and the terminal again, after the moment at which the information about the target object was acquired;
in response to the change between the updated positional relationship and the positional relationship satisfying a preset condition, performing the method again from the step of transmitting, by the positioning component, the first millimeter wave signals in a plurality of directions;
and in response to the change between the updated positional relationship and the positional relationship not satisfying the preset condition, performing the method again from the step of determining the updated positional relationship between the person and the terminal.
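The branch condition in claim 4 can be sketched as follows; this is illustrative only. The Euclidean change metric and the 0.3 m threshold are assumptions, since the claim leaves the preset condition unspecified:

```python
def change_exceeds_preset(old_pos: tuple[float, float],
                          new_pos: tuple[float, float],
                          threshold_m: float = 0.3) -> bool:
    """True when the person has moved enough that the full positioning
    pass (transmitting first millimeter wave signals again) should be
    rerun; otherwise only the position is re-measured."""
    dx = new_pos[0] - old_pos[0]
    dy = new_pos[1] - old_pos[1]
    return (dx * dx + dy * dy) ** 0.5 > threshold_m
```

Under this sketch, a move of 1 m triggers a full re-scan, while a move of a few centimeters only triggers another position measurement.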
5. The method of claim 3, wherein in a case where the target object is an article, the terminal further comprises a camera component;
before the transmitting, by the positioning component, the first millimeter wave signals in a plurality of directions, the method further comprises:
determining an initial positional relationship between the article and the terminal through the camera component;
the transmitting, by the positioning component, the first millimeter wave signals in a plurality of directions comprises:
transmitting, by the positioning component, the first millimeter wave signal toward the article based on the initial positional relationship.
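One way (purely illustrative, not stated in the claims) to obtain the initial positional relationship of claim 5 is to derive a bearing from the pixel position of the detected article in the camera image; the small-angle pinhole model and the 70-degree horizontal field of view below are assumptions:

```python
def initial_bearing_deg(bbox_center_x: float, image_width: float,
                        hfov_deg: float = 70.0) -> float:
    """Approximate horizontal bearing of the article from the pixel
    position of its detection-box center; 0 means straight ahead."""
    offset = bbox_center_x / image_width - 0.5  # range -0.5 .. +0.5
    return offset * hfov_deg
```

The millimeter wave signal can then be transmitted toward this bearing instead of being swept over all directions.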
6. The method of claim 5, wherein the method further comprises:
determining a new positional relationship between the article and the terminal, the new positional relationship being obtained by measuring the position between the article and the terminal again, after the moment at which the information about the target object was acquired;
in response to the change between the new positional relationship and the positional relationship satisfying a preset condition, performing the method again from the step of determining the initial positional relationship between the article and the terminal through the camera component;
and in response to the change between the new positional relationship and the positional relationship not satisfying the preset condition, performing the method again from the step of determining the new positional relationship between the article and the terminal.
7. The method of any one of claims 3 to 6, wherein the target object comprises a first object and a second object;
a first one of the audio processing components corresponds to the first object;
a second one of the audio processing components corresponds to the second object.
8. The method according to any one of claims 3 to 6, further comprising:
displaying indication information, the indication information being used to indicate whether to enable an audio pointing function;
in response to receiving a confirmation instruction for the indication information, performing the method from the step of transmitting, by the positioning component, the first millimeter wave signals in a plurality of directions.
9. A terminal control device, characterized in that the device is arranged in a terminal, and the terminal comprises a positioning component, a motor control component and an audio processing component; the positioning component is coupled with the motor control component; the motor control component is coupled with the audio processing component; the positioning component comprises a millimeter wave transceiver, a radio frequency integrated unit and a digital signal processing unit; the millimeter wave transceiver is used for transmitting and receiving millimeter waves; the millimeter wave transceiver is coupled with the radio frequency integrated unit; the millimeter wave transceiver is coupled with the digital signal processing unit;
the device comprises:
a signal sending module, configured to transmit, through the positioning component, first millimeter wave signals in a plurality of directions;
a signal receiving module, configured to receive, through the positioning component, second millimeter wave signals obtained by a target object reflecting the first millimeter wave signals in the respective directions;
an information determining module, configured to determine distances between the terminal and the target objects in the respective directions based on the first millimeter wave signals and the second millimeter wave signals in the respective directions; in a case where there are a plurality of target objects, determine the target object with the smallest distance to the terminal as the target object served by the audio processing component; and determine an outline shape of the served target object according to the distances between the terminal and the served target object in the respective directions;
a component rotation module, configured to determine, through the motor control component, a head region of the person based on the outline shape of the person in a case where the served target object is a person;
the information determining module is further configured to determine, by the positioning component, a positional relationship between the person and the terminal based on the first millimeter wave signal and the second millimeter wave signal in each direction, where the positional relationship between the person and the terminal includes: a distance between the person and the terminal, and an orientation between the person and the terminal;
the component rotation module is further configured to determine, through the motor control component, the head position of the person based on the positional relationship between the person and the terminal and the head region of the person, the head position of the person being used to indicate the positional relationship between the person's head and the terminal; and to control, through the motor control component, the audio processing component to rotate such that, after rotation, the audio processing component faces the head position of the person.
10. A computer-readable storage medium, in which a computer program is stored, the computer program being loaded and executed by a processor to implement the terminal control method according to any one of claims 3 to 8.
CN202011521562.2A 2020-12-21 2020-12-21 Terminal, terminal control method, device and storage medium Active CN112565973B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011521562.2A CN112565973B (en) 2020-12-21 2020-12-21 Terminal, terminal control method, device and storage medium
PCT/CN2021/129846 WO2022134910A1 (en) 2020-12-21 2021-11-10 Terminal, terminal control method and apparatus, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011521562.2A CN112565973B (en) 2020-12-21 2020-12-21 Terminal, terminal control method, device and storage medium

Publications (2)

Publication Number Publication Date
CN112565973A CN112565973A (en) 2021-03-26
CN112565973B true CN112565973B (en) 2023-08-01

Family

ID=75031199

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011521562.2A Active CN112565973B (en) 2020-12-21 2020-12-21 Terminal, terminal control method, device and storage medium

Country Status (2)

Country Link
CN (1) CN112565973B (en)
WO (1) WO2022134910A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112565973B (en) * 2020-12-21 2023-08-01 Oppo广东移动通信有限公司 Terminal, terminal control method, device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07159190A (en) * 1993-12-09 1995-06-23 Zanabui Informatics:Kk Sound device totallizing system on vehicle
WO2013093187A2 (en) * 2011-12-21 2013-06-27 Nokia Corporation An audio lens
CN107182011A (en) * 2017-07-21 2017-09-19 深圳市泰衡诺科技有限公司上海分公司 Audio frequency playing method and system, mobile terminal, WiFi earphones
WO2018041359A1 (en) * 2016-09-01 2018-03-08 Universiteit Antwerpen Method of determining a personalized head-related transfer function and interaural time difference function, and computer program product for performing same
CN109343902A (en) * 2018-09-26 2019-02-15 Oppo广东移动通信有限公司 Operation method, device, terminal and the storage medium of audio processing components

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8189825B2 (en) * 1994-05-09 2012-05-29 Breed David S Sound management techniques for vehicles
US9414144B2 (en) * 2013-02-21 2016-08-09 Stuart Mathis Microphone positioning system
CN105721645A (en) * 2016-02-22 2016-06-29 梁天柱 Voice peripheral of mobile phone
CN106157986B (en) * 2016-03-29 2020-05-26 联想(北京)有限公司 Information processing method and device and electronic equipment
CN107656718A (en) * 2017-08-02 2018-02-02 宇龙计算机通信科技(深圳)有限公司 A kind of audio signal direction propagation method, apparatus, terminal and storage medium
CN108805086B (en) * 2018-06-14 2021-09-14 联想(北京)有限公司 Media control method and audio processing device
WO2020042121A1 (en) * 2018-08-30 2020-03-05 Oppo广东移动通信有限公司 Gesture recognition method, terminal, and storage medium
CN109284081B (en) * 2018-09-20 2022-06-24 维沃移动通信有限公司 Audio output method and device and audio equipment
CN111050269B (en) * 2018-10-15 2021-11-19 华为技术有限公司 Audio processing method and electronic equipment
CN110493690B (en) * 2019-08-29 2021-08-13 北京搜狗科技发展有限公司 Sound collection method and device
CN111060874B (en) * 2019-12-10 2021-10-29 深圳市优必选科技股份有限公司 Sound source positioning method and device, storage medium and terminal equipment
CN111161768A (en) * 2019-12-23 2020-05-15 秒针信息技术有限公司 Recording apparatus
CN112565973B (en) * 2020-12-21 2023-08-01 Oppo广东移动通信有限公司 Terminal, terminal control method, device and storage medium


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Facial movement synthesis by HMM from audio speech; Kiyotsugu Kakihara et al.; Electronics and Communications in Japan (Part II: Electronics), Volume 85, Issue 4; pp. 1-10 *
Hardware design and implementation of a DSP-based high-speed audio acquisition system; Zhu Chaowen; Zeng Shuiping; Techniques of Automation and Applications (Issue 04); pp. 36-39 *
Binaural audio processing technology based on auditory perception characteristics; Li Junfeng; Xu Huaxing; Xia Risheng; Yan Yonghong; Journal of Applied Acoustics (Issue 05); pp. 124-134 *

Also Published As

Publication number Publication date
WO2022134910A1 (en) 2022-06-30
CN112565973A (en) 2021-03-26

Similar Documents

Publication Publication Date Title
EP3829149B1 (en) Audio control method and device, and terminal
CN103988366A (en) Sleeve with electronic extensions for cell phone
US20140199950A1 (en) Sleeve with electronic extensions for a cell phone
WO2014161309A1 (en) Method and apparatus for mobile terminal to implement voice source tracking
CN110519450B (en) Ultrasonic processing method, ultrasonic processing device, electronic device, and computer-readable medium
US20220278704A1 (en) Apparatus and method for adjusting level of low-noise amplifier (lna), and terminal device
EP4270639A1 (en) Terminal antenna and method for controlling beam direction of antenna
EP1803285A1 (en) Combined wireless communications device and radio broadcast receiver
CN112565973B (en) Terminal, terminal control method, device and storage medium
CN111885704A (en) Method for determining installation position of user front equipment, electronic equipment and storage medium
CN108832944B (en) Power compensation method, device, terminal equipment and storage medium
CN111399011A (en) Position information determining method and electronic equipment
CN111669249B (en) Cellular network electromagnetic interference method and system based on environment recognition
CN115184927B (en) Microwave nondestructive imaging target detection method
KR20120070966A (en) Radio channel measurement apparatus using multiple-antennas
JP2023159842A (en) Device determination method and device, electronic equipment and computer-readable storage medium
CN115407272A (en) Ultrasonic signal positioning method and device, terminal and computer readable storage medium
CN116846487A (en) Non-signaling mode test method, device, system, terminal and storage medium
CN109644456B (en) Beam scanning range determining method, device, equipment and storage medium
CN103873987A (en) Anti-interference remote wireless microphone system and wireless audio transmission method thereof
JP3643217B2 (en) System for evaluating electromagnetic field environment characteristics of wireless terminals
CN111983598B (en) Axis locus determining method and device based on multipath signals
CN114966262B (en) Anti-interference testing device and anti-interference testing method
CN215268270U (en) Antenna switching device and terminal
CN115085826B (en) Transmitting power detection circuit, method and wireless communication device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant