CN116414340A - Screen sounding control method, electronic equipment and computer readable storage medium - Google Patents

Screen sounding control method, electronic equipment and computer readable storage medium

Info

Publication number
CN116414340A
CN116414340A (application CN202111682609.8A)
Authority
CN
China
Prior art keywords
screen
user
sounding
controlling
sound
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111682609.8A
Other languages
Chinese (zh)
Inventor
张婧靥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to CN202111682609.8A priority Critical patent/CN116414340A/en
Priority to PCT/CN2022/126093 priority patent/WO2023124430A1/en
Publication of CN116414340A publication Critical patent/CN116414340A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G06F 3/162 Interface to dedicated audio devices, e.g. audio drivers, interface to CODECs
    • G06F 3/165 Management of the audio stream, e.g. setting of volume, audio stream path
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 Reducing energy consumption in communication networks
    • Y02D 30/70 Reducing energy consumption in communication networks in wireless communication networks

Abstract

The application provides a screen sounding control method, an electronic device, and a computer readable storage medium. The control method comprises: when sounding through the screen is required, determining the relative orientation between the user and the screen; if the relative orientation is that the user is directly in front of the screen, determining the vertical distance between the user's ear and the screen; if the relative orientation is that the user is not directly in front of the screen, or the vertical distance is greater than a first preset threshold, controlling a first portion of the screen to produce directional sound according to the relative orientation; and if the vertical distance is less than or equal to the first preset threshold, controlling a second portion of the screen to produce non-directional sound according to the projection position of the user's ear on the screen.

Description

Screen sounding control method, electronic equipment and computer readable storage medium
Technical Field
The embodiments of the application relate to the technical field of terminal screen sounding, and in particular to a screen sounding control method, an electronic device, and a computer readable storage medium.
Background
As mobile phones play an increasingly important role in daily life, thin, light, full-screen designs have become a key demand. To realize a true full screen, the openings that the camera, the receiver and other components require in the screen are eliminated: the camera opening is removed by placing the camera under the display, and the receiver opening is removed by using screen sounding technology.
In screen sounding technology, an exciter vibrates the phone screen and the vibration of the screen emits sound, so the notch required for an earpiece speaker can be avoided. However, because the whole screen vibrates to produce sound, the generated sound propagates over a wide range and leaks severely; and if the screen sounding volume is lowered instead, the sound transmission effect is poor and the call content cannot be heard clearly.
Disclosure of Invention
The embodiment of the application provides a control method for screen sounding, electronic equipment and a computer readable storage medium.
In a first aspect, an embodiment of the present application provides a screen sounding control method, including: when sounding through the screen is required, determining the relative orientation between the user and the screen; if the relative orientation is that the user is directly in front of the screen, determining the vertical distance between the user's ear and the screen; and if the relative orientation is that the user is not directly in front of the screen, or the vertical distance is greater than a first preset threshold, controlling a first portion of the screen to produce directional sound according to the relative orientation.
In a second aspect, an embodiment of the present application provides an electronic device, including: at least one processor; and a memory storing at least one program which, when executed by the at least one processor, implements any one of the above screen sounding control methods.
In a third aspect, embodiments of the present application provide a computer readable storage medium on which a computer program is stored, the computer program implementing any one of the above screen sounding control methods when executed by a processor.
According to the screen sounding control method provided by the embodiments of the application, when the relative orientation between the user and the screen is that the user is not directly in front of the screen, or the vertical distance between the ear and the screen is greater than the first preset threshold, the first portion of the screen is controlled to produce directional sound according to the relative orientation instead of driving the whole screen, so the sound generated by screen vibration propagates over a narrow range and sound leakage is reduced. Moreover, sound transmission is not achieved by lowering the screen sounding volume, so the sound transmission effect is ensured and the problem that the call content cannot be heard is avoided.
Drawings
FIG. 1 is a flow chart of a method for controlling on-screen sound production according to one embodiment of the present application;
fig. 2 is a schematic diagram of beamforming performed by a base station in an embodiment of the present application;
FIG. 3 is a schematic diagram of detecting a vertical distance between an ear and a screen in an embodiment of the present application;
fig. 4 is a schematic view of the ear hovering above the screen in an embodiment of the present application;
fig. 5 is a schematic diagram of a distribution of 12 actuators in a terminal according to an embodiment of the present application;
FIG. 6 is a schematic diagram of the relative orientation between the user and the screen when the user is offset toward the upper side of the screen in an embodiment of the present application;
fig. 7 is a schematic diagram of a transmission direction of an acoustic signal in an embodiment of the present application;
fig. 8 is a flowchart of a control method for screen sounding provided by an example of an embodiment of the present application;
fig. 9 is a block diagram of a control device for screen sounding according to another embodiment of the present application.
Detailed Description
In order to better understand the technical solutions of the present application, the following describes in detail a control method for screen sounding, an electronic device, and a computer readable storage medium provided by the present application with reference to the accompanying drawings.
Example embodiments will be described more fully hereinafter with reference to the accompanying drawings, but may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
In the absence of conflict, embodiments and features of embodiments herein may be combined with one another.
As used herein, the term "and/or" includes any and all combinations of at least one of the associated listed items.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of at least one other feature, integer, step, operation, element, component, and/or group thereof.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and this application and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Fig. 1 is a flowchart of a control method for screen sounding according to an embodiment of the present application.
In a first aspect, referring to fig. 1, an embodiment of the present application provides a screen sounding control method. The method may be applied to any terminal capable of producing sound through a screen, and includes:
step 100, in the case of sounding through the screen, determining the relative orientation between the user and the screen.
The embodiment of the application does not specifically limit the cases in which sounding through the screen is required. For example, sounding through the screen is required when the phone is currently in a call state, no earphone connection is detected, and the voice playback function is not turned on.
In some exemplary embodiments, determining the relative position between the user and the screen includes: respectively receiving first millimeter wave signals sent by a base station in different directions; the relative orientation is determined from the received first millimeter wave signals in different directions.
In this embodiment of the present application, in order to detect the relative orientation between the user and the screen, as shown in fig. 2, at least one base station performs hybrid beamforming using an antenna array formed of at least one millimeter wave antenna. Specifically, the base station performs digital-domain beamforming through a baseband precoding module, performs analog-domain beamforming through a first radio frequency link module, and finally transmits the beamformed millimeter wave signal, i.e. the first millimeter wave signal, through the first antenna array. The terminal receives the first millimeter wave signal through the second antenna array, performs the inverse processing of analog-domain beamforming on it through the second radio frequency link module, and performs the inverse processing of digital-domain beamforming through the baseband combining module.
The first millimeter wave signal has high directivity: its signal intensity in the main lobe direction is far greater than in the side lobe directions, that is, the first millimeter wave signal points in the main lobe direction, the direction of maximum signal intensity. The terminal can receive the first millimeter wave signal of a main lobe direction to determine whether an obstruction exists in that direction. If an obstruction exists in the main lobe direction, the relative orientation between the user and the screen is determined to be that main lobe direction; if no obstruction exists in the main lobe direction, the relative orientation between the user and the screen is determined not to be that main lobe direction.
In some exemplary embodiments, the base station may continuously transmit the first millimeter wave signal in different directions, while the terminal may not need to continuously receive the first millimeter wave signal in different directions, but may receive the first millimeter wave signal in different directions if it is necessary to determine the relative orientation between the user and the screen.
In some exemplary embodiments, the relative orientation between the user and the screen may be determined according to the received signal intensities of the first millimeter wave signals in different directions; that is, the relative orientation is derived from the main lobe direction of the first millimeter wave signal with the minimum received signal intensity, where the main lobe direction refers to the direction of maximum signal intensity when the base station performs beamforming.
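As a rough illustration of this selection step, the following Python sketch (the helper name and data layout are assumptions for illustration; the patent does not specify an implementation) treats the main lobe direction whose received intensity is weakest as occluded by the user and returns it as the relative orientation:

```python
import numpy as np

def estimate_user_orientation(beam_directions, received_intensities):
    """Pick the beam direction most likely blocked by the user.

    beam_directions:      list of (azimuth, elevation) tuples, one per base-station beam
    received_intensities: signal strength measured at the terminal for each beam (dBm)
    The beam with the weakest received intensity is assumed to be occluded by the
    user's head, so its main-lobe direction is taken as the relative orientation.
    """
    intensities = np.asarray(received_intensities, dtype=float)
    blocked_index = int(np.argmin(intensities))   # weakest beam -> occluded direction
    return beam_directions[blocked_index]

# Example: four beams, the third one is attenuated by the user's head.
directions = [(0, 0), (30, 0), (60, 0), (90, 0)]
intensities_dbm = [-62.0, -60.5, -78.3, -61.2]
print(estimate_user_orientation(directions, intensities_dbm))  # -> (60, 0)
```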
Step 101, in the case that the relative orientation is that the user is directly in front of the screen, determining the vertical distance between the user's ear and the screen.
In some exemplary embodiments, the user being directly in front of the screen means that the user's head is directly in front of the screen.
In some exemplary embodiments, determining the vertical distance between the user's ear and the screen comprises: transmitting a second millimeter wave signal; receiving an echo signal corresponding to the second millimeter wave signal; a vertical distance between the user's ear and the screen is determined from the echo signals.
In some exemplary embodiments, the echo signal corresponding to the second millimeter wave signal refers to a signal reflected back by the second millimeter wave signal when encountering an obstacle.
In some exemplary embodiments, when the second millimeter wave signal is transmitted, the second millimeter wave signal may be transmitted through at least one antenna, and each antenna may transmit one second millimeter wave signal, or may transmit two or more second millimeter wave signals.
In some exemplary embodiments, when the second millimeter wave signal is transmitted, the second millimeter wave signal may be transmitted in a direction perpendicular to the screen, or may be transmitted in a direction at an angle to the screen.
When the second millimeter wave signal is transmitted along the direction perpendicular to the screen, after the echo signal corresponding to the second millimeter wave signal is received, the second millimeter wave signal and the echo signal are mixed by a mixer to obtain an intermediate frequency signal, Fourier transform is performed on the intermediate frequency signal to obtain its frequency f, and the vertical distance d1 between the reflection point and the screen is calculated from f, i.e. d1 = f × c × Tc / (2B).
Here d1 is the vertical distance between the reflection point and the screen, f is the frequency of the intermediate frequency signal, c is the propagation speed of the millimeter wave signal, Tc is the sweep period of the second millimeter wave signal, and B is its bandwidth.
When the second millimeter wave signal is transmitted along a direction forming an angle with the screen, after the echo signal corresponding to the second millimeter wave signal is received, the second millimeter wave signal and the echo signal are mixed by a mixer to obtain an intermediate frequency signal, Fourier transform is performed on the intermediate frequency signal to obtain its frequency f, and the vertical distance d1 between the reflection point and the screen is calculated from f, i.e. d1 = sin α × f × c × Tc / (2B).
Here α is the angle between the transmission direction of the second millimeter wave signal and the screen, d1 is the vertical distance between the reflection point and the screen, f is the frequency of the intermediate frequency signal, c is the propagation speed of the millimeter wave signal, Tc is the sweep period of the second millimeter wave signal, and B is its bandwidth.
That is, each time a second millimeter wave signal is transmitted, the vertical distance between a reflection point and the screen can be calculated from the second millimeter wave signal and its corresponding echo signal, regardless of the transmission direction of the second millimeter wave signal.
In some exemplary embodiments, in order to detect the vertical distance between the ear and the screen, the vertical distances between multiple reflection points and the screen may be detected, and the smallest of these taken as the vertical distance between the ear and the screen, as shown in fig. 3. When the user makes a close-range call, the side of the user's face is close to the screen and the ear is the position closest to the screen, so the reflection point nearest to the screen corresponds to the ear; d3 in fig. 3 is the vertical distance between the ear and the screen.
In some exemplary embodiments, in order to improve accuracy, the vertical distance between the same reflection point and the screen may be detected multiple times and the average value taken.
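A minimal sketch of this distance estimation, assuming an FMCW-style intermediate frequency signal sampled at rate fs (the function names, parameter choices and default speed value are assumptions, not taken from the patent):

```python
import numpy as np

def reflection_distance(if_signal, fs, sweep_period, bandwidth, c=3e8, alpha_deg=90.0):
    """Estimate the vertical distance between one reflection point and the screen.

    if_signal:    sampled intermediate-frequency signal obtained by mixing TX and echo
    fs:           sampling rate of the IF signal (Hz)
    sweep_period: Tc, sweep period of the second millimeter-wave signal (s)
    bandwidth:    B, bandwidth of the second millimeter-wave signal (Hz)
    alpha_deg:    angle between the transmission direction and the screen (90 = perpendicular)
    """
    spectrum = np.abs(np.fft.rfft(if_signal))
    freqs = np.fft.rfftfreq(len(if_signal), d=1.0 / fs)
    f_if = freqs[int(np.argmax(spectrum[1:])) + 1]     # dominant IF frequency (skip DC)
    d = f_if * c * sweep_period / (2.0 * bandwidth)    # range along the beam
    return np.sin(np.radians(alpha_deg)) * d           # project onto the screen normal

def ear_distance(distances_per_point):
    """The ear is taken as the reflection point closest to the screen; repeats are averaged first."""
    averaged = [float(np.mean(d_list)) for d_list in distances_per_point]
    return min(averaged)
```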
Step 102, in the case that the relative orientation is that the user is not directly in front of the screen, or the vertical distance is greater than the first preset threshold, controlling a first portion of the screen to produce directional sound according to the relative orientation.
In some exemplary embodiments, controlling the first portion of the screen to produce directional sound according to the relative orientation includes: controlling the exciters corresponding to the first portion of the screen to work according to the relative orientation, so that these exciters drive the first portion of the screen to produce sound directed toward the relative orientation.
In this embodiment of the present application, the exciters corresponding to the first portion of the screen may be selected according to actual needs. For example, at least one exciter whose projection position on the plane of the screen is closest to the projection position of the user on that plane may be selected, or at least one exciter capable of sounding directionally toward the user's relative orientation may be selected. For another example, at least two exciters lying on the same line may be selected; in some cases, the exciters are chosen so that the vertical distance from the user to the line on which they lie is minimized.
In this embodiment of the present application, when the first portion of the screen is controlled to produce directional sound according to the relative orientation, the first portion of the screen is driven to sound toward the relative orientation, so that the sound does not propagate in other directions and sound leakage is avoided.
The principle of directional sounding using at least two exciters located on the same line is described below.
When two sound waves with stable frequencies are emitted and transmitted simultaneously, nonlinear effects in the medium generate a difference-frequency signal and a sum-frequency signal of the two waves: if the frequencies of the two sound wave signals are f1 and f2, then besides the originally emitted signals at f1 and f2, two new signals at frequencies f1-f2 and f1+f2 are generated during transmission. The higher-frequency signal attenuates faster in air but has better directivity, while the lower-frequency signal attenuates more slowly, so what the end user hears is the lower-frequency sound wave signal, i.e. the signal at frequency f1-f2.
The sum-frequency signal generated by only two sound sources may still have a relatively low frequency; by using multiple sound sources, the frequencies can be summed repeatedly to obtain a high-frequency sound wave with a higher degree of directivity. In the embodiment of the application, three or more exciters are controlled to work simultaneously, so that they drive their corresponding screen portions to sound at the same time; the sum frequency generated by sound signals of the same frequency is n × f1, which gives higher directivity, where n is the number of working exciters and f1 is the frequency of the sound signal.
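To make the difference-frequency and sum-frequency argument concrete, here is a small Python sketch (illustrative only; it models the medium's nonlinearity with a simple quadratic term, which the patent does not specify) showing that two tones at f1 and f2 acquire components at f1-f2 and f1+f2 after nonlinear distortion:

```python
import numpy as np

fs = 48_000                      # sample rate (Hz)
t = np.arange(0, 0.5, 1 / fs)    # 0.5 s of signal
f1, f2 = 2_000.0, 1_700.0        # two stable-frequency tones

linear = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
# A quadratic term stands in for the nonlinear effect of the propagation medium.
nonlinear = linear + 0.1 * linear ** 2

spectrum = np.abs(np.fft.rfft(nonlinear))
freqs = np.fft.rfftfreq(len(nonlinear), d=1 / fs)
peaks = freqs[spectrum > 0.05 * spectrum.max()]
# The printed set includes 300.0 (f1-f2) and 3700.0 (f1+f2) alongside DC, f1 and f2.
print(sorted(set(np.round(peaks))))
```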
Since sound wave signals of the same frequency emitted at the same time can only produce directional sound in one particular direction, the direction in which they are steered needs to be controlled according to the relative orientation between the user and the screen.
The azimuthal deflection of the sound wave radiation can be realized through phase shifting: different delay times are introduced at the sounding positions through digital signal processing (DSP), so that the generated directional sound wave signal is deflected by a certain angle. In the terminal, the exciters are distributed in multiple directions, and the propagation direction of the sound wave signal can be deflected by controlling which exciters work. For example, suppose it is detected that the user is offset toward the upper side of the screen, as shown in fig. 6. According to the exciter distribution in fig. 5, assume that exciter 1, exciter 3 and exciter 5 vibrate to produce sound; a simplified model is shown in fig. 7, in which the three exciters are three sound sources S. Because the sound needs to propagate upward, the wavefront must deflect upward: the delay time of exciter 1 is set to 2t, the delay time of exciter 3 is set to t, and exciter 5 needs no delay. The resulting sound propagation then deflects upward, with a deflection angle of θ = arcsin(x / d2), where x = ct, c is the speed of sound, and d2 is the distance between two adjacent exciters. More generally, the deflection angle can be expressed as θ0 = arcsin(z / D), where θ0 is the required deflection angle, z is the difference in propagation height between two adjacent exciters, and D is the distance between two adjacent exciters.
If deflection in other directions is required, sound transmission in any direction can be realized by controlling exciters at different positions.
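A minimal sketch of this delay-based steering, assuming equally spaced exciters along one axis and indexing them from the side the beam should tilt toward (the function name and example values are illustrative):

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air

def steering_delays(num_exciters, spacing, deflection_deg):
    """Per-exciter delays that tilt the wavefront by deflection_deg.

    num_exciters:   number of exciters driven together (e.g. exciters 1, 3, 5 -> 3)
    spacing:        distance D between two adjacent exciters (m)
    deflection_deg: required deflection angle theta_0 of the sound beam
    Index 0 gets the largest delay and the last index none, matching the
    2t / t / 0 example in the text (exciter 1 delayed most, beam tilted upward).
    """
    z = spacing * np.sin(np.radians(deflection_deg))   # per-element path difference
    step = z / SPEED_OF_SOUND                          # delay increment t between neighbours
    return np.array([step * k for k in range(num_exciters - 1, -1, -1)])

# Example: three exciters 2 cm apart, beam tilted 20 degrees.
print(steering_delays(3, 0.02, 20.0))   # ~[4.0e-05, 2.0e-05, 0.0] seconds
```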
In some exemplary embodiments, in the case that the vertical distance is less than or equal to the first preset threshold, the method further comprises: controlling a second portion of the screen to produce non-directional sound according to the projection position of the user's ear on the screen.
In this embodiment of the application, the first preset threshold may be set according to the actual situation. For example, the first preset threshold may be set to 0.5 centimeters (cm), i.e. the case where the ear is essentially pressed against the screen; for another example, it may be set to 10 cm, i.e. the case where the user does not want the screen pressed tightly against the ear and instead holds the ear above the screen, as shown in fig. 4.
In the embodiment of the application, at least two exciters are arranged in the terminal, and the more exciters the terminal contains, the more accurate the control. Fig. 5 shows a schematic distribution taking 12 exciters as an example; as shown in fig. 5, each exciter can drive a portion of the screen to vibrate and produce sound, and the portions driven by different exciters should be as equal as possible, or differ only slightly, to ensure accurate control of screen vibration sounding.
In some exemplary embodiments, when the vertical distance between the user's ear and the screen is determined using a second millimeter wave signal transmitted perpendicular to the screen, the projection position of the user's ear on the screen is the projection position, on the screen, of the antenna that transmits the second millimeter wave signal.
When the vertical distance between the user's ear and the screen is determined using a second millimeter wave signal transmitted at an angle to the screen, the projection position of the user's ear on the screen is derived from the projection position of the transmitting antenna on the screen. For example, the distance between the projection position of the ear on the screen and the projection position of the transmitting antenna on the screen is d1 × cot α.
In some exemplary embodiments, controlling the second portion of the screen to sound non-directionally based on a projected position of the user's ear on the screen comprises: and controlling the operation of the exciter corresponding to the second part of the screen according to the projection position, so that the exciter corresponding to the second part of the screen drives the second part of the screen to make non-directional sound.
In some exemplary embodiments, the distance between the projection position on the screen of an exciter corresponding to the second portion of the screen and the projection position of the user's ear on the screen is less than or equal to a second preset threshold. As shown in fig. 5, the exciters satisfying this condition are exciter 1 and exciter 3.
In some exemplary embodiments, non-directional sounding means that, when driving the second portion of the screen to vibrate and produce sound, the exciters corresponding to the second portion work independently of one another, with no relation between the different exciters, so that the sound waves emitted by the screen portions they drive do not produce nonlinear effects.
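As an illustration of selecting the second-portion exciters (the threshold value, in-plane offset direction and helper names are assumptions, not from the patent), this sketch keeps every exciter whose on-screen projection lies within the second preset threshold of the ear's projection, with the ear projection offset from the antenna projection by d1 × cot α:

```python
import numpy as np

def select_near_ear_exciters(exciter_positions, ear_projection, second_threshold=0.03):
    """Return indices of exciters whose screen projection is close to the ear projection.

    exciter_positions: (N, 2) array of exciter projections on the screen plane (m)
    ear_projection:    (2,) projection of the user's ear on the screen (m)
    second_threshold:  second preset threshold on projection distance (assumed 3 cm here)
    """
    positions = np.asarray(exciter_positions, dtype=float)
    dists = np.linalg.norm(positions - np.asarray(ear_projection, dtype=float), axis=1)
    return np.flatnonzero(dists <= second_threshold)

def ear_projection_from_antenna(antenna_projection, d1, alpha_deg,
                                toward=np.array([0.0, 1.0])):
    """Offset the antenna's projection by d1 * cot(alpha) along the beam's in-plane direction."""
    offset = d1 / np.tan(np.radians(alpha_deg))   # cot(alpha) = 1 / tan(alpha)
    return np.asarray(antenna_projection, dtype=float) + offset * toward
```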
In some exemplary embodiments, the method further comprises: controlling the volume according to the vertical distance between the user's ear and the screen.
In some exemplary embodiments, the greater the vertical distance between the user's ear and the screen, the higher the volume may be set.
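A possible mapping from distance to volume, purely as a sketch (the linear scaling curve and its limits are assumptions; the patent only states that volume grows with distance):

```python
def volume_for_distance(distance_m, min_vol=0.2, max_vol=1.0, max_dist=0.10):
    """Scale playback volume linearly with ear-to-screen distance, clamped to [min_vol, max_vol]."""
    ratio = min(max(distance_m / max_dist, 0.0), 1.0)
    return min_vol + ratio * (max_vol - min_vol)

print(volume_for_distance(0.005))  # ear almost touching the screen -> near min_vol
print(volume_for_distance(0.10))   # ear ~10 cm away -> max_vol
```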
According to the screen sounding control method provided by the embodiments of the application, when the relative orientation between the user and the screen is that the user is not directly in front of the screen, or the vertical distance between the ear and the screen is greater than the first preset threshold, the first portion of the screen is controlled to produce directional sound according to the relative orientation instead of driving the whole screen, so the sound generated by screen vibration propagates over a narrow range and sound leakage is reduced. Moreover, sound transmission is not achieved by lowering the screen sounding volume, so the sound transmission effect is ensured and the problem that the call content cannot be heard is avoided.
In order to present the entire control flow more intuitively, an example is described below; the example is not intended to limit the scope of the embodiments of the present application.
Example
As shown in fig. 8, the method includes:
step 800, detecting whether a current conversation state exists, whether an earphone is inserted, whether a voice playing function is opened, and continuously executing step 801 when the current conversation state exists, the earphone insertion is not detected, and the voice playing function is not opened; and ending the flow when the current state is not in a call state, or the earphone insertion is detected, or the voice playing function is opened.
Step 801, respectively receiving first millimeter wave signals sent by a base station in different directions; and determining the relative orientation between the user and the screen according to the received first millimeter wave signals in different directions.
Step 802, if the relative orientation is that the user is directly in front of the screen, continue to step 803; if the relative orientation is that the user is not directly in front of the screen, continue to step 806.
Step 803, transmitting a second millimeter wave signal; receiving an echo signal corresponding to the second millimeter wave signal; and determining the vertical distance between the user's ear and the screen from the echo signal.
Step 804, if the vertical distance is less than or equal to the first preset threshold, continue to step 805; if the vertical distance is greater than the first preset threshold, continue to step 806.
Step 805, controlling the exciters corresponding to the second portion of the screen to work according to the projection position of the user's ear on the screen, so that they drive the second portion of the screen to produce non-directional sound; wherein the distance between the projection position on the screen of each such exciter and the projection position of the user's ear on the screen is less than or equal to the second preset threshold.
Step 806, controlling the exciters corresponding to the first portion of the screen to work according to the relative orientation, so that they drive the first portion of the screen to produce sound directed toward the relative orientation.
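Pulling the example together, a high-level sketch of the flow in fig. 8 (every method called here is a placeholder for the hardware-specific steps described above, not an API defined by the patent):

```python
def on_audio_output_needed(terminal):
    """Dispatch between directional and non-directional screen sounding (fig. 8 flow)."""
    # Step 800: only sound through the screen during a call, with no earphone and no speaker mode.
    if not (terminal.in_call() and not terminal.earphone_inserted()
            and not terminal.voice_playback_enabled()):
        return

    # Step 801: relative orientation from the base station's beamformed millimeter-wave signals.
    orientation = terminal.estimate_relative_orientation()

    # Steps 802-804: measure the ear distance only when the user is directly in front of the screen.
    if orientation.directly_in_front:
        distance = terminal.measure_ear_distance()   # second millimeter-wave signal + echo
        if distance <= terminal.first_preset_threshold:
            # Step 805: near-field call -> non-directional sounding near the ear's projection.
            exciters = terminal.exciters_near(terminal.ear_projection())
            terminal.drive_non_directional(exciters)
            return

    # Step 806: user off-axis or far away -> directional sounding toward the user.
    exciters = terminal.exciters_for_direction(orientation)
    terminal.drive_directional(exciters, orientation)
```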
In a second aspect, another embodiment of the present application provides an electronic device, including: at least one processor; and a memory storing at least one program which, when executed by the at least one processor, implements any one of the above screen sounding control methods.
The processor is a device having data processing capability, including but not limited to a central processing unit (CPU) and the like; the memory is a device having data storage capability, including but not limited to random access memory (RAM, more specifically SDRAM, DDR, etc.), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and flash memory (FLASH).
In some embodiments, the processor, the memory, and the other components of the computing device are connected to each other via a bus.
In a third aspect, another embodiment of the present application provides a computer readable storage medium on which a computer program is stored, the computer program implementing any one of the above screen sounding control methods when executed by a processor.
Fig. 9 is a block diagram of a control device for screen sounding according to another embodiment of the present application.
In a fourth aspect, another embodiment of the present application provides a screen sounding control device, including: a user position detection module 901 and a sounding control module 902.
The user position detection module 901 is used for determining the relative position between a user and a screen under the condition that sounding through the screen is required; in the case where the relative orientation is that the user is directly in front of a screen, a vertical distance between the user's ear and the screen is determined.
The sounding control module 902 is configured to control, according to the relative orientation, directional sound production of a first portion of the screen when the relative orientation is that the user is not located directly in front of the screen or the vertical distance is greater than a first preset threshold.
The specific implementation process of the control device for screen sounding is the same as that of the control method for screen sounding in the foregoing embodiment, and will not be repeated here.
Those of ordinary skill in the art will appreciate that all or some of the steps, systems, functional modules/units in the apparatus, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed cooperatively by several physical components. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as known to those skilled in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
Example embodiments have been disclosed herein, and although specific terms are employed, they are used and should be interpreted in a generic and descriptive sense only and not for purpose of limitation. In some instances, it will be apparent to one skilled in the art that features, characteristics, and/or elements described in connection with a particular embodiment may be used alone or in combination with other embodiments unless explicitly stated otherwise. It will therefore be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the scope of the present application as set forth in the following claims.

Claims (10)

1. A method of controlling a sound production of a screen, comprising:
under the condition that sounding through the screen is needed, determining the relative orientation between the user and the screen;
determining a vertical distance between an ear of the user and the screen if the relative orientation is that the user is directly in front of the screen;
and controlling the first part of the screen to perform directional sounding according to the relative azimuth under the condition that the relative azimuth is that the user is not positioned right in front of the screen or the vertical distance is larger than a first preset threshold value.
2. The control method of screen sound production according to claim 1, the method further comprising:
and under the condition that the vertical distance is smaller than or equal to a first preset threshold value, controlling a second part of the screen to make non-directional sound according to the projection position of the ear of the user on the screen.
3. The method of controlling sound production of a screen according to claim 2, wherein the controlling the second portion of the screen to produce non-directional sound according to the projection position of the user's ear on the screen comprises:
and controlling the corresponding exciter of the second part of the screen to work according to the projection position, so that the corresponding exciter of the second part of the screen drives the second part of the screen to carry out non-directional sounding.
4. A control method of sounding a screen as set forth in claim 3, wherein a distance between a projection position of an exciter corresponding to a second portion of the screen on the screen and a projection position of an ear of the user on the screen is less than or equal to a second preset threshold.
5. The control method of on-screen sounding of any one of claims 1-4, wherein the requiring sounding through the screen comprises:
being currently in a call state, with no earphone connection detected and the voice playback function not turned on.
6. The control method of on-screen vocalization according to any one of claims 1 to 4, wherein the determining of the relative orientation between the user and the screen includes:
respectively receiving first millimeter wave signals sent by a base station in different directions;
the relative orientation is determined from the received first millimeter wave signals in different directions.
7. The control method of screen sounding of any one of claims 1-4, wherein the determining a vertical distance between the user's ear and the screen comprises:
transmitting a second millimeter wave signal;
receiving an echo signal corresponding to the second millimeter wave signal;
a vertical distance between the user's ear and the screen is determined from the echo signals.
8. The method of controlling sound production of a screen according to any one of claims 1 to 4, wherein the controlling the first portion of the screen to produce directional sound according to the relative orientation includes:
and controlling the corresponding exciter of the first part of the screen to work according to the relative azimuth, so that the corresponding exciter of the first part of the screen drives the first part of the screen to sound in the direction of the relative azimuth.
9. An electronic device, comprising:
at least one processor;
a memory having at least one program stored thereon, which when executed by the at least one processor, implements the method of controlling on-screen sound production of any one of claims 1-8.
10. A computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of controlling a screen sound production according to any one of claims 1-8.
CN202111682609.8A 2021-12-30 2021-12-30 Screen sounding control method, electronic equipment and computer readable storage medium Pending CN116414340A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111682609.8A CN116414340A (en) 2021-12-30 2021-12-30 Screen sounding control method, electronic equipment and computer readable storage medium
PCT/CN2022/126093 WO2023124430A1 (en) 2021-12-30 2022-10-19 Screen sound production control method, and electronic device and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111682609.8A CN116414340A (en) 2021-12-30 2021-12-30 Screen sounding control method, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN116414340A (en) 2023-07-11

Family

ID=86997472

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111682609.8A Pending CN116414340A (en) 2021-12-30 2021-12-30 Screen sounding control method, electronic equipment and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN116414340A (en)
WO (1) WO2023124430A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108712571A (en) * 2018-05-17 2018-10-26 Oppo广东移动通信有限公司 Method, apparatus, electronic device and the storage medium of display screen sounding
CN108874357B (en) * 2018-06-06 2021-09-03 维沃移动通信有限公司 Prompting method and mobile terminal
CN108958697B (en) * 2018-07-09 2021-08-17 Oppo广东移动通信有限公司 Screen sounding control method and device and electronic device
CN109361797A (en) * 2018-10-30 2019-02-19 维沃移动通信有限公司 A kind of vocal technique and mobile terminal
CN111918168B (en) * 2020-06-28 2022-06-17 合肥维信诺科技有限公司 Sound production screen and display device

Also Published As

Publication number Publication date
WO2023124430A1 (en) 2023-07-06


Legal Events

Date Code Title Description
PB01 Publication