CN109194796B - Screen sounding method and device, electronic device and storage medium - Google Patents

Screen sounding method and device, electronic device and storage medium

Info

Publication number
CN109194796B
CN109194796B (application number CN201810745830.5A)
Authority
CN
China
Prior art keywords
sound
screen
portrait
target
target position
Prior art date
Legal status
Active
Application number
CN201810745830.5A
Other languages
Chinese (zh)
Other versions
CN109194796A (en)
Inventor
张海平
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810745830.5A
Publication of CN109194796A
Application granted
Publication of CN109194796B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/02 Constructional features of telephone sets
    • H04M1/03 Constructional features of telephone transmitters or receivers, e.g. telephone hand-sets
    • H04M1/035 Improving the acoustic characteristics by means of constructional features of the housing, e.g. ribs, walls, resonating chambers or cavities
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 User interfaces with means for local support of applications that increase the functionality
    • H04M1/7243 User interfaces with interactive means for internal management of messages
    • H04M1/72433 User interfaces with interactive means for internal management of messages for voice messaging, e.g. dictaphones
    • H04M1/72439 User interfaces with interactive means for internal management of messages for image or video messaging

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Telephone Function (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the application disclose a screen sounding method and device, an electronic device, and a storage medium, in the technical field of electronic devices. The electronic device comprises a screen that can vibrate to generate sound and exciters for driving it to do so; the screen comprises a plurality of sound-emitting areas, each driven by a different exciter. The method comprises the following steps: while video content is displayed, acquiring the position corresponding to a portrait in the current video frame as the target position; determining the sound-emitting area of the screen corresponding to the target position as the target sound-emitting area; and driving the target sound-emitting area to vibrate and emit sound via its exciter according to the sound signal of the video. Because the exciters drive the screen itself to vibrate, no speaker opening is needed, which supports a slim device design.

Description

Screen sounding method and device, electronic device and storage medium
Technical Field
The present disclosure relates to the field of electronic devices, and more particularly, to a method and an apparatus for generating a screen sound, an electronic device and a storage medium.
Background
Currently, electronic devices such as mobile phones and tablet computers output sound signals through a speaker. However, the speaker occupies considerable design space, which works against a slim device design.
Disclosure of Invention
In view of the above problems, the present application provides a screen sounding method, an apparatus, an electronic device, and a storage medium to address them.
In a first aspect, an embodiment of the present application provides a screen sounding method applied to an electronic device. The electronic device includes a screen capable of vibrating to emit sound and exciters for driving the screen to do so; the screen includes a plurality of sound-emitting areas, each driven by a different exciter. The method includes: while video content is displayed, acquiring the position corresponding to a portrait in the current video frame as the target position; determining the sound-emitting area of the screen corresponding to the target position as the target sound-emitting area; and driving the target sound-emitting area to vibrate and emit sound via its exciter according to the sound signal of the video.
In a second aspect, an embodiment of the present application provides a screen sounding apparatus applied to an electronic device. The electronic device includes a screen capable of vibrating to emit sound and exciters for driving it; the screen includes a plurality of sound-emitting areas, each driven by a different exciter. The apparatus includes: a position acquisition module for acquiring, while video content is displayed, the position corresponding to a portrait in the current video frame as the target position; a target sound-emitting area acquisition module for determining the sound-emitting area of the screen corresponding to the target position as the target sound-emitting area; and a sounding module for driving the target sound-emitting area to vibrate and emit sound via its exciter according to the sound signal of the video.
In a third aspect, an embodiment of the present application provides an electronic device including a screen, an actuator for driving the screen to emit sound, a memory, and a processor. The screen, the actuator, and the memory are coupled to the processor, and the memory stores instructions that, when executed by the processor, cause the processor to perform the above method.
In a fourth aspect, the present application provides a computer-readable storage medium having program code executable by a processor, the program code causing the processor to perform the above-mentioned method.
In a fifth aspect, an embodiment of the present application provides an electronic apparatus including: a screen comprising a plurality of sound-emitting areas; exciters connected to the screen's sound-emitting areas for driving the screen to emit sound; and a circuit connected to the exciters, comprising a detection circuit and a driving circuit. The detection circuit acquires, while video content is displayed, the position corresponding to a portrait in the current video frame as the target position and determines the sound-emitting area of the screen corresponding to that position as the target sound-emitting area; the driving circuit drives the target sound-emitting area to vibrate and emit sound via an exciter according to the sound signal of the video.
The screen sounding method and apparatus, electronic device, and storage medium above are applied to an electronic device whose screen includes a plurality of sound-emitting areas, each driven by a different exciter. When video content is displayed on the screen, the position corresponding to the portrait is acquired, and the sound-emitting area corresponding to that position is determined as the target sound-emitting area. When the video's sound is played, the exciter drives the target sound-emitting area to emit sound according to the sound signal. Sound is thus produced through the screen itself, without speakers or other sounding devices that require openings, which supports a slim device design; moreover, the sounding position coincides with the display position of the sounding object, improving user experience.
These and other aspects of the present application will be more readily apparent from the following description of the embodiments.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic structural diagram illustrating a viewing angle of an electronic device according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram illustrating another view of an electronic device provided in an embodiment of the present application;
fig. 3 is a schematic diagram illustrating a division of a sound emitting area of an electronic device according to an embodiment of the present application;
FIG. 4 is a flow chart illustrating a screen sound generation method proposed by an embodiment of the present application;
FIG. 5 is a flow chart illustrating a method of screen sound generation as set forth in another embodiment of the present application;
FIG. 6 is a schematic diagram of a display of an electronic device according to an embodiment of the present application;
FIG. 7 is a flow chart showing some of the steps of a screen sound generation method proposed by an embodiment of the present application;
fig. 8 is a schematic display diagram illustrating a corresponding sound emitting area of an electronic device according to an embodiment of the present application;
fig. 9 is another display diagram illustrating a corresponding sound emitting area of an electronic device according to an embodiment of the present application;
fig. 10 is a functional block diagram showing a screen sound emission device according to a third embodiment of the present application;
fig. 11 is a block diagram of an electronic device according to an embodiment of the present application;
fig. 12 is a block diagram showing another structure of an electronic apparatus according to an embodiment of the present application;
fig. 13 is a block diagram of an electronic device according to an embodiment of the present application for executing a screen sound emission method according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Electronic devices often produce sound through speakers, receivers, and the like, which usually require openings in the device. On the one hand, such openings degrade product performance; to preserve the appearance, manufacturers try to shrink them, but the smaller the opening, the more performance suffers. On the other hand, a higher screen-to-body ratio gives a better full-screen experience, yet a region of the panel must be reserved and opened for the receiver to realize the sound function, so a true full screen cannot be achieved and user experience is affected.
In addition, the display screen in an electronic device such as a mobile phone or tablet computer generally displays content such as text, pictures, icons, or video. With the development of touch technology, more and more display screens in electronic devices are touch display screens; with a touch display screen in place, the device can respond when a user is detected performing touch operations such as dragging, clicking, double-clicking, or sliding on it.
With users' increasing demands on the clarity and fineness of displayed content, more electronic devices adopt large touch display screens to achieve a full-screen display effect. In practice, however, the functional devices disposed at the front of the device, such as the front camera, the proximity light sensor, and the receiver, limit the area into which the touch display screen can extend.
Generally, an electronic device includes a front panel, a rear cover, and a bezel. The front panel includes a forehead area, a middle screen area and a lower key area. Generally, the forehead area is provided with a sound outlet of a receiver and functional devices such as a front camera, the middle screen area is provided with a touch display screen, and the lower key area is provided with one to three physical keys. With the development of the technology, the lower key area is gradually cancelled, and the physical keys originally arranged in the lower key area are replaced by the virtual keys in the touch display screen.
The receiver's sound outlet in the forehead area is important to the phone's functionality and not easily eliminated, so extending the displayable area of the touch display screen to cover the forehead area is difficult. After a series of studies, the inventor found that sound can be emitted by driving the screen, frame, or rear cover of the phone to vibrate, so the receiver's sound outlet can be eliminated.
Referring to fig. 1, an electronic device 100 according to an embodiment of the present disclosure is shown. The electronic device 100 comprises an electronic body 10, wherein the electronic body 10 comprises a housing 12 and a screen 120 arranged on the housing 12. The housing 12 may be made of metal, such as steel or aluminum alloy. As shown in fig. 2, the housing 12 may include a front panel 132, a rear cover 133, and a bezel 134, the bezel 134 being used to connect the front panel 132 and the rear cover 133, the screen 120 being disposed on the front panel. In this embodiment, the screen 120 may include a display screen, which generally includes the display panel 111, and may also include a circuit and the like for responding to a touch operation performed on the display panel 111. The Display panel 111 may be a Liquid Crystal Display (LCD) panel, and in some embodiments, the Display panel 111 is a touch screen 109.
The electronic device further comprises an exciter 131, wherein the exciter 131 is used for driving a vibration component of the electronic device to vibrate and sound, specifically, the vibration component is at least one of the screen 120 or the housing 12 of the electronic device, that is, the vibration component can be the screen 120, part or all of the screen, part or all of the housing 12, or a combination of the screen 120 and the housing 12. As an embodiment, when the vibration member is the housing 12, the vibration member may be a rear cover of the housing 12. The embodiment of the present application is mainly explained by controlling the screen sound.
In the embodiment of the present application, if the vibration component is the screen 120, the exciter 131 is connected to the screen 120 to drive it to vibrate. Specifically, the actuator 131 is attached below the screen 120 and may be a piezoelectric driver or a motor. In one embodiment, the actuator 131 is a piezoelectric driver, which transmits its own deformation to the screen 120 through a moment action so that the screen 120 vibrates and produces sound. The screen 120 includes a touch screen and a display screen; the display screen lies below the touch screen, and the piezoelectric driver is attached below the display screen, i.e. on the surface of the display screen facing away from the touch screen. The piezoelectric driver includes multiple layers of piezoelectric ceramic sheets; as the multilayer stack expands and contracts, it bends the screen, and the repeated bending vibration of the whole screen pushes air and produces sound.
As an embodiment, the electronic device 100 includes a detection circuit 143 and a driving circuit 135. The detection circuit 143 is configured to, while video content is displayed, acquire the position corresponding to a portrait in the current video frame as the target position and determine the sound-emitting area of the screen corresponding to that target position as the target sound-emitting area. The driving circuit 135 is configured to drive the target sound-emitting area to vibrate and emit sound via an exciter according to the sound signal of the video. The exciter 131 is connected to the driving circuit 135, which inputs a control signal to the exciter 131 according to the vibration parameters, driving the exciter 131 to vibrate and thereby driving the vibrating component. The vibration parameters may be determined from the received sound signal, specifically from the vibration frequency and vibration amplitude required to reproduce it.
In particular, the driving circuit may be a processor of the electronic device, or may be an integrated circuit capable of generating a driving voltage or current within the electronic device. The driving circuit outputs a high-low level driving signal to the exciter 131, the exciter 131 vibrates according to the driving signal, and the different electrical parameters of the driving signal output by the driving circuit may cause the different vibration parameters of the exciter 131, for example, the duty ratio of the driving signal corresponds to the vibration frequency of the exciter 131, and the amplitude of the driving signal corresponds to the vibration amplitude of the exciter 131.
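The mapping described above (duty ratio of the drive signal tracking vibration frequency, drive amplitude tracking vibration amplitude) can be sketched as follows. This is a minimal illustrative model, not the patent's implementation; the function name, parameter ranges, and linear mapping are all assumptions.

```python
# Hypothetical sketch of mapping target vibration parameters to drive-signal
# parameters: duty ratio follows vibration frequency, drive amplitude follows
# vibration amplitude. The ranges and the linear relation are assumed values.

def drive_parameters(vibration_freq_hz, vibration_amplitude,
                     max_freq_hz=20000.0, max_amplitude=1.0):
    """Convert target vibration parameters into (duty_ratio, drive_amplitude)."""
    # Clamp inputs to the exciter's assumed operating range.
    freq = min(max(vibration_freq_hz, 0.0), max_freq_hz)
    amp = min(max(vibration_amplitude, 0.0), max_amplitude)
    duty_ratio = freq / max_freq_hz        # duty ratio tracks vibration frequency
    drive_amplitude = amp / max_amplitude  # drive level tracks vibration amplitude
    return duty_ratio, drive_amplitude
```

In a real driver the relation between duty ratio and vibration frequency would be set by the exciter's electromechanical characteristics rather than a simple linear scale.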
In this embodiment, there may be two or more actuators 131. As shown in fig. 1, the actuators 131 may be evenly distributed beneath the screen 120, so the screen 120 can be divided into different sound-emitting areas according to the actuator layout, each area driven by one or more actuators 131. For example, as shown in fig. 3, with 4 actuators the display screen may be divided by its vertical and horizontal center lines into the 4 equal rectangular areas a, b, c, and d marked by dotted lines in fig. 3; one actuator is disposed below each rectangular area, in one-to-one correspondence, and each rectangular area serves as a sound-emitting area. Of course, the embodiment does not limit the number of actuators, their specific distribution, or the specific division of the sound-emitting areas.
When sound is produced through the screen, it can be produced by one sound-emitting area, by several, or by all of them. The inventor found that, when video content is played, if the person making sound in the video moves while the screen's sounding position stays fixed, the sounding position no longer matches the display position of that person; the viewing experience then feels less real and user experience suffers — for example, in a one-to-one video call.
The embodiment of the application provides a screen sound production method, a screen sound production device, an electronic device and a storage medium, wherein a screen is divided into a plurality of sound production areas, and sound production is carried out on the sound production areas corresponding to positions of human images in video frames under the driving of corresponding exciters, so that the sound production positions are consistent with the positions of sound producers, the reality is better, and the user experience is improved. The screen sounding method, device, electronic device and storage medium provided by the embodiments of the present application will be described with reference to the accompanying drawings and specific embodiments.
Referring to fig. 4, an embodiment of the present application provides a screen sounding method applied to an electronic device. As mentioned above, the electronic device includes a screen capable of vibrating to emit sound and actuators for driving it to do so; the screen includes a plurality of sound-emitting areas, each driven by a different actuator. Specifically, as shown in fig. 4, the method may include the following steps:
step S110: when the video content is displayed, the position corresponding to the portrait in the current video frame is acquired as the target position.
When the video content is displayed, a specific target object, such as a portrait, in the video content can be acquired.
The video content may be a real-time voice chat video, or may be a cached recorded video content.
In the embodiment of the application, the portrait of the current frame in the video content can be acquired through one or more frames of video images. The specific algorithm for acquiring the portrait from the video frame is not limited in the embodiment of the present application, such as an edge detection algorithm, feature point matching, and the like.
Once the portrait in the current video frame is acquired, its position in the screen can be obtained. For example, the coordinate area of the portrait within the video's display window may be obtained first; the portrait's coordinate area in the screen is then derived from the display window's position within the screen's display area, and that position of the portrait in the screen is taken as the target position.
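The two-stage coordinate derivation above — portrait position within the video window, then window position within the screen — amounts to a simple translation when both use axis-aligned pixel rectangles. A minimal sketch under that assumption (names are illustrative):

```python
def portrait_screen_rect(portrait_rect_in_window, window_origin_on_screen):
    """Translate a portrait's bounding rect (x, y, w, h) from video-window
    coordinates to screen coordinates by offsetting with the window's
    top-left position on the screen. Assumes no scaling between the two."""
    x, y, w, h = portrait_rect_in_window
    wx, wy = window_origin_on_screen
    return (x + wx, y + wy, w, h)
```

If the video is rendered scaled (e.g. letterboxed), a scale factor would have to be applied before the translation.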
Step S120: and determining a sound production area corresponding to the target position in the screen as a target sound production area.
The target sound-emitting area is determined from the target position. The electronic device stores the position range of each sound-emitting area, e.g. its coordinate range in the screen, so the sound-emitting area corresponding to the displayed portrait's target position can be found from the portrait's coordinate range in the screen and taken as the target sound-emitting area.
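For the four-region layout of fig. 3, the lookup reduces to testing which quadrant contains the portrait's center point. A sketch under the assumption that regions a–d are the four equal quadrants in reading order (the figure's actual labeling may differ):

```python
def target_region(portrait_center, screen_w, screen_h):
    """Pick which of four equal rectangular sound-emitting areas contains the
    portrait's center. Assumed layout: a top-left, b top-right, c bottom-left,
    d bottom-right, split along the screen's vertical/horizontal center lines."""
    x, y = portrait_center
    col = 0 if x < screen_w / 2 else 1   # left or right half
    row = 0 if y < screen_h / 2 else 1   # top or bottom half
    return "abcd"[row * 2 + col]
```

With finer region grids, the same idea generalizes to an index computation or a lookup against each region's stored coordinate range.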
Step S130: and driving the target sound production area to produce sound in a vibration mode through an exciter according to the sound signal of the video.
When the video is played, the sound signal belonging to the video is identified among the received sound signals, and as it is played, the target sound-emitting area is driven by the exciter to vibrate and emit sound. Thus, when the video shows a person and the sound that person makes is played, the sounding position coincides with the person's position in the video.
In the embodiment of the application, the portrait is obtained from the displayed video, and its position within the screen's display area is determined from it. From the position of each sound-emitting area in the screen, the area corresponding to the portrait's position can be determined and used for vibration sounding, so that the sounding position matches the position of the speaker in the video and user experience improves.
For example, in a specific application scenario, a video call is made through a video-call application, and the received video content, including the portrait of the user at the other end, is displayed. The sound-emitting area at the position of that user's portrait is taken as the target sound-emitting area and driven to emit sound. Thus, if the user at the other end moves, the sounding position in the screen stays consistent with the position at which the portrait is displayed, improving the video-call experience.
In the screen sound production method provided by the embodiment of the application, the target sound production area can be determined at a certain frequency to produce sound. Specifically, as shown in fig. 5, the embodiment of the present application may include:
step S210: when the video content is displayed, the position corresponding to the portrait in the current video frame is acquired as the target position.
When the electronic device displays the video content, the position of the portrait in the video content in the screen can be acquired as the target position.
Optionally, because the portrait is an irregular image, a rectangular region corresponding to the portrait in the screen may be used as the target position corresponding to the portrait, that is, a region defined by a maximum value of an abscissa, a minimum value of the abscissa, a maximum value of an ordinate, and a minimum value of the ordinate corresponding to the portrait in the screen may be used as the target position corresponding to the portrait. As shown in fig. 6, the target position corresponding to the portrait P may be shown by a dotted rectangular area K in fig. 6.
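The rectangular target position described above — the region bounded by the portrait's minimum and maximum x and y coordinates (the dotted area K in fig. 6) — is a standard axis-aligned bounding box. A minimal sketch, assuming the portrait is available as a set of contour points:

```python
def portrait_bounding_rect(points):
    """Axis-aligned bounding rectangle (x_min, y_min, x_max, y_max) of a
    portrait, computed from the min/max of its contour points' coordinates,
    as the text describes for the target position."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))
```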
As an implementation manner, in the embodiment of the present application, the position of the whole portrait in the current video frame in the screen may be acquired as the target position.
As an embodiment, when a plurality of human figures are included in the video frame, the target position may be a position in the screen of a human figure being spoken among the human figures. Specifically, as shown in fig. 7, in this embodiment, acquiring a position corresponding to a portrait in a current video frame as a target position may include:
step S211: when the video content comprises a plurality of portraits, determining the portrait in the speaking of the plurality of portraits according to the adjacent multi-frame video content.
When multiple portraits are included, the speech heard in the video is made by the person who is speaking; therefore, the portrait that is speaking should be determined.
Since the lips of a portrait that is speaking open and close to produce sound, a speaking portrait can be identified by detecting lip opening and closing. Specifically, the form of each portrait's lips can be obtained in adjacent video frames and compared across those frames to judge whether each portrait's lips open and close, thereby determining which portrait is speaking.
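The comparison of lip form across adjacent frames can be sketched as follows. This is an illustrative simplification in which lip form per portrait per frame is reduced to a boolean open/closed state; a real system would compare richer lip-shape features.

```python
def speaking_portraits(lip_states_per_frame):
    """Given a list of adjacent frames, each a dict mapping portrait id to a
    lip state (True = lips open), return the ids whose lip state changes
    between adjacent frames, i.e. whose lips open and close — taken here as
    the portraits that are speaking."""
    speaking = set()
    for pid in lip_states_per_frame[0]:
        states = [frame[pid] for frame in lip_states_per_frame]
        # A change between any two adjacent frames counts as opening/closing.
        if any(a != b for a, b in zip(states, states[1:])):
            speaking.add(pid)
    return speaking
```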
Step S212: and taking the position corresponding to the person speaking as a target position.
And determining the position corresponding to the image of the person who is speaking as a target position, so that when the screen sounds, the position of the image of the person who sounds in the screen is consistent with the sound-emitting position.
As an embodiment, since a person produces sound through the mouth, the position of the portrait's head in the current video frame may instead be used. When acquiring the position corresponding to the portrait as the target position, the position at which the portrait's head is displayed in the screen can be obtained and determined as the target position, so that the target position is the position of the portrait's head.
In addition, in the embodiment of the application, it may first be detected whether the sound signal corresponding to the video content is a human voice; if so, the target position corresponding to the portrait in the screen is detected.
In the embodiment of the present application, the time for obtaining the target position corresponding to the portrait is not limited, and the portrait may be obtained when the video starts to be played, and the target position is obtained according to the portrait. In addition, the target position may be obtained once every preset time length, and the specific time length of the preset time length is not limited in the embodiment of the present application.
Optionally, if no portrait is obtained from the video content over multiple consecutive acquisitions, the time interval for obtaining the position of the portrait can be prolonged. For example, the electronic device acquires the position of the portrait in the video every 1 second; if no portrait is detected in the video frames for 100 consecutive acquisitions, the interval is prolonged, for example to acquiring the position of the portrait in the screen every 2 seconds; if no portrait is detected for a further 200 consecutive acquisitions, the interval can be prolonged again, for example to every 10 seconds. The above counts and intervals are merely examples; neither the consecutive counts nor the extent to which the interval may be prolonged is limited in the embodiment of the present application. In addition, if a portrait is detected in the video content at some point while the position is being detected at a prolonged interval, the interval is reset to the initial, shortest interval; in the above example, once a portrait is detected, the interval is set back to 1 second.
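The interval back-off just described can be sketched as a small state machine. The concrete intervals (1 s, 2 s, 10 s) and miss counts (100, 200) follow the example in the text; they are illustrative values, not fixed by the method.

```python
class PortraitPoller:
    """Tracks how often to poll for a portrait position: the interval
    grows after long runs of misses and resets on any detection."""

    # (consecutive-miss threshold, interval in seconds) — example values.
    BACKOFF = [(0, 1.0), (100, 2.0), (200, 10.0)]

    def __init__(self):
        self.misses = 0
        self.interval = 1.0

    def update(self, portrait_found):
        if portrait_found:
            # Portrait reappeared: reset to the initial, shortest interval.
            self.misses = 0
            self.interval = 1.0
        else:
            self.misses += 1
            for count, interval in self.BACKOFF:
                if self.misses >= count:
                    self.interval = interval
        return self.interval

poller = PortraitPoller()
for _ in range(100):
    poller.update(False)
print(poller.interval)  # 2.0 after 100 consecutive misses
poller.update(True)
print(poller.interval)  # 1.0 again once a portrait is detected
```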
Optionally, in this embodiment of the application, the target position is obtained once every preset time length; if the target position obtained at a certain time differs from the target position obtained the previous time, a new target sounding area is determined according to the currently determined target position. Therefore, optionally, as shown in fig. 5, before step S230, step S220 may further be included: judging whether the target position is the same as the target position obtained the previous time, and if not, executing step S230.
If the target position acquired at a certain time is the same as the target position acquired the previous time, the position of the portrait in the video has not changed, and the area required to produce sound in the screen is the same as the area determined the previous time. In that case, the step of determining the sounding area corresponding to the target position as the target sounding area need not be performed again; the sounding area determined the previous time can be driven to produce sound directly. That is, if the judgment in step S220 of whether the target position is the same as the previously acquired target position is yes, the previously determined sounding area is used as the current sounding area, and step S250 is executed.
In the embodiment of the application, the target position determined this time being "the same" as the target position determined the previous time may mean completely identical, that is, the coordinate intervals in the screen coincide; or it may mean that the difference is within a preset range, for example, the difference between the abscissa of the target position determined this time and that determined the previous time is within a preset range corresponding to the abscissa, and the difference between the ordinates is within a preset range corresponding to the ordinate.
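A minimal sketch of this tolerance-based "same position" test follows; the tolerance values are illustrative, standing in for the preset ranges mentioned above.

```python
def same_target_position(pos, prev, tol_x=10, tol_y=10):
    """Positions are (x, y) screen coordinates; they count as the same
    if each coordinate difference falls within its preset range."""
    return abs(pos[0] - prev[0]) <= tol_x and abs(pos[1] - prev[1]) <= tol_y

print(same_target_position((100, 200), (105, 195)))  # True: within tolerance
print(same_target_position((100, 200), (150, 200)))  # False: x differs too much
```

Setting both tolerances to zero recovers the "completely identical" interpretation.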
It is understood that step S220 is optional; step S230 may be executed directly after step S210, or whether to execute it may be decided according to the judgment in step S220.
Step S230: determining the sounding area corresponding to the target position in the screen as the target sounding area.
After the target position is determined, the sounding area corresponding to the target position is determined as the target sounding area. For example, as shown in fig. 8, if the sounding area corresponding to the target position of the portrait P on the screen is a, the sounding area a is determined as the target sounding area.
Optionally, if only one sounding area corresponds to the position of the portrait, that sounding area is used as the target sounding area; if the portrait corresponds to a plurality of sounding areas, such as the portrait P shown in fig. 9 corresponding to sounding area a and sounding area c, one of those sounding areas can be selected as the target sounding area.
As an embodiment, the determined target sounding area may be a sounding area whose coordinate interval falls completely within the range of the coordinate interval of the target position in the screen.
In another embodiment, the determined target sounding area may be a sounding area whose coordinate interval overlaps the coordinate interval of the target position, or a sounding area selected from among the sounding areas whose coordinate intervals overlap the coordinate interval of the target position.
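The overlap-based selection can be sketched as below. The rectangular region layout and the "largest overlap wins" tie-break are assumptions made for illustration; the patent only requires selecting one area from those overlapping the target position.

```python
def overlap_area(r1, r2):
    """r = (x1, y1, x2, y2); returns the overlapping area of two rectangles."""
    w = min(r1[2], r2[2]) - max(r1[0], r2[0])
    h = min(r1[3], r2[3]) - max(r1[1], r2[1])
    return w * h if w > 0 and h > 0 else 0

def pick_target_area(sounding_areas, target_rect):
    """sounding_areas: {name: rect}. Pick the sounding area that
    overlaps the target position's rectangle the most (one of
    possibly several overlapping areas)."""
    best, best_overlap = None, 0
    for name, rect in sounding_areas.items():
        ov = overlap_area(rect, target_rect)
        if ov > best_overlap:
            best, best_overlap = name, ov
    return best

# Two side-by-side sounding areas; the head region spans both.
areas = {"a": (0, 0, 50, 50), "c": (50, 0, 100, 50)}
head = (30, 10, 60, 40)
print(pick_target_area(areas, head))  # a (the larger overlap)
```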
Step S250: driving the target sounding area to produce sound through vibration by an exciter according to the sound signal of the video.
The target sounding area is driven to vibrate and produce sound by an exciter according to the sound signal of the video obtained when the target sounding area is determined.
Optionally, the target position is acquired once every preset time length, and the sound signal of the video within that preset time length, or within the preset time length after the target sounding area is obtained, can be produced in the target sounding area by driving it to vibrate through the exciter.
Optionally, in this embodiment of the application, the target position is obtained once every preset time length, and the target sounding area is determined according to the target position. If the target sounding area obtained at a certain time is the same as the target sounding area obtained the previous time, the sounding area required to produce sound in the screen is the same as that determined the previous time; in this step, the target sounding area determined the previous time continues to be driven to vibrate and produce sound, and the drive parameters need not be changed.
If the target sounding area acquired at a certain time differs from the target sounding area acquired the previous time, sound is produced according to the currently determined target sounding area; in this step, the parameters of the sounding area to be driven are changed to the parameters corresponding to the target sounding area determined this time.
In the embodiment of the application, the target position corresponding to the portrait in the video is determined periodically, the target sounding area is determined according to the target position, and the target sounding area is driven to produce the sound signal of the video. If the target position determined this time has not changed, the previously determined target sounding area can continue to be driven to produce sound; if the target position determined this time has changed relative to the previous one, the target sounding area is determined anew according to the current target position, so that the target sounding area moves on the screen synchronously with the portrait. Moreover, the electronic device can produce sound through the screen without relying on a loudspeaker, which accords with the trend toward thin designs.
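The periodic flow summarized above — re-acquire the position at intervals, re-determine the sounding area only when the target position changes, and otherwise keep the previously chosen area — can be sketched as a loop over a sequence of detected positions. The position-to-area mapping here is a hypothetical placeholder, not a real device API.

```python
def screen_sound_steps(positions, position_to_area):
    """Run the periodic flow over a fixed sequence of detected positions
    (None = no portrait found) and return the sounding area chosen at
    each step. The area is re-determined only when the position changes."""
    prev_pos, area, chosen = None, None, []
    for pos in positions:
        if pos is not None and pos != prev_pos:
            area = position_to_area(pos)  # re-determine only on change
            prev_pos = pos
        chosen.append(area)
    return chosen

# Placeholder mapping: left half of the screen vs. right half.
to_area = lambda p: "left" if p[0] < 50 else "right"
print(screen_sound_steps([(10, 5), (10, 5), (80, 5)], to_area))
# ['left', 'left', 'right']
```

A real implementation would replace the returned area names with exciter drive parameters and feed the video's audio signal to the chosen exciter at each step.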
The embodiment of the application further provides a screen sounding device 300 applied to an electronic device, wherein the electronic device comprises a screen capable of vibrating to produce sound and exciters for driving the screen to produce sound, the screen comprises a plurality of sounding areas, and different sounding areas are driven by different exciters. Referring to fig. 10, the screen sounding device 300 includes: a position obtaining module 310, configured to obtain, when video content is displayed, the position corresponding to the portrait in the current video frame as the target position; a target sounding area obtaining module 320, configured to determine the sounding area corresponding to the target position in the screen as the target sounding area; and a sounding module 330, configured to drive the target sounding area to produce sound through vibration by an exciter according to the sound signal of the video.
Optionally, the position obtaining module 310 may include: a position acquisition unit, configured to acquire the position of the head of the portrait in the current video frame; and a position determining unit, configured to determine the position of the head as the target position.
Optionally, the screen sounding device 300 may further include a first judging module, configured to judge whether the target position is the same as the target position obtained the previous time; if not, the target sounding area obtaining module 320 determines the sounding area corresponding to the target position in the screen as the target sounding area.
Optionally, the screen sounding device 300 may further include a second judging module, configured to judge whether the target sounding area is the same as the previously determined target sounding area. If so, the sounding module 330 drives the previously determined target sounding area to produce sound through the exciter according to the sound signal of the video; if not, the sounding module 330 drives the currently determined target sounding area to produce sound through vibration by the exciter according to the sound signal of the video.
Optionally, the position obtaining module 310 may further include: a portrait determining unit, configured to determine, when the video content includes a plurality of portraits, the portrait that is speaking among the plurality of portraits according to a plurality of adjacent frames of video content; and a target position determining unit, configured to take the position corresponding to the portrait that is speaking as the target position.
Optionally, in this embodiment of the present application, the video content may be content of a video call.
Based on the above-mentioned screen sounding method and device, the electronic device 100 according to the embodiment of the present application can perform the screen sounding method.
As an embodiment, as shown in fig. 11, the electronic device 100 may include a screen 120, an exciter 131 for driving the screen to produce sound, a memory 104, and a processor 102, wherein the screen 120, the exciter 131, and the memory 104 are coupled to the processor 102. The memory 104 stores instructions that, when executed by the processor 102, cause the processor to perform the method described above.
As an embodiment, as shown in fig. 12, the electronic device 100 includes a screen 120 and an exciter 131, the exciter 131 being connected to the screen 120 and used for driving the screen 120 to produce sound. The screen comprises a plurality of sounding areas, and there may likewise be a plurality of exciters (only one is shown in the figure), with different sounding areas driven by different exciters. A circuit 142 is connected to the exciter 131 and includes a detection circuit 143 and a driving circuit 135. The detection circuit 143 is configured to, when video content is displayed, acquire the position corresponding to the portrait in the current video frame as the target position and determine the sounding area corresponding to the target position in the screen as the target sounding area; the driving circuit 135 is configured to drive, according to the sound signal of the video, the target sounding area to vibrate and produce sound through the exciter.
By way of example, the electronic device 100 may be any of various types of mobile or portable computer system equipment that performs wireless communications (only one form is shown in fig. 1 by way of example). Specifically, the electronic device 100 may be a mobile phone or smart phone (e.g., an iPhone™-based phone), a portable game device (e.g., Nintendo DS™, PlayStation Portable™, Game Boy Advance™, iPhone™), a laptop computer, a PDA, a portable internet device, a music player, a data storage device, or another handheld device; the electronic device 100 may also be a wearable device, including a head-mounted device (HMD), such as a watch, a headset, a pendant, electronic glasses, electronic clothes, an electronic bracelet, an electronic necklace, an electronic tattoo, or a smart watch.
The electronic device 100 may also be any of a number of electronic devices including, but not limited to, cellular phones, smart phones, other wireless communication devices, personal digital assistants (PDAs), audio players, other media players, music recorders, video recorders, cameras, other media recorders, radios, medical devices, vehicle transportation equipment, calculators, programmable remote controllers, pagers, laptop computers, desktop computers, printers, netbook computers, portable multimedia players (PMPs), Moving Picture Experts Group (MPEG-1 or MPEG-2) Audio Layer 3 (MP3) players, portable medical devices, digital cameras, and combinations thereof.
In some cases, electronic device 100 may perform multiple functions (e.g., playing music, displaying videos, storing pictures, and receiving and sending telephone calls). If desired, the electronic apparatus 100 may be a portable device such as a cellular telephone, media player, other handheld device, wrist watch device, pendant device, earpiece device, or other compact portable device.
The electronic device 100 shown in fig. 1 includes an electronic body 10, which includes a housing 12 and a screen 120 disposed on the housing 12. The housing 12 may be made of metal, such as steel or an aluminum alloy. In this embodiment, the screen 120 generally includes a display panel 111 and may also include circuitry for responding to touch operations performed on the display panel 111. The display panel 111 may be a liquid crystal display (LCD) panel, and in some embodiments the display panel 111 is a touch screen 109.
Referring to fig. 13, in an actual application scenario, the electronic device 100 may be used as a smartphone terminal, in which case the electronic body 10 generally further includes one or more processors 102 (only one is shown in the figure), a memory 104, an RF (Radio Frequency) module 106, an audio circuit 110, a sensor 114, an input module 118, and a power module 122. It will be understood by those skilled in the art that the structure shown in fig. 13 is merely illustrative and does not limit the structure of the electronic body 10. For example, the electronic body 10 may include more or fewer components than shown in fig. 13, or have a configuration different from that shown in fig. 13.
Those skilled in the art will appreciate that all other components are peripheral devices with respect to the processor 102, and the processor 102 is coupled to the peripheral devices through a plurality of peripheral interfaces 124. The peripheral interface 124 may be implemented based on the following standards: Universal Asynchronous Receiver/Transmitter (UART), General Purpose Input/Output (GPIO), Serial Peripheral Interface (SPI), and Inter-Integrated Circuit (I2C), but the present application is not limited to these standards. In some examples, the peripheral interface 124 may comprise only a bus; in other examples, it may also include other elements, such as one or more controllers, for example a display controller for interfacing with the display panel 111 or a memory controller for interfacing with the memory. These controllers may also be separate from the peripheral interface 124 and integrated within the processor 102 or the corresponding peripheral.
The memory 104 may be used to store software programs and modules, and the processor 102 executes various functional applications and data processing by executing the software programs and modules stored in the memory 104. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the electronics body portion 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The RF module 106 is configured to receive and transmit electromagnetic waves and to perform interconversion between electromagnetic waves and electrical signals, so as to communicate with a communication network or other devices. The RF module 106 may include various existing circuit elements for performing these functions, such as an antenna, a radio frequency transceiver, a digital signal processor, an encryption/decryption chip, a Subscriber Identity Module (SIM) card, memory, and so forth. The RF module 106 may communicate with various networks such as the internet, an intranet, or a wireless network, or with other devices via a wireless network. The wireless network may comprise a cellular telephone network, a wireless local area network, or a metropolitan area network. The wireless network may use various communication standards, protocols, and technologies, including, but not limited to, Global System for Mobile Communication (GSM), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Wireless Fidelity (Wi-Fi) (e.g., Institute of Electrical and Electronics Engineers standards IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Worldwide Interoperability for Microwave Access (Wi-MAX), any other suitable protocol for instant messaging, and may even include protocols that have not yet been developed.
The audio circuit 110 and the microphones 103 and 105 together provide an audio interface between the user and the electronic body 10.
A sensor 114 is disposed within the electronic body 10. Examples of the sensor 114 include, but are not limited to, an acceleration sensor 114F, a gyroscope 114G, a magnetometer 114H, and other sensors.
In this embodiment, the input module 118 may include the touch screen 109 disposed on the display screen. The touch screen 109 may collect touch operations performed by the user on or near it (for example, operations performed with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connected device according to a preset program, so that the user's touch gesture can be obtained and the user can select a target area through a touch operation on the display screen. Optionally, the touch screen 109 may include a touch detection device and a touch controller. The touch detection device detects the direction of the user's touch, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends these to the processor 102, and can receive and execute commands sent by the processor 102. The touch detection function of the touch screen 109 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch screen 109, in other variations the input module 118 may include other input devices, such as keys 107. The keys 107 may include, for example, character keys for inputting characters and control keys for activating control functions. Examples of such control keys include a "return to home screen" key and a power on/off key.
The screen 120 is used to display information input by the user, information provided to the user, and various graphical user interfaces of the electronic body section 10, which may be composed of graphics, text, icons, numbers, video, and any combination thereof. In one example, the touch screen 109 may be disposed on the display panel 111 so as to be integral with the display panel 111.
The power module 122 is used to provide power supply to the processor 102 and other components. Specifically, the power module 122 may include a power management system, one or more power sources (e.g., batteries or ac power), a charging circuit, a power failure detection circuit, an inverter, a power status indicator light, and any other components related to the generation, management, and distribution of power within the electronic body 10 or the screen 120.
The electronic device 100 further comprises a locator 119 configured to determine the actual location of the electronic device 100. In this embodiment, the locator 119 locates the electronic device 100 by means of a positioning service, which is understood to be a technology or service that obtains the position information (e.g., longitude and latitude coordinates) of the electronic device 100 through a specific positioning technique and marks the position of the located object on an electronic map.
It should be understood that the electronic device 100 described above is not limited to a smartphone terminal; it refers to a computer device that can be used while mobile. Specifically, the electronic device 100 refers to a mobile computer device equipped with an intelligent operating system, and includes, but is not limited to, a smart phone, a smart watch, a tablet computer, and the like.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment. For any processing manner described in the method embodiment, all the processing manners may be implemented by corresponding processing modules in the apparatus embodiment, and details in the apparatus embodiment are not described again.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments. In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not necessarily depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. A screen sounding method applied to an electronic device, wherein the electronic device comprises a screen capable of vibrating to produce sound and exciters for driving the screen to produce sound, the screen comprises a plurality of sounding areas, and different sounding areas are driven by different exciters to produce sound, the method comprising:
when video content is displayed, detecting whether the sound signal corresponding to the video content is a sound signal of a person speaking, and if so, detecting a target position corresponding to a portrait of the video content in the screen;
if no portrait is obtained from the video content, obtaining the target position corresponding to the portrait in the screen once every preset time length; if no portrait is obtained from the video content over multiple consecutive acquisitions, prolonging the time interval for obtaining the target position corresponding to the portrait in the screen; and if a portrait is obtained again after no portrait was obtained over multiple consecutive acquisitions, setting the time interval back to the initial value of the preset time length;
if the current video frame comprises a plurality of portraits, determining the portrait that is speaking among the plurality of portraits;
determining the position corresponding to the speaking portrait in the screen as the target position;
determining the sounding area corresponding to the target position in the screen as a target sounding area; and
driving the target sounding area to produce sound through vibration by an exciter according to the sound signal of the video.
2. The method of claim 1, wherein determining the position corresponding to the speaking portrait in the screen as the target position comprises:
acquiring the position of the head of the speaking portrait in the current video frame; and
determining the position of the head as the target position.
3. The method according to claim 1, wherein before determining the sounding area corresponding to the target position in the screen as the target sounding area, the method further comprises:
judging whether the target position is the same as the target position obtained the previous time, and if not, executing the step of determining the sounding area corresponding to the target position in the screen as the target sounding area.
4. The method according to claim 1, wherein before driving the target sounding area to vibrate and produce sound by the exciter according to the sound signal of the video, the method further comprises:
judging whether the target sounding area is the same as the previously determined target sounding area;
if so, continuing to drive the previously determined target sounding area to vibrate and produce sound by its exciter; and
if not, driving the currently determined target sounding area to vibrate and produce sound by its exciter.
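Claims 3 and 4 together describe mapping the target position onto one of the screen's sounding areas and switching exciters only when that area actually changes. A minimal sketch, assuming a rectangular partition of the screen (the area layout and all identifiers are hypothetical; the patent does not specify them):

```python
class ScreenSoundDriver:
    """Sketch of claims 3-4: drive the exciter of the sounding area
    under the target position, switching only when the area changes.

    `areas` is a list of (x0, y0, x1, y1) rectangles, one per exciter;
    this rectangular partition is an illustrative assumption.
    """

    def __init__(self, areas):
        self.areas = areas
        self.current_area = None  # index of the area currently driven
        self.switches = 0         # how many times the exciter changed

    def area_for(self, pos):
        """Return the index of the sounding area containing pos, or None."""
        x, y = pos
        for idx, (x0, y0, x1, y1) in enumerate(self.areas):
            if x0 <= x < x1 and y0 <= y < y1:
                return idx
        return None

    def drive(self, target_pos):
        """Update the driven area for a new target position."""
        area = self.area_for(target_pos)
        if area is not None and area != self.current_area:
            # Target moved into a different sounding area: switch exciters.
            self.current_area = area
            self.switches += 1
        # Same area (or no area found): keep driving the current exciter.
        return self.current_area
```

Comparing against the previous target before reconfiguring, as the claims require, avoids needlessly re-driving the same exciter on every frame.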
5. The method of claim 1, wherein determining which portrait among the plurality of portraits is speaking comprises:
acquiring the shape of the lips of each portrait in a plurality of adjacent video frames, and comparing the lip shapes of each portrait across the adjacent frames; and
judging, from the lip shapes, whether the lips of each portrait open and close, so as to determine the portrait that is speaking among the plurality of portraits.
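The lip-shape comparison in claim 5 can be sketched by scoring how much each portrait's lip opening varies across adjacent frames. The openness metric and the threshold below are illustrative assumptions (a real system would derive openness from facial landmarks, which the claim does not detail):

```python
def speaking_portrait(lip_openness_by_portrait, threshold=0.15):
    """Sketch of claim 5: pick the portrait whose lip shape varies most
    across adjacent frames, i.e. whose lips open and close.

    `lip_openness_by_portrait` maps a portrait id to a sequence of lip
    openness values, one per adjacent frame. The metric and `threshold`
    are hypothetical; returns None if no portrait's lips move enough.
    """
    best_id, best_score = None, threshold
    for pid, openness in lip_openness_by_portrait.items():
        # Total frame-to-frame change in lip openness across the window.
        variation = sum(abs(b - a) for a, b in zip(openness, openness[1:]))
        if variation > best_score:
            best_id, best_score = pid, variation
    return best_id
```

A static face yields near-zero variation, while a talking face alternates between open and closed lips, so the largest variation identifies the speaker.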
6. The method of claim 1, wherein the video content is content of a video call.
7. A screen sounding device applied to an electronic device, wherein the electronic device comprises a screen capable of vibrating to produce sound and exciters for driving the screen to produce sound, the screen comprises a plurality of sounding areas, and different sounding areas are driven to produce sound by different exciters, the screen sounding device comprising:
a position acquisition module, configured to detect, when video content is displayed, whether a sound signal corresponding to the video content corresponds to sound made by a person, and if so, detect a target position in the screen corresponding to a portrait in the video content; if no portrait is detected in the video content, re-acquire the target position of the portrait in the screen once every preset duration; if no portrait is detected in the video content over a plurality of consecutive attempts, lengthen the time interval at which the target position is acquired, and if a portrait is detected again after the plurality of consecutive failed attempts, reset the time interval to the initial value of the preset duration; and if the current video frame comprises a plurality of portraits, determine which portrait among the plurality of portraits is speaking and determine the position in the screen corresponding to the speaking portrait as the target position;
a target sounding area acquisition module, configured to determine the sounding area in the screen corresponding to the target position as a target sounding area; and
a sounding module, configured to drive, by the corresponding exciter, the target sounding area to vibrate and produce sound according to the sound signal of the video.
8. An electronic device comprising a screen, an exciter for driving the screen to produce sound, a memory, and a processor, the screen, the exciter, and the memory being coupled to the processor, the memory storing instructions that, when executed by the processor, cause the processor to perform the method of any one of claims 1-6.
9. A computer-readable storage medium storing program code executable by a processor, the program code causing the processor to perform the method of any one of claims 1-6.
10. An electronic device, comprising:
a screen comprising a plurality of sounding areas;
exciters connected to the sounding areas of the screen and configured to drive the screen to produce sound; and
a circuit connected to the exciters, the circuit comprising a detection circuit and a driving circuit, wherein the detection circuit is configured to detect, when video content is displayed, whether a sound signal corresponding to the video content corresponds to sound made by a person, and if so, detect a target position in the screen corresponding to a portrait in the video content; if no portrait is detected in the video content, re-acquire the target position of the portrait in the screen once every preset duration; if no portrait is detected in the video content over a plurality of consecutive attempts, lengthen the time interval at which the target position is acquired, and if a portrait is detected again after the plurality of consecutive failed attempts, reset the time interval to the initial value of the preset duration; if the current video frame comprises a plurality of portraits, determine which portrait among the plurality of portraits is speaking, determine the position in the screen corresponding to the speaking portrait as the target position, and determine the sounding area in the screen corresponding to the target position as a target sounding area; and the driving circuit is configured to drive, by the corresponding exciter, the target sounding area to vibrate and produce sound according to the sound signal of the video.
CN201810745830.5A 2018-07-09 2018-07-09 Screen sounding method and device, electronic device and storage medium Active CN109194796B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810745830.5A CN109194796B (en) 2018-07-09 2018-07-09 Screen sounding method and device, electronic device and storage medium


Publications (2)

Publication Number Publication Date
CN109194796A CN109194796A (en) 2019-01-11
CN109194796B true CN109194796B (en) 2021-03-02

Family

ID=64936306

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810745830.5A Active CN109194796B (en) 2018-07-09 2018-07-09 Screen sounding method and device, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN109194796B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109862293B (en) * 2019-03-25 2021-01-12 深圳创维-Rgb电子有限公司 Control method and device for terminal loudspeaker and computer readable storage medium
CN109862163A (en) * 2019-03-25 2019-06-07 努比亚技术有限公司 A kind of screen sound-emanating areas optimization method and device, mobile terminal and storage medium
US10922047B2 (en) 2019-03-25 2021-02-16 Shenzhen Skyworth-Rgb Electronic Co., Ltd. Method and device for controlling a terminal speaker and computer readable storage medium
CN110191303B (en) * 2019-06-21 2021-04-13 Oppo广东移动通信有限公司 Video call method, device and apparatus based on screen sound production and computer readable storage medium
CN112423205A (en) * 2019-08-22 2021-02-26 Oppo广东移动通信有限公司 Electronic device and control method thereof
CN110572502B (en) * 2019-09-05 2020-12-22 Oppo广东移动通信有限公司 Electronic equipment and sound production control method thereof
CN110780839B (en) * 2019-10-30 2021-07-06 Oppo广东移动通信有限公司 Electronic device and sound production control method
CN111491212A (en) * 2020-04-17 2020-08-04 维沃移动通信有限公司 Video processing method and electronic equipment
CN111641865B (en) * 2020-05-25 2023-03-24 惠州视维新技术有限公司 Playing control method of audio and video stream, television equipment and readable storage medium
CN111669689B (en) * 2020-06-12 2021-10-08 京东方科技集团股份有限公司 Screen sounding device, screen sounding method, computer equipment and medium
CN113810837B (en) * 2020-06-16 2023-06-06 京东方科技集团股份有限公司 Synchronous sounding control method of display device and related equipment
CN111741412B (en) * 2020-06-29 2022-07-26 京东方科技集团股份有限公司 Display device, sound emission control method, and sound emission control device
CN111836083B (en) * 2020-06-29 2022-07-08 海信视像科技股份有限公司 Display device and screen sounding method
CN112929739A (en) * 2021-01-27 2021-06-08 维沃移动通信有限公司 Sound production control method and device, electronic equipment and storage medium
CN115191120A (en) * 2021-02-07 2022-10-14 京东方科技集团股份有限公司 Display device, sound production control method, parameter determination method and device
CN114416014A (en) * 2022-01-05 2022-04-29 歌尔科技有限公司 Screen sounding method and device, display equipment and computer readable storage medium
CN116048448A (en) * 2022-07-26 2023-05-02 荣耀终端有限公司 Audio playing method and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202652283U (en) * 2012-01-12 2013-01-02 瑞声光电科技(常州)有限公司 Screen sounder
CN103139338A (en) * 2013-01-22 2013-06-05 瑞声科技(南京)有限公司 Screen sounding control system, method and mobile terminal
CN104036789A (en) * 2014-01-03 2014-09-10 北京智谷睿拓技术服务有限公司 Multimedia processing method and multimedia device
EP2941902A1 (en) * 2013-01-07 2015-11-11 Nokia Technologies OY A speaker apparatus
CN106909256A (en) * 2017-02-27 2017-06-30 北京小米移动软件有限公司 Screen control method and device



Similar Documents

Publication Publication Date Title
CN109194796B (en) Screen sounding method and device, electronic device and storage medium
CN108833638B (en) Sound production method, sound production device, electronic device and storage medium
CN108881568B (en) Method and device for sounding display screen, electronic device and storage medium
CN108646971B (en) Screen sounding control method and device and electronic device
CN108683761B (en) Sound production control method and device, electronic device and computer readable medium
CN109032558B (en) Sound production control method and device, electronic device and computer readable medium
CN109189362B (en) Sound production control method and device, electronic equipment and storage medium
CN109086024B (en) Screen sounding method and device, electronic device and storage medium
CN109032556B (en) Sound production control method, sound production control device, electronic device, and storage medium
CN109062535B (en) Sound production control method and device, electronic device and computer readable medium
CN108958632B (en) Sound production control method and device, electronic equipment and storage medium
CN109040919B (en) Sound production method, sound production device, electronic device and computer readable medium
CN108900728B (en) Reminding method, reminding device, electronic device and computer readable medium
CN109144460B (en) Sound production control method, sound production control device, electronic device, and storage medium
CN109086023B (en) Sound production control method and device, electronic equipment and storage medium
CN108810198B (en) Sound production control method and device, electronic device and computer readable medium
CN108712706B (en) Sound production method, sound production device, electronic device and storage medium
CN109144249B (en) Screen sounding method and device, electronic device and storage medium
CN108958697B (en) Screen sounding control method and device and electronic device
CN109085985B (en) Sound production control method, sound production control device, electronic device, and storage medium
CN109240413B (en) Screen sounding method and device, electronic device and storage medium
CN108810764B (en) Sound production control method and device and electronic device
CN109189360B (en) Screen sounding control method and device and electronic device
CN109062533B (en) Sound production control method, sound production control device, electronic device, and storage medium
CN110505335B (en) Sound production control method and device, electronic device and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant