CN113727021A - Shooting method and device and electronic equipment - Google Patents

Shooting method and device and electronic equipment

Info

Publication number
CN113727021A
CN113727021A (application CN202110999017.2A; granted publication CN113727021B)
Authority
CN
China
Prior art keywords
voice signal
sound
preset
voice
magnification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110999017.2A
Other languages
Chinese (zh)
Other versions
CN113727021B (en)
Inventor
陈明杨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Hangzhou Co Ltd
Original Assignee
Vivo Mobile Communication Hangzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Hangzhou Co Ltd filed Critical Vivo Mobile Communication Hangzhou Co Ltd
Priority to CN202110999017.2A priority Critical patent/CN113727021B/en
Publication of CN113727021A publication Critical patent/CN113727021A/en
Application granted granted Critical
Publication of CN113727021B publication Critical patent/CN113727021B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/67: Focus control based on electronic image sensor signals
    • H04N23/61: Control of cameras or camera modules based on recognised objects
    • H04N23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632: for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N23/64: Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image

Abstract

The application discloses a shooting method, a shooting device and an electronic device, belonging to the technical field of photography. The method comprises: acquiring voice information of a first voice signal while a shooting preview interface is displayed, the voice information comprising a first sound intensity and sound source information; and, when the sound source information indicates that the sounding object of the first voice signal is an object displayed in the shooting preview interface, performing zoom processing on a preview image in the shooting preview interface according to a first zoom magnification, wherein the first zoom magnification is associated with the first sound intensity.

Description

Shooting method and device and electronic equipment
Technical Field
The application belongs to the technical field of photography, and particularly relates to a shooting method, a shooting device and an electronic device.
Background
At present, zooming on a mobile phone camera is performed with a two-finger gesture: spreading the two fingers apart zooms in, and pinching them together zooms out. However, when a user holds the mobile phone, this zoom mode requires both hands at once, that is, one hand holds the phone while the other performs the gesture, which is inconvenient.
Disclosure of Invention
Embodiments of the present application aim to provide a shooting method, a shooting device and an electronic device that can solve the problem that the existing zoom operation is inconvenient.
In a first aspect, an embodiment of the present application provides a shooting method, where the method includes:
under the condition that a shooting preview interface is displayed, acquiring voice information of a first voice signal, wherein the voice information comprises first sound intensity and sound source information;
under the condition that the sound source information indicates that a sound production object of the first voice signal is an object displayed in the shooting preview interface, executing zooming processing on a preview image in the shooting preview interface according to a first zooming magnification;
wherein the first zoom magnification is associated with the first sound intensity.
In a second aspect, an embodiment of the present application provides a shooting device, including:
the acquisition module is used for acquiring voice information of a first voice signal under the condition that a shooting preview interface is displayed, wherein the voice information comprises first sound intensity and sound source information;
the processing module is used for executing zooming processing on a preview image in the shooting preview interface according to a first zooming magnification under the condition that the sound source information indicates that a sound production object of the first voice signal is an object displayed in the shooting preview interface;
wherein the first zoom magnification is associated with the first sound intensity.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored in the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the application, while the shooting preview interface is displayed, the first sound intensity and the sound source information of the first voice signal are acquired, and when the sound source information indicates that the sounding object of the voice signal is an object displayed in the shooting preview interface, zoom processing is performed on the preview image in the shooting preview interface at the first zoom magnification associated with the first sound intensity. The preview image is thus zoomed automatically at a magnification associated with the sound intensity of a voice signal from an object displayed in the interface, without the user zooming manually, which simplifies the zoom operation.
Drawings
Fig. 1 is a flowchart of a shooting method provided in an embodiment of the present application;
Fig. 2 is a first schematic diagram of an interface display of an electronic device according to an embodiment of the present application;
Fig. 3 is a second schematic diagram of an interface display of an electronic device according to an embodiment of the present application;
Fig. 4 is a third schematic diagram of an interface display of an electronic device according to an embodiment of the present application;
Fig. 5 is a fourth schematic diagram of an interface display of an electronic device according to an embodiment of the present application;
Fig. 6 is a fifth schematic diagram of an interface display of an electronic device according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a shooting device provided in an embodiment of the present application;
Fig. 8 is a first schematic structural diagram of an electronic device provided in an embodiment of the present application;
Fig. 9 is a second schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements, not necessarily to describe a particular sequential or chronological order. It should be understood that data so used may be interchanged where appropriate, so that the embodiments of the application can be practiced in orders other than those illustrated or described herein. The terms "first", "second", and the like also do not limit the number of objects; for example, a first object may be one object or more than one. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and succeeding objects.
The shooting method provided by the embodiments of the present application is described in detail below with reference to the accompanying drawings, through specific embodiments and application scenarios thereof.
As shown in fig. 1, an embodiment of the present application provides a shooting method, which may be applied to an electronic device such as a mobile phone, a tablet computer or a notebook computer. As shown in fig. 1, the method may include steps 1100 and 1200, which are described in detail below.
Step 1100, acquiring voice information of the first voice signal under the condition that the shooting preview interface is displayed.
The shooting preview interface is the interface displayed after the shooting application is opened, and the shooting object is displayed in it.
The person being photographed can interact with the electronic device through a set voice, which may be a voice with directivity. For example, the set voice may be "see here", "I am here", or the like.
In this embodiment, before the step 1100 is executed to acquire the voice information of the first voice signal in the case of displaying the shooting preview interface, the shooting method of the present disclosure may further include: in the case of displaying a shooting preview interface, a configuration entry for performing voice configuration is provided, and a voice input through the configuration entry is acquired as a set voice.
As shown in fig. 2, with the shooting preview interface displayed, the photographer can tap "setup" to enter a voice-entry page, tap "start recording voice", and then record the set voice, for example "see here" or "I am here".
The first voice signal may be a sound emitted by an object displayed in the shooting preview interface, or a sound emitted by an object outside it.
The voice information includes a first sound intensity and sound source information.
In this embodiment, the first sound intensity of the first voice signal can be determined by detecting the amplitude of the first voice signal.
In this embodiment, a phase difference between the first voice signal as picked up by two microphones in the electronic device may be obtained first and, combined with the distance and angle between the two microphones, used to determine the sound source information of the first voice signal.
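The amplitude-to-intensity step and the two-microphone phase-difference localization described above can be sketched as follows. This is an illustrative sketch only, not the patent's implementation: it assumes a far-field source at a known dominant frequency and the standard delay model delay = spacing · sin(angle) / c; the function names and constants are hypothetical.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees Celsius

def intensity_db(amplitude, reference=1.0):
    """Sound intensity level in dB derived from the waveform amplitude."""
    return 20.0 * math.log10(amplitude / reference)

def doa_from_phase(phase_diff_rad, freq_hz, mic_spacing_m):
    """Estimate the direction of arrival (degrees from broadside) of a
    far-field source from the phase difference between two microphones."""
    # Phase difference -> inter-microphone time delay at this frequency.
    delay_s = phase_diff_rad / (2.0 * math.pi * freq_hz)
    # Far-field model: delay = spacing * sin(angle) / speed of sound.
    sin_angle = max(-1.0, min(1.0, delay_s * SPEED_OF_SOUND / mic_spacing_m))
    return math.degrees(math.asin(sin_angle))
```

A zero phase difference maps to a source straight ahead; larger phase differences map to larger off-axis angles, which is the basis for deciding whether the source lies inside the preview frame.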
Example 1: after the electronic device starts running the shooting application, a shooting preview interface may be displayed on the display screen of the electronic device. While the shooting preview interface is displayed, an object displayed in it may interact with the electronic device through the set voice. As shown in fig. 3 and fig. 4, two objects are displayed in the shooting preview interface, one of which shouts "I am here".
When one of the objects in the shooting preview interface shouts "I am here", the electronic device takes "I am here" as the first voice signal. The sound intensity of the first voice signal is determined from its amplitude as the first sound intensity. The sound source information of the first voice signal is determined from the phase difference between the signal as picked up by the two microphones, combined with the distance and angle between the microphones. In both fig. 3 and fig. 4, the sound source information indicates that the sounding object of the first voice signal is an object displayed in the shooting preview interface.
After the voice information of the first voice signal is acquired while the shooting preview interface is displayed, the method proceeds as follows:
step 1200, in the case that the sound source information indicates that the sound emitting object of the first voice signal is the object displayed in the shooting preview interface, executing zooming processing on the preview image in the shooting preview interface according to the first zooming magnification.
The first zoom magnification is associated with the first sound intensity. In other words, when the first voice signal originates from an object displayed in the shooting preview interface, automatic zoom processing is performed on the preview image at the first zoom magnification associated with the intensity of the first voice signal.
In an embodiment, before zoom processing is performed on the preview image according to the first zoom magnification, the first zoom magnification may be determined according to the following steps 2100 and 2200, after which automatic zoom processing is performed on the preview image at that magnification. In this embodiment, the shooting method may further include the following steps 2100 and 2200:
in step 2100, in a case where the first sound intensity is less than or equal to a preset intensity threshold, a first preset magnification corresponding to the first sound intensity is determined as a first zoom magnification.
The preset intensity threshold may be a numerical value set according to an actual application scenario and an actual requirement.
In step 2100, first mapping data reflecting mapping relationships between different first sound intensities and different first preset magnifications are stored in the electronic device in advance; after the first sound intensity is obtained, the first preset magnification corresponding to it can be matched from the first mapping data.
It can be understood that, when the first sound intensity is less than or equal to the preset intensity threshold, the object indicated by the sound source information of the first voice signal is far from the image capture device, so the view needs to be zoomed in, that is, the zoom magnification needs to be increased. The smaller the first sound intensity, the higher the magnification, which meets the requirement of the actual scene.
Continuing example 1 in step 1100: if "I am here" in fig. 3 is a sound made by an object displayed in the shooting preview interface, that object is taken as the area that needs zooming. Since the intensity of "I am here" is small, the object is far from the image capture device and needs to be zoomed in on; the first preset magnification 5x corresponding to the first sound intensity, obtained from the first mapping data, is taken as the first zoom magnification, that is, the current zoom magnification 1x is adjusted to the first zoom magnification 5x.
In step 2200, when the first sound intensity is greater than the preset intensity threshold, determining a second preset magnification corresponding to the first sound intensity as the first zoom magnification.
The first preset magnification is greater than the second preset magnification.
In step 2200, second mapping data reflecting a mapping relationship between different first sound intensities and different second preset magnifications are stored in the electronic device in advance; after the first sound intensity is obtained, the second preset magnification corresponding to it can be matched from the second mapping data.
It can be understood that, when the first sound intensity is greater than the preset intensity threshold, the object indicated by the sound source information of the first voice signal is close to the image capture device, so the view needs to be zoomed out, that is, the zoom magnification needs to be decreased. The greater the first sound intensity, the more the view is zoomed out, which meets the requirement of the actual scene.
Continuing example 1 in step 1100: if "I am here" in fig. 4 is a sound emitted by an object displayed in the shooting preview interface, that object is taken as the area that needs zooming. Since the intensity of "I am here" is large, the object is close to the image capture device and needs to be zoomed out on; if the second preset magnification 1x corresponding to the first sound intensity, obtained from the second mapping data, is taken as the first zoom magnification, the current zoom magnification 5x is adjusted to the first zoom magnification 1x.
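Steps 2100 and 2200 can be sketched as a threshold test over two pre-stored mapping tables. The threshold value and the table entries below are illustrative assumptions (the patent does not specify concrete numbers); only the 5x and 1x values echo the examples in the text.

```python
PRESET_INTENSITY_THRESHOLD = 60.0  # dB, illustrative value

# First mapping data (step 2100): the quieter the source, the farther it
# is assumed to be and the larger the magnification. (upper_bound_db, mag)
FIRST_MAPPING = [(40.0, 5.0), (50.0, 3.0), (60.0, 2.0)]

# Second mapping data (step 2200): the louder the source, the closer it
# is assumed to be and the smaller the magnification.
SECOND_MAPPING = [(70.0, 1.5), (80.0, 1.0), (90.0, 0.8)]

def first_zoom_magnification(sound_db):
    """Match the first zoom magnification for a given first sound intensity."""
    if sound_db <= PRESET_INTENSITY_THRESHOLD:
        table = FIRST_MAPPING   # step 2100: zoom in
    else:
        table = SECOND_MAPPING  # step 2200: zoom out
    for upper_bound, magnification in table:
        if sound_db <= upper_bound:
            return magnification
    return table[-1][1]  # clamp beyond the last entry
```

Either mapping could equally be stored as a lookup table or a continuous function; the patent leaves the storage form open.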
In this embodiment, performing zoom processing on the preview image in the shooting preview interface according to the first zoom magnification may further include: performing the zoom processing on the preview image centered on the object displayed in the shooting preview interface, according to the first zoom magnification.
As shown in fig. 3, the preview image in the shooting preview interface may be automatically zoomed to 5x, centered on the object that shouts "I am here".
As shown in fig. 4, the preview image in the shooting preview interface may be automatically zoomed to 1x, centered on the object that shouts "I am here".
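Center-weighted zooming as described here amounts to choosing a crop rectangle around the sounding object and clamping it so it stays inside the frame. A minimal sketch, with hypothetical function and parameter names:

```python
def zoom_crop(frame_w, frame_h, center_x, center_y, magnification):
    """Crop rectangle (left, top, width, height) that zooms the preview
    by `magnification`, centered on the sounding object and clamped so
    the crop stays inside the frame."""
    crop_w = frame_w / magnification
    crop_h = frame_h / magnification
    # Center on the object, then clamp to the frame boundaries.
    left = min(max(center_x - crop_w / 2.0, 0.0), frame_w - crop_w)
    top = min(max(center_y - crop_h / 2.0, 0.0), frame_h - crop_h)
    return (left, top, crop_w, crop_h)
```

The returned rectangle would then be scaled back up to the display size by the camera pipeline.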
According to the method of this embodiment, while the shooting preview interface is displayed, the first sound intensity and the sound source information of the first voice signal are acquired, and when the sound source information indicates that the sounding object of the voice signal is an object displayed in the shooting preview interface, zoom processing is performed on the preview image at the first zoom magnification associated with the first sound intensity. The preview image is thus zoomed automatically at a magnification associated with the sound intensity of a voice signal from a displayed object, without the user zooming manually, which simplifies the zoom operation.
In an embodiment, before zoom processing is performed on the preview image according to the first zoom magnification, the first zoom magnification may be determined according to the following steps 3100 to 3400, after which automatic zoom processing is performed on the preview image at that magnification. In this embodiment, the shooting method may further include the following steps 3100 to 3400:
step 3100, determining a first intermediate magnification based on a first sound intensity of the first speech signal.
In step 3100, first mapping data and second mapping data are stored in the electronic device in advance; the first mapping data reflect a mapping relationship between different first sound intensities and different first preset magnifications, and the second mapping data reflect a mapping relationship between different first sound intensities and different second preset magnifications. In general, when the first sound intensity is less than or equal to the preset intensity threshold, the first preset magnification corresponding to the first sound intensity is matched from the first mapping data as the first intermediate magnification; when the first sound intensity is greater than the preset intensity threshold, the second preset magnification corresponding to the first sound intensity is matched from the second mapping data as the first intermediate magnification. The first mapping data and the second mapping data in the embodiments of the present application may be stored as a mapping table or in another form of mapping relationship, which is not specifically limited here.
That is, in step 3100 the first sound intensity is compared with the preset intensity threshold, the corresponding first or second preset magnification is matched from the first or second mapping data, and that preset magnification is used as the first intermediate magnification.
Continuing example 1 of step 2100 above: when the first sound intensity is less than or equal to the preset intensity threshold, the first preset magnification 5x may be determined as the first intermediate magnification according to the first sound intensity of the first voice signal and the first mapping data.
Continuing example 1 of step 2200 above: when the first sound intensity is greater than the preset intensity threshold, the second preset magnification 1x may be determined as the first intermediate magnification according to the first sound intensity of the first voice signal and the second mapping data.
Step 3200, obtaining a second sound intensity of an interference signal in the first voice signal.
It can be understood that in an actual shooting scene, the sound source is often subject to external interference. The interference signal may be any of various noise signals, such as a car horn or a passing train.
In step 3200, third mapping data reflecting mapping relationships between different interference signals and different second sound intensities are stored in the electronic device in advance. The amplitude of the interference signal in the first voice signal may be obtained first, and the second sound intensity corresponding to the interference signal determined according to that amplitude and the third mapping data.
Step 3300, determining a second intermediate magnification according to the second sound intensity and the first intermediate magnification, and then performing step 3400 or step 3500 below.
It will be appreciated that in the presence of an interference signal, the first intermediate magnification is typically adjusted downward: the greater the second sound intensity, the more the first intermediate magnification is reduced and the smaller the resulting second intermediate magnification.
In step 3300, when an interference signal is detected in the first voice signal of the object displayed in the shooting preview interface, the first intermediate magnification obtained in step 3100 is adjusted according to the second sound intensity of the interference signal to obtain the second intermediate magnification. Meanwhile, a prompt is displayed on the shooting preview interface indicating that the zoom operation has not yet been performed and that the photographer's voice feedback is awaited.
Step 3400, determining the second intermediate magnification as the first zoom magnification when no second voice signal of the photographer is received within a preset time.
In step 3400, if no voice signal of the photographer is received within the preset time, the second intermediate magnification is used as the first zoom magnification, and zoom processing is then performed on the preview image in the shooting preview interface according to the first zoom magnification.
Continuing example 1 of step 3100 above, in which 5x was determined as the first intermediate magnification: owing to the interference signal, the first intermediate magnification may be adjusted down to 3x as the second intermediate magnification, according to the strength of the interference signal. If no voice signal of the photographer is received within the preset time, the second intermediate magnification 3x is taken as the first zoom magnification, and zoom processing is then performed on the preview image in the shooting preview interface accordingly.
Continuing example 1 of step 3100 above, in which 1x was determined as the first intermediate magnification: owing to the interference signal, the first intermediate magnification may be adjusted down to 0.8x as the second intermediate magnification, according to the strength of the interference signal. If no voice signal of the photographer is received within the preset time, the second intermediate magnification 0.8x is taken as the first zoom magnification, and zoom processing is then performed on the preview image in the shooting preview interface accordingly.
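Step 3300's downward adjustment can be sketched as an attenuation that grows with the interference intensity. The linear attenuation, the 100 dB scale and the floor value below are illustrative assumptions chosen so that the 5x to 3x and 1x to 0.8x examples in the text fall out; the patent itself only requires that louder interference yields a smaller second intermediate magnification.

```python
MIN_MAGNIFICATION = 0.5  # illustrative lower bound for the zoom

def second_intermediate_magnification(first_intermediate, interference_db):
    """Reduce the first intermediate magnification: the louder the
    interference (second sound intensity), the larger the reduction."""
    attenuation = max(0.0, 1.0 - interference_db / 100.0)
    return max(MIN_MAGNIFICATION, first_intermediate * attenuation)
```
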
Step 3500, acquiring target information when a second voice signal of the photographer is received within the preset time.
The target information includes at least one of: a third sound intensity of the second speech signal, a keyword in the second speech signal.
In step 3500, if the second voice signal of the photographer is received within the preset time, the second voice signal is recognized to obtain a third sound intensity of the second voice signal and a keyword in the second voice signal; the second intermediate magnification is then adjusted according to the third sound intensity and/or the keyword to obtain the first zoom magnification, and zoom processing is performed on the preview image in the shooting preview interface based on the first zoom magnification.
Continuing example 1 of step 3400: if the electronic device receives the photographer's second voice signal "a little bit larger" within the preset time, the electronic device increases the second intermediate magnification according to the third sound intensity and/or the keyword "larger", for example from 3x to 5x, takes the result as the first zoom magnification, and then performs zoom processing on the preview image in the shooting preview interface accordingly.
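Step 3500's keyword-driven adjustment might look like the following. The keyword table and step sizes are hypothetical; only the "larger" keyword and the 3x to 5x adjustment echo the example in the text.

```python
# Hypothetical keyword table; the patent's example only uses "larger".
KEYWORD_STEPS = {"larger": 2.0, "smaller": -2.0}

def apply_voice_feedback(intermediate_magnification, utterance):
    """Adjust the second intermediate magnification according to a
    keyword recognized in the photographer's second voice signal."""
    for keyword, step in KEYWORD_STEPS.items():
        if keyword in utterance:
            return max(0.5, intermediate_magnification + step)
    return intermediate_magnification  # no keyword: keep the magnification
```

In a real system the utterance would come from speech recognition, and the third sound intensity could further scale the step size.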
According to this embodiment, when the sound emitted by the shooting object is interfered with, an expected zoom magnification is derived from the strength of the interference signal and used to perform zoom processing on the preview image in the shooting preview interface, without the user manually zooming, which simplifies the zoom operation. Meanwhile, the expected zoom magnification can be adjusted through the user's voice feedback, so that the resulting magnification better matches the user's needs.
In one embodiment, after step 1100 is performed to acquire the voice information of the first voice signal, the shooting method of the embodiment of the present disclosure may further include the following steps 4100 and 4200:
step 4100, when the sound source information indicates that the sound source of the first speech signal is an object outside the imaging preview interface, specifies azimuth information of the sound source based on the sound source information.
In the present embodiment, when the sound source information of the first audio signal indicates that the sound source of the first audio signal is an object outside the imaging preview interface, the direction information of the sound source is determined based on the sound source information of the first audio signal.
Example 2: after the electronic device starts running the shooting application, a shooting preview interface may be displayed on the display screen. While the shooting preview interface is displayed, a sounding object outside the interface may interact with the electronic device through the set voice. As shown in fig. 5, although no object is displayed in the shooting preview interface, a voice source "I am here" exists outside it; the electronic device can then determine the azimuth information of the sounding object based on the sound source information of "I am here".
Step 4200, outputting a prompt message based on the orientation information.
The prompt message is used to indicate the direction in which the photographer should rotate the image capture device, so that the sounding object comes to be displayed in the shooting preview interface.
In this embodiment, once the azimuth information of the sounding object is determined, prompt information instructing the photographer to rotate the image capture device is displayed on the display interface of the electronic device, so that the photographer rotates the device based on the prompt and the sounding object is displayed in the shooting preview interface.
Continuing with example 2 of step 4100, as shown in fig. 5, the display interface of the electronic device outputs prompt information that includes not only the text "please turn the mobile phone" but also pointing information pointing toward the source of "I am here", shown as an arrow in fig. 5.
According to this embodiment, prompting and interaction are realized for voice sources outside the shooting preview interface, which helps the photographer find the object to be shot more quickly when the subject has moved out of the frame.
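The direction-prompt logic of steps 4100 to 4200 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function names, the angle convention (0° straight ahead, positive to the right), and the field-of-view test are all assumptions.

```python
import math

def azimuth_from_source(source_x, source_y, camera_x=0.0, camera_y=0.0):
    """Estimate the horizontal angle (degrees) from the camera to the sound source."""
    return math.degrees(math.atan2(source_y - camera_y, source_x - camera_x))

def build_prompt(azimuth_deg, field_of_view_deg=60.0):
    """Return prompt text telling the photographer which way to turn.

    The source is treated as outside the preview when its azimuth falls
    outside half the field of view on either side of straight ahead.
    """
    half_fov = field_of_view_deg / 2.0
    if abs(azimuth_deg) <= half_fov:
        return None  # source already inside the preview; no prompt needed
    direction = "right" if azimuth_deg > 0 else "left"
    return f"Please turn the phone to the {direction} ({abs(azimuth_deg):.0f}°)"

print(build_prompt(75.0))   # source well to the right of the preview
print(build_prompt(10.0))   # source already inside the preview: None
```

A real device would derive the azimuth from microphone-array time differences rather than from known source coordinates; the prompt would also drive the on-screen arrow of fig. 5.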
In one embodiment, before the above step 1100 is performed to acquire the voice information of the first voice signal, the shooting method of the embodiment of the present disclosure may further include the following steps 5100 to 5300:
in step 5100, a fourth speech signal is obtained.
The fourth speech signal comprises sub-speech signals of at least one sound-emitting object.
Example 3: as shown in fig. 6, in a case where the shooting preview interface is displayed, if three objects displayed in the shooting preview interface emit sound simultaneously, the electronic device acquires three sub-voice signals: sub-voice signal 1 of object 1, sub-voice signal 2 of object 2, and sub-voice signal 3 of object 3.
In step 5200, a sub-speech signal of the target utterance object in the fourth speech signal is obtained.
The target sound-emitting object may be an object satisfying a preset condition, the preset condition including: the target sound-emitting object is located in a preset area in the shooting preview interface, or the object features of the target sound-emitting object match the preset object features. That is, when a sound emitted by an object satisfying the preset condition exists in the fourth voice signal, that sound may be determined as the sub-voice signal of the target sound-emitting object.
In one example, the preset condition includes that the object feature of the target sound emission object matches with the preset object feature.
The preset object features may be pre-stored face information together with attribute information annotated for each piece of face information; the attribute information may include a name and a relationship with the photographer. The shooting method of the present disclosure may further include: receiving a first input, and acquiring the preset object features in response to the first input.
In this example, while the shooting preview interface is displayed, the electronic device may acquire the sub-voice signal of at least one sound-emitting object, and at the same time identify whether a sub-voice signal of the target sound-emitting object exists among them. For example, it first identifies whether the object features of a sound-emitting object match the preset object features; if they match, the corresponding sound-emitting object is taken as the target sound-emitting object, and its sub-voice signal is acquired.
Continuing with example 3 of the above step 5100, after the electronic device acquires the three sub-voice signals, the object features of the three sound-emitting objects are matched with the preset object features, and if the object feature of the object 1 in the three sound-emitting objects is successfully matched with the preset object features, the object 1 is taken as the target sound-emitting object, and the sub-voice signal of the object 1 is acquired.
In one example, the preset condition includes that the target sound-emitting object is located in a preset area in the shooting preview interface. The preset area may be a center area of the photographing preview interface.
In this example, while the shooting preview interface is displayed, the electronic device acquires the sub-voice signal of at least one sound-emitting object, and at the same time determines whether any of the sound-emitting objects is located in the central area of the shooting preview interface; if such an object exists, it is taken as the subject, i.e., the target sound-emitting object.
It can be understood that the closer a person's face is to the central area of the shooting preview interface, the more likely that person is to be the subject.
Continuing with example 3 of the above-mentioned step 5100, in the case where the capture preview interface is displayed, the electronic device acquires sub voice signals of at least one sound-emitting object, and at the same time, the electronic device further recognizes whether there is an object located in the center area of the capture preview interface among the three sound-emitting objects, and in the case where the object 1 among the three sound-emitting objects is located in the center area of the capture preview interface, takes the object 1 as a target sound-emitting object, and acquires sub voice signals of the target sound-emitting object.
In step 5300, a sub-speech signal of the target sound emission object is determined as the first speech signal.
Continuing with example 3 of step 5200, after object 1 is determined to be the target sound-emitting object, its sub-voice signal may be determined as the first voice signal, and the preview image in the shooting preview interface may then be zoomed, centered on object 1, according to the first zoom magnification associated with the first sound intensity of the first voice signal.
According to this embodiment, when a plurality of objects in the shooting preview interface emit sound simultaneously, the sound emitted by the target sound-emitting object, i.e., the subject in the shooting preview interface, is identified, so that the preview image in the shooting preview interface can be zoomed with the target sound-emitting object as the center.
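The target-selection rule of steps 5100 to 5300 (feature match first, then the central-area test) can be sketched as follows. The dictionary fields, the 50% central region, and the set-membership feature match are illustrative assumptions, not the patent's implementation:

```python
def in_center_area(obj, frame_w, frame_h, ratio=0.5):
    """True if the object's center lies in the central region of the preview."""
    cx, cy = obj["x"], obj["y"]
    x0, x1 = frame_w * (1 - ratio) / 2, frame_w * (1 + ratio) / 2
    y0, y1 = frame_h * (1 - ratio) / 2, frame_h * (1 + ratio) / 2
    return x0 <= cx <= x1 and y0 <= cy <= y1

def pick_target_sub_signal(objects, frame_w, frame_h, preset_features=None):
    """Return the sub-voice signal of the first object meeting a preset condition:
    object-feature match takes precedence, then the central-area test."""
    if preset_features is not None:
        for obj in objects:
            if obj.get("feature") in preset_features:  # matches a preset object feature
                return obj["sub_signal"]
    for obj in objects:
        if in_center_area(obj, frame_w, frame_h):  # located in the preset (central) area
            return obj["sub_signal"]
    return None

objects = [
    {"x": 960, "y": 540, "feature": "face_A", "sub_signal": "sig1"},
    {"x": 100, "y": 100, "feature": "face_B", "sub_signal": "sig2"},
]
print(pick_target_sub_signal(objects, 1920, 1080, preset_features={"face_B"}))  # sig2
print(pick_target_sub_signal(objects, 1920, 1080))  # sig1 (in the central area)
```

In practice the feature match would be a face-recognition comparison against the pre-stored, annotated face information rather than a set lookup.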
In one embodiment, after the above step 1100 is executed to acquire the voice information of the first voice signal, the shooting method of the embodiment of the present disclosure may further include the following steps 6100 to 6200:
step 6100, obtain a third speech signal.
The third speech signal comprises M sub-speech signals of M sound-emitting objects. M is a positive integer greater than or equal to 2.
Example 4: in a case where the shooting preview interface is displayed, suppose two objects displayed in the shooting preview interface and one object outside it emit sound simultaneously, where object 1 and object 2 are displayed in the shooting preview interface and object 3 is located outside it. The electronic device acquires three sub-voice signals: sub-voice signal 1 of object 1, sub-voice signal 2 of object 2, and sub-voice signal 3 of object 3.
In step 6200, at least one sub voice signal of the M sub voice signals is determined as a first voice signal according to the priority of each sub voice signal.
In this step 6200, at least one sub-voice signal may be obtained from the M sub-voice signals in descending order of priority and determined as the first voice signal.

In this step 6200, when the M sub-voice signals are obtained, their priorities may be sorted from high to low according to the following principles: the sub-voice signal of an object inside the shooting preview interface takes priority over a sub-voice signal from outside it; a sub-voice signal with greater sound intensity takes priority over one with smaller sound intensity; and a sub-voice signal closer to a preset area of the shooting preview interface takes priority over one farther from it. The preset area may be the central area of the shooting preview interface.
In one example, the sub-voice signal with the highest priority may be directly selected to be determined as the first voice signal based on the descending order of the priority of each sub-voice signal.
Continuing with example 4 of step 6100, the electronic device obtains three sub-voice signals: sub-voice signal 1 and sub-voice signal 2 are emitted by objects displayed in the shooting preview interface, while sub-voice signal 3 is emitted by an object outside it. The priorities of sub-voice signal 1 and sub-voice signal 2 are therefore higher than that of sub-voice signal 3. Meanwhile, since the sound intensity of sub-voice signal 1 is greater than that of sub-voice signal 2, sub-voice signal 1 has the highest priority and may be directly selected as the first voice signal.
In one example, the sub-voice signals ranked first and second in the descending priority order may be selected as the first voice signal.
Continuing with example 4 of step 6100, the electronic device obtains three sub-voice signals: sub-voice signal 1 and sub-voice signal 2 are emitted by objects displayed in the shooting preview interface, while sub-voice signal 3 is emitted by an object outside it. Since the priorities of sub-voice signal 1 and sub-voice signal 2 are higher than that of sub-voice signal 3, sub-voice signal 1 and sub-voice signal 2 are both selected as the first voice signal.
It can be understood that, when sub-voice signal 1 and sub-voice signal 2 are both selected as the first voice signal, zoom magnification 1 associated with the sound intensity of sub-voice signal 1 and zoom magnification 2 associated with the sound intensity of sub-voice signal 2 may be acquired. The two magnifications are then compared: if they are close, they are fused to obtain a fusion magnification, which is taken as the first zoom magnification, and the preview image in the shooting preview interface is zoomed at that magnification, for example centered on object 1 and object 2.
If zoom magnification 1 and zoom magnification 2 are not close, then, since the above analysis shows that sub-voice signal 1 of object 1 has a higher priority than sub-voice signal 2 of object 2, zoom magnification 1 is taken as the first zoom magnification, and the preview image in the shooting preview interface is zoomed at that magnification, for example centered on object 1.
According to this embodiment, when a plurality of objects emit sound simultaneously, zooming is performed on the object corresponding to the higher-priority sound, based on the priority ranking of the sounds.
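The priority ranking and magnification fusion of this embodiment can be sketched as follows. The tuple sort key encodes the three stated principles (inside the preview beats outside, louder beats quieter, closer to the center beats farther); the 20% closeness test and simple averaging as the fusion rule are assumptions, since the text does not specify them:

```python
def priority_key(sig, frame_w, frame_h):
    """Higher tuple sorts first: in-preview beats out-of-preview, then louder
    beats quieter, then closer to the preview center beats farther."""
    cx, cy = frame_w / 2, frame_h / 2
    dist = ((sig["x"] - cx) ** 2 + (sig["y"] - cy) ** 2) ** 0.5 if sig["in_preview"] else float("inf")
    return (sig["in_preview"], sig["intensity"], -dist)

def pick_zoom(signals, frame_w, frame_h, closeness=0.2):
    """Rank sub-voice signals by priority; fuse the top two zoom magnifications
    when they are close, otherwise keep the higher-priority one."""
    ranked = sorted(signals, key=lambda s: priority_key(s, frame_w, frame_h), reverse=True)
    top, second = ranked[0], ranked[1]
    z1, z2 = top["zoom"], second["zoom"]
    if abs(z1 - z2) <= closeness * max(z1, z2):
        return (z1 + z2) / 2  # fusion magnification (here: a simple average)
    return z1  # magnifications differ too much: keep the higher-priority one

signals = [
    {"x": 900, "y": 500, "in_preview": True,  "intensity": 0.8, "zoom": 2.0},
    {"x": 300, "y": 300, "in_preview": True,  "intensity": 0.5, "zoom": 2.2},
    {"x": 0,   "y": 0,   "in_preview": False, "intensity": 0.9, "zoom": 3.0},
]
print(pick_zoom(signals, 1920, 1080))  # the two in-preview zooms are close, so they fuse
```

Note that the loudest signal here loses to both in-preview signals, matching the first sorting principle of step 6200.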
Corresponding to the above embodiments, as shown in fig. 7, an embodiment of the present application further provides a shooting apparatus 700, including:
the obtaining module 710 is configured to obtain voice information of the first voice signal under the condition that the shooting preview interface is displayed, where the voice information includes the first sound intensity and the sound source information.
And a processing module 720, configured to, in a case that the sound source information indicates that a sound generating object of the first voice signal is an object displayed in the shooting preview interface, perform zoom processing on a preview image in the shooting preview interface according to a first zoom magnification.
Wherein the first zoom magnification is associated with the first sound intensity.
In one embodiment, the processing module 720 is further configured to: determining a first preset magnification corresponding to the first sound intensity as a first zooming magnification when the first sound intensity is less than or equal to a preset intensity threshold; and under the condition that the first sound intensity is larger than the preset intensity threshold value, determining a second preset multiplying factor corresponding to the first sound intensity as a first zooming multiplying factor.
Wherein the first preset multiplying power is larger than the second preset multiplying power.
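The threshold rule restated here (a sound at or below the preset intensity threshold maps to the larger first preset magnification, a louder sound to the smaller second preset magnification, presumably because a louder source is nearer) can be sketched as follows; the concrete threshold and magnification values are illustrative assumptions:

```python
def first_zoom_magnification(intensity_db, threshold_db=60.0,
                             first_preset=3.0, second_preset=1.5):
    """Map the first sound intensity to the first zoom magnification.

    A quiet (presumably distant) source gets the larger first preset
    magnification; a loud (presumably near) source gets the smaller one.
    The embodiment requires first_preset > second_preset.
    """
    assert first_preset > second_preset
    return first_preset if intensity_db <= threshold_db else second_preset

print(first_zoom_magnification(45.0))  # quiet source: larger magnification, 3.0
print(first_zoom_magnification(75.0))  # loud source: smaller magnification, 1.5
```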
In one embodiment, the processing module 720 is further configured to: determining a first intermediate multiplying power according to the first sound intensity of the first voice signal; acquiring second sound intensity of an interference signal in the first voice signal; determining a second intermediate multiplying power according to the second sound intensity and the first intermediate multiplying power; and determining the second intermediate magnification as the first zoom magnification when a second voice signal of the photographer is not received within a preset time period.
In one embodiment, the processing module 720 is further configured to: under the condition that a second voice signal of a photographer is received within a preset time period, target information is acquired, and the target information comprises at least one of the following items: a third sound intensity of the second speech signal, a keyword in the second speech signal; and determining the first zooming magnification according to the target information and the second intermediate magnification.
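The two-stage magnification computation restated here can be sketched as follows. The linear intensity-to-zoom mapping, the interference discount, and the keyword adjustments are all assumed models for illustration; the text only specifies the order of the steps:

```python
def intermediate_magnification(first_intensity, max_intensity=100.0, max_zoom=5.0):
    """First intermediate magnification: assumed here to scale linearly with intensity."""
    return 1.0 + (max_zoom - 1.0) * min(first_intensity / max_intensity, 1.0)

def correct_for_interference(intermediate, interference_intensity, first_intensity):
    """Second intermediate magnification: discount the zoom by the share of the
    total energy that the interference signal accounts for (an assumed model)."""
    ratio = interference_intensity / (first_intensity + interference_intensity)
    return 1.0 + (intermediate - 1.0) * (1.0 - ratio)

def first_zoom(first_intensity, interference_intensity, photographer_keyword=None):
    """With no second voice signal within the preset period, the second intermediate
    magnification is used as-is; a keyword from the photographer nudges it."""
    m2 = correct_for_interference(
        intermediate_magnification(first_intensity), interference_intensity, first_intensity)
    if photographer_keyword == "closer":
        m2 *= 1.2
    elif photographer_keyword == "farther":
        m2 *= 0.8
    return round(m2, 2)

print(first_zoom(80.0, 20.0))            # no second voice signal received
print(first_zoom(80.0, 20.0, "closer"))  # photographer's second voice signal says "closer"
```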
In one embodiment, the processing module 720 is further configured to: determining azimuth information of the sound production object according to the sound source information in the case where the sound source information indicates that the sound production object of the first voice signal is an object outside the shooting preview interface; and outputting prompt information based on the azimuth information, wherein the prompt information is used for indicating the direction of a photographer for rotating the shooting device so as to enable the sound-producing object to be displayed in the shooting preview interface.
In one embodiment, the obtaining module 710 is further configured to obtain a third speech signal, where the third speech signal includes M sub-speech signals of M sound-producing objects.
The processing module 720 is further configured to determine at least one sub voice signal of the M sub voice signals as a first voice signal according to the priority of each sub voice signal.
Wherein M is an integer greater than or equal to 2.
In one embodiment, the obtaining module 710 is further configured to obtain a fourth speech signal, where the fourth speech signal includes a sub-speech signal of at least one sound object; and acquiring a sub voice signal of a target voice production object in the fourth voice signal.
The processing module 720 is further configured to determine the sub-speech signal of the target sound-generating object as the first speech signal.
Wherein, the target sound production object meets the preset conditions, and the preset conditions comprise: the target sound-producing object is located in a preset area in the shooting preview interface, or the object characteristics of the target sound-producing object are matched with the preset object characteristics.
The shooting device in the embodiment of the present application may be a standalone device, or may be a component, an integrated circuit, or a chip in a terminal. The device can be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a Personal Digital Assistant (PDA), and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, or a self-service machine; the embodiments of the present application are not particularly limited in this respect.
The photographing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present application are not specifically limited in this respect.
The shooting device provided by the embodiment of the application can realize each process realized by the method embodiment, and is not repeated here for avoiding repetition.
Corresponding to the foregoing embodiments, optionally, as shown in fig. 8, an electronic device 800 is further provided in this embodiment of the present application, and includes a processor 801, a memory 802, and a program or an instruction stored in the memory 802 and capable of running on the processor 801, where the program or the instruction is executed by the processor 801 to implement each process of the foregoing shooting method embodiment, and can achieve the same technical effect, and no further description is provided here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 9 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 900 includes, but is not limited to: a radio frequency unit 901, a network module 902, an audio output unit 903, an input unit 904, a sensor 905, a display unit 906, a user input unit 907, an interface unit 908, a memory 909, and a processor 910.
Those skilled in the art will appreciate that the electronic device 900 may further include a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 910 through a power management system, which manages charging, discharging, and power consumption. The electronic device structure shown in fig. 9 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange the components differently, which is not described in detail here.
The processor 910 is configured to, in a case that the display unit 906 displays a shooting preview interface, acquire voice information of a first voice signal, where the voice information includes a first sound intensity and sound source information; in a case that the sound source information indicates that a sound-emitting object of the first voice signal is an object displayed in the shooting preview interface, perform zoom processing on a preview image in the shooting preview interface according to a first zoom magnification; wherein the first zoom magnification is associated with the first sound intensity.
In one embodiment, the processor 910 is further configured to determine a first preset magnification corresponding to the first sound intensity as a first zoom magnification if the first sound intensity is less than or equal to a preset intensity threshold; determining a second preset multiplying power corresponding to the first sound intensity as a first zooming multiplying power under the condition that the first sound intensity is larger than the preset intensity threshold value; wherein the first preset multiplying power is larger than the second preset multiplying power.
In one embodiment, the processor 910 is further configured to determine a first intermediate magnification according to a first sound intensity of the first speech signal; acquiring second sound intensity of an interference signal in the first voice signal; determining a second intermediate multiplying power according to the second sound intensity and the first intermediate multiplying power; in a case where a second voice signal of the photographer is not received through the user input unit 907 within a preset time period, the second intermediate magnification is determined as the first zoom magnification.
In one embodiment, the processor 910 is further configured to obtain target information in a case where the second voice signal of the photographer is received through the user input unit 907 within a preset time period, where the target information includes at least one of: a third sound intensity of the second voice signal, a keyword in the second voice signal; and determine the first zoom magnification according to the target information and the second intermediate magnification.
In one embodiment, the processor 910 is further configured to determine, according to the sound source information, azimuth information of a sound emission object of the first voice signal in a case where the sound source information indicates that the sound emission object is an object outside the shooting preview interface; based on the orientation information, prompt information for instructing a photographer to turn the direction of the photographing apparatus so that the sound-emitting object is displayed in the photographing preview interface is output through the display unit 906.
In one embodiment, the processor 910 is further configured to obtain a third speech signal, where the third speech signal includes M sub-speech signals of M sound-producing objects; determining at least one sub voice signal of the M sub voice signals as a first voice signal according to the priority of each sub voice signal; wherein M is an integer greater than or equal to 2.
In one embodiment, the processor 910 is further configured to obtain a fourth speech signal, where the fourth speech signal includes a sub-speech signal of at least one sound object; acquiring a sub voice signal of a target voice-emitting object in the fourth voice signal; determining a sub voice signal of a target voice-emitting object as a first voice signal; wherein, the target sound production object meets the preset conditions, and the preset conditions comprise: the target sound-producing object is located in a preset area in the shooting preview interface, or the object characteristics of the target sound-producing object are matched with the preset object characteristics.
It should be understood that, in the embodiment of the present application, the input unit 904 may include a Graphics Processing Unit (GPU) 9041 and a microphone 9042, and the graphics processing unit 9041 processes image data of still pictures or video obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 906 may include a display panel 9061, which may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 907 includes a touch panel 9071, also referred to as a touch screen, and other input devices 9072. The touch panel 9071 may include two parts: a touch detection device and a touch controller. Other input devices 9072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 909 can be used to store software programs as well as various data, including, but not limited to, application programs and an operating system. The processor 910 may integrate an application processor, which primarily handles the operating system, user interfaces, and applications, and a modem processor, which primarily handles wireless communications. It is to be appreciated that the modem processor may not be integrated into the processor 910.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above shooting method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above shooting method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-level chip, a system chip, a chip system, or a system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A photographing method, characterized by comprising:
under the condition that a shooting preview interface is displayed, acquiring voice information of a first voice signal, wherein the voice information comprises first sound intensity and sound source information;
under the condition that the sound source information indicates that a sound production object of the first voice signal is an object displayed in the shooting preview interface, executing zooming processing on a preview image in the shooting preview interface according to a first zooming magnification;
wherein the first zoom magnification is associated with the first sound intensity.
2. The method according to claim 1, wherein before performing zoom processing on the preview image in the shooting preview interface according to the first zoom magnification, the method further comprises:
determining a first preset magnification corresponding to the first sound intensity as a first zooming magnification when the first sound intensity is less than or equal to a preset intensity threshold;
determining a second preset multiplying power corresponding to the first sound intensity as a first zooming multiplying power under the condition that the first sound intensity is larger than the preset intensity threshold value;
wherein the first preset multiplying power is larger than the second preset multiplying power.
3. The method according to claim 1, wherein before performing zoom processing on the preview image in the shooting preview interface according to the first zoom magnification, the method further comprises:
determining a first intermediate multiplying power according to the first sound intensity of the first voice signal;
acquiring second sound intensity of an interference signal in the first voice signal;
determining a second intermediate multiplying power according to the second sound intensity and the first intermediate multiplying power;
and determining the second intermediate magnification as the first zoom magnification when a second voice signal of the photographer is not received within a preset time period.
4. The method of claim 3, wherein after determining a second intermediate magnification according to the second sound intensity and the first intermediate magnification, further comprising:
under the condition that a second voice signal of a photographer is received within a preset time period, target information is acquired, and the target information comprises at least one of the following items: a third sound intensity of the second speech signal, a keyword in the second speech signal;
and determining the first zooming magnification according to the target information and the second intermediate magnification.
5. The method of claim 1, wherein after obtaining the voice information of the first voice signal, further comprising:
determining azimuth information of the sound production object according to the sound source information in the case where the sound source information indicates that the sound production object of the first voice signal is an object outside the shooting preview interface;
and outputting prompt information based on the azimuth information, wherein the prompt information is used for indicating the direction of a photographer for rotating the shooting device so as to enable the sound-producing object to be displayed in the shooting preview interface.
6. The method according to claim 1, wherein before the acquiring voice information of a first voice signal, the method further comprises:
acquiring a third voice signal, wherein the third voice signal comprises M sub voice signals of M sound-producing objects;
determining at least one of the M sub voice signals as the first voice signal according to a priority of each sub voice signal;
wherein M is an integer greater than or equal to 2.
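Claim 6 selects the first voice signal among M sub voice signals by priority. A minimal sketch, assuming each sub signal carries a numeric `priority` field (the patent does not fix how priority is assigned):

```python
def select_first_voice_signal(sub_signals, top_k=1):
    # Claim 6 applies when there are M >= 2 sub voice signals; at least one
    # of them is chosen as the first voice signal by descending priority.
    if len(sub_signals) < 2:
        raise ValueError("claim 6 applies to M >= 2 sub voice signals")
    ranked = sorted(sub_signals, key=lambda s: s["priority"], reverse=True)
    return ranked[:top_k]
```

With `top_k` greater than one, several high-priority sub signals can jointly serve as the first voice signal, matching the claim's "at least one".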
7. The method according to claim 1, wherein before the acquiring voice information of a first voice signal, the method further comprises:
acquiring a fourth voice signal, wherein the fourth voice signal comprises a sub voice signal of at least one sound-producing object;
acquiring a sub voice signal of a target sound-producing object in the fourth voice signal;
and determining the sub voice signal of the target sound-producing object as the first voice signal;
wherein the target sound-producing object meets a preset condition, and the preset condition comprises: the target sound-producing object is located in a preset area of the shooting preview interface, or an object feature of the target sound-producing object matches a preset object feature.
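Claim 7's preset condition is a disjunction: the sounding object sits in a preset area of the preview, or its features match preset features. A sketch under assumed encodings, with the region as a normalized `(x0, y0, x1, y1)` box and features as a set of tags (neither encoding is from the patent):

```python
def pick_target_sub_signal(sub_signals, preset_region, preset_features):
    # preset_region is (x0, y0, x1, y1) in preview coordinates; preset_features
    # is a set of tags. Both encodings are illustrative assumptions.
    def in_region(pos):
        x, y = pos
        x0, y0, x1, y1 = preset_region
        return x0 <= x <= x1 and y0 <= y <= y1
    for s in sub_signals:
        if in_region(s["position"]) or (s["features"] & preset_features):
            return s  # first sub signal meeting either preset condition
    return None
```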
8. A shooting apparatus, comprising:
an acquisition module, configured to acquire voice information of a first voice signal in a case where a shooting preview interface is displayed, wherein the voice information comprises a first sound intensity and sound source information; and
a processing module, configured to perform zoom processing on a preview image in the shooting preview interface according to a first zoom magnification in a case where the sound source information indicates that a sound-producing object of the first voice signal is an object displayed in the shooting preview interface;
wherein the first zoom magnification is associated with the first sound intensity.
9. The apparatus according to claim 8, wherein the processing module is further configured to:
determine a first preset magnification corresponding to the first sound intensity as the first zoom magnification in a case where the first sound intensity is less than or equal to a preset intensity threshold; and
determine a second preset magnification corresponding to the first sound intensity as the first zoom magnification in a case where the first sound intensity is greater than the preset intensity threshold;
wherein the first preset magnification is greater than the second preset magnification.
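The threshold rule of claim 9 reduces to a single comparison, with the constraint that the below-threshold preset magnification is the larger one, so a quieter (presumably more distant) sound source is zoomed in more. A minimal sketch, with units and concrete values left as assumptions:

```python
def first_zoom_from_intensity(intensity, threshold, first_preset, second_preset):
    # Claim 9 requires the first preset magnification to exceed the second.
    if first_preset <= second_preset:
        raise ValueError("first preset magnification must be greater than the second")
    return first_preset if intensity <= threshold else second_preset
```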
10. An electronic device, comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the shooting method according to any one of claims 1 to 7.
CN202110999017.2A 2021-08-27 2021-08-27 Shooting method and device and electronic equipment Active CN113727021B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110999017.2A CN113727021B (en) 2021-08-27 2021-08-27 Shooting method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN113727021A true CN113727021A (en) 2021-11-30
CN113727021B CN113727021B (en) 2023-07-11

Family

ID=78678765

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110999017.2A Active CN113727021B (en) 2021-08-27 2021-08-27 Shooting method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113727021B (en)


Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007306250A (en) * 2006-05-10 2007-11-22 Ricoh Co Ltd Imaging device, photography-time warning method, and computer-readable recording medium
JP2011120165A (en) * 2009-12-07 2011-06-16 Sanyo Electric Co Ltd Imaging apparatus
CN103780843A (en) * 2014-03-03 2014-05-07 联想(北京)有限公司 Image processing method and electronic device
CN105100635A (en) * 2015-07-23 2015-11-25 深圳乐行天下科技有限公司 Camera apparatus and camera control method
CN105227849A (en) * 2015-10-29 2016-01-06 维沃移动通信有限公司 A kind of method of front-facing camera auto-focusing and electronic equipment
CN106662723A (en) * 2014-07-02 2017-05-10 索尼公司 Zoom control device, zoom control method, and program
CN107847800A (en) * 2015-09-15 2018-03-27 喀普康有限公司 Games system, the control method of games system and non-volatile memory medium
CN108668099A (en) * 2017-03-31 2018-10-16 鸿富锦精密工业(深圳)有限公司 video conference control method and device
CN109640032A (en) * 2018-04-13 2019-04-16 河北德冠隆电子科技有限公司 Based on the more five dimension early warning systems of element overall view monitoring detection of artificial intelligence
CN111464752A (en) * 2020-05-18 2020-07-28 Oppo广东移动通信有限公司 Zoom control method of electronic device and electronic device
CN111641794A (en) * 2020-05-25 2020-09-08 维沃移动通信有限公司 Sound signal acquisition method and electronic equipment


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114629869A (en) * 2022-03-18 2022-06-14 维沃移动通信有限公司 Information generation method and device, electronic equipment and storage medium
CN114629869B (en) * 2022-03-18 2024-04-16 维沃移动通信有限公司 Information generation method, device, electronic equipment and storage medium
CN115550559A (en) * 2022-04-13 2022-12-30 荣耀终端有限公司 Video picture display method, device, equipment and storage medium
CN115550559B (en) * 2022-04-13 2023-07-25 荣耀终端有限公司 Video picture display method, device, equipment and storage medium
CN116055869A (en) * 2022-05-30 2023-05-02 荣耀终端有限公司 Video processing method and terminal
CN116055869B (en) * 2022-05-30 2023-10-20 荣耀终端有限公司 Video processing method and terminal

Also Published As

Publication number Publication date
CN113727021B (en) 2023-07-11

Similar Documents

Publication Publication Date Title
US11030987B2 (en) Method for selecting background music and capturing video, device, terminal apparatus, and medium
US9031847B2 (en) Voice-controlled camera operations
CN113727021B (en) Shooting method and device and electronic equipment
US20110320949A1 (en) Gesture Recognition Apparatus, Gesture Recognition Method and Program
CN111641794B (en) Sound signal acquisition method and electronic equipment
CN111866392B (en) Shooting prompting method and device, storage medium and electronic equipment
CN110572716B (en) Multimedia data playing method, device and storage medium
RU2663709C2 (en) Method and device for data processing
WO2020103353A1 (en) Multi-beam selection method and device
US11222223B2 (en) Collecting fingerprints
CN112954199A (en) Video recording method and device
CN111242303A (en) Network training method and device, and image processing method and device
CN110798327B (en) Message processing method, device and storage medium
CN112637495B (en) Shooting method, shooting device, electronic equipment and readable storage medium
CN111312207B (en) Text-to-audio method, text-to-audio device, computer equipment and storage medium
US20150006173A1 (en) System and Method for Processing a Keyword Identifier
CN113936697A (en) Voice processing method and device for voice processing
CN113301444B (en) Video processing method and device, electronic equipment and storage medium
CN111611414A (en) Vehicle retrieval method, device and storage medium
CN113873165A (en) Photographing method and device and electronic equipment
CN111145723B (en) Method, device, equipment and storage medium for converting audio
CN112002313B (en) Interaction method and device, sound box, electronic equipment and storage medium
CN112151017A (en) Voice processing method, device, system, equipment and storage medium
CN112311652A (en) Message sending method, device, terminal and storage medium
CN111061918A (en) Graph data processing method and device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant