CN110913120A - Image shooting method and device, electronic equipment and storage medium
- Publication number: CN110913120A
- Application number: CN201811084098.8A
- Authority: CN (China)
- Prior art keywords: shooting, image, intelligent, picture, algorithm
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60 - Control of cameras or camera modules
- H04N23/67 - Focus control based on electronic image sensor signals
- H04N23/64 - Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
- H04N23/80 - Camera processing pipelines; Components thereof

(All within H04N - Pictorial communication, e.g. television, under H04 - Electric communication technique, H - Electricity.)
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
Abstract
The present disclosure relates to an image capturing method and apparatus, an electronic device, and a storage medium. The method may include: identifying a viewfinder frame of a camera module; selecting at least part of the frame content from the viewfinder frame through an intelligent shooting algorithm, wherein the intelligent shooting algorithm is trained on image samples that meet a preset shooting level; and generating an intelligent shot image from the selected frame content.
Description
Technical Field
The present disclosure relates to the field of terminal technologies, and in particular, to an image capturing method and apparatus, an electronic device, and a storage medium.
Background
In the related art, electronic devices generally integrate a camera module to meet users' shooting needs. As the related art has developed, camera modules have become increasingly capable, but users' shooting skills vary widely. Improving one's shooting skill involves a high learning threshold and a long learning time, which is impractical for most users.
Disclosure of Invention
The present disclosure provides an image capturing method and apparatus, an electronic device, and a storage medium to address deficiencies in the related art.
According to a first aspect of the embodiments of the present disclosure, there is provided an image capturing method including:
identifying a viewfinder frame of a camera module;
selecting at least part of the frame content from the viewfinder frame through an intelligent shooting algorithm, wherein the intelligent shooting algorithm is trained on image samples that meet a preset shooting level; and
generating an intelligent shot image from the selected frame content.
Optionally, the preset shooting level relates to at least one of the following judgment parameters: composition, color, and light and shadow.
Optionally, selecting at least part of the frame content from the viewfinder frame through the intelligent shooting algorithm includes:
selecting, for each set shooting style, corresponding frame content from the viewfinder frame, so as to generate an intelligent shot image corresponding to each shooting style.
Optionally, generating the intelligent shot image from the selected frame content includes:
zooming the viewfinder frame, so as to generate the intelligent shot image from the zoomed viewfinder frame;
or cropping an initial captured image generated from the viewfinder frame, so as to generate the intelligent shot image from the cropped image.
Optionally,
generating the intelligent shot image from the selected frame content includes: generating the intelligent shot image according to a shooting instruction issued by the local user;
and the method further includes: generating a manually shot image corresponding to the viewfinder frame according to the shooting instruction.
Optionally, the method further includes:
adding a matching decorative element to the intelligent shot image.
Optionally, the method further includes:
determining the local user's preference information for the intelligent shot image according to how the local user handles the intelligent shot image;
and adaptively adjusting the intelligent shooting algorithm according to the preference information.
According to a second aspect of the embodiments of the present disclosure, there is provided an image capturing apparatus including:
an identification unit configured to identify a viewfinder frame of a camera module;
a selecting unit configured to select at least part of the frame content from the viewfinder frame through an intelligent shooting algorithm, the intelligent shooting algorithm being trained on image samples that meet a preset shooting level;
and a first generation unit configured to generate an intelligent shot image from the selected frame content.
Optionally, the preset shooting level relates to at least one of the following judgment parameters: composition, color, and light and shadow.
Optionally, the selecting unit includes:
a selecting subunit configured to select, for each set shooting style, corresponding frame content from the viewfinder frame, so as to generate an intelligent shot image corresponding to each shooting style.
Optionally, the first generating unit includes: a zoom subunit or a crop subunit;
the zooming subunit is configured to zoom the viewfinder frame, so as to generate the intelligent shot image from the zoomed viewfinder frame;
the cropping subunit is configured to crop an initial captured image generated from the viewfinder frame, so as to generate the intelligent shot image from the cropped image.
Optionally,
the first generation unit includes: a generating subunit configured to generate the intelligent shot image according to a shooting instruction issued by the local user;
and the apparatus further includes: a second generation unit configured to generate a manually shot image corresponding to the viewfinder frame according to the shooting instruction.
Optionally, the apparatus further includes:
an adding unit configured to add a matching decorative element to the intelligent shot image.
Optionally, the apparatus further includes:
a determining unit configured to determine the local user's preference information for the intelligent shot image according to how the local user handles the intelligent shot image;
and an adjusting unit configured to adaptively adjust the intelligent shooting algorithm according to the preference information.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method as in any of the above embodiments by executing the executable instructions.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer instructions, wherein the instructions, when executed by a processor, implement the steps of the method as in any one of the above embodiments.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects:
according to the embodiment, the electronic equipment has intelligent aesthetic consciousness and shooting capability through the pre-trained intelligent shooting algorithm, and as long as the user carries out primary framing operation, the intelligent shooting algorithm can automatically generate the intelligent shooting image which accords with the preset shooting level based on the framing picture, so that the threshold of shooting a good picture by a common user is greatly reduced.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart illustrating an image capturing method according to an exemplary embodiment.
FIG. 2 is a schematic diagram illustrating a method of training an AI photography algorithm in accordance with an exemplary embodiment.
FIG. 3 is a flowchart illustrating intelligent photographing based on an AI photographing algorithm, in accordance with an exemplary embodiment.
FIG. 4 is a diagram illustrating a viewing interface in accordance with an exemplary embodiment.
FIG. 5 is a schematic diagram illustrating the presentation of a manually shot photo, in accordance with an exemplary embodiment.
Fig. 6 is a schematic diagram illustrating an AI photograph in accordance with an exemplary embodiment.
Fig. 7-12 are block diagrams illustrating an image capture device according to an exemplary embodiment.
Fig. 13 is a schematic diagram illustrating a configuration of an apparatus for image capture according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "upon", "when", or "in response to determining", depending on the context.
Fig. 1 is a flowchart illustrating an image capturing method according to an exemplary embodiment. The method can be applied to any type of electronic device, such as a mobile phone or a tablet. As shown in Fig. 1, the method may include the following steps:
In step 102, a viewfinder frame of the camera module is identified.
In an embodiment, the electronic device is equipped with a camera module. The camera module may be a rear or front camera module, a monocular or binocular camera module, and an RGB or depth camera module; this disclosure does not limit the module type.
In an embodiment, when the electronic device receives a user's shooting request, it can start the shooting function of the camera module so that the module begins framing, and the electronic device can then identify the corresponding viewfinder frame. In another embodiment, even without receiving a shooting request, the electronic device can start the camera module's shooting function and begin framing to identify a viewfinder frame, so that a candid capture is made while the user is unaware; of course, the electronic device may offer different modes and perform this candid capture only when the user has selected such an unprompted shooting mode.
In step 104, at least part of the frame content is selected from the viewfinder frame through an intelligent shooting algorithm, wherein the intelligent shooting algorithm is trained on image samples that meet a preset shooting level.
In an embodiment, existing images may be assessed in advance for shooting level to obtain image samples that meet a preset shooting level, and the intelligent shooting algorithm may then be trained on those samples. The training may be supervised or unsupervised, which this disclosure does not limit. For example, the intelligent shooting algorithm may be obtained through supervised machine learning by providing positive samples that meet the preset shooting level and negative samples that do not; the algorithm is then used to select at least part of the frame from the scene and generate an intelligent shot image that meets the preset shooting level. The intelligent shot image is generated by the electronic device based on the intelligent shooting algorithm and is distinct from a manually shot image taken by the user. A minimal sketch of such supervised training follows.
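Purely as an illustration, here is a minimal sketch of that supervised training in Python with PyTorch. The backbone choice, the data pipeline, and every name here (build_aesthetic_classifier, train, the loader) are assumptions made for this sketch, not details given by the disclosure.

```python
# Sketch: binary classifier scoring whether an image meets the preset
# shooting level, trained on labeled positive/negative samples.
import torch
import torch.nn as nn
import torchvision.models as models

def build_aesthetic_classifier() -> nn.Module:
    # Small pretrained backbone; replace the head with a single logit.
    net = models.mobilenet_v3_small(weights="DEFAULT")
    net.classifier[-1] = nn.Linear(net.classifier[-1].in_features, 1)
    return net

def train(model: nn.Module, loader, epochs: int = 5) -> nn.Module:
    # loader yields (images, labels): label 1.0 = meets the preset shooting
    # level (positive sample), 0.0 = does not (negative sample).
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for images, labels in loader:
            loss = loss_fn(model(images).squeeze(1), labels.float())
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```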
In an embodiment, the shooting level may be evaluated along one or more dimensions. For example, the preset shooting level may relate to at least one of the following judgment parameters: composition (e.g., the arrangement and combination of point, line, and plane elements), color (e.g., how multiple colors are matched), and light and shadow (e.g., the balance of bright and dark areas). This disclosure does not limit the dimensions.
In an embodiment, the intelligent shooting algorithm can be trained to learn one or more shooting styles and, while ensuring that the results meet the preset shooting level, produce intelligent shot images that each conform to a particular style. The electronic device may default to one style or to all styles, or let the user preset one or more styles; for each set shooting style, the intelligent shooting algorithm then selects the corresponding frame content from the viewfinder frame to generate an intelligent shot image for that style. In other words, for each set style the intelligent shooting algorithm may run through steps 102 to 106 to generate the corresponding intelligent shot image, as sketched below; details are not repeated here.
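A brief sketch of that per-style loop, assuming the style is simply passed through to a style-aware region selector; all function names here are illustrative:

```python
# Sketch: one intelligent shot image per enabled shooting style.
# select_region and capture are assumed helpers (see later sketches),
# not APIs defined by this disclosure.
def generate_style_images(viewfinder, styles, select_region, capture) -> dict:
    results = {}
    for style in styles:                           # e.g. ["minimal", "vivid"]
        region = select_region(viewfinder, style)  # style-specific selection
        results[style] = capture(viewfinder, region)
    return results
```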
In step 106, an intelligent shot image is generated from the selected frame content.
In an embodiment, the intelligent shooting algorithm may zoom the viewfinder frame and then perform a shooting operation on the zoomed frame to generate the intelligent shot image. The zoom may be optical, digital, or a combination of the two, depending on the capability of the camera module and the zoom factor actually required; optical zoom should be preferred and digital zoom avoided as far as possible so that the intelligent shot image keeps a high quality level.
In one embodiment, the intelligent shooting algorithm may instead crop an initial captured image generated from the viewfinder frame to obtain the intelligent shot image. In other words, a shooting operation is first performed on the viewfinder frame to obtain the initial captured image, which is then cropped to the frame content selected by the intelligent shooting algorithm, so that the cropped image contains the selected content and serves as the intelligent shot image. A sketch combining the two generation paths follows.
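The following sketch combines the two paths (zoom the frame, or crop the initial captured image) under the preference for optical zoom stated above. The camera object and its methods are placeholder assumptions, not a real device API.

```python
# Sketch: prefer optical zoom up to its limit; beyond that, capture at the
# maximum optical zoom and crop the initial captured image to the region.
def generate_smart_image(camera, region, frame_width, max_optical_zoom=2.0):
    needed = frame_width / region.width        # zoom factor to fill the frame
    if needed <= max_optical_zoom:
        camera.set_optical_zoom(needed)        # lossless path: optics only
        return camera.capture()
    camera.set_optical_zoom(max_optical_zoom)  # zoom as far as optics allow
    initial = camera.capture()                 # the initial captured image
    return initial.crop(region.scaled_to(initial))  # crop to selected content
```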
In an embodiment, the intelligent shooting algorithm may work alongside the local user's shooting operation and complete its own shooting in step with the user. For example, upon receiving a shooting instruction issued by the local user, the electronic device may generate the intelligent shot image through the intelligent shooting algorithm while also generating a manually shot image of the viewfinder frame according to that instruction, so that the local user can compare the two images and steadily improve their own shooting skill.
In an embodiment, a matching decorative element, such as a virtual photo frame, a text caption, or a cartoon pattern, may be added to the intelligent shot image; this disclosure does not limit the element type. A minimal sketch follows.
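As one concrete, purely illustrative decorative element, this sketch overlays a text caption using the Pillow library; the caption text and position are arbitrary choices.

```python
# Minimal sketch: overlay a text caption as a decorative element (Pillow).
from PIL import Image, ImageDraw

def add_caption(in_path: str, text: str, out_path: str) -> None:
    img = Image.open(in_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    # Place the caption near the bottom-left corner; the default bitmap font
    # keeps this sketch dependency-free.
    draw.text((16, img.height - 32), text, fill=(255, 255, 255))
    img.save(out_path)
```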
In an embodiment, the local user's preference information for the intelligent shot image may be determined from how the user handles the image, and the intelligent shooting algorithm may then be adaptively adjusted according to that preference information so that it increasingly matches the user's taste and habits. For example, if the local user shares the intelligent shot image after viewing it, the user probably likes it; if the user deletes it after viewing it, the user probably does not. As another example, if after viewing both the intelligent shot image and the manually shot image the user deletes the manually shot one, the user likely prefers the intelligent shot image, and conversely if the intelligent shot image is the one deleted. Of course, the user's preference can be determined in many other ways and along other dimensions, which this disclosure does not limit. A sketch of this feedback loop follows.
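A sketch of the feedback loop, assuming handling events are reduced to scalar preference scores; the score table and the fine-tuning strategy are illustrative assumptions, not prescribed by the disclosure.

```python
# Sketch: map how the local user handles an AI photo to a preference score,
# then periodically fine-tune the scoring model on the accumulated labels.
PREFERENCE = {"share": 1.0, "edit": 0.8, "keep": 0.5, "delete": 0.0}

def record_feedback(history: list, photo, action: str) -> None:
    history.append((photo, PREFERENCE.get(action, 0.5)))

def adapt(model, history, fine_tune):
    # Liked photos become extra positive samples, deleted photos extra
    # negative samples, mirroring the supervised training stage above.
    positives = [p for p, s in history if s > 0.5]
    negatives = [p for p, s in history if s == 0.0]
    return fine_tune(model, positives, negatives)
```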
Through the pre-trained intelligent shooting algorithm, the electronic device gains an aesthetic sense and shooting capability of its own. As long as the user performs a preliminary framing operation, the intelligent shooting algorithm can automatically generate, from the viewfinder frame, an intelligent shot image that meets the preset shooting level, greatly lowering the threshold for an ordinary user to take a good photo.
To implement the image capturing scheme of the present disclosure, two processing stages are needed: a first stage in which an AI photographing algorithm (corresponding to the intelligent shooting algorithm of Fig. 1) is trained, and a second stage in which photos are taken through the AI photographing algorithm. The two stages are described in detail below.
FIG. 2 is a schematic diagram illustrating a method of training an AI photography algorithm in accordance with an exemplary embodiment. As shown in fig. 2, the training process for the AI photography algorithm may include the following steps:
In step ①, features are extracted from the photos in a photo library to form a corresponding feature set.
In one embodiment, the photo library may include a collection of photos that come from a network or are provided by a user; the present disclosure does not limit the source of the photos.
In one embodiment, by processing the photos, corresponding features can be generated for the dimensions to be trained. Assuming the AI photographing algorithm should be able to take aesthetically pleasing, artistic photos evaluated along one or more of the dimensions of composition, color, and light and shadow, corresponding features can be extracted from each photo for those dimensions, as in the sketch below.
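Purely to make "one feature per dimension" concrete, the sketch below computes a toy feature for each of composition, color, and light and shadow from an RGB array; real feature engineering would be far richer, and every choice here is an assumption of the sketch.

```python
# Sketch: one toy feature per evaluation dimension, from an HxWx3 uint8 array.
import numpy as np

def extract_features(img: np.ndarray) -> dict:
    gray = img.astype(np.float32).mean(axis=2)
    h, w = gray.shape
    # Composition: distance of the brightest point to the nearest
    # rule-of-thirds intersection, normalised by the frame diagonal.
    cy, cx = np.unravel_index(int(gray.argmax()), gray.shape)
    thirds = [(h / 3, w / 3), (h / 3, 2 * w / 3),
              (2 * h / 3, w / 3), (2 * h / 3, 2 * w / 3)]
    composition = min(np.hypot(cy - ty, cx - tx) for ty, tx in thirds) / np.hypot(h, w)
    # Color: mean per-pixel channel spread as a cheap saturation proxy.
    color = float((img.max(axis=2) - img.min(axis=2)).mean())
    # Light and shadow: global luminance contrast.
    light_shadow = float(gray.std())
    return {"composition": float(composition), "color": color,
            "light_shadow": light_shadow}
```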
In step ②, the features in the feature set are marked to form positive and negative sample features.
In one embodiment, based on the above evaluation criteria, the features in the feature set can be labeled: features that meet the criteria and show a degree of aesthetic and artistic quality are labeled positive sample features, and the rest negative sample features, so that supervised algorithm training can be carried out on the positive and negative sample features.
In step ③, algorithm training is performed on the positive sample features and the negative sample features to obtain an AI photographing algorithm.
In an embodiment, the training may be performed with any type of machine learning algorithm in the related art, such as a neural network algorithm and its derivatives; the disclosure is not limited in this regard.
In an embodiment, the AI photographing algorithm trained in this way can identify a given viewfinder frame, for example recognizing each constituent element it contains, and select at least a portion of the frame such that the combination of constituent elements in that portion meets the above evaluation criteria. An AI photo generated from that portion can therefore have aesthetic and artistic value, distinguishing it from photos casually taken by an ordinary user. A sketch of this selection step follows.
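A sketch of the selection step, assuming a scoring function (for example, the classifier sketched earlier) that returns higher values for candidate frames that better meet the evaluation criteria; the window sizes and stride are arbitrary illustrative choices.

```python
# Sketch: exhaustively score candidate sub-frames of the viewfinder frame and
# return the best-scoring region as (x, y, width, height).
def select_best_region(frame, score, size_fracs=(0.5, 0.7, 0.9), stride_frac=0.1):
    h, w = frame.shape[:2]
    best_region, best_score = None, float("-inf")
    for frac in size_fracs:
        rh, rw = int(h * frac), int(w * frac)
        sy, sx = max(1, int(h * stride_frac)), max(1, int(w * stride_frac))
        for y in range(0, h - rh + 1, sy):
            for x in range(0, w - rw + 1, sx):
                s = score(frame[y:y + rh, x:x + rw])
                if s > best_score:
                    best_region, best_score = (x, y, rw, rh), s
    return best_region
```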
Fig. 3 is a flowchart illustrating intelligent photographing based on the AI photographing algorithm, according to an exemplary embodiment. As shown in Fig. 3, the process, applied to an electronic device such as a mobile phone used by a user, may include the following steps:
in step 302, a camera application is launched.
In an embodiment, the user may launch the camera APP (application) on the phone by tapping it. Alternatively, the user may trigger the shooting function while using another APP, thereby invoking the camera APP.
In step 304, it is determined whether the AI photographing function on the phone is turned on; if so, the process proceeds to step 306.
In one embodiment, the AI photographing function may be provided as an optional feature that the user can turn on or off as needed. For example, as shown in Fig. 4, after the camera APP on the phone 40 is opened, a shooting interface 41 may be shown with an "AI composition" label in its upper-left corner corresponding to the AI photographing function; when "on" is displayed to the right of "AI composition", the function is enabled, and when "off" is displayed, it is disabled.
In step 306, the viewfinder frame is identified.
In an embodiment, the shooting interface 41 shown in Fig. 4 may display a viewfinder frame 42, which is the frame captured by the camera module on the phone 40 and lets the user preview the actual effect of the photo to be taken.
In an embodiment, the AI photographing function may run a recognition operation on the viewfinder frame 42 based on the AI photographing algorithm trained in the embodiment shown in Fig. 2, to determine the frame contents contained in the viewfinder frame 42 and to analyze their colors, light and shadow, compositional relationships, and the like.
In step 308, at least a portion of the frame content is selected from the viewfinder frame based on the recognition result.
In one embodiment, the full content of the viewfinder frame often does not satisfy the evaluation criteria for an excellent photo, and the AI photographing function may, based on the recognition result, select from the viewfinder frame a portion of the content that does satisfy them.
In one embodiment, assume that the partial frame content selected by the AI photographing function lies in region 43 of the viewfinder frame 42. For ease of explanation, region 43 is marked with a rectangle in Fig. 4; in practice, the phone 40 may either display or hide this rectangle, which this disclosure does not limit.
In step 310A, an AI photo is generated in response to the user's triggering of the shooting button.
In step 310B, a manual photo is generated in response to the user's triggering of the shooting button.
In an embodiment, after the user triggers the shooting button, on one hand the viewfinder frame may be turned directly into a corresponding photo by the shooting operation of the related art, that is, a manual photo no different from one produced by the ordinary shooting process; on the other hand, the frame content selected in step 308 may be turned into a corresponding photo by the AI photographing function, that is, the AI photo.
For example, when the user triggers the shooting button in the situation of Fig. 4, the phone 40 may, on one hand, turn the viewfinder frame 42 into a manual photo using the shooting logic of the related art; when the user later opens the album interface 51 shown in Fig. 5, the corresponding photo 52 can be viewed, and its content essentially matches the viewfinder frame 42. On the other hand, the phone 40 can turn region 43 of the viewfinder frame 42 into an AI photo through the shooting logic of the AI photographing function; when the user opens the album interface 51 shown in Fig. 6, the corresponding photo 53 can be viewed, and its content essentially matches region 43 of the viewfinder frame 42.
Since region 43 is only part of the viewfinder frame 42, the camera module on the phone 40 can preferentially use optical zoom when taking photo 53 so as to avoid degrading its quality; when optical zoom alone is insufficient, digital zoom or post-capture cropping can then be applied, finally yielding photo 53.
In an embodiment, the AI photographing function may also post-process the AI photo, for example adding filters or decorative content, which this disclosure does not limit. For instance, as shown in Fig. 6, the AI photographing function may add text 530 or other graphics and patterns to the photo 53. Of course, the user may choose to keep or remove post-processing effects such as the text 530.
In an embodiment, the album interface 51 shown in Fig. 6 offers options such as edit, share, and delete so that the user can process photo 53 accordingly; at the same time, the phone 40 can infer the user's preference for photo 53 from the operation performed. For example, if the user chooses to share photo 53, the user probably likes it, and if the user chooses to delete it, the user probably does not. Accordingly, the phone 40 can adjust the AI photographing algorithm to better match the user's preference: when the user likes photo 53, the adjusted algorithm can continue to produce similar photos in subsequent shooting, and when the user does not, it can avoid producing them.
Corresponding to the foregoing embodiments of the image capturing method, the present disclosure also provides embodiments of an image capturing apparatus.
Fig. 7 is a block diagram illustrating an image capture device according to an exemplary embodiment. Referring to fig. 7, the apparatus includes:
an identification unit 71 configured to identify a viewfinder frame of a camera module;
a selecting unit 72 configured to select at least part of the frame content from the viewfinder frame through an intelligent shooting algorithm, the intelligent shooting algorithm being trained on image samples that meet a preset shooting level;
a first generating unit 73 configured to generate an intelligent shot image from the selected frame content.
Optionally, the preset shooting level relates to at least one of the following judgment parameters: composition, color, and light and shadow.
Fig. 8 is a block diagram of another image capturing apparatus according to an exemplary embodiment. On the basis of the embodiment shown in Fig. 7, the selecting unit 72 includes:
a selecting subunit 721 configured to select, for each set shooting style, corresponding frame content from the viewfinder frame, so as to generate an intelligent shot image corresponding to each shooting style.
Fig. 9 is a block diagram of another image capturing apparatus according to an exemplary embodiment. On the basis of the embodiment shown in Fig. 7, the first generating unit 73 includes: a zooming subunit 731 or a cropping subunit 732;
the zooming subunit 731 is configured to zoom the viewfinder frame, so as to generate the intelligent shot image from the zoomed viewfinder frame;
the cropping subunit 732 is configured to crop an initial captured image generated from the viewfinder frame, so as to generate the intelligent shot image from the cropped image.
It should be noted that the structure of the zooming subunit 731 or the cropping subunit 732 in the apparatus embodiment shown in fig. 9 may also be included in the apparatus embodiment described in fig. 8, and the disclosure is not limited thereto.
Fig. 10 is a block diagram of another image capturing apparatus according to an exemplary embodiment, on the basis of the embodiment shown in Fig. 7, in which:
the first generating unit 73 includes: a generating subunit 733 configured to generate the intelligent shot image according to a shooting instruction issued by the local user;
and the apparatus further includes: a second generating unit 74 configured to generate a manually shot image corresponding to the viewfinder frame according to the shooting instruction.
It should be noted that the structures of the generation sub-unit 733 and the second generation unit 74 in the apparatus embodiment shown in fig. 10 may be included in the apparatus embodiment shown in fig. 8 or fig. 9, and the present disclosure is not limited thereto.
Fig. 11 is a block diagram of another image capturing apparatus according to an exemplary embodiment. On the basis of the embodiment shown in Fig. 7, the apparatus further includes:
an adding unit 75 configured to add a matching decorative element to the intelligent shot image.
It should be noted that the structure of the adding unit 75 in the device embodiment shown in fig. 11 may also be included in the device embodiment described in any one of fig. 8 to 10, and the disclosure is not limited thereto.
Fig. 12 is a block diagram of another image capturing apparatus according to an exemplary embodiment. On the basis of the embodiment shown in Fig. 7, the apparatus further includes:
a determining unit 76 configured to determine the local user's preference information for the intelligent shot image according to how the local user handles the intelligent shot image;
an adjusting unit 77 configured to adaptively adjust the intelligent shooting algorithm according to the preference information.
It should be noted that the structures of the determining unit 76 and the adjusting unit 77 in the device embodiment shown in fig. 12 may also be included in the device embodiment described in any one of fig. 8 to 11, and the disclosure is not limited thereto.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the disclosed solution. One of ordinary skill in the art can understand and implement it without inventive effort.
Correspondingly, the present disclosure also provides an image capturing apparatus, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to implement the image capturing method of any of the above embodiments. For example, the method may include: identifying a viewfinder frame of a camera module; selecting at least part of the frame content from the viewfinder frame through an intelligent shooting algorithm, wherein the intelligent shooting algorithm is trained on image samples that meet a preset shooting level; and generating an intelligent shot image from the selected frame content.
Accordingly, the present disclosure also provides a terminal including a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by one or more processors, the one or more programs including instructions for implementing the image capturing method of any of the above embodiments. For example, the method may include: identifying a viewfinder frame of a camera module; selecting at least part of the frame content from the viewfinder frame through an intelligent shooting algorithm, wherein the intelligent shooting algorithm is trained on image samples that meet a preset shooting level; and generating an intelligent shot image from the selected frame content.
Fig. 13 is a block diagram illustrating an apparatus 1300 for image capture according to an exemplary embodiment. For example, apparatus 1300 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and so forth.
Referring to fig. 13, the apparatus 1300 may include one or more of the following components: a processing component 1302, a memory 1304, a power component 1306, a multimedia component 1308, an audio component 1310, an input/output (I/O) interface 1312, a sensor component 1314, and a communication component 1316.
The processing component 1302 generally controls overall operation of the device 1300, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 1302 may include one or more processors 1320 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 1302 can include one or more modules that facilitate interaction between the processing component 1302 and other components. For example, the processing component 1302 may include a multimedia module to facilitate interaction between the multimedia component 1308 and the processing component 1302.
The memory 1304 is configured to store various types of data to support operations at the apparatus 1300. Examples of such data include instructions for any application or method operating on device 1300, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1304 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The multimedia component 1308 includes a screen between the device 1300 and the user that provides an output interface. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1308 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the apparatus 1300 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 1310 is configured to output and/or input audio signals. For example, the audio component 1310 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 1300 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 1304 or transmitted via the communication component 1316. In some embodiments, the audio component 1310 also includes a speaker for outputting audio signals.
The I/O interface 1312 provides an interface between the processing component 1302 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 1314 includes one or more sensors for providing various aspects of state assessment for the device 1300. For example, the sensor assembly 1314 may detect the open/closed state of the device 1300, the relative positioning of components, such as a display and keypad of the device 1300, the sensor assembly 1314 may also detect a change in the position of the device 1300 or a component of the device 1300, the presence or absence of user contact with the device 1300, orientation or acceleration/deceleration of the device 1300, and a change in the temperature of the device 1300. The sensor assembly 1314 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 1314 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1314 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1316 is configured to facilitate communications between the apparatus 1300 and other devices in a wired or wireless manner. The apparatus 1300 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1316 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communications component 1316 also includes a Near Field Communications (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 1300 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 1304 comprising instructions, executable by the processor 1320 of the apparatus 1300 to perform the method described above is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (16)
1. An image capturing method, characterized by comprising:
identifying a viewfinder frame of a camera module;
selecting at least part of the frame content from the viewfinder frame through an intelligent shooting algorithm, wherein the intelligent shooting algorithm is trained on image samples that meet a preset shooting level; and
generating an intelligent shot image from the selected frame content.
2. The method of claim 1, wherein the preset shooting level relates to at least one of the following judgment parameters: composition, color, and light and shadow.
3. The method of claim 1, wherein selecting at least part of the frame content from the viewfinder frame through the intelligent shooting algorithm comprises:
selecting, for each set shooting style, corresponding frame content from the viewfinder frame, so as to generate an intelligent shot image corresponding to each shooting style.
4. The method of claim 1, wherein generating the intelligent shot image from the selected frame content comprises:
zooming the viewfinder frame, so as to generate the intelligent shot image from the zoomed viewfinder frame;
or cropping an initial captured image generated from the viewfinder frame, so as to generate the intelligent shot image from the cropped image.
5. The method of claim 1, wherein
generating the intelligent shot image from the selected frame content comprises: generating the intelligent shot image according to a shooting instruction issued by the local user;
and the method further comprises: generating a manually shot image corresponding to the viewfinder frame according to the shooting instruction.
6. The method of claim 1, further comprising:
adding a matching decorative element to the intelligent shot image.
7. The method of claim 1, further comprising:
determining the local user's preference information for the intelligent shot image according to how the local user handles the intelligent shot image;
and adaptively adjusting the intelligent shooting algorithm according to the preference information.
8. An image capturing apparatus, characterized by comprising:
an identification unit configured to identify a viewfinder frame of a camera module;
a selecting unit configured to select at least part of the frame content from the viewfinder frame through an intelligent shooting algorithm, the intelligent shooting algorithm being trained on image samples that meet a preset shooting level;
and a first generation unit configured to generate an intelligent shot image from the selected frame content.
9. The apparatus of claim 8, wherein the preset shooting level relates to at least one of the following judgment parameters: composition, color, and light and shadow.
10. The apparatus of claim 8, wherein the selecting unit comprises:
a selecting subunit configured to select, for each set shooting style, corresponding frame content from the viewfinder frame, so as to generate an intelligent shot image corresponding to each shooting style.
11. The apparatus of claim 8, wherein the first generation unit comprises: a zooming subunit or a cropping subunit;
the zooming subunit is configured to zoom the viewfinder frame, so as to generate the intelligent shot image from the zoomed viewfinder frame;
the cropping subunit is configured to crop an initial captured image generated from the viewfinder frame, so as to generate the intelligent shot image from the cropped image.
12. The apparatus of claim 8, wherein
the first generation unit comprises: a generating subunit configured to generate the intelligent shot image according to a shooting instruction issued by the local user;
and the apparatus further comprises: a second generation unit configured to generate a manually shot image corresponding to the viewfinder frame according to the shooting instruction.
13. The apparatus of claim 8, further comprising:
an adding unit configured to add a matching decorative element to the intelligent shot image.
14. The apparatus of claim 8, further comprising:
a determining unit configured to determine the local user's preference information for the intelligent shot image according to how the local user handles the intelligent shot image;
and an adjusting unit configured to adaptively adjust the intelligent shooting algorithm according to the preference information.
15. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method of any one of claims 1-7 by executing the executable instructions.
16. A computer-readable storage medium having stored thereon computer instructions, which when executed by a processor, perform the steps of the method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201811084098.8A | 2018-09-17 | 2018-09-17 | Image shooting method and device, electronic equipment and storage medium
Publications (2)
Publication Number | Publication Date
---|---
CN110913120A | 2020-03-24
CN110913120B | 2021-11-30
Family Applications (1)
Application Number | Title | Priority Date | Filing Date | Status
---|---|---|---|---
CN201811084098.8A | Image shooting method and device, electronic equipment and storage medium | 2018-09-17 | 2018-09-17 | Active

Country of publication: CN (CN110913120B)
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100321536A1 (en) * | 2009-06-23 | 2010-12-23 | Lg Electronics Inc. | Mobile terminal and controlling method of a mobile terminal |
CN104346801A (en) * | 2013-08-02 | 2015-02-11 | 佳能株式会社 | Image-composition evaluating device, information processing device and method thereof |
CN105915801A (en) * | 2016-06-12 | 2016-08-31 | 北京光年无限科技有限公司 | Self-learning method and device capable of improving snap shot effect |
US20160295104A1 (en) * | 2013-12-20 | 2016-10-06 | Lg Electronics Inc. | Mobile terminal and controlling method therefor |
CN107317962A (en) * | 2017-05-12 | 2017-11-03 | 广东网金控股股份有限公司 | A kind of intelligence, which is taken pictures, cuts patterning system and application method |
CN107566725A (en) * | 2017-09-15 | 2018-01-09 | 广东小天才科技有限公司 | Photographing control method and device |
CN107635095A (en) * | 2017-09-20 | 2018-01-26 | 广东欧珀移动通信有限公司 | Shoot method, apparatus, storage medium and the capture apparatus of photo |
CN108174081A (en) * | 2017-11-29 | 2018-06-15 | 维沃移动通信有限公司 | A kind of image pickup method and mobile terminal |
CN108229369A (en) * | 2017-12-28 | 2018-06-29 | 广东欧珀移动通信有限公司 | Image capturing method, device, storage medium and electronic equipment |
CN108513073A (en) * | 2018-04-13 | 2018-09-07 | 朱钢 | A kind of implementation method for the mobile phone photograph function having photographer's composition consciousness |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113873177A (en) * | 2020-06-30 | 2021-12-31 | 北京小米移动软件有限公司 | Multi-view shooting method and device, electronic equipment and storage medium |
CN114697530A (en) * | 2020-12-31 | 2022-07-01 | 华为技术有限公司 | Photographing method and device for intelligent framing recommendation |
CN114697530B (en) * | 2020-12-31 | 2023-11-10 | 华为技术有限公司 | Photographing method and device for intelligent view finding recommendation |
Also Published As
Publication number | Publication date |
---|---|
CN110913120B (en) | 2021-11-30 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |