CN108989666A - Image pickup method, device, mobile terminal and computer-readable storage medium - Google Patents
- Publication number
- CN108989666A CN108989666A CN201810673339.6A CN201810673339A CN108989666A CN 108989666 A CN108989666 A CN 108989666A CN 201810673339 A CN201810673339 A CN 201810673339A CN 108989666 A CN108989666 A CN 108989666A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/64—Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Abstract
This application discloses an image pickup method, apparatus, mobile terminal, and computer-readable storage medium. The method comprises: obtaining an optimal shooting position for the current picture based on a convolutional neural network; and, according to the optimal shooting position, obtaining guidance information from the current position of the current picture to the optimal shooting position. The method trains the convolutional neural network on a sample set of commonly used shooting positions and deploys it on the mobile terminal. When the user takes a photograph, the trained network outputs the optimal shooting position corresponding to the current camera picture and, at the same time, guides the user to move the mobile terminal toward that position, improving the user's photographing experience.
Description
Technical field
This application relates to the technical field of mobile terminals, and more particularly to an image pickup method, apparatus, mobile terminal, and computer-readable storage medium.
Background technique
When taking photographs with a mobile phone, people want more than a mechanical record of the scene: they also want the photographs to be aesthetically pleasing, which increases their keepsake value. Existing mobile phone cameras, however, concentrate mainly on improvements such as image stabilization or image rendering, and for an amateur photographer it remains very difficult to achieve professional-looking results.
Summary of the invention
In view of the above problems, the present application proposes an image pickup method, apparatus, mobile terminal, and computer-readable storage medium to solve them.
In a first aspect, an embodiment of the present application provides an image pickup method, the method comprising: obtaining an optimal shooting position for the current picture based on a convolutional neural network; and, according to the optimal shooting position, obtaining guidance information from the current position of the current picture to the optimal shooting position.
In a second aspect, an embodiment of the present application provides an image pickup apparatus, the apparatus comprising: an acquisition module for obtaining the optimal shooting position of the current picture based on a convolutional neural network; and a guidance module for obtaining, according to the optimal shooting position, guidance information from the current position of the current picture to the optimal shooting position.
In a third aspect, an embodiment of the present application provides a mobile terminal comprising a display, a memory, and a processor, the display and the memory being coupled to the processor, and the memory storing instructions that, when executed by the processor, cause the processor to perform the method of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing program code executable by a processor, the program code causing the processor to perform the method of the first aspect.
Compared with the prior art, the image pickup method, apparatus, mobile terminal, and computer-readable storage medium provided by the embodiments of the present application first obtain the optimal shooting position of the current picture based on a convolutional neural network, and then, according to that position, obtain guidance information from the current position of the current picture to the optimal shooting position. Unlike the prior art, the embodiments of the present application train the convolutional neural network on a sample set of commonly used shooting positions and deploy it on the mobile terminal. When the user takes a photograph, the trained network outputs the optimal shooting position corresponding to the current camera picture while guiding the user to move the mobile terminal toward that position, which effectively improves the quality of the photographs and the user's photographing experience.
These and other aspects of the application will become more apparent from the following description.
Detailed description of the invention
To explain the technical solutions in the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. The drawings described below are obviously only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 shows a schematic flowchart of the image pickup method provided by the first embodiment of the present application;
Fig. 2 shows a schematic flowchart of the image pickup method provided by the second embodiment of the present application;
Fig. 3 shows a module block diagram of the image pickup apparatus provided by the third embodiment of the present application;
Fig. 4 shows a module block diagram of the image pickup apparatus provided by the fourth embodiment of the present application;
Fig. 5 shows a structural block diagram of a mobile terminal provided by an embodiment of the present application;
Fig. 6 shows a block diagram of a mobile terminal for executing the image pickup method according to an embodiment of the present application.
Specific embodiment
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
With the continuous development of machine learning and deep learning, methods that use machine-learning models to recognize and classify image scenes have been widely applied in many fields.
When taking photographs with a mobile phone, people are not content with mechanical picture records; they also want the photographs to be aesthetically pleasing, which increases their keepsake value. Various photographing applications for mobile terminals are now widely used; through stabilization in shooting mode and rendering of the captured image, they noticeably improve photo quality and are popular among photography enthusiasts.
However, after collecting users' photographing needs and analyzing existing photographing applications, the inventors found that existing mobile phone cameras are optimized for stabilization or for enhanced shooting effects. Although these optimizations improve photo quality to some extent, for the large group of amateur photographers it remains very difficult to take professional photographs with good composition or color, because such skills require long practice with shooting angle and position before they can be mastered.
In the course of research, the inventors studied how users might be guided while taking photographs, and how guidance information could be fed back in real time according to the camera's current shooting picture, and on that basis propose the image pickup method, apparatus, mobile terminal, and computer-readable storage medium of the embodiments of the present application.
The image pickup method, apparatus, mobile terminal, and storage medium provided by the embodiments of the present application are described in detail below through specific embodiments.
First embodiment
Referring to Fig. 1, Fig. 1 shows a schematic flowchart of the image pickup method provided by the first embodiment of the present application. The method first trains a convolutional neural network on a sample set of commonly used shooting positions and deploys it on a mobile terminal. When the user takes a photograph with the handheld terminal, the trained network outputs the optimal shooting position corresponding to the current camera picture, guidance information from the current position of the picture to the optimal shooting position is then output, and the user is finally guided to move the terminal toward that position, which effectively improves photo quality and the user's photographing experience. In a particular embodiment, the method can be applied to the image pickup apparatus 300 shown in Fig. 3 and to the mobile terminal 100 (Fig. 5) configured with the image pickup apparatus 300, and serves to improve the user's experience when taking photographs. The flow shown in Fig. 1 is explained in detail below, taking a mobile phone as an example. The image pickup method may specifically comprise the following steps:
Step S101: obtain the optimal shooting position of the current picture based on a convolutional neural network.
In the embodiments of the present application, the current picture may be the image acquired through the lens or another photoelectric component of the phone camera during shooting; it may be a two-dimensional image or a three-dimensional one.
The optimal shooting position may be the shooting position that, according to the user's personal taste, the standards of photographic art, or common popular perception, gives the current picture aesthetic qualities. For example, it may be the shooting position with the best composition characteristics, or the one with the best color characteristics, where composition can be understood as the distribution of features such as lines, target edges, and positions in the image, and color as the distribution of features such as brightness, chroma, and contrast. It can be understood that when the mobile phone is located at the optimal shooting position, the photograph taken has artistic quality, which can be quantified through the user's personal taste, photographic art, or common popular perception. It should be noted that the optimal shooting position may comprise the spatial coordinates and angle of the mobile terminal (camera) at which the best current picture can be taken.
In the present embodiment, before step S101 is executed, multiple classes of images with different aesthetic characteristics (shooting positions) may be collected in advance as a sample set, for example images of several different composition types, or images of several different color topology classes; a convolutional neural network such as ResNet is then trained on this sample set; finally, the trained convolutional neural network is deployed on the mobile terminal.
In the present embodiment, the current picture captured by the camera is input to the trained convolutional neural network, which outputs the optimal shooting position adapted to that picture. For example, suppose an image is taken with the phone camera in which two subjects, sky and grassland, appear, but the horizon between them is tilted, while the training set of the convolutional neural network contains only compositions with level lines and no tilted-line compositions. Inputting the tilted-horizon picture to the network then yields, as the optimal shooting position, the composition with a level horizon closest to the current picture, namely the position of the mobile terminal (including spatial coordinates and angle) at which a level-horizon picture can be taken.
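The horizon example above can be made concrete with a minimal sketch. The helper names, the two detected horizon points, and the pixel coordinates are illustrative assumptions, not the patent's disclosed implementation; image y coordinates are taken to grow downward:

```python
import math

def horizon_tilt_deg(p1, p2):
    """Tilt of the detected horizon line through image points p1, p2 = (x, y)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return math.degrees(math.atan2(dy, dx))

def roll_correction_deg(p1, p2):
    """Camera roll needed so the horizon in the next picture is level (0 deg)."""
    return -horizon_tilt_deg(p1, p2)

# Horizon detected dropping 5 px over 100 px of width -> ~2.86 deg tilt.
tilt = horizon_tilt_deg((0, 50), (100, 55))
print(round(tilt, 2))                                      # 2.86
print(round(roll_correction_deg((0, 50), (100, 55)), 2))   # -2.86
```

The roll correction is one component of the (coordinates, angle) pair the embodiment calls the optimal shooting position.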
Step S102: according to the optimal shooting position, obtain guidance information from the current position of the current picture to the optimal shooting position.
In the present embodiment, after the optimal shooting position is obtained, guidance information from the current position of the current picture to the optimal shooting position can be generated.
When the mobile terminal is operated manually by the user, the guidance information from the current position of the current picture to the optimal shooting position may be prompt information that the terminal conveys to the user visually, audibly, or haptically, guiding the user to move the position (including spatial coordinates and angle) of the terminal. For example, visually, an arrow or curve from the current position to the optimal shooting position may be displayed on the terminal's screen, or a thumbnail simulating the camera view at the optimal shooting position may be shown in part of the display, guiding the user to move the phone in the prompted direction. Audibly, the terminal's loudspeaker may continuously play a tone whose frequency changes with the distance between the current position and the optimal shooting position, indicating the direction of that position. Haptically, vibration may signal that the user is moving in the wrong direction, guiding the user to move the phone correctly.
When the mobile terminal takes pictures automatically under mechanical control (for example, a robot photographing through a camera), the guidance information from the current position of the current picture to the optimal shooting position may be a control signal sent to the terminal's positioning device (for example, the robot's camera acquires the image, the CPU computes the guidance information, and the guidance information is then sent to the MCU that moves the robot's camera in three-dimensional space). The control signal can drive the terminal to the spatial coordinates of the optimal shooting position and rotate it to the angle that the position requires.
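One hedged way to picture the control-signal path described above is to turn the guidance information into a bounded per-tick move command for the positioning device; the `MoveCommand` type, the axis layout `(x, y, z, roll)`, and the step limit are all assumptions for illustration, not part of the disclosure:

```python
from dataclasses import dataclass

@dataclass
class MoveCommand:
    dx: float    # metres to move along each axis this tick
    dy: float
    dz: float
    droll: float # degrees of camera roll

def guidance_to_command(current, target, max_step=0.05):
    """Clamp each translation component to the rig's per-tick step limit."""
    clamp = lambda v: max(-max_step, min(max_step, v))
    return MoveCommand(
        dx=clamp(target[0] - current[0]),
        dy=clamp(target[1] - current[1]),
        dz=clamp(target[2] - current[2]),
        droll=target[3] - current[3],
    )

cmd = guidance_to_command((0, 0, 1.5, 0), (0.2, -0.01, 1.5, 3.0))
print(cmd)  # MoveCommand(dx=0.05, dy=-0.01, dz=0.0, droll=3.0)
```

Issuing such a command repeatedly walks the camera toward the optimal shooting position while respecting the hardware's motion limits.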
In the image pickup method provided by the first embodiment of the present application, shooting positions with artistic quality are collected as a sample set, the convolutional neural network is trained on it and deployed on the mobile terminal, and when the user takes a photograph the trained network outputs the optimal shooting position corresponding to the current camera picture and guides the user to move the terminal toward it, which effectively improves photo quality and the user's photographing experience.
Second embodiment
Referring to Fig. 2, Fig. 2 shows a schematic flowchart of the image pickup method provided by the second embodiment of the present application. The flow shown in Fig. 2 is explained in detail below, taking a mobile phone as an example. The image pickup method may specifically comprise the following steps:
Step S201: obtain, based on a convolutional neural network, the similarity probability of the composition of the current picture relative to preset compositions.
In the present embodiment, the current picture may be the image obtained through photoelectric components such as the phone camera in the image acquisition mode during shooting, and its composition may be the distribution of the lines of the subject in the image. The composition of the current picture can be obtained in various ways. For example, the gray value of each pixel in the current picture can be acquired; if the gray value of a pixel is significantly lower than that of its neighbors, the pixel may lie on a line in the picture, so the composition of the current picture can be determined from the gray-level distribution of the whole image. Under this gray-level analysis, the similarity probability of the composition of the current picture relative to a preset composition is the similarity of the gray-level distribution of the current picture to that of the preset composition image.
In the present embodiment, a preset composition may be one of the common compositions used as the training set of the convolutional neural network, for example an upper-thirds, lower-thirds, left-thirds, or right-thirds composition, another rule-of-thirds composition, a diagonal composition, a leading-line composition, an S-curve composition, a triangle composition, or a composition of another type. It can be understood that the more types of preset composition the training set contains, the finer the network's judgment of the composition best suited to the current picture.
In the present embodiment, the composition of the current picture can be compared with each preset composition in turn to obtain its similarity probability with respect to each of them.
Step S202: judge whether the similarity probability is greater than a preset threshold.
If the similarity probability is greater than the preset threshold, step S203 is executed; otherwise, the flow returns to step S201.
In the present embodiment, the preset threshold is the criterion for judging the similarity probability between the composition of the current picture and a preset composition, and it should be set according to the number of preset composition types. For example, when there are few preset compositions, the similarity between the preset composition types is inherently low, and the threshold can be set to 60%; when there are many, the similarity between the types is higher and a finer distinction is needed to match the current picture correctly, so the threshold can be set to 80%. In particular, when the similarity probability between the composition of the current picture and some preset composition is 100%, the composition of the current picture can be considered to meet the standard of that preset composition; in this case the user is simply instructed to take the picture, obtaining a photograph with that preset composition.
Step S203: output the type of the preset composition as the composition type of the current picture.
In the present embodiment, when the similarity probability is greater than the preset threshold, the composition type of the current picture can be considered to match the type of the preset composition, which is then output as the composition type of the current picture.
In the present embodiment, if the similarity probabilities of several preset compositions exceed the threshold at the same time, the preset composition with the greatest similarity probability can be output as the composition type of the current picture. If several preset compositions have identical similarity probabilities that all exceed the threshold, one of the qualifying preset compositions can be chosen at random as the composition type of the current picture and the corresponding optimal shooting position generated, or all of the equally probable preset compositions can be output and screened in steps S204 to S205.
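The threshold comparison and tie handling of steps S202 and S203 can be sketched as follows; the composition names and probability values are illustrative assumptions, and dictionary ordering decides among exact ties here only for determinism:

```python
def match_composition(similarities, threshold):
    """Select the preset composition(s) whose similarity exceeds the threshold.

    similarities: dict mapping preset-composition name -> similarity in [0, 1].
    Returns the best-matching name, a list of names on an exact tie
    (for further screening), or None when nothing clears the threshold
    (keep sampling frames, as when the flow returns to S201).
    """
    above = {k: v for k, v in similarities.items() if v > threshold}
    if not above:
        return None
    best = max(above.values())
    winners = [k for k, v in above.items() if v == best]
    return winners[0] if len(winners) == 1 else winners

print(match_composition({"rule_of_thirds": 0.85, "diagonal": 0.62}, 0.8))
# -> rule_of_thirds
print(match_composition({"s_curve": 0.9, "leading_line": 0.9}, 0.8))
# -> ['s_curve', 'leading_line']
```

A returned list corresponds to the case where several equally probable preset compositions are passed on to the preference screening.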
Step S204: judge whether a preference type exists among the composition types.
In the present embodiment, the preference type may be a composition type that the user has selected among the preset compositions as a personal favorite, or a composition type of user preference that the mobile terminal derives from the user's habits after analyzing the user's photographing history.
In the present embodiment, if a preference type exists among the composition types output in step S203, step S205 is executed. If none exists, the currently output composition types do not match the user's taste; the flow can return to step S201 to wait for a new current picture, or step S206 can be executed. The absence of a preference type may also mean that neither the user nor the terminal has set one; in that case step S206 can be executed regardless of whether step S203 output one composition type or several.
Step S205: generate the optimal shooting position of the current picture according to the preference type.
In the present embodiment, if step S203 output several equally probable preset compositions as the composition of the current picture and several of them are preference types, one of those preference types can be chosen at random and the corresponding optimal shooting position of the current picture generated; if only one of them is a preference type, the optimal shooting position of the current picture is generated from that preference type first.
Step S206: generate the optimal shooting position of the current picture according to any one of the composition types.
If there is only one composition type, the optimal shooting position of the current picture is generated from that unique type and step S207 is executed; if there are several, one of them is chosen at random to generate the optimal shooting position, and step S207 is executed.
Step S207: obtain the position coordinates of the optimal shooting position and of the current position.
In the present embodiment, the position coordinates of the current position can be obtained from the position of each pixel of the current picture in a spatial coordinate system; the position coordinates of the optimal shooting position can be calculated from the displacement, in that coordinate system, between a pixel of the preset composition corresponding to the optimal shooting position and the corresponding pixel of the current picture.
Step S208: based on the position coordinates, obtain the vector from the current position to the optimal shooting position.
In the present embodiment, after the coordinates of each pixel at the optimal shooting position and at the current position are obtained, the vector from the current position to the optimal shooting position is derived from those coordinates.
Step S209: based on the vector, generate the guidance information from the current position to the optimal shooting position.
In the present embodiment, the direction of the vector from the current position to the optimal shooting position is the direction from the current position to the optimal shooting position, and its length is the distance between them.
In the present embodiment, the following steps may also be performed before step S204.
Step S210: obtain the composition type that occurs most often in the historical shooting data.
In the present embodiment, the historical shooting data may be the composition types of the photographs recorded by the terminal camera at each shot, or the composition types of the photographs stored in the local album or in the cloud.
Step S211: judge whether the composition type that occurs most often in the historical shooting data belongs to the preset composition types.
If it belongs to the preset composition types, steps S212 to S213 are executed; if it does not, step S214 is executed.
Step S212: generate user preference information based on the composition type that occurs most often in the historical shooting data.
The composition type that occurs most often in the historical shooting data can be considered the composition type the user prefers, and user preference information can be generated from it.
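The frequency count behind steps S210 to S212 can be sketched with a standard counter; the composition names and the history list are illustrative assumptions:

```python
from collections import Counter

PRESET_COMPOSITIONS = {"rule_of_thirds", "diagonal", "s_curve", "triangle"}

def preferred_composition(history):
    """Most frequent composition type in the user's shooting history."""
    most_common, _count = Counter(history).most_common(1)[0]
    return most_common

history = ["rule_of_thirds", "diagonal", "rule_of_thirds", "s_curve"]
pref = preferred_composition(history)
print(pref)                          # rule_of_thirds
print(pref in PRESET_COMPOSITIONS)   # True -> can be set as the preference type
```

When the most frequent type is not among the presets, the embodiment instead feeds it back into the network's training sample set in step S214.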
Step S213: based on the user preference information, set a preference type among the preset composition types.
In the present embodiment, after the preference type is set in step S213, step S204 can be carried out.
Step S214: train the convolutional neural network using, as a sample set, the composition type that occurs most often in the historical shooting data.
Relative to the method of the first embodiment, the image pickup method provided by the second embodiment of the present application can set a preference type according to the user's taste or habits and automatically output the composition type that matches them; it can also derive new composition types from the user's habits and use them for extended training of the convolutional neural network, making the scheme more intelligent and user-friendly.
3rd embodiment
Referring to Fig. 3, Fig. 3 shows a module block diagram of the image pickup apparatus 300 provided by the third embodiment of the present application. As illustrated in Fig. 3, the image pickup apparatus 300 comprises an acquisition module 310 and a guidance module 320, wherein:
the acquisition module 310 is configured to obtain the optimal shooting position of the current picture based on a convolutional neural network; and
the guidance module 320 is configured to obtain, according to the optimal shooting position, the guidance information from the current position of the current picture to the optimal shooting position.
When the user takes a photograph, the image pickup apparatus provided by the third embodiment of the present application uses the trained convolutional neural network to output the optimal shooting position corresponding to the current camera picture while guiding the user to move the mobile terminal toward that position, which effectively improves photo quality and the user's photographing experience.
Fourth embodiment
Referring to Fig. 4, Fig. 4 shows a module block diagram of the image pickup apparatus 400 provided by the fourth embodiment of the present application. As illustrated in Fig. 4, the image pickup apparatus 400 comprises an acquisition module 410, a guidance module 420, a preference module 430, a setting module 440, a judgment module 450, and a training module 460, wherein:
the acquisition module 410 is configured to obtain the optimal shooting position of the current picture based on a convolutional neural network. Further, the acquisition module 410 comprises an acquiring unit 411 and a generating unit 412, wherein:
the acquiring unit 411 is configured to obtain the composition type of the current picture based on the convolutional neural network. Further, the acquiring unit 411 comprises a probability subunit, a threshold subunit, and a composition subunit, wherein:
the probability subunit is configured to obtain, based on the convolutional neural network, the similarity probability of the composition of the current picture relative to the preset compositions;
the threshold subunit is configured to judge whether the similarity probability is greater than the preset threshold; and
the composition subunit is configured to output the type of the preset composition as the composition type of the current picture when the similarity probability is greater than the preset threshold.
Generation unit 412, for generating the optimum photographing position of the current picture according to the composition type.Into one
Step, the generation unit 412 includes: preference subelement and location subunit, in which:
Preference subelement, for judging in the composition type with the presence or absence of preference type;
Location subunit, for, there are when preference type, according to the preference type, generating institute in the composition type
State the optimum photographing position of current picture.
Guiding module 420, for obtaining, according to the optimum photographing position, guidance information from the current location of the current picture to the optimum photographing position. Further, the guiding module 420 includes a coordinate unit 421, a vector unit 422 and a guidance unit 423, in which:
Coordinate unit 421, for obtaining the position coordinates of the optimum photographing position and the current location.
Vector unit 422, for obtaining, based on the position coordinates, a vector from the current location to the optimum photographing position.
Guidance unit 423, for generating, based on the vector, the guidance information from the current location to the optimum photographing position.
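The coordinate-vector-guidance pipeline of the guiding module can be sketched as below. This is a hedged illustration assuming screen-space (x, y) coordinates with y growing downward; the coordinate convention and the hint wording are not specified in the application.

```python
def guidance_from_positions(current, optimum):
    """Derive the guidance vector and a human-readable hint from the
    position coordinates of the current and optimum positions, both
    given as (x, y) pairs (assumed screen-space, y pointing down)."""
    dx = optimum[0] - current[0]
    dy = optimum[1] - current[1]
    hints = []
    if dx > 0:
        hints.append("move right")
    elif dx < 0:
        hints.append("move left")
    if dy > 0:
        hints.append("move down")
    elif dy < 0:
        hints.append("move up")
    return (dx, dy), " and ".join(hints) or "hold position"
```

When the two positions coincide, the vector is zero and the hint degrades gracefully to "hold position".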
Preference module 430, for obtaining user preference information. Further, the preference module 430 includes a history unit 431 and a counting unit 432, in which:
History unit 431, for obtaining the composition type that occurs most frequently in the history photographed data.
Counting unit 432, for generating the user preference information based on the composition type that occurs most frequently in the history photographed data.
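The history and counting units amount to a frequency count over past composition labels. A minimal sketch, assuming the history photographed data is available as a plain list of composition-type labels (a representation the application does not specify):

```python
from collections import Counter

def preferred_composition(history):
    """Return the composition type that occurs most frequently in the
    user's shooting history (a list of composition-type labels), or
    None when no history is available."""
    if not history:
        return None
    return Counter(history).most_common(1)[0][0]
```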
Setting module 440, for setting the preference type in the preset composition types based on the user preference information.
Judgment module 450, for judging whether the composition type that occurs most frequently in the history photographed data belongs to the preset composition types.
Training module 460, for training the convolutional neural network by using the most frequent composition type in the history photographed data as a sample set when that composition type does not belong to the preset composition types.
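The judgment and training modules together decide when expansion training is needed. The sketch below assumes the history is a list of (label, frame) pairs and that the preset composition set is known in advance; all names are illustrative, not part of the application.

```python
from collections import Counter

# Illustrative preset set; the application does not enumerate its presets.
PRESET_COMPOSITIONS = {"rule_of_thirds", "diagonal", "symmetric"}

def select_training_samples(history):
    """Find the most frequent composition label in the history. If it
    falls outside the preset set, collect its frames as a sample set
    for expansion training of the CNN; otherwise return no samples."""
    labels = [label for label, _frame in history]
    top_label = Counter(labels).most_common(1)[0][0]
    if top_label in PRESET_COMPOSITIONS:
        return top_label, []  # already covered by the presets
    samples = [frame for label, frame in history if label == top_label]
    return top_label, samples
```

A non-empty sample set would then be fed to whatever training routine the deployed network uses.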
The photographing apparatus provided by the third and fourth embodiments of the application can set a preference type according to the user's taste or usage habits and automatically select and output the composition type that matches the user's preference or habit; it can also derive newly added composition types from the user's usage habits and perform expansion training on the convolutional neural network, making the solution more intelligent and user-friendly.
Fifth embodiment
The fifth embodiment of the application provides a mobile terminal comprising a display, a memory and a processor, the display and the memory being coupled to the processor, and the memory storing instructions which, when executed by the processor, cause the processor to:
obtain the optimum photographing position of a current picture based on a convolutional neural network; and
obtain, according to the optimum photographing position, guidance information from the current location of the current picture to the optimum photographing position.
Sixth embodiment
The sixth embodiment of the application provides a computer-readable storage medium storing program code executable by a processor, the program code causing the processor to:
obtain the optimum photographing position of a current picture based on a convolutional neural network; and
obtain, according to the optimum photographing position, guidance information from the current location of the current picture to the optimum photographing position.
In conclusion, with the image pickup method, apparatus, mobile terminal and computer-readable storage medium provided by the application, the optimum photographing position of the current picture is first obtained based on a convolutional neural network, and then, according to the optimum photographing position, guidance information from the current location of the current picture to the optimum photographing position is obtained. Compared with the prior art, the embodiments of the application collect common photographing positions as a sample set to train the convolutional neural network and deploy it in the mobile terminal; when the user takes a photo, the trained convolutional neural network outputs the optimum photographing position corresponding to the current camera picture, while the user is guided to move the mobile terminal toward the optimum photographing position, which can effectively improve the quality of the photos taken and enhance the user's photographing experience.
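The overall flow summarized above — the CNN proposes an optimum position and a vector guides the user toward it — can be condensed into a few lines. Here `model` stands in for the trained network and is a hypothetical callable returning an (x, y) position; it is not an API from the application.

```python
def assisted_shot(frame, model, current_position):
    """End-to-end sketch of the claimed flow: the model proposes an
    optimum shooting position for the frame, and the returned vector
    steers the user from the current position toward it."""
    optimum = model(frame)
    vector = (optimum[0] - current_position[0],
              optimum[1] - current_position[1])
    return optimum, vector
```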
It should be noted that all the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts between the embodiments, reference may be made to each other. Since the apparatus embodiments are basically similar to the method embodiments, they are described relatively simply; for related details, reference is made to the description of the method embodiments. Any processing manner described in the method embodiments need not be repeated in the apparatus embodiments, as it can be implemented one by one by the corresponding processing modules.
Referring to Fig. 5, based on the above image pickup method and apparatus, an embodiment of the application further provides a mobile terminal 100, which includes an electronic body portion 10; the electronic body portion 10 includes a housing 12 and a main display 120 arranged on the housing 12. The housing 12 can be made of metal, such as steel or aluminium alloy. In this embodiment, the main display 120 generally includes a display panel 111 and may also include a circuit for responding to touch operations on the display panel 111. The display panel 111 can be a liquid crystal display (Liquid Crystal Display, LCD) panel; in some embodiments, the display panel 111 is also a touch screen 109.
Referring to Fig. 6, in an actual application scenario, the mobile terminal 100 can be used as a smart phone terminal, in which case the electronic body portion 10 also typically includes one or more (only one is shown in the figure) processors 102, a memory 104, an RF (Radio Frequency) module 106, an audio circuit 110, a sensor 114, an input module 118 and a power module 122. Those skilled in the art will appreciate that the structure shown in Fig. 5 is merely illustrative and does not limit the structure of the electronic body portion 10. For example, the electronic body portion 10 may also include more or fewer components than shown in Fig. 5, or a configuration different from that shown in Fig. 5.
Those skilled in the art will appreciate that, with respect to the processor 102, all other components are peripherals, and the processor 102 is coupled to these peripherals through a plurality of peripheral interfaces 124. The peripheral interface 124 can be implemented based on the following standards: Universal Asynchronous Receiver/Transmitter (UART), General Purpose Input Output (GPIO), Serial Peripheral Interface (SPI) and Inter-Integrated Circuit (I2C), but is not limited to the above standards. In some examples, the peripheral interface 124 may only include a bus; in other examples, the peripheral interface 124 may also include other elements, such as one or more controllers, for example a display controller for connecting the display panel 111 or a storage controller for connecting the memory. In addition, these controllers can also be detached from the peripheral interface 124 and integrated in the processor 102 or in the corresponding peripheral.
The memory 104 can be used to store software programs and modules, and the processor 102 executes various functional applications and data processing by running the software programs and modules stored in the memory 104. The memory 104 may include a high-speed random access memory and may also include a non-volatile memory, such as one or more magnetic storage devices, flash memories or other non-volatile solid-state memories. In some examples, the memory 104 can further include memories remotely located relative to the processor 102, and these remote memories can be connected to the electronic body portion 10 or the main display 120 through a network. Examples of the above network include, but are not limited to, the internet, intranets, local area networks, mobile communication networks and combinations thereof.
The RF module 106 is used to receive and transmit electromagnetic waves, realizing mutual conversion between electromagnetic waves and electric signals, so as to communicate with a communication network or other equipment. The RF module 106 may include various existing circuit elements for performing these functions, for example an antenna, an RF transceiver, a digital signal processor, an encryption/decryption chip, a subscriber identity module (SIM) card, a memory, and so on. The RF module 106 can communicate with various networks, such as the internet, intranets and wireless networks, or communicate with other equipment through a wireless network. The above wireless network may include a cellular telephone network, a wireless local area network or a metropolitan area network. The above wireless network can use various communication standards, protocols and technologies, including but not limited to the Global System for Mobile Communication (GSM), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Wireless Fidelity (WiFi) (such as the Institute of Electrical and Electronics Engineers standards IEEE 802.11a, IEEE 802.11b, IEEE 802.11g and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Worldwide Interoperability for Microwave Access (Wi-Max), other protocols for mail, instant messaging and short messages, and any other suitable communication protocol, and may even include protocols that have not yet been developed.
The audio circuit 110, earpiece 101, sound jack 103 and microphone 105 jointly provide an audio interface between the user and the electronic body portion 10 or the main display 120. Specifically, the audio circuit 110 receives voice data from the processor 102, converts the voice data into an electric signal, and transmits the electric signal to the earpiece 101. The earpiece 101 converts the electric signal into sound waves audible to the human ear. The audio circuit 110 also receives an electric signal from the microphone 105, converts the electric signal into voice data, and transmits the data to the processor 102 for further processing. The audio data may be obtained from the memory 104 or through the RF module 106. In addition, the audio data may also be stored in the memory 104 or sent through the RF module 106.
The sensor 114 is arranged in the electronic body portion 10 or in the main display 120. Examples of the sensor 114 include, but are not limited to: optical sensors, motion sensors, pressure sensors, gravity acceleration sensors and other sensors.
Specifically, the optical sensor may include an ambient light sensor 114F, and a pressure sensor 114G may further be provided. The pressure sensor 114G can detect pressure generated by pressing on the mobile terminal 100. That is, the pressure sensor 114G detects pressure generated by contact or pressing between the user and the mobile terminal, for example, pressure generated by contact or pressing between the user's ear and the mobile terminal. Therefore, the pressure sensor 114G can be used to determine whether contact or pressing occurs between the user and the mobile terminal 100, as well as the magnitude of the pressure.
Referring to Fig. 5, in the embodiment shown in Fig. 5, the ambient light sensor 114F and the pressure sensor 114G are arranged adjacent to the display panel 111. When an object approaches the main display 120, for example when the electronic body portion 10 is moved to the user's ear, the ambient light sensor 114F can cause the processor 102 to turn off the display output.
As a motion sensor, the gravity acceleration sensor can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications that identify the posture of the mobile terminal 100 (such as horizontal/vertical screen switching, related games and magnetometer pose calibration) and vibration-identification-related functions (such as a pedometer or tapping). In addition, the electronic body portion 10 can also be configured with other sensors such as a gyroscope, a barometer, a hygrometer and a thermometer, which are not repeated here.
In this embodiment, the input module 118 may include the touch screen 109 arranged on the main display 120. The touch screen 109 collects touch operations of the user on or near it (for example, operations performed by the user on the touch screen 109 or near the touch screen 109 using a finger, a stylus or any other suitable object or attachment), and drives the corresponding connected device according to a preset program. Optionally, the touch screen 109 may include a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts the touch information into contact coordinates, sends them to the processor 102, and can receive and execute commands sent by the processor 102. Furthermore, the touch detection function of the touch screen 109 can be realized using various types such as resistive, capacitive, infrared and surface acoustic wave. In addition to the touch screen 109, in other variant embodiments, the input module 118 can also include other input devices, such as keys 107. The keys 107 may include, for example, character keys for inputting characters and control keys for triggering control functions. Examples of the control keys include a "return to main screen" key, a power on/off key, and so on.
The main display 120 is used to display information input by the user, information provided to the user, and various graphical user interfaces of the electronic body portion 10; these graphical user interfaces can be composed of graphics, text, icons, numbers, video and any combination thereof. In one example, the touch screen 109 may be arranged on the display panel 111 so as to form a whole with the display panel 111.
The power module 122 is used to supply power to the processor 102 and the other components. Specifically, the power module 122 may include a power management system, one or more power sources (such as a battery or alternating current), a charging circuit, a power failure detection circuit, an inverter, a power status indicator and any other components related to the generation, management and distribution of electric power in the electronic body portion 10 or the main display 120.
The mobile terminal 100 further includes a locator 119, and the locator 119 is used to determine the physical location of the mobile terminal 100. In this embodiment, the locator 119 realizes the positioning of the mobile terminal 100 by means of a positioning service, which should be understood as a technology or service that obtains the location information of the mobile terminal 100 (for example, longitude and latitude coordinates) through a specific positioning technology and marks the position of the positioned object on an electronic map.
It should be understood that the above mobile terminal 100 is not limited to a smart phone terminal, but refers to a computer device that can be used while moving. Specifically, the mobile terminal 100 refers to a mobile computer device equipped with an intelligent operating system, including but not limited to a smart phone, a smartwatch, a tablet computer, and so on.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example" or "some examples" means that a specific feature, structure, material or characteristic described in conjunction with the embodiment or example is contained in at least one embodiment or example of the application. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described can be combined in any suitable manner in any one or more embodiments or examples. In addition, provided they do not contradict each other, those skilled in the art can combine the features of different embodiments or examples described in this specification.
In addition, the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the quantity of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the application, "plurality" means at least two, such as two or three, unless otherwise specifically defined.
Any process or method description in a flowchart or otherwise described herein can be understood as representing a module, segment or portion of code that includes one or more executable instructions for realizing a specific logical function or step of the process, and the scope of the preferred embodiments of the application includes other implementations in which functions may be executed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order according to the functions involved, which should be understood by those skilled in the art to which the embodiments of the application belong.
The logic and/or steps represented in the flowchart or otherwise described herein can be considered, for example, an ordered list of executable instructions for realizing logical functions, and can be embodied in any computer-readable medium for use by, or in combination with, an instruction execution system, apparatus or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus or device). For the purpose of this specification, a "computer-readable medium" can be any apparatus that can contain, store, communicate, propagate or transmit a program for use by, or in combination with, an instruction execution system, apparatus or device. More specific examples (a non-exhaustive list) of computer-readable media include the following: an electrical connection (mobile terminal) with one or more wirings, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium can even be paper or another suitable medium on which the program can be printed, because the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or, when necessary, processing it in another suitable manner, and then stored in a computer memory.
It should be understood that each part of the application can be realized with hardware, software, firmware or a combination thereof. In the above embodiments, multiple steps or methods can be realized with software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if realized with hardware, as in another embodiment, they can be realized with any of the following technologies well known in the art or a combination thereof: a discrete logic circuit with logic gate circuits for realizing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
Those skilled in the art can understand that all or part of the steps carried by the above embodiment methods can be completed by instructing the relevant hardware through a program; the program can be stored in a computer-readable storage medium, and when executed, the program includes one or a combination of the steps of the method embodiments. In addition, the functional units in the embodiments of the application can be integrated in one processing module, or each unit can exist alone physically, or two or more units can be integrated in one module. The above integrated module can be realized either in the form of hardware or in the form of a software functional module. If the integrated module is realized in the form of a software functional module and sold or used as an independent product, it can also be stored in a computer-readable storage medium.
The storage medium mentioned above can be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the application have been shown and described above, it can be understood that the above embodiments are exemplary and should not be understood as limiting the application; those skilled in the art can change, modify, replace and vary the above embodiments within the scope of the application.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the application, not to limit them; although the application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they can still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements of some of the technical features, and these modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the application.
Claims (11)
1. An image pickup method, wherein the method comprises:
obtaining the optimum photographing position of a current picture based on a convolutional neural network;
obtaining, according to the optimum photographing position, guidance information from the current location of the current picture to the optimum photographing position.
2. The method according to claim 1, wherein obtaining the optimum photographing position of the current picture based on the convolutional neural network comprises:
obtaining the composition type of the current picture based on the convolutional neural network;
generating the optimum photographing position of the current picture according to the composition type.
3. The method according to claim 2, wherein obtaining the composition type of the current picture based on the convolutional neural network comprises:
obtaining, based on the convolutional neural network, a likelihood probability of the composition of the current picture relative to a preset composition;
judging whether the likelihood probability is greater than a preset threshold;
if the likelihood probability is greater than the preset threshold, outputting the type of the preset composition as the composition type of the current picture.
4. The method according to claim 2, wherein generating the optimum photographing position of the current picture according to the composition type comprises:
judging whether a preference type exists in the composition type;
if a preference type exists in the composition type, generating the optimum photographing position of the current picture according to the preference type.
5. The method according to claim 4, wherein the method further comprises:
obtaining user preference information;
setting the preference type in preset composition types based on the user preference information.
6. The method according to claim 5, wherein obtaining user preference information comprises:
obtaining the composition type that occurs most frequently in history photographed data;
generating the user preference information based on the composition type that occurs most frequently in the history photographed data.
7. The method according to claim 6, wherein, after obtaining the composition type that occurs most frequently in the history photographed data, the method further comprises:
judging whether the composition type that occurs most frequently in the history photographed data belongs to the preset composition types;
if the composition type that occurs most frequently in the history photographed data does not belong to the preset composition types, training the convolutional neural network by using that composition type as a sample set.
8. The method according to claim 1, wherein obtaining, according to the optimum photographing position, the guidance information from the current location of the current picture to the optimum photographing position comprises:
obtaining the position coordinates of the optimum photographing position and the current location;
obtaining, based on the position coordinates, a vector from the current location to the optimum photographing position;
generating, based on the vector, the guidance information from the current location to the optimum photographing position.
9. A photographing apparatus, wherein the apparatus comprises:
an acquisition module, for obtaining the optimum photographing position of a current picture based on a convolutional neural network;
a guiding module, for obtaining, according to the optimum photographing position, guidance information from the current location of the current picture to the optimum photographing position.
10. A mobile terminal, comprising a display, a memory and a processor, the display and the memory being coupled to the processor, and the memory storing instructions which, when executed by the processor, cause the processor to execute the method according to claim 1.
11. A computer-readable storage medium storing program code executable by a processor, wherein the program code causes the processor to execute the method according to claim 1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810673339.6A CN108989666A (en) | 2018-06-26 | 2018-06-26 | Image pickup method, device, mobile terminal and computer-readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108989666A true CN108989666A (en) | 2018-12-11 |
Family
ID=64538934
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810673339.6A Pending CN108989666A (en) | 2018-06-26 | 2018-06-26 | Image pickup method, device, mobile terminal and computer-readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108989666A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040189849A1 (en) * | 2003-03-31 | 2004-09-30 | Hofer Gregory V. | Panoramic sequence guide |
CN105933529A (en) * | 2016-04-20 | 2016-09-07 | 努比亚技术有限公司 | Shooting picture display method and device |
CN105991925A (en) * | 2015-03-06 | 2016-10-05 | 联想(北京)有限公司 | Scene composition indicating method and indicating device |
CN107169148A (en) * | 2017-06-21 | 2017-09-15 | 北京百度网讯科技有限公司 | Image search method, device, equipment and storage medium |
CN107592451A (en) * | 2017-08-31 | 2018-01-16 | 努比亚技术有限公司 | A kind of multi-mode auxiliary photo-taking method, apparatus and computer-readable recording medium |
CN107835364A (en) * | 2017-10-30 | 2018-03-23 | 维沃移动通信有限公司 | One kind is taken pictures householder method and mobile terminal |
- 2018-06-26: CN 201810673339.6A filed; published as CN108989666A (en), status: Pending
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111586284A (en) * | 2019-02-19 | 2020-08-25 | 北京小米移动软件有限公司 | Scene recognition prompting method and device |
CN111586284B (en) * | 2019-02-19 | 2021-11-30 | 北京小米移动软件有限公司 | Scene recognition prompting method and device |
CN109871811A (en) * | 2019-02-22 | 2019-06-11 | 中控智慧科技股份有限公司 | A kind of living object detection method based on image, apparatus and system |
CN111182212A (en) * | 2019-12-31 | 2020-05-19 | Oppo广东移动通信有限公司 | Image processing method, image processing apparatus, storage medium, and electronic device |
WO2021135945A1 (en) * | 2019-12-31 | 2021-07-08 | Oppo广东移动通信有限公司 | Image processing method and apparatus, storage medium, and electronic device |
CN111182212B (en) * | 2019-12-31 | 2021-08-24 | Oppo广东移动通信有限公司 | Image processing method, image processing apparatus, storage medium, and electronic device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109086709B (en) | Feature extraction model training method and device and storage medium | |
CN107592466B (en) | Photographing method and mobile terminal | |
CN105809704B (en) | Identify the method and device of image definition | |
CN110059661A (en) | Action identification method, man-machine interaction method, device and storage medium | |
CN110263213B (en) | Video pushing method, device, computer equipment and storage medium | |
CN108989672B (en) | Shooting method and mobile terminal | |
CN109348135A (en) | Photographic method, device, storage medium and terminal device | |
CN108848313B (en) | Multi-person photographing method, terminal and storage medium | |
CN108234875A (en) | Shoot display methods, device, mobile terminal and storage medium | |
CN108833769A (en) | Shoot display methods, device, mobile terminal and storage medium | |
CN105323372A (en) | Mobile terminal and method for controlling the same | |
CN111246106B (en) | Image processing method, electronic device, and computer-readable storage medium | |
CN111432245B (en) | Multimedia information playing control method, device, equipment and storage medium | |
CN110266957B (en) | Image shooting method and mobile terminal | |
CN109672830A (en) | Image processing method, device, electronic equipment and storage medium | |
CN109978996B (en) | Method, device, terminal and storage medium for generating expression three-dimensional model | |
CN109190648A (en) | Simulated environment generation method, device, mobile terminal and computer-readable storage medium | |
CN108881544A (en) | A kind of method taken pictures and mobile terminal | |
CN107357500A (en) | A kind of picture-adjusting method, terminal and storage medium | |
CN108989666A (en) | Image pickup method, device, mobile terminal and computer-readable storage medium | |
CN112581358A (en) | Training method of image processing model, image processing method and device | |
CN110807769B (en) | Image display control method and device | |
CN111182211B (en) | Shooting method, image processing method and electronic equipment | |
CN109639981B (en) | Image shooting method and mobile terminal | |
CN110675473A (en) | Method, device, electronic equipment and medium for generating GIF dynamic graph |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20181211 |