CN108234870A - Image processing method, device, terminal and storage medium - Google Patents
- Publication number: CN108234870A (application CN201711448081.1A)
- Authority: CN (China)
- Prior art keywords
- image
- target
- terminal
- quality score
- shooting
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H04N23/00 — Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60 — Control of cameras or camera modules
- H04N23/62 — Control of parameters via user interfaces
- H04N23/63 — Control of cameras or camera modules by using electronic viewfinders
- H04N23/80 — Camera processing pipelines; Components thereof
- G06T7/00 — Image analysis
- G06T7/0002 — Inspection of images, e.g. flaw detection
- G06T2207/10016 — Video; Image sequence
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30168 — Image quality inspection
Abstract
This application discloses an image processing method, device, terminal and storage medium, belonging to the field of terminal technology. The method includes: after the shooting function is turned on, continuously shooting a first video; extracting m frames of images from the first video, the m frames being images of a target object shot from different angles; obtaining a quality score for each of the m frames, the quality score indicating the image quality of the frame; and, according to the quality scores of the m frames, selecting and displaying n candidate images from them, a candidate image being an image whose quality score is higher than a predetermined threshold. Because the terminal automatically outputs n high-quality candidate images containing the target object, the user can then pick a satisfactory image from the n displayed candidates, avoiding the need to manually adjust shooting parameters every time an image is taken and greatly reducing the difficulty of shooting high-quality images.
Description
Technical field
The present application relates to the field of terminal technology, and in particular to an image processing method, device, terminal and storage medium.
Background technology
At present, most terminals have a shooting function: when a user wants to take an image, the user opens the shooting function on the terminal and can then shoot.
In the related art, after the shooting function is turned on, the terminal captures the image in the viewfinder; when an image-shooting instruction is received, the captured image is shot to obtain the target image.
Invention content
The embodiments of the present application provide an image processing method, device, terminal and storage medium, which can be used to solve the problem that shooting a high-quality image is operationally difficult. The technical solution is as follows:
According to a first aspect of the embodiments of the present application, an image processing method is provided. The method includes:
after the shooting function is turned on, continuously shooting a first video;
extracting m frames of images from the first video, the m frames being images of a target object shot from different angles, m being a positive integer;
obtaining a quality score for each of the m frames, the quality score indicating the image quality of the frame;
according to the quality scores of the m frames, selecting and displaying n candidate images from the m frames, a candidate image being an image whose quality score is higher than a predetermined threshold, n being a positive integer.
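As a rough sketch of these four steps, assuming frame scoring is available as a black-box function (the patent obtains it from the evaluation model described later), the selection logic might look as follows; the names and types are illustrative, not from the patent.

```python
from typing import Callable, List, Tuple

def select_candidates(
    frames: List[object],
    score_fn: Callable[[object], float],
    threshold: float,
) -> List[Tuple[object, float]]:
    """Score m frames and keep those whose quality score exceeds the threshold."""
    scored = [(frame, score_fn(frame)) for frame in frames]
    # Candidate images are those scoring above the predetermined threshold,
    # returned best-first for display.
    return sorted(
        [(f, s) for f, s in scored if s > threshold],
        key=lambda pair: pair[1],
        reverse=True,
    )
```

Note that the frames below the threshold are simply dropped, so n is determined by the data rather than fixed in advance, matching the first aspect's definition of a candidate image.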
According to a second aspect of the embodiments of the present application, an image processing apparatus is provided. The apparatus includes:
a shooting module, configured to continuously shoot a first video after the shooting function is turned on;
an extraction module, configured to extract m frames of images from the first video, the m frames being images of the target object shot from different angles, m being a positive integer;
an acquisition module, configured to obtain a quality score for each of the m frames, the quality score indicating the image quality of the frame;
a display module, configured to select and display n candidate images from the m frames according to their quality scores, a candidate image being an image whose quality score is higher than a predetermined threshold, n being a positive integer.
According to a third aspect of the embodiments of the present application, a terminal is provided. The terminal includes a processor, a memory connected to the processor, and program instructions stored on the memory; when the processor executes the program instructions, it implements any of the image processing methods of the first aspect of the present application and its optional embodiments.
According to a fourth aspect of the embodiments of the present application, a computer-readable storage medium is provided, on which program instructions are stored; when the program instructions are executed by a processor, they implement any of the image processing methods of the first aspect of the present application and its optional embodiments.
The advantageous effects brought by the technical solutions provided in the embodiments of the present application are as follows:
By continuously shooting a first video and extracting from it m frames, each an image of the target object shot from a different angle, the terminal can select from the m frames, according to their quality scores, and display the n candidate images whose quality scores exceed a predetermined threshold. The terminal thus automatically outputs n high-quality candidate images containing the target object, so that the user can then pick a satisfactory image from the n displayed candidates. This avoids the need to manually adjust shooting parameters every time an image is taken, and greatly reduces the difficulty of shooting high-quality images.
Description of the drawings
Fig. 1 is a structural diagram of a terminal provided by an exemplary embodiment of the present application;
Fig. 2 is a flowchart of an image processing method provided by an embodiment of the present application;
Fig. 3 is a schematic diagram of an interface involved in the image processing method provided by an embodiment of the present application;
Fig. 4 is a flowchart of the model training method involved in the image processing method provided by an embodiment of the present application;
Fig. 5 is a flowchart of an image processing method provided by another embodiment of the present application;
Fig. 6 is a schematic diagram of the principle involved in the image processing method provided by another embodiment of the present application;
Fig. 7 is a schematic diagram of an interface involved in the image processing method provided by another embodiment of the present application;
Fig. 8 is a structural diagram of an image processing apparatus provided by an embodiment of the present application.
Specific embodiments
To make the purpose, technical solutions and advantages of the present application clearer, the embodiments of the present application are described in further detail below with reference to the accompanying drawings.
First, the terms involved in the present application are introduced.
Image evaluation model: a mathematical model used to determine the quality score of an image from input data.
Optionally, the image evaluation model includes, but is not limited to, at least one of the following: a convolutional neural network (Convolutional Neural Network, CNN) model, a deep neural network (Deep Neural Network, DNN) model, a recurrent neural network (Recurrent Neural Network, RNN) model, an embedding model, a gradient boosting decision tree (Gradient Boosting Decision Tree, GBDT) model, and a logistic regression (Logistic Regression, LR) model.
A DNN model is a deep learning framework. A DNN model includes an input layer, at least one hidden (or intermediate) layer, and an output layer. Optionally, each of these layers includes at least one neuron, and each neuron processes the data it receives. Optionally, the number of neurons in different layers may be the same or may differ.
An RNN model is a neural network with a feedback structure. In an RNN model, the output of a neuron is fed back to itself at the next time step; that is, the input of a neuron in layer i at time m includes not only the output of layer (i-1) at time m, but also the neuron's own output at time (m-1).
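The recurrence just described can be sketched as h_t = tanh(W·x_t + U·h_{t-1}); the tanh activation and the weight shapes below are illustrative assumptions, not specified by the patent.

```python
import numpy as np

def rnn_layer(xs: np.ndarray, W: np.ndarray, U: np.ndarray) -> np.ndarray:
    """Run one recurrent layer over a sequence xs of shape (T, d_in)."""
    h = np.zeros(U.shape[0])
    outputs = []
    for x in xs:
        # The neuron's input at time m combines the previous layer's output at
        # time m (W @ x) with the neuron's own output at time m-1 (U @ h).
        h = np.tanh(W @ x + U @ h)
        outputs.append(h)
    return np.stack(outputs)
```

Setting U to zero removes the feedback, in which case each time step depends only on its own input.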
An embedding model is based on distributed vector representations of entities and relations, and treats the relation in each triple instance as a translation from the head entity to the tail entity. A triple instance consists of a subject, a relation and an object, and can be written as (subject, relation, object); the subject is the head entity and the object is the tail entity. For example, "Xiao Zhang's father is Da Zhang" is expressed as the triple (Xiao Zhang, father, Da Zhang).
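The head-to-tail translation view can be made concrete in the style of translation-based embeddings (e.g. TransE); the 3-dimensional vectors below are toy values chosen for illustration only.

```python
import numpy as np

# Toy embeddings for the triple (Xiao Zhang, father, Da Zhang).
head = np.array([0.2, 0.5, 0.1])       # Xiao Zhang (head entity)
relation = np.array([0.3, -0.1, 0.4])  # "father", viewed as a translation vector
tail = head + relation                 # Da Zhang (tail entity), by construction

def triple_score(h: np.ndarray, r: np.ndarray, t: np.ndarray) -> float:
    """Distance between the translated head and the tail; lower means more plausible."""
    return float(np.linalg.norm(h + r - t))
```

A triple that holds scores near zero, while replacing the tail with an unrelated entity increases the distance.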
A GBDT model is an iterative decision tree algorithm composed of multiple decision trees, where the results of all trees are accumulated to produce the final result. Each node of a decision tree yields a predicted value; taking age as an example, the predicted value is the average age of all people belonging to the node.
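The accumulation of tree results can be illustrated with degenerate depth-0 "trees" (a single leaf holding the mean residual); a real GBDT fits full decision trees, so this only demonstrates the accumulation step.

```python
def boost_constants(targets, n_rounds):
    """Fit n_rounds one-leaf 'trees', each to the residual left by its predecessors."""
    prediction = [0.0] * len(targets)
    trees = []
    for _ in range(n_rounds):
        residuals = [t - p for t, p in zip(targets, prediction)]
        leaf = sum(residuals) / len(residuals)       # the leaf's value: average residual
        trees.append(leaf)
        prediction = [p + leaf for p in prediction]  # accumulate all trees' outputs
    return trees, prediction
```

With the age example above, the first tree predicts the average age, and later trees correct whatever residual remains.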
An LR model is a model built by applying a logistic function on top of linear regression.
At present, when a user wants to take an image, the user first finds a suitable subject and then adjusts shooting parameters such as shooting height, shooting direction and shooting distance; when the image shown in the terminal's viewfinder meets expectations, the user taps the shoot button, triggering the terminal to capture the image. In this method, however, the quality of the image captured by the terminal depends on the shooting parameters the user adjusts manually; that is, image quality is affected by many factors such as shooting height, shooting direction and shooting distance, and the quality of the captured image cannot be guaranteed. For this reason, the present application provides an image processing method, device, terminal and storage medium to solve the above problem in the related art. In the technical solution provided by the present application, a first video is continuously shot and m frames are extracted from it, each an image of the target object shot from a different angle. According to the quality scores of the m frames, the terminal selects from them and displays the n candidate images whose quality scores exceed a predetermined threshold, so that the terminal automatically outputs n high-quality candidate images containing the target object. The user can then pick a satisfactory image from the n displayed candidates, avoiding the need to manually adjust shooting parameters every time an image is taken and greatly reducing the difficulty of shooting high-quality images. Illustrative embodiments are described below.
Before explaining the embodiments of the present application, their application scenario is first described.
Fig. 1 shows a structural diagram of a terminal 100 provided by an exemplary embodiment of the present application.
The terminal 100, also called a portable terminal, may be a device such as a mobile phone, an MP3 (Moving Picture Experts Group Audio Layer III) player or an MP4 (Moving Picture Experts Group Audio Layer IV) player.
The terminal 100 is equipped with a camera and has an image shooting function.
Optionally, an image evaluation model is stored in the terminal 100; the image evaluation model is a model obtained by training a CNN with sample images.
Optionally, the terminal 100 includes one or more of the following components: a processor 110 and a memory 120.
The processor 110 may include one or more processing cores. The processor 110 uses various interfaces and lines to connect the various parts of the terminal; it performs the terminal's functions and processes data by running or executing the instructions, programs, code sets or instruction sets stored in the memory 120 and by calling the data stored in the memory 120. Optionally, the processor 110 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA) and Programmable Logic Array (PLA). The processor 110 may integrate a combination of one or more of a Central Processing Unit (CPU), a modem and the like. The CPU mainly handles the operating system, applications and so on; the modem handles wireless communication. It is understood that the modem may also not be integrated into the processor 110 and may instead be implemented on a separate chip.
Optionally, when the processor 110 executes the program instructions in the memory 120, the image processing methods provided by the method embodiments below are implemented.
The memory 120 may include Random Access Memory (RAM) and may also include Read-Only Memory (ROM). Optionally, the memory 120 includes a non-transitory computer-readable storage medium. The memory 120 may be used to store instructions, programs, code, code sets or instruction sets. The memory 120 may include a program storage area and a data storage area; the program storage area may store instructions for implementing the operating system, instructions for at least one function, instructions for implementing the method embodiments below, and so on; the data storage area may store the data involved in the method embodiments below.
Please refer to Fig. 2, which shows a flowchart of an image processing method provided by an embodiment of the present application. This embodiment is described using the example of the image processing method being applied to the terminal 100 provided in Fig. 1. The image processing method includes:
Step 201: after the shooting function is turned on, continuously shoot a first video.
The shooting function is the terminal's function of shooting images or video. After the shooting function is turned on, the terminal continuously shoots the first video in high-definition mode.
High-definition mode is a shooting mode in which the clarity of the shot image is higher than a preset clarity, where the preset clarity is a preset numerical value. This embodiment does not limit it.
Optionally, the terminal continuously shooting the first video includes: the terminal records video from multiple angles around the target object to obtain the first video.
Optionally, the object type of the target object is one of person, animal, still life and landscape.
Illustratively, as shown in Fig. 3, the terminal 100 displays a shooting interface 30. The shooting interface 30 includes a shooting control 31 and a viewfinder 32. The user taps the shooting control 31; correspondingly, the terminal 100 starts continuously shooting the first video in high-definition mode, and the viewfinder 32 contains the target object 33.
The user may turn on the shooting function via the shooting control in the shooting interface, by pressing a combination of physical buttons, or by voice control. The embodiments of the present application do not limit the way the shooting function is turned on.
Step 202: extract m frames of images from the first video, the m frames being images of the target object shot from different angles, m being a positive integer.
The terminal extracts the m frames from the first video in, but not limited to, the following two possible ways.
In the first possible extraction way, during the shooting of the first video, the terminal extracts frames from the first video at preset time intervals.
In the second possible extraction way, after shooting ends, the terminal obtains the first video and extracts the m frames from it. The description below uses only the second possible extraction way as an example.
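Sampling m frames at even intervals from the finished video can be sketched as index arithmetic; decoding the actual video frames (e.g. with a video library) is assumed to happen elsewhere, so the sketch only computes which frames to take.

```python
def sample_frame_indices(total_frames: int, m: int) -> list:
    """Pick m evenly spaced frame indices from a video of total_frames frames."""
    if m >= total_frames:
        return list(range(total_frames))
    step = total_frames / m
    # Take the frame at the centre of each of the m equal-length intervals, so
    # the samples cover the whole video (and thus the different shooting angles).
    return [int(step * i + step / 2) for i in range(m)]
```

For a 100-frame video and m = 4, this yields frames 12, 37, 62 and 87, spread across the recording.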
Step 203: obtain a quality score for each of the m frames, the quality score indicating the image quality of the frame.
The terminal obtains the image evaluation model and, for each frame, uses the image evaluation model to calculate the frame's quality score. The image evaluation model is a model obtained by training a CNN with sample images.
Optionally, the image evaluation model is stored in the terminal and is trained from at least one group of sample data, where each group of sample data includes a sample image and a pre-annotated sample quality score.
The training process of the image evaluation model is described in the embodiments below and is not introduced here.
The quality score indicates the image quality of a frame, where image quality covers image fidelity and image intelligibility. Image fidelity is the degree of deviation between the extracted image and the real image; image intelligibility is the degree to which a person or a machine can extract characteristic information from the image.
Optionally, the quality score indicating the image quality of the frame means that it indicates the frame's composition ratio, colour contrast, colour saturation, and light/dark contrast. For example, the higher a frame's quality score, the better its image quality, i.e. the better the effects of its composition ratio, colour contrast, colour saturation and light/dark contrast.
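The factors named above (contrast, colour saturation, light/dark balance) can each be measured directly from pixel statistics; the formula and weights below are an illustrative hand-built proxy, not the learned CNN score the patent actually uses.

```python
import numpy as np

def heuristic_quality(rgb: np.ndarray) -> float:
    """Crude 0-100 quality proxy for an HxWx3 float image with values in [0, 1]."""
    gray = rgb.mean(axis=2)
    contrast = gray.std()                                    # light/dark contrast
    saturation = (rgb.max(axis=2) - rgb.min(axis=2)).mean()  # colourfulness
    exposure = 1.0 - abs(gray.mean() - 0.5) * 2              # penalise very dark/bright frames
    return float(100 * (0.4 * contrast + 0.3 * saturation + 0.3 * exposure))
```

A flat grey frame scores low on contrast and saturation, while a well-exposed frame with tonal variation scores higher, mirroring the direction of the learned score.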
Step 204: according to the quality scores of the m frames, select and display n candidate images from the m frames, a candidate image being an image whose quality score is higher than a predetermined threshold, n being a positive integer.
According to the quality scores of the m frames, the terminal selects from them the n images whose quality scores are higher than the predetermined threshold, determines these n images as the n candidate images, and displays the n candidate images in a default display mode.
Optionally, the default display format includes a tiled form and/or a carousel form.
Depending on the display location, the display modes of the n candidate images include, but are not limited to, the following two:
In the first possible display mode, the terminal displays the n candidate images in the default display format over the whole area of the interface.
In the second possible display mode, the terminal displays the n candidate images in the default display format in a preset partial area of the interface.
Depending on the form of presentation, the display modes of the n candidate images further include, but are not limited to, the following three:
In the third possible display mode, the terminal displays the n candidate images simultaneously in the interface in the default display format.
In the fourth possible display mode, the terminal displays the n candidate images one after another in the interface in the default display format, with one candidate image shown at a time.
In the fifth possible display mode, the terminal displays the n candidate images in batches in the interface in the default display format, with each batch containing at least two candidate images.
It should be noted that either of the first two possible display modes can be combined with any of the latter three; those skilled in the art can easily derive these combinations from the display modes above, and the embodiments of the present application do not describe each combination one by one.
The display order of the candidate images is random or is set by the terminal. Optionally, the terminal displays the n candidate images in a cyclic carousel.
In conclusion the embodiment of the present application extracts m frame images, m frames by the first video of lasting shooting from the first video
Image is the image of target object shot from different perspectives.Terminal is according to the corresponding quality score of m frame images, from m frames
It is chosen in image and shows that quality score is higher than n candidate image of predetermined threshold so that terminal can export n and include automatically
The candidate image of target object and high quality, so that subsequent user can be selected completely in n candidate image of display
Image expected from foot, avoiding user needs to manually adjust the situation of acquisition parameters every time when shooting image, greatly reduces
Shoot the operation difficulty of high quality graphic.
It should be noted that before the terminal uses the image evaluation model, the terminal needs to train on images to obtain the image evaluation model.
Optionally, the training process of the image evaluation model includes: the terminal obtains a training sample set containing at least one group of sample data and, according to the at least one group of sample data, trains an initial parameter model with the error back-propagation algorithm to obtain the image evaluation model.
Each group of sample data includes a sample image and a pre-annotated sample quality score.
Illustratively, the terminal training the initial parameter model with the error back-propagation algorithm according to at least one group of sample data to obtain the image evaluation model includes, but is not limited to, the following steps; please refer to Fig. 4:
Step 401: for each group of sample data in the at least one group, extract sample image features from the sample image.
The terminal extracts sample image features from the sample image according to a preset image processing algorithm.
Optionally, the preset image processing algorithm is a perceptual hash algorithm (pHash). The terminal calculates the perceptual hash value of the sample image with the pHash algorithm and determines the calculated perceptual hash value as the sample image feature.
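A minimal sketch of the pHash idea (threshold the low-frequency DCT coefficients at their median to form a 64-bit hash), assuming the image has already been resized to 32x32 grayscale; real pHash implementations differ in details such as normalisation and handling of the DC term.

```python
import numpy as np

def dct_2d(a: np.ndarray) -> np.ndarray:
    # Type-II DCT along both axes, built from the DCT matrix (no SciPy needed).
    n = a.shape[0]
    k = np.arange(n)
    dct_mat = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    return dct_mat @ a @ dct_mat.T

def phash(gray_32x32: np.ndarray) -> int:
    """64-bit perceptual hash of a 32x32 grayscale image (values 0-255)."""
    coeffs = dct_2d(gray_32x32.astype(np.float64))
    low = coeffs[:8, :8]                       # keep the low-frequency 8x8 block
    bits = (low > np.median(low)).flatten()    # 1 where above the median coefficient
    return int("".join("1" if b else "0" for b in bits), 2)
```

Because only the ordering of coefficients relative to their median matters, the hash is unchanged by uniform brightness scaling, which is part of what makes it usable as a robust image feature.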
Step 402: input the sample image feature into the initial parameter model to obtain a training result.
The terminal inputs the sample image feature into the initial parameter model and obtains a training result.
Optionally, the initial parameter model is built on a neural network model; for example, the initial parameter model is built on one of the CNN, DNN and RNN models.
Illustratively, for each group of sample data, the terminal creates the input-output pair corresponding to that group: the input parameter is the sample image feature in the group, and the output parameter is the sample quality score in the group. The terminal inputs the input parameter into the prediction model and obtains a training result.
For example, if the sample image feature is "sample image feature 1" and the sample quality score is "sample quality score 1", the input-output pair created by the terminal is: (sample image feature 1) -> (sample quality score 1), where (sample image feature 1) is the input parameter and (sample quality score 1) is the output parameter.
Optionally, the input-output pair is represented by feature vectors.
Step 403: compare the training result with the sample quality score to obtain a calculated loss, the calculated loss indicating the error between the training result and the sample quality score.
Optionally, the calculated loss is expressed as a cross-entropy.
Optionally, the terminal calculates the calculated loss H(p, q) with the following formula:
H(p, q) = -Σx p(x) · log q(x)
where p(x) and q(x) are discrete distribution vectors of equal length; p(x) represents the training result, q(x) represents the output parameter, and x is an element of the training result or output parameter vector.
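The cross-entropy above can be computed directly; the natural logarithm is assumed here, since the base only rescales the loss.

```python
import math

def cross_entropy(p, q):
    """H(p, q) = -sum_x p(x) * log q(x) for two discrete distributions of equal length."""
    assert len(p) == len(q)
    # Terms with p(x) = 0 contribute 0 by the usual convention.
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)
```

When p and q agree, H(p, q) reduces to the entropy of p, which is its minimum; any mismatch between training result and target increases the loss.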
Step 404, according at least one set of corresponding counting loss of sample data group, using error backpropagation algorithm
Training obtains picture appraisal model.
Optionally, terminal determines the gradient direction of picture appraisal model by back-propagation algorithm according to counting loss, from
The output layer of picture appraisal model successively updates forward the model parameter in picture appraisal model.
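The output-layer-backwards update can be demonstrated end to end on the smallest possible model: a single softmax layer trained with cross-entropy by gradient descent. The architecture, data, learning rate and step count are all illustrative stand-ins for the patent's CNN.

```python
import numpy as np

def train_softmax(X, y, n_classes, lr=0.5, steps=200):
    """One-layer softmax model trained by gradient descent on the cross-entropy loss."""
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(X.shape[1], n_classes))
    for _ in range(steps):
        logits = X @ W
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        grad_logits = probs.copy()
        grad_logits[np.arange(len(y)), y] -= 1.0      # dL/dlogits for cross-entropy
        W -= lr * X.T @ grad_logits / len(y)          # propagate the error back to the weights
    return W

def predict(W, X):
    return (X @ W).argmax(axis=1)
```

In a deeper network the same gradient would be propagated further back through each hidden layer; here the single layer makes the update visible in a few lines.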
Based on the image evaluation model obtained by the above training, please refer to Fig. 5, which shows a flowchart of an image processing method provided by another embodiment of the present application. This embodiment is described using the example of the image processing method being applied to the terminal 100 shown in Fig. 1. The image processing method includes:
Step 501: after the shooting function is turned on, continuously shoot a first video.
The terminal continuously shoots the first video after the shooting function is turned on. For the process of the terminal continuously shooting the first video, refer to the related description in the above embodiment; details are not repeated here.
Step 502: extract m frames of images from the first video, the m frames being images of the target object shot from different angles, m being a positive integer.
The terminal extracts m frames from the first video. For the process, refer to the related description in the above embodiment; details are not repeated here.
Step 503: obtain an image evaluation model, the image evaluation model being a model obtained by training a convolutional neural network (CNN) with sample images.
Optionally, the terminal obtaining the image evaluation model includes: obtaining the object type of the target object in the first video, the object type being at least one of person, animal, still life and landscape; and, according to a preset correspondence, obtaining the image evaluation model corresponding to the object type of the target object, the preset correspondence comprising correspondences between object types and image evaluation models.
The terminal analyses the shot first video according to a preset video processing algorithm and calculates the type identifier of the target object in the first video, the type identifier uniquely identifying an object type. According to the calculated type identifier, the terminal determines the object type corresponding to that identifier.
In one possible implementation, the preset correspondence is stored in the terminal, as shown in Table one. In Table one, when the object type is "person", the corresponding image evaluation model is "image evaluation model 1"; when the object type is "animal", it is "image evaluation model 2"; when the object type is "still life", it is "image evaluation model 3"; and when the object type is "landscape", it is "image evaluation model 4".
Table one
| Object type | Image evaluation model |
| Person | Image evaluation model 1 |
| Animal | Image evaluation model 2 |
| Still life | Image evaluation model 3 |
| Landscape | Image evaluation model 4 |
Illustratively, based on the preset correspondence given in Table one, when the object type of the target object obtained by the terminal is "person", the terminal obtains "image evaluation model 1", the image evaluation model corresponding to the object type "person".
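Table one's correspondence is naturally a small lookup table; the keys and model names below are placeholders mirroring the table, not real model identifiers.

```python
# Hypothetical registry mirroring Table one: one evaluation model per object type.
MODEL_BY_OBJECT_TYPE = {
    "person": "image evaluation model 1",
    "animal": "image evaluation model 2",
    "still life": "image evaluation model 3",
    "landscape": "image evaluation model 4",
}

def model_for(object_type: str) -> str:
    """Return the evaluation model registered for the detected object type."""
    return MODEL_BY_OBJECT_TYPE[object_type]
```

In practice the values would be loaded model objects rather than strings, keyed by the type identifier computed from the video.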
Step 504: for each frame, calculate the frame's quality score using the image evaluation model.
Optionally, for each frame, the terminal calculating the frame's quality score using the image evaluation model includes: the terminal extracts image features from the frame, inputs the image features into the image evaluation model, and calculates the frame's quality score.
The image evaluation model is trained from at least one group of sample data, where each group of sample data includes a sample image and a pre-annotated sample quality score.
Optionally, the terminal calculates the frame's perceptual hash value with the pHash algorithm and determines the calculated perceptual hash value as the frame's image feature.
Step 505: according to the quality scores corresponding to the m frames of images, choose n candidate images from the m frames and display them, where a candidate image is an image whose quality score is higher than a predetermined threshold and n is a positive integer.
The terminal chooses and displays the n candidate images from the m frames according to their quality scores in, but not limited to, the following two possible implementations.

In the first possible implementation, the terminal obtains the predetermined threshold, compares the quality score of each of the m frames with the threshold in turn, determines each image whose quality score is higher than the threshold to be a candidate image, and displays the n candidate images.

For example, the terminal obtains the quality scores of 25 frames of images, and the predetermined threshold is 70 points. The terminal determines that 8 of the 25 frames have quality scores higher than the threshold of 70 points, determines these 8 images to be 8 candidate images, and displays them.
In the second possible implementation, the terminal sorts the m frames in descending order of quality score to obtain a sorted queue, determines the first n images in the sorted queue to be the n candidate images, and displays the n candidate images in descending order of quality score. This embodiment does not limit which implementation is used; only the second possible implementation is described below by way of example.

For example, the terminal obtains the quality scores of 25 frames of images, sorts the 25 frames in descending order of quality score to obtain a sorted queue, and determines the first 3 images in the queue to be 3 candidate images. The terminal displays these 3 candidate images in descending order of quality score; the display order is: candidate image A7 (quality score: 88 points), candidate image A17 (quality score: 85 points), candidate image A5 (quality score: 80 points).
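Both selection strategies reduce to a few lines. A sketch, using the 70-point threshold and the frame indices from the examples above:

```python
def candidates_above_threshold(scores, threshold=70.0):
    """First implementation: every frame whose score exceeds the threshold."""
    return [i for i, s in enumerate(scores) if s > threshold]

def top_n_candidates(scores, n=3):
    """Second implementation: the n best frames, in descending score order
    (Python's sort is stable, so ties keep their frame order)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return order[:n]
```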
It should be noted that after the quality score of an image is determined, the terminal may add the candidate image and its quality score to the training sample set to obtain an updated training sample set, then train the image evaluation model on the updated training sample set to obtain an updated image evaluation model.

Optionally, the terminal adds the candidate image and the quality score to the training sample set as follows: the candidate image is taken as a sample candidate image, the quality score is taken as a sample quality score, and the sample candidate image and the sample quality score are added to the training sample set as one group of historical data.

The process of training the image evaluation model on the updated training sample set to obtain the updated image evaluation model is analogous to the training process of the image evaluation model in the foregoing embodiment, and is not repeated here.
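The embodiment trains the model by error back-propagation without spelling out the update rule. Purely as an illustration, here is the one-layer analogue (assuming NumPy): a linear scorer fitted to (feature, score) pairs by gradient descent on the squared error. A real CNN back-propagates the same loss signal through many layers rather than one.

```python
import numpy as np

def train_eval_model(features, scores, lr=0.1, epochs=5000):
    """Fit w, b so that features @ w + b approximates the labeled sample
    quality scores, by gradient descent on the mean squared error."""
    X = np.asarray(features, dtype=float)
    y = np.asarray(scores, dtype=float)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        pred = X @ w + b
        err = pred - y                     # per-sample loss signal
        w -= lr * (X.T @ err) / len(y)     # gradient w.r.t. the weights
        b -= lr * err.mean()               # gradient w.r.t. the bias
    return w, b
```

Retraining on an updated sample set is then just re-running this fit with the new (feature, score) pairs appended.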
Step 506: when a preset selection operation corresponding to a candidate image is received, determine that candidate image to be the target image, the target image being one of the n candidate images.

After the terminal displays the n candidate images, the terminal detects in real time whether a preset selection operation corresponding to a candidate image is received; if so, the candidate image corresponding to the preset selection operation is determined to be the target image.

The preset selection operation includes any one or a combination of a click operation, a slide operation, a press operation, and a long-press operation.
In other possible implementations, the preset selection operation may also be realized in speech form. For example, the user speaks preset information of a candidate image into the terminal; after the terminal obtains the voice signal, it parses the signal into voice content, and when the voice content contains a keyword matching the preset information of the candidate image, the terminal determines that the candidate image is selected.
Step 507: obtain target photographing information of the target image, the target photographing information including at least one of a target shooting height, a target shooting direction, and a target shooting distance of the target image.

Optionally, the terminal records the photographing information of each frame of image in the first video while shooting the first video; that is, the terminal stores the photographing information corresponding to each of the m frames of the first video.

The photographing information corresponding to an image is also referred to as shooting angle information, and includes at least one of the shooting height, shooting direction, and shooting distance of the image.
Optionally, the shooting height is the vertical distance between the terminal and the ground when the terminal shoots the image. The shooting direction includes the angle of the terminal relative to the target object, and is used to indicate the body posture of the terminal when the terminal shoots the image. The shooting distance includes the distance of the terminal relative to the target object when the terminal shoots the image, and is used to indicate the relative position of the terminal and the target object. Illustratively, the shooting distance includes the horizontal distance between the terminal and the target object, or the straight-line distance between the terminal and the target object.

In one possible implementation, divided by coarse granularity, the shooting height includes the three heights of eye-level shot, high-angle shot, and low-angle shot; the shooting direction includes a frontal angle, a side angle, an oblique angle, a back angle, and the like; and the shooting distance includes a close-up distance, a near-view distance, a medium-view distance, a full-view distance, a distant-view distance, and the like.

In another possible implementation, divided by fine granularity, the shooting height ranges from 0 to H meters, the shooting direction ranges from 0 to 360 degrees, and the shooting distance ranges from 0 to D meters, where H and D are positive numbers.
In one possible implementation, the photographing information corresponding to 3 frames of images is as shown in Table 2.

Table 2
For example, if the target image determined by the terminal is "image 2", the terminal determines the photographing information corresponding to "image 2": the shooting height "1.7 meters" is determined to be the target shooting height, the shooting direction "45°" is determined to be the target shooting direction, and the shooting distance "0.8 meters" is determined to be the target shooting distance, thereby obtaining the target photographing information of the target image "image 2".
Step 508: according to the target photographing information of the target image, display first prompt information, the first prompt information being used to prompt adjustment of the shooting parameters of the terminal.

Optionally, the terminal obtains its real-time shooting parameters, which include at least one of the shooting height, shooting direction, and shooting distance of the terminal. The terminal computes relative values from the real-time shooting parameters and the target photographing information; a relative value indicates the error between a real-time shooting parameter and the corresponding target photographing information. The terminal generates the first prompt information according to the relative values, and the first prompt information includes the relative values.
In an illustrative example, as shown in FIG. 6, the terminal continuously shoots the first video and extracts 30 frames of images from it; the 30 frames are images of user A shot from different angles. According to the quality scores of the 30 frames, the terminal chooses and displays 5 candidate images whose quality scores are higher than the predetermined threshold: candidate image 61, candidate image 62, candidate image 63, candidate image 64, and candidate image 65. When the terminal receives the preset selection operation corresponding to candidate image 63, it determines candidate image 63 to be the target image and obtains the target photographing information of target image 63, which includes the target shooting height "1.7 meters", the target shooting direction "45°", and the target shooting distance "0.8 meters" of target image 63. The real-time shooting parameters of the terminal include the terminal's shooting height "1.2 meters", shooting direction "30°", and shooting distance "1 meter". The relative values computed by the terminal therefore include "0.5 meters" for the shooting height, "15°" for the shooting direction, and "-0.2 meters" for the shooting distance; the correspondence between the target photographing information, the real-time shooting parameters, and the relative values is shown in Table 3. Based on these relative values, as shown in FIG. 7, the terminal generates first prompt information 71 and displays it on the interface: "Please raise the phone by 0.5 meters, tilt it inward by another 15°, and move it 0.2 meters closer."
Table 3

| | Target photographing information | Real-time shooting parameter | Relative value |
| --- | --- | --- | --- |
| Shooting height | 1.7 m | 1.2 m | 0.5 m |
| Shooting direction | 45° | 30° | 15° |
| Shooting distance | 0.8 m | 1 m | -0.2 m |
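The relative-value computation of Table 3 and the wording of the first prompt information can be sketched as follows. The phrasing is illustrative only; the embodiment requires merely that the prompt include the relative values:

```python
def relative_values(target, current):
    """Error between the target photographing information and the
    terminal's real-time shooting parameters (Table 3)."""
    return {k: round(target[k] - current[k], 6) for k in target}

def first_prompt(rel):
    """Turn the relative values into a human-readable adjustment prompt."""
    parts = []
    if rel.get("height"):
        parts.append(f"raise the phone by {rel['height']} m" if rel["height"] > 0
                     else f"lower the phone by {-rel['height']} m")
    if rel.get("direction"):
        parts.append(f"tilt inward by {rel['direction']} deg" if rel["direction"] > 0
                     else f"tilt outward by {-rel['direction']} deg")
    if rel.get("distance"):
        parts.append(f"move {rel['distance']} m farther away" if rel["distance"] > 0
                     else f"move {-rel['distance']} m closer")
    return "Please " + ", ".join(parts)
```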
Step 509: when the shooting parameters of the terminal meet the target photographing information of the target image, carry out image shooting.

Optionally, while the terminal displays the first prompt information, the user adjusts the shooting parameters of the terminal according to the first prompt information. After the shooting parameters have been adjusted, when they meet the target photographing information of the target image, the terminal carries out image shooting in, but not limited to, the following possible implementations.

In the first possible implementation, image shooting is carried out when the absolute difference between the shooting height of the terminal and the target shooting height is smaller than a first predetermined threshold. Optionally, the terminal judges whether the absolute difference between the current shooting height and the target shooting height is smaller than the first predetermined threshold, and if so, carries out image shooting.

In the second possible implementation, image shooting is carried out when the absolute difference between the shooting direction of the terminal and the target shooting direction is smaller than a second predetermined threshold. Optionally, the terminal judges whether the absolute difference between the current shooting direction and the target shooting direction is smaller than the second predetermined threshold, and if so, carries out image shooting.

In the third possible implementation, image shooting is carried out when the absolute difference between the shooting distance of the terminal and the target shooting distance is smaller than a third predetermined threshold. Optionally, the terminal judges whether the absolute difference between the shooting distance of the terminal and the target shooting distance is smaller than the third predetermined threshold, and if so, carries out image shooting.

It should be noted that any two of the above three possible implementations may be combined, or all three may be combined; such combinations are readily conceivable to those skilled in the art from the implementations described above, and the embodiments of the present invention do not describe each combination one by one.

It should be noted, on the other hand, that the above predetermined thresholds (including the first, second, and third predetermined thresholds) are values set by default in the terminal or customized by the user; this embodiment does not limit their specific values.
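Checking all three conditions together (the fully combined implementation) is a one-liner. The threshold values below are illustrative, since the embodiment leaves them to terminal defaults or user customization:

```python
def parameters_met(target, current, thresholds):
    """True when every real-time shooting parameter is within its
    predetermined threshold of the target photographing information."""
    return all(abs(target[k] - current[k]) < thresholds[k] for k in target)
```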
Optionally, the terminal carries out image shooting in, but not limited to, the following possible implementations.

In the first possible implementation, the terminal displays second prompt information, which is used to prompt that the terminal meets the shooting angle information of the target image. When the terminal receives a shooting instruction, it carries out image shooting according to the shooting instruction. Optionally, the terminal displays the second prompt information upon judging that the shooting parameters meet the target photographing information of the target image; when the terminal detects a preset trigger operation corresponding to a shooting confirmation button, it determines that a shooting instruction is received and carries out image shooting according to the shooting instruction. The shooting confirmation button is an operable control that triggers image shooting, and the preset trigger operation includes any one or a combination of a click operation, a slide operation, a press operation, and a long-press operation.

In the second possible implementation, the terminal carries out image shooting automatically. Optionally, upon judging that the shooting parameters meet the target photographing information of the target image, the terminal carries out image shooting automatically, without user confirmation.
In the embodiments of the present application, the terminal also obtains the object type of the target object in the first video and obtains the image evaluation model corresponding to that object type according to the preset correspondence, so that the terminal can select different image evaluation models for different object types, making the quality scores computed by the image evaluation model more accurate.

In the embodiments of the present application, when the preset selection operation corresponding to a candidate image is received, the candidate image is also determined to be the target image, and the target photographing information of the target image is obtained, including at least one of the target shooting height, target shooting direction, and target shooting distance of the target image; the first prompt information, which prompts the user to adjust the real-time shooting parameters of the terminal, is displayed according to the target photographing information of the target image; and image shooting is carried out when the real-time shooting parameters of the terminal meet the target photographing information of the target image. As a result, when the user subsequently shoots an image, the terminal can prompt the user to adjust its real-time shooting parameters, so that the image can be shot again at the shooting position and with the body posture used when the target image was shot, which improves shooting efficiency.

In the embodiments of the present application, the image evaluation model is also trained on the updated training sample set to obtain an updated image evaluation model, so that the terminal can continuously improve the precision of the image evaluation model with new training samples, improving the accuracy with which the terminal determines the quality score of an image.
The following are apparatus embodiments of the present application, which can be used to perform the method embodiments of the present application. For details not disclosed in the apparatus embodiments, please refer to the method embodiments of the present application.

Please refer to FIG. 8, which shows a structural diagram of the image processing apparatus provided by one embodiment of the present application. The image processing apparatus can be implemented as all or part of the terminal 100 in FIG. 1 by a dedicated hardware circuit, or by a combination of software and hardware, and includes: a shooting module 810, an extraction module 820, an acquisition module 830, and a display module 840.
The shooting module 810 is configured to continuously shoot a first video after the shooting function is turned on.

The extraction module 820 is configured to extract m frames of images from the first video, the m frames being images of a target object shot from different angles, where m is a positive integer.

The acquisition module 830 is configured to obtain the quality score corresponding to each of the m frames, the quality score indicating the image quality of the image.

The display module 840 is configured to choose, according to the quality scores corresponding to the m frames, n candidate images from the m frames and display them, a candidate image being an image whose quality score is higher than a predetermined threshold, where n is a positive integer.
Optionally, the acquisition module 830 includes an acquiring unit and a computing unit.

The acquiring unit is configured to obtain an image evaluation model, the image evaluation model being a model obtained by training a convolutional neural network (CNN) with sample images.

The computing unit is configured to compute, for each frame of image, the quality score of the image using the image evaluation model.

Optionally, the computing unit is further configured to, for each frame of image, extract image features from the image and input the image features into the image evaluation model to compute the quality score of the image. The image evaluation model is trained on at least one group of sample data, each group including a sample image and a pre-labeled sample quality score.

Optionally, the acquiring unit is further configured to obtain the object type of the target object in the first video, the object type including at least one of person, animal, still life, and landscape; and to obtain, according to a preset correspondence, the image evaluation model corresponding to the object type of the target object, the preset correspondence including the correspondence between object types and image evaluation models.
Optionally, the acquisition module 830 further includes a training unit.

The training unit is configured to obtain a training sample set including at least one group of sample data, and to train an initial parameter model according to the at least one group of sample data using an error back-propagation algorithm to obtain the image evaluation model.

Optionally, the training unit is further configured to, for each group of sample data in the at least one group: extract sample image features from the sample image; input the sample image features into the initial parameter model to obtain a training result; and compare the training result with the sample quality score to obtain a computed loss, the computed loss indicating the error between the training result and the sample quality score. The image evaluation model is then obtained by training with the error back-propagation algorithm according to the computed losses corresponding to the at least one group of sample data.

Optionally, the display module 840 is configured to sort the m frames in descending order of quality score to obtain a sorted queue, determine the first n images in the sorted queue to be the n candidate images, and display the n candidate images in descending order of quality score.
Optionally, the apparatus further includes a determining module, configured to determine a candidate image to be the target image when a preset selection operation corresponding to the candidate image is received, the target image being one of the n candidate images.

The acquisition module 830 is further configured to obtain the target photographing information of the target image, including at least one of the target shooting height, target shooting direction, and target shooting distance of the target image.

The display module 840 is further configured to display the first prompt information according to the target photographing information of the target image, the first prompt information prompting the user to adjust the real-time shooting parameters of the terminal.

The shooting module 810 is further configured to carry out image shooting when the real-time shooting parameters of the terminal meet the target photographing information of the target image.

Optionally, the shooting module 810 is further configured to carry out image shooting when the absolute difference between the shooting height of the terminal and the target shooting height is smaller than a first predetermined threshold; and/or when the absolute difference between the shooting direction of the terminal and the target shooting direction is smaller than a second predetermined threshold; and/or when the absolute difference between the shooting distance of the terminal and the target shooting distance is smaller than a third predetermined threshold.
For related details, refer to the method embodiments shown in FIG. 2 to FIG. 7. The shooting module 810 is further configured to realize any other implicit or disclosed function related to the shooting steps in the above method embodiments; the extraction module 820, any other implicit or disclosed function related to the extraction steps; the acquisition module 830, any other implicit or disclosed function related to the acquisition steps; and the display module 840, any other implicit or disclosed function related to the display steps.
It should be noted that when the apparatus provided by the above embodiment realizes its functions, the division into the above functional modules is only used as an example; in practical applications, the above functions can be allocated to different functional modules as needed, i.e., the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus embodiment and the method embodiments provided by the above embodiments belong to the same concept; for the specific implementation process, refer to the method embodiments, which will not be repeated here.

The present application also provides a computer-readable medium on which program instructions are stored; when the program instructions are executed by a processor, the image processing method provided by each of the above method embodiments is realized.

The present application also provides a computer program product containing instructions which, when run on a computer, causes the computer to perform the image processing method described in each of the above embodiments.

Those of ordinary skill in the art will appreciate that all or part of the steps of the above embodiments can be completed by hardware, or by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, and the storage medium mentioned above can be a read-only memory, a magnetic disk, an optical disc, or the like.

The foregoing are merely preferred embodiments of the present application and are not intended to limit the present application; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall be included within the protection scope of the present application.
Claims (12)
1. An image processing method, characterized in that the method comprises:

continuously shooting a first video after a shooting function is turned on;

extracting m frames of images from the first video, the m frames of images being images of a target object shot from different angles, the m being a positive integer;

obtaining a quality score corresponding to each of the m frames of images, the quality score indicating the image quality of the image;

choosing, according to the quality scores corresponding to the m frames of images, n candidate images from the m frames of images and displaying them, a candidate image being an image whose quality score is higher than a predetermined threshold, the n being a positive integer.
2. The method according to claim 1, characterized in that the obtaining the quality score corresponding to each of the m frames of images comprises:

obtaining an image evaluation model, the image evaluation model being a model obtained by training a convolutional neural network (CNN) with sample images;

for each frame of said images, computing the quality score of the image using the image evaluation model.
3. The method according to claim 2, characterized in that the computing, for each frame of said images, the quality score of the image using the image evaluation model comprises:

for each frame of said images, extracting image features from the image;

inputting the image features into the image evaluation model to compute the quality score of the image;

wherein the image evaluation model is trained on at least one group of sample data, each group of sample data comprising a sample image and a pre-labeled sample quality score.
4. The method according to claim 2, characterized in that the obtaining the image evaluation model comprises:

obtaining the object type of the target object in the first video, the object type comprising at least one of person, animal, still life, and landscape;

obtaining, according to a preset correspondence, the image evaluation model corresponding to the object type of the target object, the preset correspondence comprising the correspondence between the object type and the image evaluation model.
5. The method according to claim 3, characterized in that before the obtaining the image evaluation model, the method further comprises:

obtaining a training sample set, the training sample set comprising at least one group of sample data;

training an initial parameter model according to the at least one group of sample data using an error back-propagation algorithm to obtain the image evaluation model.
6. The method according to claim 5, characterized in that the training the initial parameter model according to the at least one group of sample data using the error back-propagation algorithm to obtain the image evaluation model comprises:

for each group of sample data in the at least one group of sample data, extracting sample image features from the sample image;

inputting the sample image features into the initial parameter model to obtain a training result;

comparing the training result with the sample quality score to obtain a computed loss, the computed loss indicating the error between the training result and the sample quality score;

obtaining the image evaluation model by training with the error back-propagation algorithm according to the computed losses corresponding to the at least one group of sample data.
7. The method according to any one of claims 1 to 6, characterized in that the choosing, according to the quality scores corresponding to the m frames of images, the n candidate images from the m frames of images and displaying them comprises:

sorting the m frames of images in descending order of quality score to obtain a sorted queue;

determining the first n images in the sorted queue to be the n candidate images;

displaying the n candidate images in descending order of quality score.
8. The method according to any one of claims 1 to 6, characterized in that the method further comprises:

when a preset selection operation corresponding to a candidate image is received, determining the candidate image to be a target image, the target image being one of the n candidate images;

obtaining target photographing information of the target image, the target photographing information comprising at least one of a target shooting height, a target shooting direction, and a target shooting distance of the target image;

displaying, according to the target photographing information of the target image, first prompt information, the first prompt information being used to prompt adjustment of real-time shooting parameters of a terminal;

carrying out image shooting when the real-time shooting parameters of the terminal meet the target photographing information of the target image.
9. The method according to claim 8, characterized in that the carrying out image shooting when the shooting parameters of the terminal meet the target photographing information of the target image comprises:

carrying out image shooting when the absolute difference between the shooting height of the terminal and the target shooting height is smaller than a first predetermined threshold; and/or

carrying out image shooting when the absolute difference between the shooting direction of the terminal and the target shooting direction is smaller than a second predetermined threshold; and/or

carrying out image shooting when the absolute difference between the shooting distance of the terminal and the target shooting distance is smaller than a third predetermined threshold.
10. An image processing apparatus, characterized in that the apparatus comprises:

a shooting module, configured to continuously shoot a first video after a shooting function is turned on;

an extraction module, configured to extract m frames of images from the first video, the m frames of images being images of a target object shot from different angles, the m being a positive integer;

an acquisition module, configured to obtain a quality score corresponding to each of the m frames of images, the quality score indicating the image quality of the image;

a display module, configured to choose, according to the quality scores corresponding to the m frames of images, n candidate images from the m frames of images and display them, a candidate image being an image whose quality score is higher than a predetermined threshold, the n being a positive integer.
11. A terminal, characterized in that the terminal comprises a processor, a memory connected to the processor, and program instructions stored on the memory; when executing the program instructions, the processor realizes the image processing method according to any one of claims 1 to 9.
12. A computer-readable storage medium, characterized in that program instructions are stored thereon; when executed by a processor, the program instructions realize the image processing method according to any one of claims 1 to 9.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN201711448081.1A CN108234870B (en) | 2017-12-27 | 2017-12-27 | Image processing method, device, terminal and storage medium |
Publications (2)

| Publication Number | Publication Date |
| --- | --- |
| CN108234870A (en) | 2018-06-29 |
| CN108234870B (en) | 2019-10-11 |
Family
ID=62648138
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711448081.1A Active CN108234870B (en) | 2017-12-27 | 2017-12-27 | Image processing method, device, terminal and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108234870B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104853096A (en) * | 2015-04-30 | 2015-08-19 | 广东欧珀移动通信有限公司 | Rotation camera-based shooting parameter determination method and terminal |
CN105260466A (en) * | 2015-10-16 | 2016-01-20 | 百度在线网络技术(北京)有限公司 | Picture pushing method and apparatus |
CN105528786A (en) * | 2015-12-04 | 2016-04-27 | 小米科技有限责任公司 | Image processing method and device |
CN105578027A (en) * | 2015-07-28 | 2016-05-11 | 宇龙计算机通信科技(深圳)有限公司 | Photographing method and device |
CN106250931A (en) * | 2016-08-03 | 2016-12-21 | 武汉大学 | A kind of high-definition picture scene classification method based on random convolutional neural networks |
CN107273510A (en) * | 2017-06-20 | 2017-10-20 | 广东欧珀移动通信有限公司 | Photo recommends method and Related product |
Cited By (45)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3611915A1 (en) * | 2018-08-17 | 2020-02-19 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method and apparatus for image processing, and mobile terminal |
US11006046B2 (en) | 2018-08-17 | 2021-05-11 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image processing method and mobile terminal |
CN108989683A (en) * | 2018-08-20 | 2018-12-11 | 崔跃 | A kind of automatic shooting system for children |
CN109104633A (en) * | 2018-08-30 | 2018-12-28 | Oppo广东移动通信有限公司 | Video interception method, apparatus, storage medium and mobile terminal |
CN109104633B (en) * | 2018-08-30 | 2021-09-28 | Oppo广东移动通信有限公司 | Video screenshot method and device, storage medium and mobile terminal |
CN112055847A (en) * | 2018-09-18 | 2020-12-08 | 谷歌有限责任公司 | Method and system for processing image |
CN112055847B (en) * | 2018-09-18 | 2024-03-12 | 谷歌有限责任公司 | Method and system for processing image |
US11947591B2 (en) | 2018-09-18 | 2024-04-02 | Google Llc | Methods and systems for processing imagery |
CN111027347A (en) * | 2018-10-09 | 2020-04-17 | 杭州海康威视数字技术股份有限公司 | Video identification method and device and computer equipment |
CN109120862A (en) * | 2018-10-15 | 2019-01-01 | Oppo广东移动通信有限公司 | High-dynamic-range image acquisition method, device and mobile terminal |
CN109194880B (en) * | 2018-11-28 | 2021-11-12 | 努比亚技术有限公司 | Photographing method, terminal and computer readable storage medium |
CN109194880A (en) * | 2018-11-28 | 2019-01-11 | 努比亚技术有限公司 | Photographic method, terminal and computer readable storage medium |
CN109639973A (en) * | 2018-12-21 | 2019-04-16 | 中国科学院自动化研究所南京人工智能芯片创新研究院 | Shoot image methods of marking, scoring apparatus, electronic equipment and storage medium |
CN111247787A (en) * | 2018-12-29 | 2020-06-05 | 深圳市大疆创新科技有限公司 | Photographing method and photographing terminal |
WO2020133409A1 (en) * | 2018-12-29 | 2020-07-02 | 深圳市大疆创新科技有限公司 | Photography method and photographic terminal |
CN111415301A (en) * | 2019-01-07 | 2020-07-14 | 珠海金山办公软件有限公司 | Image processing method and device and computer readable storage medium |
CN111415301B (en) * | 2019-01-07 | 2024-03-12 | 珠海金山办公软件有限公司 | Image processing method, device and computer readable storage medium |
CN110062173A (en) * | 2019-03-15 | 2019-07-26 | 北京旷视科技有限公司 | Image processor and image processing method, equipment, storage medium and intelligent terminal |
CN110099212A (en) * | 2019-04-25 | 2019-08-06 | 努比亚技术有限公司 | A kind of photographic method, mobile terminal and computer readable storage medium |
CN110084317A (en) * | 2019-05-06 | 2019-08-02 | 北京字节跳动网络技术有限公司 | The method and apparatus of image for identification |
CN110084317B (en) * | 2019-05-06 | 2023-04-21 | 北京字节跳动网络技术有限公司 | Method and device for recognizing images |
CN110326287A (en) * | 2019-05-14 | 2019-10-11 | 深圳市大疆创新科技有限公司 | Image pickup method and device |
CN110390263A (en) * | 2019-06-17 | 2019-10-29 | 宁波江丰智能科技有限公司 | A kind of method of video image processing and system |
CN110533015A (en) * | 2019-08-30 | 2019-12-03 | Oppo广东移动通信有限公司 | Verification method and verifying device, electronic equipment, computer readable storage medium |
CN112771612A (en) * | 2019-09-06 | 2021-05-07 | 华为技术有限公司 | Method and device for shooting image |
CN111275683A (en) * | 2020-01-20 | 2020-06-12 | 上海云从汇临人工智能科技有限公司 | Image quality grading processing method, system, device and medium |
CN111275683B (en) * | 2020-01-20 | 2021-01-29 | 上海云从汇临人工智能科技有限公司 | Image quality grading processing method, system, device and medium |
CN111340140A (en) * | 2020-03-30 | 2020-06-26 | 北京金山云网络技术有限公司 | Image data set acquisition method and device, electronic equipment and storage medium |
CN111461249A (en) * | 2020-04-09 | 2020-07-28 | 上海城诗信息科技有限公司 | Photo scoring analysis method and system |
CN111524123B (en) * | 2020-04-23 | 2023-08-08 | 北京百度网讯科技有限公司 | Method and apparatus for processing image |
CN111524123A (en) * | 2020-04-23 | 2020-08-11 | 北京百度网讯科技有限公司 | Method and apparatus for processing image |
US11734809B2 (en) | 2020-04-23 | 2023-08-22 | Beijing Baidu Netcom Science And Technology Co., Ltd. | Method and apparatus for processing image |
CN111724437A (en) * | 2020-06-17 | 2020-09-29 | 深圳市商汤科技有限公司 | Visual positioning method and related device, equipment and storage medium |
CN111724437B (en) * | 2020-06-17 | 2022-08-05 | 深圳市商汤科技有限公司 | Visual positioning method and related device, equipment and storage medium |
CN112019739A (en) * | 2020-08-03 | 2020-12-01 | RealMe重庆移动通信有限公司 | Shooting control method and device, electronic equipment and storage medium |
CN112232978B (en) * | 2020-10-20 | 2022-11-04 | 青岛丰禾星普科技有限公司 | Aquatic product length and weight detection method, terminal equipment and storage medium |
CN112232978A (en) * | 2020-10-20 | 2021-01-15 | 青岛丰禾星普科技有限公司 | Aquatic product length and weight detection method, terminal equipment and storage medium |
CN114520890A (en) * | 2020-11-19 | 2022-05-20 | 华为技术有限公司 | Image processing method and device |
CN112672008A (en) * | 2020-12-14 | 2021-04-16 | 杭州联吉技术有限公司 | Lens adjusting method and device |
CN115802148A (en) * | 2021-09-07 | 2023-03-14 | 荣耀终端有限公司 | Method for acquiring image and electronic equipment |
CN115802148B (en) * | 2021-09-07 | 2024-04-12 | 荣耀终端有限公司 | Method for acquiring image and electronic equipment |
CN117135444A (en) * | 2023-03-10 | 2023-11-28 | 荣耀终端有限公司 | Frame selection decision method and device based on reinforcement learning |
CN116150421A (en) * | 2023-04-23 | 2023-05-23 | 深圳竹云科技股份有限公司 | Image display method, device, computer equipment and storage medium |
CN117651122A (en) * | 2024-01-25 | 2024-03-05 | 浙江华创视讯科技有限公司 | Method, equipment and storage medium for determining switching angle in image acquisition process |
CN117651122B (en) * | 2024-01-25 | 2024-04-19 | 浙江华创视讯科技有限公司 | Method, equipment and storage medium for determining switching angle in image acquisition process |
Also Published As
Publication number | Publication date |
---|---|
CN108234870B (en) | 2019-10-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108234870B (en) | Image processing method, device, terminal and storage medium | |
CN108198177A (en) | Image acquiring method, device, terminal and storage medium | |
CN107481327B (en) | About the processing method of augmented reality scene, device, terminal device and system | |
CN106469302B (en) | A kind of face skin quality detection method based on artificial neural network | |
CN106295476B (en) | Face key point localization method and device | |
CN108121957A (en) | The method for pushing and device of U.S. face material | |
CN109902546A (en) | Face identification method, device and computer-readable medium | |
CN108710847A (en) | Scene recognition method, device and electronic equipment | |
KR101872635B1 (en) | Automatic make-up evaluation system and operating method thereof | |
CN109934300B (en) | Model compression method, device, computer equipment and storage medium | |
CN110490252B (en) | Indoor people number detection method and system based on deep learning | |
CN109977739A (en) | Image processing method, device, storage medium and electronic equipment | |
CN108921782A (en) | A kind of image processing method, device and storage medium | |
CN108198130B (en) | Image processing method, image processing device, storage medium and electronic equipment | |
US20170161553A1 (en) | Method and electronic device for capturing photo | |
CN110390705A (en) | A kind of method and device generating virtual image | |
CN102567716B (en) | Face synthetic system and implementation method | |
CN108156385A (en) | Image acquiring method and image acquiring device | |
CN108174096A (en) | Method, apparatus, terminal and the storage medium of acquisition parameters setting | |
CN111292262B (en) | Image processing method, device, electronic equipment and storage medium | |
CN110349232A (en) | Generation method, device, storage medium and the electronic equipment of image | |
CN107018330A (en) | A kind of guidance method and device of taking pictures in real time | |
CN104063686A (en) | System and method for performing interactive diagnosis on crop leaf segment disease images | |
CN108200337A (en) | It takes pictures method, apparatus, terminal and the storage medium of processing | |
CN108898591A (en) | Methods of marking and device, electronic equipment, the readable storage medium storing program for executing of picture quality |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| CB02 | Change of applicant information | Address after: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong. Applicant after: OPPO Guangdong Mobile Communications Co., Ltd. Address before: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong. Applicant before: Guangdong OPPO Mobile Communications Co., Ltd. |
| GR01 | Patent grant | |