CN111800569A - Photographing processing method and device, storage medium and electronic equipment - Google Patents

Photographing processing method and device, storage medium and electronic equipment

Info

Publication number: CN111800569A
Authority: CN (China)
Prior art keywords: data, photographing, matrix, image, user
Legal status: Granted; Active (the legal status is an assumption and is not a legal conclusion)
Application number: CN201910282456.4A
Other languages: Chinese (zh)
Other versions: CN111800569B (granted publication)
Inventors: 陈仲铭, 何明
Assignee (current and original): Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority application: CN201910282456.4A, filed by Guangdong Oppo Mobile Telecommunications Corp Ltd

Classifications

    • H04N 23/80: Camera processing pipelines; components thereof (hierarchy: H Electricity → H04 Electric communication technique → H04N Pictorial communication, e.g. television → H04N 23/00 Cameras or camera modules comprising electronic image sensors; control thereof)
    • G06F 18/24: Classification techniques (hierarchy: G Physics → G06 Computing, calculating or counting → G06F Electric digital data processing → G06F 18/00 Pattern recognition → G06F 18/20 Analysing)
    • H04N 23/698: Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture (under H04N 23/60 Control of cameras or camera modules)

Abstract

The embodiments of the present application disclose a photographing processing method and apparatus, a storage medium, and an electronic device. When a photographing operation is detected, panoramic data are collected to generate a panoramic feature vector; historical photographing data are acquired to generate a historical feature vector; a user feature matrix is generated from the historical feature vector and the panoramic feature vector; the original image produced by the photographing operation is acquired, and image adjustment parameters are obtained from the original image, the user feature matrix, and a pre-trained classification model; and the original image is adjusted according to the image adjustment parameters. In this scheme, personalized image adjustment parameters are obtained in real time from the photo content and with reference to the user's historical photographing data, providing the user with a fine-grained, targeted photo optimization scheme.

Description

Photographing processing method and device, storage medium and electronic equipment
Technical Field
The application relates to the technical field of terminals, in particular to a photographing processing method and device, a storage medium and electronic equipment.
Background
Terminals such as mobile phones and tablet computers mainly offer the following two optimization schemes for captured photos. The first uses the preset filter functions of the camera: after the user selects a photographing mode, the terminal processes the captured original photo with the default filter scheme preset for that mode; for example, if the user selects the beauty mode, the terminal applies the default beauty filter to the photo. The second uses a neural network to detect the content currently being shot, directly offers the user a default filter matched to the detected content, and at the same time displays previews of other filters on the terminal for the user to choose from.
Both schemes are implemented with a handful of specific filters built into the terminal; the granularity of such intelligent photographing optimization is coarse, and a targeted photo processing scheme cannot be provided according to the user's photographing habits and preferences.
Disclosure of Invention
The embodiments of the present application provide a photographing processing method and apparatus, a storage medium, and an electronic device, which can provide a targeted photo processing scheme for the user by taking the user's photographing habits and preferences into account.
In a first aspect, an embodiment of the present application provides a photographing processing method, including:
when the photographing operation is detected, collecting panoramic data, and generating a panoramic feature vector according to the panoramic data;
acquiring historical photographing data, and generating a historical feature vector according to the historical photographing data;
generating a user feature matrix according to the historical feature vector and the panoramic feature vector;
when a photographing instruction is received, acquiring an original image obtained by the photographing operation, and acquiring image adjustment parameters according to the original image, the user characteristic matrix and a pre-trained classification model;
and adjusting the original image according to the image adjusting parameter.
In a second aspect, an embodiment of the present application provides a photographing processing apparatus, including:
the first feature extraction module is used for collecting panoramic data when a photographing operation is detected, and generating a panoramic feature vector according to the panoramic data;
the second feature extraction module is used for acquiring historical photographing data and generating a historical feature vector according to the historical photographing data;
the feature fusion module is used for generating a user feature matrix according to the historical feature vector and the panoramic feature vector;
the parameter acquisition module is used for acquiring an original image obtained by the photographing operation when a photographing instruction is received, and acquiring image adjustment parameters according to the original image, the user feature matrix and a pre-trained classification model;
and the image adjusting module is used for adjusting the original image according to the image adjusting parameters.
In a third aspect, an embodiment of the present application provides a storage medium on which a computer program is stored; when the computer program runs on a computer, the computer is caused to execute the photographing processing method according to any embodiment of the present application.
In a fourth aspect, an embodiment of the present application provides an electronic device, which includes a processor and a memory, where the memory has a computer program, and the processor is configured to execute the photographing processing method according to any embodiment of the present application by calling the computer program.
According to the technical scheme of the embodiments, when a photographing operation is detected, panoramic data are collected and a panoramic feature vector is generated from them; the user's historical photographing data are acquired and a historical feature vector is generated from them; a user feature matrix is generated from the historical feature vector and the panoramic feature vector; the image produced by the photographing operation is acquired, image adjustment parameters are obtained from the image, the user feature matrix, and a pre-trained classification model, and the original image is adjusted according to those parameters. In this scheme, the panoramic data collected when the user takes a picture, the user's historical photographing habits, and the captured photo are combined to obtain the user feature matrix in real time. These features reflect not only the user's current situation but also the user's photographing habits and preferences, and determining the image adjustment parameters from them enables a targeted photo processing scheme for the user.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic view of a panoramic sensing architecture of a photographing processing method according to an embodiment of the present application.
Fig. 2 is a first flowchart of a photographing processing method according to an embodiment of the present application.
Fig. 3 is a second flowchart of the photographing processing method according to the embodiment of the present application.
Fig. 4 is a third flowchart illustrating a photographing processing method according to an embodiment of the present application.
Fig. 5 is a schematic diagram of an image display manner of the photographing processing method according to the embodiment of the application.
Fig. 6 is a schematic structural diagram of a photographing processing device according to an embodiment of the present application.
Fig. 7 is a schematic structural diagram of a first electronic device according to an embodiment of the present application.
Fig. 8 is a schematic structural diagram of a second electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without inventive step, are within the scope of the present application.
The terms "first", "second", and "third", etc. in this application are used to distinguish between different objects and not to describe a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or modules is not limited to only those steps or modules listed, but rather, some embodiments may include other steps or modules not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1, fig. 1 is a schematic view of a panoramic sensing architecture of a photographing processing method according to an embodiment of the present application. The photographing processing method is applied to electronic equipment. A panoramic perception framework is arranged in the electronic equipment. The panoramic perception framework is the integration of hardware and software used for realizing the photographing processing method in electronic equipment.
The panoramic perception architecture comprises an information perception layer, a data processing layer, a feature extraction layer, a scene modeling layer and an intelligent service layer.
The information perception layer is used to acquire information about the electronic device itself or about the external environment, and may include a plurality of sensors, for example a distance sensor, a magnetic field sensor, a light sensor, an acceleration sensor, a fingerprint sensor, a Hall sensor, a position sensor, a gyroscope, an inertial sensor, an attitude sensor, a barometer, and a heart rate sensor.
Among other things, the distance sensor may be used to detect the distance between the electronic device and an external object. The magnetic field sensor may be used to detect magnetic field information of the environment in which the electronic device is located. The light sensor may be used to detect light information of that environment. The acceleration sensor may be used to detect acceleration data of the electronic device. The fingerprint sensor may be used to collect fingerprint information of the user. The Hall sensor is a magnetic field sensor based on the Hall effect and can be used to realize automatic control of the electronic device. The position sensor may be used to detect the current geographic location of the electronic device. The gyroscope may be used to detect the angular velocity of the electronic device in various directions. The inertial sensor may be used to detect motion data of the electronic device. The attitude sensor may be used to sense attitude information of the electronic device. The barometer may be used to detect the air pressure of the environment in which the electronic device is located. The heart rate sensor may be used to detect the user's heart rate information.
And the data processing layer is used for processing the data acquired by the information perception layer. For example, the data processing layer may perform data cleaning, data integration, data transformation, data reduction, and the like on the data acquired by the information sensing layer.
The data cleaning refers to cleaning a large amount of data acquired by the information sensing layer to remove invalid data and repeated data. The data integration refers to integrating a plurality of single-dimensional data acquired by the information perception layer into a higher or more abstract dimension so as to comprehensively process the data of the plurality of single dimensions. The data transformation refers to performing data type conversion or format conversion on the data acquired by the information sensing layer so that the transformed data can meet the processing requirement. The data reduction means that the data volume is reduced to the maximum extent on the premise of keeping the original appearance of the data as much as possible.
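As an illustration of the four operations described above, the following is a minimal Python sketch (not part of the patent; the column names and the sampling rate are assumptions) of how a data processing layer might clean, integrate, transform, and reduce raw sensor records:

```python
# Hypothetical sketch of the data processing layer; the input is assumed to be
# a table of raw sensor records with millisecond timestamps and 3-axis
# acceleration columns "ax", "ay", "az".
import numpy as np
import pandas as pd

def process(records: pd.DataFrame) -> pd.DataFrame:
    # Data cleaning: remove invalid (missing) and repeated records.
    df = records.dropna().drop_duplicates()
    # Data integration: combine several single-dimensional columns into one
    # higher, more abstract dimension (here, a motion-magnitude column).
    df = df.assign(motion=np.sqrt(df["ax"]**2 + df["ay"]**2 + df["az"]**2))
    # Data transformation: convert types/formats to what later layers expect.
    df["timestamp"] = pd.to_datetime(df["timestamp"], unit="ms")
    # Data reduction: downsample to cut the data volume while preserving the
    # overall shape of the data.
    return df.set_index("timestamp").resample("1s").mean(numeric_only=True)
```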
The characteristic extraction layer is used for extracting characteristics of the data processed by the data processing layer so as to extract the characteristics included in the data. The extracted features may reflect the state of the electronic device itself or the state of the user or the environmental state of the environment in which the electronic device is located, etc.
The feature extraction layer may extract features, or process the extracted features, by methods such as filter, wrapper, or ensemble methods.
A filter method filters the extracted features to remove redundant feature data. A wrapper method is used to screen the extracted features. An ensemble method integrates multiple feature extraction methods to construct a more efficient and more accurate feature extraction method.
The scene modeling layer is used for building a model according to the features extracted by the feature extraction layer, and the obtained model can be used for representing the state of the electronic equipment, the state of a user, the environment state and the like. For example, the scenario modeling layer may construct a key value model, a pattern identification model, a graph model, an entity relation model, an object-oriented model, and the like according to the features extracted by the feature extraction layer.
The intelligent service layer is used for providing intelligent services for the user according to the model constructed by the scene modeling layer. For example, the intelligent service layer can provide basic application services for users, perform system intelligent optimization for electronic equipment, and provide personalized intelligent services for users.
In addition, the panoramic perception architecture can further comprise a plurality of algorithms, each of which can be used to analyze and process data, and which together form an algorithm library. For example, the algorithm library may include Markov algorithms, latent Dirichlet allocation, Bayesian classification, support vector machines, K-means clustering, K-nearest neighbors, conditional random fields, residual networks, long short-term memory networks, convolutional neural networks, and recurrent neural networks.
Based on the panoramic sensing architecture, when the electronic device detects a photographing operation, the information perception layer collects panoramic data and the feature extraction layer generates a panoramic feature vector from them. The user's historical photographing data are then acquired, and the feature extraction layer generates a historical feature vector from them and builds a user feature matrix from the historical and panoramic feature vectors. The intelligent service layer acquires the image produced by the photographing operation, obtains image adjustment parameters from the image, the user feature matrix, and a pre-trained classification model, and adjusts the original image according to those parameters. In this scheme, the panoramic data collected when the user takes a picture, the user's historical photographing habits, and the captured photo are combined to obtain the user feature matrix in real time; the features reflect both the user's current situation and the user's photographing habits and preferences, so a fine-grained photo optimization scheme with personalized image adjustment parameters can be provided, realizing targeted photo processing for the user.
The execution subject of the photographing processing method may be the photographing processing apparatus provided in the embodiments of the present application, or an electronic device integrated with that apparatus, where the apparatus may be implemented in hardware or software. The electronic device may be a smart phone, a tablet computer, a palmtop computer, a notebook computer, or a desktop computer.
Referring to fig. 2, fig. 2 is a first flowchart of a photographing processing method according to an embodiment of the present disclosure. The specific flow of the photographing processing method provided by the embodiment of the application can be as follows:
step 101, when a photographing operation is detected, collecting panoramic data, and generating a panoramic feature vector according to the panoramic data.
In the embodiments of the present application, the electronic device may monitor for a photographing operation in real time, for example by detecting whether its camera module has been started: it may detect that the user takes a picture through the camera APP (application) of the electronic device, or that the user triggers a photographing operation through a third-party APP. Alternatively, by monitoring user instructions in real time, a photographing operation is considered detected when an image shooting request is received.
When the electronic device detects the photographing operation, it starts to collect panoramic data. The panoramic data include, but are not limited to, terminal state data and sensor state data. The terminal state data comprise the operation mode of the electronic device in each time interval, such as a game mode, an entertainment mode, or a video mode; the operation mode can be determined from the type of the currently running application program, which can be obtained directly from the classification information of the application installation package. The terminal state data may further include the remaining battery level, display mode, network state, and screen-off/lock state of the electronic device.
The sensor state data include the data collected by the sensors of the electronic device, for example a distance sensor, a magnetic field sensor, a light sensor, an acceleration sensor, a fingerprint sensor, a Hall sensor, a position sensor, a gyroscope, an inertial sensor, an attitude sensor, a barometer, and a heart rate sensor. The sensor state data may be obtained at the moment the photographing operation is detected, or over a period of time before it is detected. In some embodiments, the state data of certain sensors may be acquired in a targeted manner. For example, the data collected by the position sensor and the light sensor may be obtained, where the current position information of the electronic device can be determined from the position sensor data, and the light sensor can measure the light intensity of the environment in which the electronic device is currently located.
Referring to fig. 3, fig. 3 is a second flowchart of a photographing processing method according to an embodiment of the present disclosure. In some embodiments, the step 101, when the photographing operation is detected, acquiring panoramic data, and generating a panoramic feature vector according to the panoramic data may include:
step 1011, when the photographing operation is detected, acquiring current terminal state data and sensor state data;
step 1012, generating a terminal state feature according to the terminal state data, and generating a terminal scene feature according to the sensor state data;
step 1013, fusing the terminal state feature and the terminal scene feature to generate the panoramic feature vector.
A terminal state feature ys1 is generated from the terminal state data. From the sensor state data, the state data of the magnetometer, the accelerometer, and the gyroscope are acquired and processed with a Kalman filtering algorithm to obtain four-dimensional terminal attitude features ys2~ys5. A barometric feature ys6 is obtained from the data collected by the barometer. The WIFI connection state ys7 is determined through the network module. Positioning is performed with the data acquired by the position sensor to obtain the current location attribute of the user (such as mall, home, company, or park) and generate a feature ys8. Furthermore, the 10-axis information of the magnetometer, acceleration sensor, gyroscope, and barometer can be combined, and new multi-dimensional data can be obtained with a filtering algorithm or a principal component analysis algorithm to generate a corresponding feature ys9. For non-numeric features, index numbers can be established to convert them into numeric representations; for example, for the terminal operation mode, an index number represents the current mode, such as 1 for game mode, 2 for entertainment mode, and 3 for video mode, so if the current operation mode is the game mode, the terminal state feature is ys1 = 1. After all numerically represented features are obtained, the feature data are fused into one long vector, which is normalized to obtain the panoramic feature vector s1:

s1 = {ys1, ys2, …, ysn}

The more kinds of panoramic data are collected, the longer the generated panoramic feature vector s1 and the larger the value of n. The features ys1~ys9 above are examples only; the present application is not limited to them, and features of more dimensions may be obtained according to actual needs.
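As a concrete illustration, the following Python sketch assembles a panoramic feature vector along the lines described above. It is not the patent's implementation: the index tables, the stand-in for the Kalman-filter attitude fusion, and the field names are all assumptions.

```python
import numpy as np

MODE_INDEX = {"game": 1, "entertainment": 2, "video": 3}         # assumed encoding
LOCATION_INDEX = {"home": 1, "company": 2, "mall": 3, "park": 4}  # assumed encoding

def attitude_features(mag, acc, gyro) -> np.ndarray:
    """Stand-in for the Kalman-filter fusion of magnetometer, accelerometer,
    and gyroscope data into the 4-D attitude features ys2~ys5."""
    v = np.concatenate([np.ravel(mag), np.ravel(acc), np.ravel(gyro)]).astype(float)
    return v[:4] if v.size >= 4 else np.pad(v, (0, 4 - v.size))

def panoramic_feature_vector(terminal_state: dict, sensors: dict) -> np.ndarray:
    ys1 = MODE_INDEX.get(terminal_state.get("mode"), 0)           # ys1: operation mode
    ys2_5 = attitude_features(sensors["magnetometer"],
                              sensors["accelerometer"],
                              sensors["gyroscope"])               # ys2~ys5: attitude
    ys6 = float(sensors["barometer"])                             # ys6: air pressure
    ys7 = 1.0 if sensors.get("wifi_connected") else 0.0           # ys7: WIFI state
    ys8 = LOCATION_INDEX.get(sensors.get("location_attr"), 0)     # ys8: location attr
    s1 = np.concatenate([[ys1], ys2_5, [ys6, ys7, ys8]])
    return s1 / (np.linalg.norm(s1) + 1e-8)                       # normalize s1
```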
step 102, acquiring historical photographing data, and generating a historical feature vector according to the historical photographing data.
The electronic device records and regularly updates the user's historical photographing data, including historical photo editing schemes, records of calls to third-party photographing APPs, records of calls to the system's own photographing APP, photo sharing information, and the like. The information about third-party photographing APPs helps mine the retouching styles the user likes, because third-party photographing APPs on the market are highly specialized: for example, one camera APP mainly processes selfie portraits, while Prime mainly performs style transfer on landscape or portrait images, so the user's retouching preference can be inferred from which APP is called. Generally, the installation package information of a third-party photographing APP has multiple layers, including the type of retouching software to which the APP belongs; by recording which third-party photographing APP is called, the retouching software type it belongs to can be obtained, and that type reflects the user's retouching preference.
In addition, in some current electronic devices the system's own photographing APP also has a retouching function, so calls to the system's photographing APP can be recorded as well. Different photo sharing habits also reflect the photo styles the user prefers. For example, if a user shares photos to Instagram (a social application for picture sharing), it indicates a preference for European-and-American-style photos, while sharing photos to Qzone (QQ space) indicates a preference for a youthful style; the user's photo sharing information can therefore be used as historical photographing data.
Furthermore, when the user manually adjusts parameters such as brightness, contrast, sharpness, saturation, or color temperature to change the display effect of a photo, these parameters are recorded as a historical photo editing scheme. For example, if after taking a food photo the user usually raises the contrast (+10) and saturation (+5) manually so that the picture has more vivid but not oversaturated colors, then parameters such as contrast (+10) and saturation (+5) are recorded as a historical photo editing scheme. The historical photo editing schemes, third-party photographing APP call records, system photographing APP call records, photo sharing information, and the like are all historical data recorded and stored under a preset path when the corresponding user operation is detected; the longer the user has used the electronic device, the more historical data are recorded and the stronger the reference value of the data.
After these data are acquired, a historical feature vector s2 is generated from the historical photographing data:

s2 = {ya1, ya2, …, yam}
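The sketch below shows one plausible way to encode such history into s2; the field names and index tables are illustrative assumptions, not the patent's data model.

```python
import numpy as np

RETOUCH_TYPE_INDEX = {"selfie": 1, "style_transfer": 2}   # assumed encoding
SHARE_TARGET_INDEX = {"instagram": 1, "qzone": 2}          # assumed encoding

def historical_feature_vector(history: dict) -> np.ndarray:
    edit = history.get("edit_scheme", {})   # e.g. {"contrast": 10, "saturation": 5}
    ya = [
        edit.get("brightness", 0),
        edit.get("contrast", 0),
        edit.get("saturation", 0),
        RETOUCH_TYPE_INDEX.get(history.get("third_party_app_type"), 0),
        SHARE_TARGET_INDEX.get(history.get("share_target"), 0),
    ]
    s2 = np.asarray(ya, dtype=float)
    return s2 / (np.linalg.norm(s2) + 1e-8)
```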
step 103, generating a user feature matrix according to the historical feature vector and the panoramic feature vector.
The panoramic feature vector s1 generated in step 101 and the historical feature vector s2 generated in step 102 are fused to generate the user feature matrix. The feature vectors can be fused in many ways. In the first way, s1 and s2 are concatenated to generate a long vector of length m + n, that is, a matrix with m + n columns and 1 row. The user feature matrix is then as follows:

{ys1, ys2, …, ysn, ya1, ya2, …, yam}
In the second way, the step of generating the user feature matrix according to the historical feature vector and the panoramic feature vector comprises: performing matrix superposition on the historical feature vector and the panoramic feature vector to generate the user feature matrix. That is, s1 and s2 are stacked into a matrix with m (or n) columns and 2 rows. If n < m, the panoramic feature vector s1 is extended to length m by zero padding; if n > m, the historical feature vector s2 is extended to length n by zero padding. When n > m, the user feature matrix obtained by stacking is:

| ys1  ys2  …  …   …  ysn |
| ya1  ya2  …  yam  0  …  0 |
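Both fusion modes are simple array operations; a minimal numpy sketch (an assumed implementation, not the patent's code):

```python
import numpy as np

def fuse_concat(s1: np.ndarray, s2: np.ndarray) -> np.ndarray:
    """First mode: concatenate s1 and s2 into a 1 x (m + n) matrix."""
    return np.concatenate([s1, s2])[np.newaxis, :]

def fuse_stack(s1: np.ndarray, s2: np.ndarray) -> np.ndarray:
    """Second mode: zero-pad the shorter vector, then stack into 2 rows."""
    width = max(s1.size, s2.size)
    pad = lambda v: np.pad(v, (0, width - v.size))
    return np.vstack([pad(s1), pad(s2)])  # shape: 2 x max(m, n)
```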
step 104, when a photographing instruction is received, acquiring an original image obtained by the photographing operation, and acquiring image adjustment parameters according to the original image, the user feature matrix, and a pre-trained classification model.
After the photographing operation is completed and the original image is obtained, image adjustment parameters suited to the current original image are generated by combining the obtained user feature matrix, the original image, and the pre-trained classification model. Referring to fig. 4, fig. 4 is a third flowchart of the photographing processing method according to the embodiment of the present application. In some embodiments, step 104, of acquiring the original image obtained by the photographing operation when a photographing instruction is received and acquiring image adjustment parameters according to the original image, the user feature matrix, and the pre-trained classification model, includes the following refined steps:
step 1041, compressing the original image according to a preset length and a preset width, and generating a pixel matrix according to the compressed image;
step 1042, converting the user feature matrix into a first feature matrix, wherein the number of rows of the first feature matrix matches the preset width;
step 1043, merging the pixel matrix and the first feature matrix to generate a second feature matrix;
and step 1044, acquiring image adjustment parameters according to the second feature matrix and the pre-trained classification model.
In an optional embodiment, step 1043 of merging the pixel matrix and the first feature matrix to generate the second feature matrix may include: converting the first feature matrix into a third feature matrix according to a Hilbert matrix; and merging the pixel matrix and the third feature matrix to generate the second feature matrix. The Hilbert matrix is a mathematical transformation matrix that is positive definite and highly ill-conditioned: when any element changes slightly, the determinant and the inverse of the whole matrix change greatly, and the degree of ill-conditioning is related to the order. Multiplying by the Hilbert matrix helps bring out more distinctive characteristics and patterns in the data. Therefore, converting the first feature matrix with the Hilbert matrix before merging helps the classification model better discover the features in the data.
Because the image shot by the camera of an electronic device is generally high-definition, that is, it has a large number of pixels, directly fusing the image information with the user feature matrix would produce a large data volume and slow down the computation. Therefore, the image is compressed first. The preset length and preset width may be set in advance, and the preset length is generally greater than or equal to the number of columns of the user feature matrix.
For example, assume the preset length equals 200 pixels, the preset width equals 100 pixels, and the size of the image obtained by the camera is 2736 x 3648; the image is then compressed to 100 x 200 using image compression techniques. Assuming the user feature matrix has 50 columns and 2 rows, it is converted into the first feature matrix by tiling it twice in the transverse direction and fifty times in the longitudinal direction, giving a first feature matrix of size 100 x 100, as follows:
| M  M |
| M  M |
| ⋮  ⋮ |
| M  M |

where M is the 2 x 50 user feature matrix, repeated twice in the transverse direction and fifty times in the longitudinal direction.
Next, the first feature matrix is converted into a third feature matrix according to the Hilbert matrix, where the elements of the Hilbert matrix are

h(i, j) = 1 / (i + j - 1), i, j = 1, 2, …
The pixel matrix of the original image, of size 100 x 200, and the third feature matrix, of size 100 x 100, are merged to obtain a feature matrix of size 100 x 300 as the second feature matrix. This two-dimensional matrix contains the feature data of the original image as well as the features extracted from the panoramic data, which represent the current scene state of the electronic device, and the features extracted from the user's historical photographing data, which represent the user's preferences regarding photo style.
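Steps 1041 to 1043 of this running example can be sketched as follows. This is a hedged reading of the patent, not its implementation: OpenCV resizing stands in for the unspecified image compression technique, and left-multiplication by the Hilbert matrix is an assumption, since the patent only says the first feature matrix is "converted according to" the Hilbert matrix.

```python
import cv2
import numpy as np

def second_feature_matrix(image: np.ndarray, user_matrix: np.ndarray,
                          h: int = 100, w: int = 200) -> np.ndarray:
    # Step 1041: compress the original (assumed BGR) image to h x w and take
    # its grayscale pixels as the pixel matrix.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    pixels = cv2.resize(gray, (w, h)).astype(float)          # h x w
    # Step 1042: tile the user feature matrix (e.g. 2 x 50) up to h x h.
    reps = (h // user_matrix.shape[0], h // user_matrix.shape[1])
    first = np.tile(user_matrix, reps)                       # first feature matrix
    # Hilbert matrix with elements h(i, j) = 1 / (i + j - 1), 1-based indices.
    i, j = np.indices((h, h))
    hilbert = 1.0 / (i + j + 1)                              # 0-based equivalent
    third = hilbert @ first                                  # third feature matrix
    # Step 1043: merge along the columns into the h x (w + h) second matrix.
    return np.hstack([pixels, third])
```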
The two-dimensional feature matrix is used as the input data of the pre-trained classification model to obtain the image adjustment parameters. In some embodiments, step 1044 of acquiring the image adjustment parameters according to the second feature matrix and the pre-trained classification model includes: acquiring the image adjustment parameters according to the second feature matrix and a preset convolutional neural network model.
In the embodiments of the present application, the classification model is built with a convolutional neural network, and a large amount of sample data is collected in advance to train the convolutional neural network model. For example, photos taken by test users in various specific situations are collected together with the recorded panoramic data and the test users' historical photographing data, and image adjustment parameters are added to these data as labels by manual tagging. Feature matrices are extracted from the sample data in the same manner as in steps 101 to 104 above, each with its corresponding label; the labeled feature matrices are input into a preset convolutional neural network model for training to obtain the weight parameters, completing the training of the model. The trained model is the classification model; its last layer may be a fully-connected layer with a plurality of nodes, each node corresponding to one image adjustment parameter scheme. The second feature matrix obtained in step 1044 is input into the trained convolutional neural network model to obtain the corresponding image adjustment parameters.
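For illustration, a minimal PyTorch sketch of such a classifier follows. The architecture, channel counts, and number of adjustment schemes are assumptions; the patent only specifies a convolutional neural network whose final fully-connected layer has one node per image adjustment parameter scheme.

```python
import torch
import torch.nn as nn

class AdjustmentClassifier(nn.Module):
    def __init__(self, num_schemes: int = 16):   # number of schemes is assumed
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # For a 1 x 100 x 300 input, pooling twice leaves 32 x 25 x 75 features.
        self.fc = nn.Linear(32 * 25 * 75, num_schemes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, 100, 300) second feature matrices; output: scheme logits.
        return self.fc(self.features(x).flatten(1))

# At inference time, the arg-max node indexes the adjustment parameter scheme:
# scheme_id = AdjustmentClassifier()(x).argmax(dim=1)
```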
In another alternative embodiment, an SVM (Support Vector Machine) classification model may be used as the classification model instead of the convolutional neural network model.
step 105, adjusting the original image according to the image adjustment parameters.
In an optional embodiment, the image adjustment parameters include filter parameters, and step 105 of adjusting the original image according to the image adjustment parameters may include: generating prompt information based on the filter parameters, and displaying the original image and the prompt information; and when a confirmation instruction triggered by the prompt information is received, adjusting the original image according to the filter parameters and displaying the adjusted image.
The filter parameters include a brightness parameter, a saturation parameter, a contrast parameter, a sharpness parameter, a color temperature parameter, and the like. It can be understood that after the camera of the electronic device is started, the picture captured by the current camera is displayed in the viewfinder, and when the user triggers a photographing instruction, the captured photo is generated and displayed. In the embodiments of the present application, the process in which the electronic device collects panoramic data and historical photographing data to generate the user feature matrix runs in parallel with the process in which the camera captures the image. Therefore, after the original image obtained by the photographing operation is acquired, it is displayed on the display interface; at the same time, prompt information is generated based on the obtained filter parameters, and the user chooses whether to adjust the original image with the filter parameters generated by the system. If the user is detected to trigger a confirmation instruction based on the prompt information, the original image is adjusted according to the filter parameters and the adjusted image is displayed.
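Applying such filter parameters to the original image can be sketched with Pillow's enhancers; the percentage-offset semantics of the parameters (e.g. contrast +10 meaning a 1.10 factor) is an assumption:

```python
from PIL import Image, ImageEnhance

def apply_filter(original: Image.Image, params: dict) -> Image.Image:
    """Apply brightness/contrast/saturation/sharpness filter parameters,
    each assumed to be a percentage offset relative to the original."""
    img = original
    for key, enhancer in (("brightness", ImageEnhance.Brightness),
                          ("contrast", ImageEnhance.Contrast),
                          ("saturation", ImageEnhance.Color),
                          ("sharpness", ImageEnhance.Sharpness)):
        if key in params:
            img = enhancer(img).enhance(1.0 + params[key] / 100.0)
    return img

# e.g. adjusted = apply_filter(Image.open("original.jpg"),
#                              {"contrast": 10, "saturation": 5})
```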
Alternatively, in another optional embodiment, the image adjustment parameters include filter parameters, and the step of adjusting the original image with the image adjustment parameters may include:
adjusting the original image according to the filter parameters; and displaying the original image and the adjusted image.
In this embodiment, the original image and the adjusted image are displayed on the display interface synchronously for the user to select from. Referring to fig. 5, fig. 5 is a schematic diagram illustrating an image display manner in the photographing processing method according to the embodiment of the present application.
Alternatively, in another optional embodiment, the original image may be adjusted directly according to the image adjustment parameters and the adjusted image displayed. Because acquiring the user feature matrix proceeds in parallel with capturing and generating the original image, the image adjustment parameters are available once photographing is finished; the original image can then be cached, and the image adjustment parameters used directly to render and display it on the display interface. Meanwhile, a control for restoring the original image can be shown on the display interface, and if the user is detected to trigger the corresponding instruction through this control, the original image is restored and displayed.
Optionally, in an embodiment, the image adjustment parameters further include 3A parameters, namely AF (Auto Focus), AE (Auto Exposure), and AWB (Auto White Balance) parameters. After the 3A parameters are obtained, they are not used to adjust the display effect of the image directly; instead, they are set as camera pipeline (image channel) parameters at the bottom layer of the electronic device system to improve the user's photographing and imaging quality. In this way, imaging quality is improved the next time the user uses the camera.
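Purely as an illustration of this difference, the sketch below sets 3A defaults on a hypothetical camera-pipeline object instead of editing the photo; CameraPipeline and its fields are invented stand-ins for the device's bottom-layer camera interface, not a real API.

```python
from dataclasses import dataclass

@dataclass
class CameraPipeline:            # hypothetical stand-in for the bottom-layer pipeline
    af_mode: str = "auto"        # AF: auto focus
    ae_compensation: int = 0     # AE: auto exposure compensation
    awb_mode: str = "auto"       # AWB: auto white balance

def apply_3a(pipeline: CameraPipeline, params: dict) -> CameraPipeline:
    # The 3A parameters configure the pipeline rather than the current photo,
    # so they take effect the next time the camera is used.
    pipeline.af_mode = params.get("af", pipeline.af_mode)
    pipeline.ae_compensation = params.get("ae", pipeline.ae_compensation)
    pipeline.awb_mode = params.get("awb", pipeline.awb_mode)
    return pipeline
```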
As can be seen from the above, in the photographing processing method provided by the embodiments of the present application, when a photographing operation is detected, panoramic data are collected and a panoramic feature vector is generated from them; the user's historical photographing data are acquired and a historical feature vector is generated from them; a user feature matrix is generated from the historical feature vector and the panoramic feature vector; the image obtained by the photographing operation is acquired, image adjustment parameters are obtained from the image, the user feature matrix, and the pre-trained classification model, and the original image is adjusted according to those parameters. In this scheme, the panoramic data collected when the user takes a picture, the user's historical photographing habits, and the captured photo are combined to obtain the user feature matrix in real time; the features reflect both the user's current situation and the user's photographing habits and preferences, so a fine-grained photo optimization scheme with personalized image adjustment parameters can be provided, realizing targeted photo processing for the user.
In one embodiment, a photographing processing apparatus is also provided. Referring to fig. 6, fig. 6 is a schematic structural diagram of a photographing processing apparatus 400 according to an embodiment of the present disclosure. The photographing processing apparatus 400 is applied to an electronic device, and the photographing processing apparatus 400 includes a first feature extraction module 401, a second feature extraction module 402, a feature fusion module 403, a parameter acquisition module 404, and an image adjustment module 405, as follows:
the first feature extraction module 401 is configured to, when a photographing operation is detected, acquire panoramic data, and generate a panoramic feature vector according to the panoramic data.
In the embodiments of the present application, the electronic device may monitor for a photographing operation in real time, for example by detecting whether its camera module has been started: it may detect that the user takes a picture through the camera APP (application) of the electronic device, or that the user triggers a photographing operation through a third-party APP. Alternatively, by monitoring user instructions in real time, a photographing operation is considered detected when an image shooting request is received, and the first feature extraction module 401 starts to collect panoramic data.
When the electronic device detects the photographing operation, it starts to collect panoramic data. The panoramic data include, but are not limited to, terminal state data and sensor state data. The terminal state data comprise the operation mode of the electronic device in each time interval, such as a game mode, an entertainment mode, or a video mode; the operation mode can be determined from the type of the currently running application program, which can be obtained directly from the classification information of the application installation package. The terminal state data may further include the remaining battery level, display mode, network state, and screen-off/lock state of the electronic device.
The sensor state data include the data collected by the sensors of the electronic device, for example a distance sensor, a magnetic field sensor, a light sensor, an acceleration sensor, a fingerprint sensor, a Hall sensor, a position sensor, a gyroscope, an inertial sensor, an attitude sensor, a barometer, and a heart rate sensor. The sensor state data may be obtained at the moment a user instruction is received, or over a period of time before it is received. In some embodiments, the state data of certain sensors may be acquired in a targeted manner. For example, the data collected by the position sensor and the light sensor may be obtained, where the current position information of the electronic device can be determined from the position sensor data, and the light sensor can measure the light intensity of the environment in which the electronic device is currently located.
In some embodiments, the first feature extraction module 401 is further configured to: acquire current terminal state data and sensor state data; generate a terminal state feature according to the terminal state data and a terminal scene feature according to the sensor state data; and fuse the terminal state feature and the terminal scene feature to generate the panoramic feature vector.
A terminal state feature ys1 is generated from the terminal state data. From the sensor state data, the state data of the magnetometer, the accelerometer, and the gyroscope are acquired and processed with a Kalman filtering algorithm to obtain four-dimensional terminal attitude features ys2~ys5. A barometric feature ys6 is obtained from the data collected by the barometer. The WIFI connection state ys7 is determined through the network module. Positioning is performed with the data acquired by the position sensor to obtain the current location attribute of the user (such as mall, home, company, or park) and generate a feature ys8. Furthermore, the 10-axis information of the magnetometer, acceleration sensor, gyroscope, and barometer can be combined, and new multi-dimensional data can be obtained with a filtering algorithm or a principal component analysis algorithm to generate a corresponding feature ys9. For non-numeric features, index numbers can be established to convert them into numeric representations; for example, for the terminal operation mode, an index number represents the current mode, such as 1 for game mode, 2 for entertainment mode, and 3 for video mode, so if the current operation mode is the game mode, the terminal state feature is ys1 = 1. After all numerically represented features are obtained, the feature data are fused into one long vector, which is normalized to obtain the panoramic feature vector s1:

s1 = {ys1, ys2, …, ysn}

The more kinds of panoramic data the first feature extraction module 401 collects, the longer the generated panoramic feature vector s1 and the larger the value of n. The features ys1~ys9 above are examples only; the present application is not limited to them, and features of more dimensions may be obtained according to actual needs.
The second feature extraction module 402 is configured to acquire historical photographing data and generate a historical feature vector according to the historical photographing data.
The electronic device records and regularly updates the user's historical photographing data, including historical photo editing schemes, records of calls to third-party photographing APPs, records of calls to the system's own photographing APP, photo sharing information, and the like. The information about third-party photographing APPs helps mine the retouching styles the user likes, because third-party photographing APPs on the market are highly specialized: for example, one camera APP mainly processes selfie portraits, while Prime mainly performs style transfer on landscape or portrait images, so the user's retouching preference can be inferred from which APP is called. Generally, the installation package information of a third-party photographing APP has multiple layers, including the type of retouching software to which the APP belongs; by recording which third-party photographing APP is called, the retouching software type it belongs to can be obtained, and that type reflects the user's retouching preference.
In addition, in some current electronic devices the system's own photographing APP also has a retouching function, so calls to the system's photographing APP can be recorded as well. Different photo sharing habits also reflect the photo styles the user prefers. For example, if a user shares photos to Instagram (a social application for picture sharing), it indicates a preference for European-and-American-style photos, while sharing photos to Qzone (QQ space) indicates a preference for a youthful style; the user's photo sharing information can therefore be used as historical photographing data.
Furthermore, when the user manually adjusts parameters such as brightness, contrast, sharpness, saturation, or color temperature to change the display effect of a photo, these parameters are recorded as a historical photo editing scheme. For example, if after taking a food photo the user usually raises the contrast (+10) and saturation (+5) manually so that the picture has more vivid but not oversaturated colors, then parameters such as contrast (+10) and saturation (+5) are recorded as a historical photo editing scheme. The historical photo editing schemes, third-party photographing APP call records, system photographing APP call records, photo sharing information, and the like are all historical data recorded and stored under a preset path when the corresponding user operation is detected; the longer the user has used the electronic device, the more historical data are recorded and the stronger the reference value of the data.
After these data are acquired, the second feature extraction module 402 generates a historical feature vector s2 from the historical photographing data:

s2 = {ya1, ya2, …, yam}
A feature fusion module 403, configured to generate a user feature matrix according to the historical feature vector and the panoramic feature vector.
The feature fusion module 403 fuses the panoramic feature vector s1 generated by the first feature extraction module 401 and the historical feature vector s2 generated by the second feature extraction module 402 to generate the user feature matrix. The feature vectors can be fused in many ways. In the first way, s1 and s2 are concatenated to generate a long vector of length m + n, that is, a matrix with m + n columns and 1 row. The user feature matrix is then as follows:

{ys1, ys2, …, ysn, ya1, ya2, …, yam}
In the second way, the feature fusion module 403 is further configured to perform matrix superposition on the historical feature vector and the panoramic feature vector to generate the user feature matrix. That is, s1 and s2 are stacked into a matrix with m (or n) columns and 2 rows. If n < m, the panoramic feature vector s1 is extended to length m by zero padding; if n > m, the historical feature vector s2 is extended to length n by zero padding. When n > m, the user feature matrix obtained by stacking is:

| ys1  ys2  …  …   …  ysn |
| ya1  ya2  …  yam  0  …  0 |
a parameter obtaining module 404, configured to, when a photographing instruction is received, obtain an original image obtained by the photographing operation, and obtain an image adjustment parameter according to the original image, the user feature matrix, and a pre-trained classification model.
After the photographing operation is completed and the original image is obtained, image adjustment parameters suited to the current original image are generated by combining the obtained user feature matrix, the original image, and the pre-trained classification model. In some embodiments, the parameter acquisition module 404 is further configured to: compress the image according to a preset length and a preset width, and generate a pixel matrix from the compressed image; convert the user feature matrix into a first feature matrix whose number of rows matches the preset width; merge the pixel matrix and the first feature matrix to generate a second feature matrix; and acquire the image adjustment parameters according to the second feature matrix and the pre-trained classification model.
In an optional implementation, the parameter acquisition module 404 is further configured to: convert the first feature matrix into a third feature matrix according to a Hilbert matrix; and merge the pixel matrix and the third feature matrix to generate the second feature matrix. The Hilbert matrix is a mathematical transformation matrix that is positive definite and highly ill-conditioned: when any element changes slightly, the determinant and the inverse of the whole matrix change greatly, and the degree of ill-conditioning is related to the order. Multiplying by the Hilbert matrix helps bring out more distinctive characteristics and patterns in the data. Therefore, converting the first feature matrix with the Hilbert matrix before merging helps the classification model better discover the features in the data.
Because the image shot by the camera of an electronic device is generally high-definition, that is, it has a large number of pixels, directly fusing the image information with the user feature matrix would produce a large data volume and slow down the computation. Therefore, the image is compressed first. The preset length and preset width may be set in advance, and the preset length is generally greater than or equal to the number of columns of the user feature matrix.
For example, assume the preset length equals 200 pixels, the preset width equals 100 pixels, and the size of the image obtained by the camera is 2736 x 3648; the image is then compressed to 100 x 200 using image compression techniques. Assuming the user feature matrix has 50 columns and 2 rows, it is converted into the first feature matrix by tiling it twice in the transverse direction and fifty times in the longitudinal direction, giving a first feature matrix of size 100 x 100, as follows:
| M  M |
| M  M |
| ⋮  ⋮ |
| M  M |

where M is the 2 x 50 user feature matrix, repeated twice in the transverse direction and fifty times in the longitudinal direction.
Next, the first feature matrix is converted into a third feature matrix according to the Hilbert matrix, where the elements of the Hilbert matrix are

h(i, j) = 1 / (i + j - 1), i, j = 1, 2, …
The pixel matrix of the original image, of size 100 x 200, and the third feature matrix, of size 100 x 100, are merged to obtain a feature matrix of size 100 x 300 as the second feature matrix. This two-dimensional matrix contains the feature data of the original image as well as the features extracted from the panoramic data, which represent the current scene state of the electronic device, and the features extracted from the user's historical photographing data, which represent the user's preferences regarding photo style.
The two-dimensional feature matrix is then used as the input data of the pre-trained classification model to obtain the image adjustment parameters. In some embodiments, the parameter acquisition module 404 is further configured to: acquire image adjustment parameters according to the second feature matrix and a preset convolutional neural network model.
In the embodiment of the application, the classification model is built with a convolutional neural network, and a large amount of sample data is collected in advance to train the convolutional neural network model. For example, photos taken by test users in various specific situations are collected, the corresponding panoramic data are recorded, the historical photographing data of the test users are obtained, and image adjustment parameters are attached to the data as labels by manual tagging. Feature matrices are extracted from the sample data, each carrying its corresponding label, and the labeled feature matrices are input into a preset convolutional neural network model for training to obtain the weight parameters, which completes the training of the convolutional neural network model. The trained model is the classification model; its last layer may be a fully connected layer with a plurality of nodes, each node corresponding to one image adjustment parameter scheme. Inputting the second feature matrix into the trained convolutional neural network model then yields the corresponding image adjustment parameters.
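A hedged PyTorch sketch of such a classification model follows; the layer sizes, channel counts and the number of adjustment-parameter schemes (num_schemes) are invented for illustration and are not specified by the patent.

```python
# Minimal sketch: CNN classifier whose final fully connected layer has
# one node per image-adjustment-parameter scheme.
import torch
import torch.nn as nn

class AdjustmentClassifier(nn.Module):
    def __init__(self, num_schemes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 100 x 300 -> 50 x 150
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 50 x 150 -> 25 x 75
        )
        # Final fully connected layer: one node per adjustment-parameter scheme.
        self.classifier = nn.Linear(32 * 25 * 75, num_schemes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# The 100 x 300 second feature matrix enters as a single-channel "image".
model = AdjustmentClassifier()
logits = model(torch.randn(1, 1, 100, 300))  # scores over parameter schemes
scheme = logits.argmax(dim=1)                # index of the chosen scheme
```

Training would proceed in the usual supervised way, e.g. with a cross-entropy loss over the manually labeled samples described above.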
In another optional embodiment, the classification model may be an SVM (Support Vector Machine) classification model instead of a convolutional neural network model.
An image adjusting module 405, configured to adjust the original image according to the image adjusting parameter.
In an alternative embodiment, the image adjustment parameters include filtering parameters, and the image adjustment module 405 is further configured to: generating prompt information based on the filtering parameters, and displaying the original image and the prompt information; and when a confirmation instruction triggered based on the prompt information is received, adjusting the original image according to the filtering parameters, and displaying the adjusted image.
The filter parameters include a brightness parameter, a saturation parameter, a contrast parameter, a sharpness parameter, a color temperature parameter, and the like. It can be understood that after the electronic device starts the camera, the picture captured by the current camera is displayed in the viewfinder, and when the user triggers a photographing instruction, the shot picture is generated and displayed. In the embodiment of the application, the process in which the electronic device collects the panoramic data and the historical photographing data to generate the user feature matrix runs in parallel with the process in which the camera captures the image. Therefore, after the original image obtained by the photographing operation is acquired, it is displayed on the display interface; meanwhile, the image adjustment module 405 generates prompt information based on the acquired filter parameters, and the user chooses whether to adjust the original image with the filter parameters generated by the system. If the user is detected to trigger a confirmation instruction based on the prompt information, the original image is adjusted according to the filter parameters and the adjusted image is displayed.
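As an illustration only, applying confirmed filter parameters could look like the following Pillow sketch; the parameter dictionary, the factor convention (1.0 = unchanged) and the file path are assumptions, and color temperature is omitted because Pillow has no built-in adjustment for it.

```python
# Minimal sketch: apply predicted filter parameters after user confirmation.
from PIL import Image, ImageEnhance

def apply_filter(image: Image.Image, params: dict) -> Image.Image:
    # Each enhancement factor defaults to 1.0, meaning "unchanged".
    image = ImageEnhance.Brightness(image).enhance(params.get("brightness", 1.0))
    image = ImageEnhance.Color(image).enhance(params.get("saturation", 1.0))
    image = ImageEnhance.Contrast(image).enhance(params.get("contrast", 1.0))
    image = ImageEnhance.Sharpness(image).enhance(params.get("sharpness", 1.0))
    return image

original = Image.open("photo.jpg")  # hypothetical path
adjusted = apply_filter(original, {"brightness": 1.1, "saturation": 1.2})
```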
Alternatively, in another alternative embodiment, the image adjustment parameters include filtering parameters, and the image adjustment module 405 is further configured to: adjusting the original image according to the filtering parameters; and displaying the original image and the adjusted image.
In this embodiment, the original image and the adjusted image are displayed on the display interface synchronously for the user to select from.
Alternatively, in another alternative embodiment, the image adjustment module 405 may adjust the original image directly according to the image adjustment parameters and display the adjusted image. Because the user feature matrix is acquired in parallel with the camera capturing and generating the original image, the image adjustment parameters are available once photographing is finished; at this point the original image can be cached, and the image adjustment parameters used directly to render the image on the display interface. Meanwhile, a control for restoring the original image can be displayed on the display interface, and if the user is detected to trigger the corresponding instruction based on the control, the original image is restored and displayed.
Optionally, in an embodiment, the image adjustment parameters further include 3A parameters, namely AF (Auto Focus), AE (Auto Exposure) and AWB (Auto White Balance) parameters. After the 3A parameters are obtained, they are not used to adjust the display effect of the image directly; instead, they are used to set the parameters of the camera pipeline (image channel) at the bottom layer of the electronic device system, which improves the user's photographing and imaging quality. Image quality is thus improved the next time the user uses the camera.
As can be seen from the above, in the photographing processing apparatus provided in this embodiment of the application, when the first feature extraction module 401 detects a photographing operation, it collects panoramic data and generates a panoramic feature vector from them. The second feature extraction module 402 acquires the user's historical photographing data and generates a historical feature vector from them. The feature fusion module 403 generates a user feature matrix from the historical feature vector and the panoramic feature vector. The parameter acquisition module 404 then acquires the image obtained by the photographing operation and acquires image adjustment parameters according to the image, the user feature matrix and the pre-trained classification model, and the image adjustment module 405 adjusts the original image according to the image adjustment parameters. In this scheme, the panoramic data collected when the user takes a picture, the user's historical photographing habits and the shot picture are combined to obtain the user feature matrix in real time. This representation not only reflects the user's current situation but also incorporates the user's photographing habits and preferences, providing a fine-grained photo optimization scheme, generating personalized image adjustment parameters, and realizing a targeted photo processing scheme for the user.
The embodiment of the application also provides the electronic equipment. The electronic device can be a smart phone, a tablet computer and the like. As shown in fig. 7, fig. 7 is a schematic view of a first structure of an electronic device according to an embodiment of the present application. The electronic device 300 comprises a processor 301 and a memory 302. The processor 301 is electrically connected to the memory 302.
The processor 301 is the control center of the electronic device 300. It connects the various parts of the entire electronic device using various interfaces and lines, and performs the various functions of the electronic device and processes data by running or calling the computer program stored in the memory 302 and calling the data stored in the memory 302, thereby monitoring the electronic device as a whole.
In this embodiment, the processor 301 in the electronic device 300 loads instructions corresponding to one or more processes of the computer program into the memory 302 according to the following steps, and the processor 301 runs the computer program stored in the memory 302, so as to implement various functions:
when the photographing operation is detected, collecting panoramic data, and generating a panoramic feature vector according to the panoramic data;
acquiring historical photographing data, and generating a historical feature vector according to the photographing data;
generating a user feature matrix according to the historical feature vector and the panoramic feature vector;
when a photographing instruction is received, acquiring an original image obtained by the photographing operation, and acquiring image adjustment parameters according to the original image, the user characteristic matrix and a pre-trained classification model;
and adjusting the original image according to the image adjusting parameter.
In some embodiments, the panoramic data includes terminal state data and sensor state data; when collecting panoramic data and generating a panoramic feature vector from the panoramic data, the processor 301 performs the following steps:
acquiring current terminal state data and sensor state data;
generating terminal state characteristics according to the terminal state data, and generating terminal scene characteristics according to the sensor state data;
and fusing the terminal state characteristic and the terminal scene characteristic to generate the panoramic feature vector.
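A minimal sketch of this fusion step (an illustration only; the example features and the use of concatenation as the fusion method are assumptions, as the patent does not fix either):

```python
# Minimal sketch: fuse terminal state features and terminal scene features.
import numpy as np

terminal_state = np.array([1.0, 0.0, 0.3])  # e.g. screen on, charging, battery level
terminal_scene = np.array([0.8, 0.1])       # e.g. ambient light, motion intensity

# Concatenation is one simple form of fusion into a panoramic feature vector.
panoramic_vector = np.concatenate([terminal_state, terminal_scene])
```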
In some embodiments, when generating the user feature matrix from the historical feature vector and the panoramic feature vector, the processor 301 performs the following steps:
and performing matrix superposition processing on the historical characteristic vector and the panoramic characteristic vector to generate a user characteristic matrix.
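A minimal numpy sketch of this superposition step (assuming "matrix superposition" means stacking the two vectors as rows, consistent with the 2 × 50 user feature matrix in the earlier worked example):

```python
# Minimal sketch: stack the two feature vectors into a user feature matrix.
import numpy as np

historical = np.random.rand(50)  # stand-in for the historical feature vector
panoramic = np.random.rand(50)   # stand-in for the panoramic feature vector

# Stacking the two row vectors yields a 2 x 50 user feature matrix.
user_matrix = np.vstack([historical, panoramic])
assert user_matrix.shape == (2, 50)
```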
In some embodiments, when obtaining the image adjustment parameter according to the image, the user feature matrix, and the pre-trained classification model, the processor 301 performs the following steps:
compressing the image according to a preset length and a preset width, and generating a pixel matrix according to the compressed image;
converting the user characteristic matrix into a first characteristic matrix, wherein the row number of the first characteristic matrix is matched with the preset width;
converting the first feature matrix into a third feature matrix according to a Hilbert matrix;
combining the pixel matrix and the third feature matrix to generate a second feature matrix;
and acquiring image adjustment parameters according to the second feature matrix and a pre-trained classification model.
In some embodiments, when obtaining the image adjustment parameter according to the second feature matrix and the pre-trained classification model, the processor 301 performs the following steps:
and acquiring image adjustment parameters according to the second characteristic matrix and a preset convolutional neural network model.
In some embodiments, the image adjustment parameters include filtering parameters, and when the original image is adjusted using the image adjustment parameters, the processor 301 performs the following steps:
generating prompt information based on the filtering parameters, and displaying the original image and the prompt information;
and when a confirmation instruction triggered based on the prompt information is received, adjusting the original image according to the filtering parameters, and displaying the adjusted image.
In some embodiments, when receiving a confirmation instruction triggered based on the hint information, after the step of adjusting the original image according to the filtering parameters, the processor 301 performs the following steps:
and updating the historical photographing data according to the filtering parameters.
In some embodiments, the image adjustment parameters further include a 3A parameter, and after the step of adjusting the original image using the image adjustment parameters, the processor 301 performs the steps of:
and resetting image channel parameters according to the 3A parameters.
The memory 302 may be used to store computer programs and data. The memory 302 stores computer programs containing instructions executable by the processor. The computer programs may constitute various functional modules. The processor 301 executes various functional applications and data processing by calling the computer program stored in the memory 302.
In some embodiments, as shown in fig. 8, fig. 8 is a second schematic structural diagram of an electronic device provided in the embodiments of the present application. The electronic device 300 further includes: radio frequency circuit 303, display screen 304, control circuit 305, input unit 306, audio circuit 307, sensor 308, and power supply 309. The processor 301 is electrically connected to the rf circuit 303, the display 304, the control circuit 305, the input unit 306, the audio circuit 307, the sensor 308, and the power source 309, respectively.
The radio frequency circuit 303 is used for transceiving radio frequency signals to communicate with a network device or other electronic devices through wireless communication.
The display screen 304 may be used to display information entered by or provided to the user, as well as various graphical user interfaces of the electronic device, which may be composed of images, text, icons, video, and any combination thereof.
The control circuit 305 is electrically connected to the display screen 304, and is used for controlling the display screen 304 to display information.
The input unit 306 may be used to receive input numbers, character information, or user characteristic information (e.g., fingerprint), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. The input unit 306 may include a fingerprint recognition module.
The audio circuit 307 may provide an audio interface between the user and the electronic device through a speaker and a microphone. The audio circuit 307 includes a microphone, which is electrically connected to the processor 301 and is used to receive voice information input by the user.
The sensor 308 is used to collect external environmental information. The sensor 308 may include one or more of an ambient light sensor, an acceleration sensor, a gyroscope, and the like.
The power supply 309 is used to power the various components of the electronic device 300. In some embodiments, the power supply 309 may be logically coupled to the processor 301 through a power management system, so that functions such as managing charging, discharging, and power consumption are performed through the power management system.
Although not shown in fig. 8, the electronic device 300 may further include a camera, a bluetooth module, and the like, which are not described in detail herein.
As can be seen from the above, an embodiment of the present application provides an electronic device, where when the electronic device detects a photographing operation, panoramic data is collected, and a panoramic feature vector is generated according to the panoramic data; acquiring historical photographing data, and generating a historical feature vector according to the photographing data; generating a user feature matrix according to the historical feature vector and the panoramic feature vector; when a photographing instruction is received, acquiring an original image obtained by the photographing operation, and acquiring image adjustment parameters according to the original image, the user characteristic matrix and a pre-trained classification model; and adjusting the original image according to the image adjusting parameter.
An embodiment of the present application further provides a storage medium, where a computer program is stored in the storage medium, and when the computer program runs on a computer, the computer executes the photographing processing method according to any one of the above embodiments.
It should be noted that, all or part of the steps in the methods of the above embodiments may be implemented by hardware related to instructions of a computer program, which may be stored in a computer-readable storage medium, which may include, but is not limited to: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
The photographing processing method, the photographing processing apparatus, the storage medium, and the electronic device provided in the embodiments of the present application are described in detail above. The principle and the implementation of the present application are explained herein by applying specific examples, and the above description of the embodiments is only used to help understand the method and the core idea of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (12)

1. A photographing processing method, comprising:
when the photographing operation is detected, collecting panoramic data, and generating a panoramic feature vector according to the panoramic data;
acquiring historical photographing data, and generating a historical feature vector according to the photographing data;
generating a user feature matrix according to the historical feature vector and the panoramic feature vector;
when a photographing instruction is received, acquiring an original image obtained by the photographing operation, and acquiring image adjustment parameters according to the original image, the user characteristic matrix and a pre-trained classification model;
and adjusting the original image according to the image adjusting parameter.
2. The photo processing method of claim 1, wherein the panoramic data includes terminal status data and sensor status data; collecting panoramic data, and generating panoramic eigenvectors according to the panoramic data, including:
acquiring current terminal state data and sensor state data;
generating terminal state characteristics according to the terminal state data, and generating terminal scene characteristics according to the sensor state data;
and fusing the terminal state characteristic and the terminal scene characteristic to generate the panoramic feature vector.
3. The photographing processing method of claim 1, wherein the step of generating a user feature matrix from the historical feature vector and the panoramic feature vector comprises:
and performing matrix superposition processing on the historical characteristic vector and the panoramic characteristic vector to generate a user characteristic matrix.
4. The photographing processing method of claim 1, wherein the step of obtaining image adjustment parameters according to the image, the user feature matrix and a pre-trained classification model comprises:
compressing the image according to a preset length and a preset width, and generating a pixel matrix according to the compressed image;
converting the user characteristic matrix into a first characteristic matrix, wherein the row number of the first characteristic matrix is matched with the preset width;
combining the pixel matrix and the first feature matrix to generate a second feature matrix;
and acquiring image adjustment parameters according to the second feature matrix and a pre-trained classification model.
5. The photographing processing method according to claim 4, wherein the step of combining the pixel matrix and the first feature matrix to generate a second feature matrix comprises:
converting the first feature matrix into a third feature matrix according to a Hilbert matrix;
and combining the pixel matrix and the third feature matrix to generate the second feature matrix.
6. The photographing processing method of claim 4, wherein the step of obtaining image adjustment parameters according to the second feature matrix and a pre-trained classification model comprises:
and acquiring image adjustment parameters according to the second characteristic matrix and a preset convolutional neural network model.
7. The photo processing method of any of claims 1 to 6 wherein the image adjustment parameters include filter parameters, and the step of adjusting the original image using the image adjustment parameters comprises:
generating prompt information based on the filtering parameters, and displaying the original image and the prompt information;
and when a confirmation instruction triggered based on the prompt information is received, adjusting the original image according to the filtering parameters, and displaying the adjusted image.
8. The photo processing method of claim 7, wherein after the step of adjusting the original image according to the filter parameter when receiving a confirmation instruction triggered based on the prompt information, the method further comprises:
and updating the historical photographing data according to the filtering parameters.
9. The photo processing method of claim 7 wherein the image adjustment parameters further include a 3A parameter, and wherein after the step of adjusting the original image using the image adjustment parameters, the method further comprises:
and resetting image channel parameters according to the 3A parameters.
10. A photographing processing apparatus, comprising:
the first feature extraction module is used for collecting panoramic data when a photographing operation is detected, and generating a panoramic feature vector according to the panoramic data;
the second feature extraction module is used for acquiring historical photographing data and generating a historical feature vector according to the photographing data;
the feature fusion module is used for generating a user feature matrix according to the historical feature vector and the panoramic feature vector;
the parameter acquisition module is used for acquiring an original image obtained by the photographing operation when a photographing instruction is received, and acquiring image adjustment parameters according to the original image, the user feature matrix and a pre-trained classification model;
and the image adjusting module is used for adjusting the original image according to the image adjusting parameters.
11. A storage medium having a computer program stored thereon, wherein the computer program, when run on a computer, causes the computer to execute the photographing processing method according to any one of claims 1 to 9.
12. An electronic device comprising a processor and a memory, the memory storing a computer program, wherein the processor is configured to execute the photographing processing method according to any one of claims 1 to 9 by calling the computer program.
CN201910282456.4A 2019-04-09 2019-04-09 Photographing processing method and device, storage medium and electronic equipment Active CN111800569B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910282456.4A CN111800569B (en) 2019-04-09 2019-04-09 Photographing processing method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910282456.4A CN111800569B (en) 2019-04-09 2019-04-09 Photographing processing method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111800569A true CN111800569A (en) 2020-10-20
CN111800569B CN111800569B (en) 2022-02-22

Family

ID=72805685

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910282456.4A Active CN111800569B (en) 2019-04-09 2019-04-09 Photographing processing method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111800569B (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150189181A1 (en) * 2010-07-26 2015-07-02 Apple Inc. System and Method For Contextual Digital Photography Mode Selection
US20150310301A1 (en) * 2011-09-24 2015-10-29 Lotfi A. Zadeh Analyzing or resolving ambiguities in an image for object or pattern recognition
US20140321747A1 (en) * 2013-04-28 2014-10-30 Tencent Technology (Shenzhen) Co., Ltd. Method, apparatus and terminal for detecting image stability
JP2018014686A (en) * 2016-07-22 2018-01-25 日本電信電話株式会社 Camera calibration device, camera calibration method and camera calibration program
CN107944035A (en) * 2017-12-13 2018-04-20 合肥工业大学 A kind of image recommendation method for merging visual signature and user's scoring
CN108174096A (en) * 2017-12-29 2018-06-15 广东欧珀移动通信有限公司 Method, apparatus, terminal and the storage medium of acquisition parameters setting
CN109299994A (en) * 2018-07-27 2019-02-01 北京三快在线科技有限公司 Recommended method, device, equipment and readable storage medium storing program for executing
CN109344314A (en) * 2018-08-20 2019-02-15 腾讯科技(深圳)有限公司 A kind of data processing method, device and server

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MA Zelong, "Automatic Exposure Method for High-Speed Cameras Using Image Histogram Feature Functions", Optics and Precision Engineering *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112307227A (en) * 2020-11-24 2021-02-02 国家电网有限公司大数据中心 Data classification method
CN112307227B (en) * 2020-11-24 2023-08-29 国家电网有限公司大数据中心 Data classification method
CN113329173A (en) * 2021-05-19 2021-08-31 Tcl通讯(宁波)有限公司 Image optimization method and device, storage medium and terminal equipment
CN113840086A (en) * 2021-09-06 2021-12-24 联想(北京)有限公司 Information processing method and electronic equipment
CN117014561A (en) * 2023-09-26 2023-11-07 荣耀终端有限公司 Information fusion method, training method of variable learning and electronic equipment
CN117014561B (en) * 2023-09-26 2023-12-15 荣耀终端有限公司 Information fusion method, training method of variable learning and electronic equipment

Also Published As

Publication number Publication date
CN111800569B (en) 2022-02-22

Similar Documents

Publication Publication Date Title
US11244170B2 (en) Scene segmentation method and device, and storage medium
CN111800569B (en) Photographing processing method and device, storage medium and electronic equipment
JP7058760B2 (en) Image processing methods and their devices, terminals and computer programs
KR101727169B1 (en) Method and apparatus for generating image filter
CN109087376B (en) Image processing method, image processing device, storage medium and electronic equipment
CN103152489A (en) Showing method and device for self-shooting image
CN112287852B (en) Face image processing method, face image display method, face image processing device and face image display equipment
CN113411498B (en) Image shooting method, mobile terminal and storage medium
CN110290426B (en) Method, device and equipment for displaying resources and storage medium
CN105427369A (en) Mobile terminal and method for generating three-dimensional image of mobile terminal
WO2022001806A1 (en) Image transformation method and apparatus
CN114096994A (en) Image alignment method and device, electronic equipment and storage medium
CN112052897A (en) Multimedia data shooting method, device, terminal, server and storage medium
CN114926351A (en) Image processing method, electronic device, and computer storage medium
CN114741559A (en) Method, apparatus and storage medium for determining video cover
WO2021180046A1 (en) Image color retention method and device
CN113810588B (en) Image synthesis method, terminal and storage medium
CN113642359B (en) Face image generation method and device, electronic equipment and storage medium
CN110047115B (en) Star image shooting method and device, computer equipment and storage medium
CN115115679A (en) Image registration method and related equipment
CN111800538B (en) Information processing method, device, storage medium and terminal
CN113407774A (en) Cover determining method and device, computer equipment and storage medium
CN112907702A (en) Image processing method, image processing device, computer equipment and storage medium
CN111796928A (en) Terminal resource optimization method and device, storage medium and terminal equipment
CN117014561B (en) Information fusion method, training method of variable learning and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant