CN109819167B - Image processing method and device and mobile terminal - Google Patents
- Publication number: CN109819167B (application CN201910100756.6A)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- Studio Devices (AREA)
- Telephone Function (AREA)
Abstract
The invention provides an image processing method, an image processing apparatus, and a mobile terminal, and relates to the technical field of image processing. The method comprises the following steps: acquiring a user characteristic attribute; determining shooting parameters and a first processing parameter corresponding to the user characteristic attribute; when the user posture in the shooting preview interface matches the target posture corresponding to the shooting parameters, shooting to obtain a target image; and processing the target image according to the first processing parameter. In the invention, the mobile terminal can present the target posture indicated by the shooting parameters corresponding to the user characteristic attribute as a reference for the shooting user, shoot the target image when the user posture matches the target posture, and then process the target image according to the first processing parameter corresponding to the user characteristic attribute, so that the processed target image better conforms to the user's preference. In this way, the user's satisfaction with the final image can be improved and the shooting efficiency increased.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, and a mobile terminal.
Background
With the rapid development of mobile terminal technology, electronic photos have gradually become a mainstream way of recording people's lives and appearance. Typically, a user takes an electronic photo with a mobile terminal and then retouches it or shares it with relatives and friends.
Users generally hope to preserve their best moments in photos. In practice, however, a user is often dissatisfied with most of the photos taken and will shoot repeatedly until a satisfactory photo is obtained. The user therefore has to spend a large amount of shooting time to obtain a satisfactory photo, resulting in low shooting efficiency.
Disclosure of Invention
The invention provides an image processing method, an image processing device and a mobile terminal, and aims to solve the problem that a user needs to consume a large amount of shooting time to obtain a satisfactory photo, so that the shooting efficiency is low.
In order to solve the technical problem, the invention is realized as follows: an image processing method applied to a mobile terminal comprises the following steps:
acquiring a user characteristic attribute;
determining shooting parameters and first processing parameters corresponding to the user characteristic attributes;
when the user posture in the shooting preview interface matches the target posture corresponding to the shooting parameters, shooting to obtain a target image;
and processing the target image according to the first processing parameter.
In a first aspect, an embodiment of the present invention further provides an image processing apparatus, where the apparatus includes:
the first acquisition module is used for acquiring the characteristic attribute of the user;
the first determining module is used for determining shooting parameters and first processing parameters corresponding to the user characteristic attributes;
the shooting module is used for shooting to obtain a target image when the user posture in the shooting preview interface matches the target posture corresponding to the shooting parameters;
and the processing module is used for processing the target image according to the first processing parameter.
In a second aspect, an embodiment of the present invention further provides a mobile terminal, where the mobile terminal includes a processor, a memory, and a computer program stored on the memory and executable on the processor, and when the computer program is executed by the processor, the steps of the image processing method according to the present invention are implemented.
In a third aspect, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the steps of the image processing method according to the present invention.
In the embodiment of the invention, the mobile terminal can first acquire the user characteristic attribute, then determine the shooting parameters and the first processing parameter corresponding to the user characteristic attribute, shoot to obtain the target image when the user posture in the shooting preview interface matches the target posture corresponding to the shooting parameters, and process the target image according to the first processing parameter. The mobile terminal can present the target posture indicated by the shooting parameters as a reference for the shooting user, shoot the target image when the user posture matches the target posture, and then process the target image according to the first processing parameter corresponding to the user characteristic attribute, so that the processed target image better conforms to the user's preference. In this way, the user's satisfaction with the final image can be improved and the shooting efficiency increased.
Drawings
FIG. 1 is a flow chart of an image processing method according to a first embodiment of the invention;
FIG. 2 is a flow chart of an image processing method according to a second embodiment of the present invention;
FIG. 3 is a block diagram showing a configuration of an image processing apparatus according to a third embodiment of the present invention;
FIG. 4 is a block diagram showing another image processing apparatus according to a third embodiment of the present invention;
FIG. 5 is a diagram illustrating a hardware structure of a mobile terminal according to various embodiments of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example one
Referring to fig. 1, a flowchart of an image processing method according to a first embodiment of the present invention is shown, which may specifically include the following steps:
Step 101, acquiring a user characteristic attribute.

In the embodiment of the invention, when a user needs to take a picture and triggers the mobile terminal to open a shooting preview interface, the mobile terminal may first acquire user characteristic attributes, such as user gender, user age, user skin, user height, and user body type. In practical applications, different user groups or individual users usually have shooting preferences corresponding to their group or personal characteristics, such as particular postures and expressions, as well as preferred post-shooting processing effects, such as low image brightness or a bright image tone. The mobile terminal therefore first obtains the user characteristic attributes, and then determines from them how to provide targeted posture guidance for the user who currently needs to shoot, and how to perform targeted post-processing on the captured image.
Step 102, determining shooting parameters and first processing parameters corresponding to the user characteristic attributes.
In the embodiment of the present invention, the mobile terminal may store in advance the shooting parameters and first processing parameters corresponding to different user characteristic attributes. Each shooting parameter and first processing parameter may be preset for a particular user group, or may be obtained by long-term collection, learning, and analysis of the preference habits of individual users with different user characteristic attributes; this is not specifically limited in the embodiment of the present invention.
The shooting parameters can represent a target posture that conforms to the user's preference, that is, a posture provided for the user's reference, so that targeted shooting guidance can be given on standing position, posture, expression, and the like. The first processing parameter may include at least one processing parameter corresponding to an image processing effect, such as a processing parameter corresponding to a caricature filter effect or a processing parameter corresponding to a face-brightening effect.
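The mapping described above can be sketched as a simple lookup table keyed by user characteristic attributes. This is a minimal illustration, not the patent's implementation; all profile keys, parameter names, and values below are invented for the example.

```python
# Hypothetical pre-stored table mapping a (gender, age_group) profile to its
# target-posture shooting parameters and first processing parameters.
PARAMETER_TABLE = {
    ("female", "young"): {
        "shooting": {"position": "center", "expression": "smile"},
        "processing": {"brightness": 1.1, "skin_smoothing": 0.6},
    },
    ("male", "adult"): {
        "shooting": {"position": "center", "expression": "neutral"},
        "processing": {"brightness": 1.0, "skin_smoothing": 0.2},
    },
}

# Fallback used when no stored profile matches the detected attributes.
DEFAULT_PARAMETERS = {
    "shooting": {"position": "center", "expression": "neutral"},
    "processing": {"brightness": 1.0, "skin_smoothing": 0.0},
}

def lookup_parameters(gender, age_group):
    """Return shooting and first processing parameters for a user profile."""
    return PARAMETER_TABLE.get((gender, age_group), DEFAULT_PARAMETERS)

params = lookup_parameters("female", "young")
```

In a real terminal the table would be populated and refined from the user's scoring history rather than hard-coded.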
Step 103, when the user posture in the shooting preview interface matches the target posture corresponding to the shooting parameters, shooting to obtain a target image.
In the embodiment of the invention, the mobile terminal can present the target posture corresponding to the shooting parameters to the user as shooting suggestion information for guiding the posture, and the user can then adjust his or her posture, including standing position, pose, and expression, according to the shooting suggestion information provided by the mobile terminal.
Step 104, processing the target image according to the first processing parameter.
In the embodiment of the present invention, after obtaining the target image, the mobile terminal may process the target image according to the first processing parameter corresponding to the current user, that is, apply the image processing effect preferred by the current user, so as to obtain the processed target image.
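As a toy illustration of applying a first processing parameter, the sketch below scales pixel intensities by a brightness gain. The image representation (a nested list of intensities) and the `brightness` key are assumptions for the example; a real terminal would use an image-processing library.

```python
def apply_processing(image, params):
    """Scale pixel intensities by a brightness gain, clamped to [0, 255]."""
    gain = params.get("brightness", 1.0)
    return [[min(255, int(round(px * gain))) for px in row] for row in image]

# A 2x2 grayscale "target image" processed with a 1.2x brightness gain.
target_image = [[100, 200], [50, 255]]
processed = apply_processing(target_image, {"brightness": 1.2})
```

Saturation, hue, and portrait parameters mentioned later in the document would each add an analogous per-pixel or per-region transform.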
In the embodiment of the invention, the mobile terminal can first acquire the user characteristic attribute, then determine the shooting parameters and the first processing parameter corresponding to the user characteristic attribute, shoot to obtain the target image when the user posture in the shooting preview interface matches the target posture corresponding to the shooting parameters, and process the target image according to the first processing parameter. The mobile terminal can present the target posture indicated by the shooting parameters as a reference for the shooting user, shoot the target image when the user posture matches the target posture, and then process the target image according to the first processing parameter corresponding to the user characteristic attribute, so that the processed target image better conforms to the user's preference. In this way, the user's satisfaction with the final image can be improved and the shooting efficiency increased.
Example two
Referring to fig. 2, a flowchart of an image processing method according to a second embodiment of the present invention is shown, which may specifically include the following steps:
Step 201, acquiring a user characteristic attribute.

In the embodiment of the invention, the mobile terminal can have a voice recognition function and thus interact with the user by voice. For example, when the user needs to take a picture, the mobile terminal can be awakened into an intelligent shooting state by a specific wake word such as "help me take a picture" or "help me take a beauty picture"; the mobile terminal then opens the shooting preview interface and starts to acquire the user characteristic attribute.
Furthermore, in a specific implementation, the mobile terminal can be provided with a rotatable camera, so that when the user speaks, the terminal can identify the direction of the user's voice, determine the user's current position, and control the rotatable camera to automatically point at the user, so that the user appears in the shooting preview interface.
In addition, after the rotatable camera is aimed at the user, the mobile terminal can prompt the user by voice to specify the shooting range. For example, the mobile terminal can play a guiding voice: "Would you like a close-up, a half-body photo, or a full-body photo?" If the user replies by voice that a full-body photo is wanted, the mobile terminal can adjust the focal length appropriately so that the user's full body appears in the shooting preview interface.
In practical applications, the mobile terminal may obtain the user characteristic attribute in at least one of the following manners:
the first implementation mode comprises the following steps: performing face feature detection in a shooting preview interface to obtain face features of a user; and determining the characteristic attribute of the user according to the face characteristic of the user.
The second implementation mode comprises the following steps: collecting user voice information; carrying out voiceprint recognition on the user voice information to obtain voiceprint characteristics; and determining the user characteristic attribute according to the voiceprint characteristics.
In practical applications, the mobile terminal can detect the user's face features from the image and analyze them to determine user characteristic attributes such as the user's gender and age. The mobile terminal can also detect voiceprint features from the user's voice information and analyze them to determine user characteristic attributes such as gender and age. Verifying attributes such as gender and age through both voiceprint recognition and face recognition improves the accuracy of the determined user characteristic attributes, and thus the confidence of the shooting suggestion information provided later.
The user characteristic attribute may include at least one of user gender, user age, user skin, user height, and user body type. In a specific application, the more content the user characteristic attribute contains, the more accurately and finely the mobile terminal can determine the user group to which the current user belongs. Furthermore, because the number of people who shoot with the same mobile terminal is very limited, typically specific people such as family members and friends, a sufficiently detailed user characteristic attribute may even allow the mobile terminal to associate it with a specific individual and provide shooting suggestion information targeted at that individual.
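The dual-verification idea above can be sketched as a simple fusion rule: an attribute is accepted only when the face-based and voice-based estimates agree, and is otherwise marked unresolved. The attribute names and the agreement rule are illustrative assumptions, not the patent's method.

```python
def fuse_attributes(face_attrs, voice_attrs):
    """Keep attributes where face and voice analysis agree; mark the rest unresolved."""
    fused = {}
    for key in set(face_attrs) | set(voice_attrs):
        f, v = face_attrs.get(key), voice_attrs.get(key)
        # None signals a conflict that needs a fallback (e.g. trust one source
        # more, or ask the user) rather than a confident attribute value.
        fused[key] = f if f == v else None
    return fused

result = fuse_attributes({"gender": "female", "age": "young"},
                         {"gender": "female", "age": "adult"})
```

A production system would likely weight the two sources by per-attribute confidence instead of requiring exact agreement.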
Step 202, determining shooting parameters and first processing parameters corresponding to the user characteristic attributes.

In the embodiment of the present invention, the mobile terminal may store in advance the shooting parameters and first processing parameters corresponding to different user characteristic attributes. When a user uses the intelligent shooting function of the mobile terminal for the first time, the mobile terminal may determine the shooting parameters and the first processing parameter according to the user characteristic attribute of the current user; these may be the shooting parameters and first processing parameters corresponding to the user group to which the current user belongs, that is, the preference habits of a certain group. The shooting parameters may specifically include at least one of a shooting position, a shooting angle, a shooting posture, and a shooting expression.
After each shooting and processing session, the mobile terminal can present several images produced with different shooting modes and processing modes for the user to score. Over multiple sessions, the mobile terminal can collect the shooting modes and processing modes that the user scores highest, record them as the user's preferences, and update the shooting parameters and first processing parameters corresponding to that user's characteristic attributes, so that photos better matching the user's preferences can be provided.
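The preference-recording step above amounts to keeping, per user profile, the mode with the best score seen so far. The sketch below shows that bookkeeping under invented data structures (a `(mode, score)` pair per profile); it is not the patent's storage scheme.

```python
def update_preference(store, profile, scored_modes):
    """scored_modes: list of (mode, score) pairs from one session.

    Record the session's best mode for this profile if it beats the
    previously stored best score (or if nothing is stored yet).
    """
    best_mode, best_score = max(scored_modes, key=lambda m: m[1])
    prev = store.get(profile)
    if prev is None or best_score > prev[1]:
        store[profile] = (best_mode, best_score)
    return store

store = {}
update_preference(store, ("female", "young"),
                  [("warm_tone", 4.5), ("cool_tone", 3.0)])
```

Keeping only the single best mode is the simplest policy; a real system might average scores over sessions to smooth out noisy ratings.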
Step 203, outputting shooting suggestion information according to the shooting parameters.
In the embodiment of the invention, the mobile terminal can provide the current user with shooting suggestions conforming to the user's shooting parameters by means of voice, text, diagrams, and the like. In practical applications, this step can be implemented as follows: generating a shooting suggestion text according to the shooting parameters; converting the shooting suggestion text into shooting suggestion speech; and playing the shooting suggestion speech.
In one embodiment, before the user adjusts the posture, the mobile terminal may generate a shooting suggestion text according to the user's shooting parameters. For example, if the shooting parameters include a centre-of-frame position, the mobile terminal may generate the shooting suggestion text "please stand in the middle of the frame"; if the shooting parameters include a smiling expression, it may generate the text "please keep smiling". The mobile terminal may then convert the shooting suggestion text into shooting suggestion speech through a Text To Speech (TTS) broadcast technology, so that the speech can be played before the user adjusts the posture, letting the user adjust his or her posture according to the voice prompt.
In another embodiment, the mobile terminal may detect the user posture in the shooting preview interface in real time and, when it detects that the user posture does not conform to the shooting parameters, generate a shooting suggestion text according to the difference between the current user posture and the target posture indicated by the shooting parameters, for example at least one of the following:
Shooting position suggestion text: please take two steps forward, or lean back slightly.
Shooting angle suggestion text: please lower your head slightly, or tilt your head slightly to the right.
Shooting orientation suggestion text: please turn slightly to the left, or slightly to the right, to face the camera.
Shooting posture suggestion text: please lift your chest and pull in your abdomen.
Shooting expression suggestion text: please keep smiling.
The mobile terminal can then convert the shooting suggestion text into shooting suggestion speech through TTS broadcasting, so that the speech can be played while the user adjusts the posture, letting the user adjust his or her posture according to the voice prompt.
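Generating suggestion text from the gap between the detected posture and the target posture can be sketched as a rule table: one message per attribute that does not yet match. The attribute names and messages below are assumptions for illustration, not the patent's wording.

```python
# Hypothetical per-attribute suggestion messages.
SUGGESTIONS = {
    "position": "Please step toward the centre of the frame.",
    "head_angle": "Please tilt your head slightly.",
    "expression": "Please keep smiling.",
}

def suggest(current_pose, target_pose):
    """Return one suggestion for each target attribute not yet matched."""
    return [SUGGESTIONS[k] for k in target_pose
            if current_pose.get(k) != target_pose[k] and k in SUGGESTIONS]

texts = suggest({"position": "left", "expression": "smile"},
                {"position": "center", "expression": "smile"})
```

Each returned string would then be handed to a TTS engine for playback, repeating until the posture matches.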
Step 204, when the user posture in the shooting preview interface matches the target posture corresponding to the shooting parameters, shooting to obtain a target image.
In the embodiment of the invention, when the user posture in the shooting preview interface matches the target posture corresponding to the shooting parameters, that is, when the mobile terminal detects that the user is at the preferred position indicated by the shooting parameters and has assumed the preferred pose, expression, and so on, the mobile terminal can shoot, thereby obtaining a target image that conforms to the current user's shooting preferences.
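A minimal sketch of the matching check: if poses are represented as named keypoints in normalized coordinates (an assumption, not specified by the patent), "matching" can mean every target keypoint lies within a tolerance of the detected one.

```python
def pose_matches(user_pose, target_pose, tol=0.05):
    """True if every target keypoint is within `tol` of the user's keypoint.

    Poses map keypoint names to (x, y) in normalized [0, 1] frame coordinates.
    A missing user keypoint counts as a mismatch.
    """
    for name, (tx, ty) in target_pose.items():
        ux, uy = user_pose.get(name, (float("inf"), float("inf")))
        if abs(ux - tx) > tol or abs(uy - ty) > tol:
            return False
    return True

# Head detected 0.01 of the frame height away from the target: a match.
ok = pose_matches({"head": (0.50, 0.21)}, {"head": (0.50, 0.20)})
```

The terminal would run this check on each preview frame and trigger the shutter on the first frame that matches.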
Step 205, processing the target image according to the first processing parameter to obtain a first image.

In the embodiment of the present invention, the first processing parameter may include a processing parameter applied to a reference image whose first user score exceeds a preset score. Since the user may score at least two final images after each shooting and processing session, a reference image with a relatively high first user score indicates that the processing effect presented in that image is preferred by the user. Accordingly, after obtaining the target image, the mobile terminal may process it with the processing parameter applied to the reference image, that is, perform the same processing operation as for the reference image, so as to obtain a first image that conforms to the current user's image processing preference.
Step 206, selecting at least one second processing parameter other than the first processing parameter from preset image processing parameters.
In the embodiment of the present invention, a plurality of image processing parameters may be preset in the mobile terminal, each corresponding to one or more image processing effects. Each image processing parameter may include a display attribute processing parameter and a portrait processing parameter. The display attribute processing parameter may specifically include at least one of an image brightness processing parameter, an image saturation processing parameter, and an image hue processing parameter. The portrait processing parameter may specifically include at least one of a part contour processing parameter and a facial feature processing parameter, where the facial feature processing parameter may specifically include local beautification operations on facial features such as the eyes, mouth, and nose; this is not specifically limited in the embodiment of the present invention.
Since the image processing effect corresponding to the first image may be one the user has already tried, after obtaining the target image the mobile terminal may further select, from the preset image processing parameters, at least one second processing parameter other than the first processing parameter, that is, at least one image processing effect the user may not have tried, or has tried only a few times.
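The selection rule above can be sketched as: exclude the first processing parameter, then prefer the least-tried effects. The parameter names and the per-effect try counter are invented for the example.

```python
def select_second_params(preset_params, first_param, try_counts, k=2):
    """Pick up to k parameters other than first_param, least-tried first."""
    candidates = [p for p in preset_params if p != first_param]
    candidates.sort(key=lambda p: try_counts.get(p, 0))
    return candidates[:k]

second = select_second_params(
    ["warm_tone", "cool_tone", "caricature", "brighten"],
    first_param="warm_tone",
    try_counts={"cool_tone": 3, "caricature": 0, "brighten": 1},
)
```

Biasing toward untried effects is what lets the later scoring steps discover preferences the stored profile does not yet capture.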
Step 207, processing the target image according to each second processing parameter to obtain at least one second image.
In the embodiment of the present invention, the mobile terminal may perform on the target image the image processing operations corresponding to the different second processing parameters, that is, apply at least one image processing operation that the user may not have tried, or has tried only a few times, to obtain at least one second image. The user's beautification parameters can then be updated and adjusted according to the user's score feedback on the first image and the at least one second image.
Step 208, obtaining second user scores corresponding to the first image and each second image.

In the embodiment of the invention, the mobile terminal can display the first image and each second image for the user to view; the user can score the first image and each second image and enter the scores into the mobile terminal, and the mobile terminal can then obtain the second user score corresponding to the first image and to each second image.
Step 209, determining, from the first image and each second image, a high-score image whose second user score is greater than the first user score.
In the embodiment of the present invention, the mobile terminal may determine, from the first image and each second image, whether there is a high-score image whose second user score is greater than the first user score, that is, whether any image scores higher than the reference image, and if so, select that image.
Step 210, updating the shooting parameters according to the user posture in the high-score image.
In the embodiment of the invention, if the first image or one of the second images scores higher than the reference image, the mobile terminal can determine that the user posture in that high-score image is preferred by the user over the reference image. The mobile terminal can then replace the originally stored shooting parameters with the shooting parameters corresponding to the user posture in the high-score image, thereby updating the shooting parameters. When the user shoots again, shooting suggestions and guidance can be provided according to the updated shooting parameters.
Step 211, updating the first processing parameter according to the second processing parameter corresponding to the high-score image.
In the embodiment of the present invention, the mobile terminal may replace the first processing parameter with the second processing parameter corresponding to the image processing operation performed on the high-score image, thereby updating the first processing parameter. When the user shoots again, the captured image can then be processed according to the updated first processing parameter.
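Steps 209 to 211 can be sketched together: among the candidate images, find one whose score beats the reference score and, if found, adopt its pose and processing parameters. The candidate dict fields and the stored-state keys are assumptions for illustration.

```python
def update_from_scores(stored, reference_score, candidates):
    """candidates: list of dicts with 'score', 'pose', and 'processing' keys.

    If the best-scored candidate beats the reference image's score, adopt
    its pose as the new shooting parameters and its processing parameter
    as the new first processing parameter. Otherwise leave `stored` as is.
    """
    best = max(candidates, key=lambda c: c["score"])
    if best["score"] > reference_score:
        stored["shooting_pose"] = best["pose"]
        stored["first_processing"] = best["processing"]
    return stored

stored = {"shooting_pose": "old_pose", "first_processing": "old_params"}
update_from_scores(stored, reference_score=4.0, candidates=[
    {"score": 4.6, "pose": "new_pose", "processing": "cool_tone"},
    {"score": 3.2, "pose": "p2", "processing": "warm_tone"},
])
```

This closes the feedback loop: the next session's step 202 lookup returns the updated parameters.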
In the embodiment of the invention, the mobile terminal can first obtain the user characteristic attribute, then determine the shooting parameters and the first processing parameter corresponding to the user characteristic attribute, and output shooting suggestion information according to the shooting parameters as a reference for the user to adjust the shooting posture. It then shoots when the user posture in the shooting preview interface matches the target posture corresponding to the shooting parameters to obtain the target image, processes the target image in different ways according to the first and second processing parameters, obtains the user's scores for the differently processed images, and updates the shooting parameters and the first processing parameter according to those scores. The mobile terminal can thus provide shooting suggestions that conform to the user's preference at the shooting stage according to the shooting parameters corresponding to the user characteristic attribute, and, after the target image is captured, process it in a way that conforms to the user's preference according to the first processing parameter corresponding to the user characteristic attribute. Because the final image is obtained by shooting and processing based on the user's preferences, the user's satisfaction with the final image can be improved and the shooting efficiency increased. In addition, the mobile terminal can update the shooting parameters and the first processing parameter according to the user scores of final images with different processing styles, so that images better matching the user's preference can be obtained the next time shooting and image processing are performed.
EXAMPLE III
Referring to fig. 3, a block diagram of an image processing apparatus 300 according to a third embodiment of the present invention is shown, which may specifically include:
a first obtaining module 301, configured to obtain a user characteristic attribute;
a first determining module 302, configured to determine a shooting parameter and a first processing parameter corresponding to the user characteristic attribute;
the shooting module 303 is configured to shoot to obtain a target image when the user posture in the shooting preview interface matches the target posture corresponding to the shooting parameters;
the processing module 304 is configured to process the target image according to the first processing parameter.
Optionally, referring to fig. 4, the processing module 304 includes:
a first processing submodule 3041, configured to process the target image according to the first processing parameter, so as to obtain a first image;
a selecting submodule 3042 for selecting at least one second processing parameter other than the first processing parameter from preset respective image processing parameters;
the second processing sub-module 3043 is configured to process the target image according to each of the second processing parameters, so as to obtain at least one second image.
Optionally, the first processing parameter includes a processing parameter acting on a reference image with a first user score exceeding a preset score; referring to fig. 4, the apparatus 300 further includes:
a second obtaining module 305, configured to obtain the first image and a second user score corresponding to each second image;
a second determining module 306, configured to determine, from the first image and each of the second images, a high-score image with the second user score being greater than the first user score;
a first updating module 307, configured to update the shooting parameters according to the user posture in the high-score image;
and a second updating module 308, configured to update the first processing parameter according to the second processing parameter corresponding to the high-score image.
Optionally, the image processing parameters include display attribute processing parameters and portrait processing parameters;
the display attribute processing parameter comprises at least one of an image brightness processing parameter, an image saturation processing parameter and an image tone processing parameter;
the portrait processing parameters include at least one of a location contour processing parameter and a facial feature processing parameter.
Optionally, referring to fig. 4, the first obtaining module 301 includes:
the detection submodule 3011 is configured to perform face feature detection in the shooting preview interface to obtain a user face feature; the first determining submodule 3012 is configured to determine the user characteristic attribute according to the user face feature; and/or,
the acquisition submodule 3013 is configured to acquire user voice information; the recognition sub-module 3014 is configured to perform voiceprint recognition on the user voice information to obtain a voiceprint feature; a second determining sub-module 3015, configured to determine a user feature attribute according to the voiceprint feature;
wherein the user characteristic attribute comprises at least one of a user gender, a user age, a user skin tone, a user height, and a user body type.
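The two attribute sources described above (face features from the preview interface, and a voiceprint from collected speech) can be combined as sketched below. The classifiers are stubbed and all names are assumptions; a real system would run trained estimation models:

```python
# Sketch of determining the user characteristic attribute from either the
# face features detected in the preview interface or the voiceprint extracted
# from collected speech. Classifiers are stubbed; all names are assumptions.

def attribute_from_face(face_features):
    # A real system would run gender/age/skin-tone estimators on the features.
    return {"gender": face_features.get("gender_guess"),
            "age": face_features.get("age_guess")}

def attribute_from_voiceprint(voiceprint):
    # Voiceprints typically support coarser attributes (e.g. gender, age band).
    return {"gender": voiceprint.get("gender_guess")}

def get_user_characteristic_attribute(face_features=None, voiceprint=None):
    """Merge attributes from both sources; face detection takes precedence."""
    attribute = {}
    if voiceprint is not None:
        attribute.update(attribute_from_voiceprint(voiceprint))
    if face_features is not None:
        attribute.update(attribute_from_face(face_features))
    return attribute

attr = get_user_characteristic_attribute(
    face_features={"gender_guess": "female", "age_guess": 25},
    voiceprint={"gender_guess": "female"})
print(attr)  # {'gender': 'female', 'age': 25}
```

The "and/or" in the claim maps naturally to this merge: either source alone yields an attribute, and when both are present one source is chosen to take precedence.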
Optionally, referring to fig. 4, the apparatus 300 further includes:
a suggestion module 309, configured to output shooting suggestion information according to the shooting parameter;
wherein the shooting parameters comprise at least one of shooting position, shooting angle, shooting posture and shooting expression.
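The suggestion module's behavior (turning shooting parameters into user-facing hints) can be sketched as a simple template lookup. The wording and parameter keys are illustrative assumptions:

```python
# Sketch of the suggestion module 309: turn the shooting parameters
# (position, angle, posture, expression) into human-readable shooting
# suggestion information. Keys and wording are illustrative.

def shooting_suggestions(params):
    templates = {
        "position": "Try shooting from: {}",
        "angle": "Suggested camera angle: {}",
        "posture": "Suggested posture: {}",
        "expression": "Suggested expression: {}",
    }
    # Emit one suggestion line per recognized shooting parameter.
    return [templates[key].format(value)
            for key, value in params.items() if key in templates]

for line in shooting_suggestions({"angle": "slightly above eye level",
                                  "posture": "hands on hips"}):
    print(line)
```

In the described device these strings would be rendered on the shooting preview interface (or spoken) to guide the user toward the target posture before capture.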
The image processing apparatus provided in the embodiment of the present invention can implement each process implemented by the mobile terminal in the method embodiments of fig. 1 and fig. 2, and is not described herein again to avoid repetition.
In the embodiment of the invention, the mobile terminal may first obtain the user characteristic attribute through the first obtaining module, determine the shooting parameter and the first processing parameter corresponding to the user characteristic attribute through the first determining module, shoot through the shooting module when the user posture in the shooting preview interface matches the target posture corresponding to the shooting parameter to obtain the target image, and process the target image through the processing module according to the first processing parameter. The mobile terminal can present the target posture indicated by the shooting parameter corresponding to the user characteristic attribute as a reference for the user being shot, capture the target image once the user posture matches the target posture, and then process the target image according to the first processing parameter corresponding to the user characteristic attribute. The processed target image thus better matches the preference of the user, which improves both the user's satisfaction with the final image and the shooting efficiency.
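The capture trigger in this flow, shooting when the user posture matches the target posture, can be sketched as a keypoint-distance check. The keypoint format and threshold are assumptions; the patent leaves the matching criterion unspecified:

```python
import math

# Sketch of the posture-matching trigger: capture when the user's pose
# keypoints in the preview are close enough to the target pose. The
# normalized (x, y) keypoint format and the threshold are assumptions.

def poses_match(user_pose, target_pose, threshold=0.1):
    """Mean Euclidean distance between corresponding normalized keypoints."""
    dists = [math.dist(u, t) for u, t in zip(user_pose, target_pose)]
    return sum(dists) / len(dists) < threshold

target = [(0.5, 0.2), (0.4, 0.5), (0.6, 0.5)]     # e.g. head and shoulders
user_ok = [(0.51, 0.21), (0.41, 0.52), (0.59, 0.5)]
user_off = [(0.9, 0.9), (0.8, 0.8), (0.7, 0.7)]
print(poses_match(user_ok, target))    # True
print(poses_match(user_off, target))   # False
```

In the described device, the preview loop would evaluate such a check on each frame and fire the shooting module only when it returns true, which is what lets the terminal capture at the moment the user reaches the suggested posture.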
Example four
Figure 5 is a schematic diagram of a hardware configuration of a mobile terminal implementing various embodiments of the present invention.
the mobile terminal 500 includes, but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and a power supply 511. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 5 is not intended to be limiting of mobile terminals, and that a mobile terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 510 is configured to obtain a user characteristic attribute; determining shooting parameters and first processing parameters corresponding to the user characteristic attributes; when the user gesture in the shooting preview interface is matched with the target gesture corresponding to the shooting parameter, shooting is carried out to obtain a target image; and processing the target image according to the first processing parameter.
In the embodiment of the invention, the mobile terminal can firstly acquire the user characteristic attribute, then determines the shooting parameter and the first processing parameter corresponding to the user characteristic attribute, and then shoots to acquire the target image when the user posture in the shooting preview interface is matched with the target posture corresponding to the shooting parameter, and can process the target image according to the first processing parameter. In the embodiment of the invention, the mobile terminal can take the target posture indicated by the shooting parameter corresponding to the user characteristic attribute as the reference of the shooting user, and then can shoot the target image when the user posture is matched with the target posture, and then the mobile terminal can process the target image according to the first processing parameter corresponding to the user characteristic attribute, so that the processed target image is more in line with the preference of the user, and thus, the satisfaction degree of the user on the final image can be improved, and the shooting efficiency is improved.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 501 may be used for receiving and sending signals during message transmission and reception or during a call. Specifically, it receives downlink data from a base station and forwards the data to the processor 510 for processing, and it transmits uplink data to the base station. In general, the radio frequency unit 501 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 501 can also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides the user with wireless broadband internet access through the network module 502, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 503 may convert audio data received by the radio frequency unit 501 or the network module 502, or stored in the memory 509, into an audio signal and output it as sound. The audio output unit 503 may also provide audio output related to a specific function performed by the mobile terminal 500 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 503 includes a speaker, a buzzer, a receiver, and the like.
The input unit 504 is used to receive an audio or video signal. The input unit 504 may include a graphics processing unit (GPU) 5041 and a microphone 5042. The graphics processor 5041 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 506. The image frames processed by the graphics processor 5041 may be stored in the memory 509 (or other storage medium) or transmitted via the radio frequency unit 501 or the network module 502. The microphone 5042 may receive sound and process it into audio data. In the phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 501 and output.
The mobile terminal 500 also includes at least one sensor 505, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 5061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 5061 and/or a backlight when the mobile terminal 500 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 505 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 506 is used to display information input by the user or information provided to the user. The Display unit 506 may include a Display panel 5061, and the Display panel 5061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 507 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 507 includes a touch panel 5071 and other input devices 5072. The touch panel 5071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near the touch panel 5071 using a finger, a stylus, or any suitable object or attachment). The touch panel 5071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 510, and receives and executes commands sent by the processor 510. In addition, the touch panel 5071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 5071, the user input unit 507 may include other input devices 5072. In particular, the other input devices 5072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 5071 may be overlaid on the display panel 5061. When the touch panel 5071 detects a touch operation on or near it, the operation is transmitted to the processor 510 to determine the type of the touch event, and the processor 510 then provides a corresponding visual output on the display panel 5061 according to the type of the touch event. Although in fig. 5 the touch panel 5071 and the display panel 5061 are two independent components implementing the input and output functions of the mobile terminal, in some embodiments the touch panel 5071 and the display panel 5061 may be integrated to implement the input and output functions of the mobile terminal, which is not limited herein.
The interface unit 508 is an interface through which an external device is connected to the mobile terminal 500. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 508 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 500 or may be used to transmit data between the mobile terminal 500 and external devices.
The memory 509 may be used to store software programs as well as various data. The memory 509 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the mobile phone, and the like. Further, the memory 509 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The processor 510 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 509 and calling data stored in the memory 509, thereby performing overall monitoring of the mobile terminal. Processor 510 may include one or more processing units; preferably, the processor 510 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 510.
The mobile terminal 500 may further include a power supply 511 (e.g., a battery) for supplying power to various components, and preferably, the power supply 511 may be logically connected to the processor 510 via a power management system, so that functions of managing charging, discharging, and power consumption are performed via the power management system.
In addition, the mobile terminal 500 includes some functional modules that are not shown, and thus, are not described in detail herein.
Preferably, an embodiment of the present invention further provides a mobile terminal, including a processor 510, a memory 509, and a computer program stored in the memory 509 and executable on the processor 510. When executed by the processor 510, the computer program implements each process of the above image processing method embodiment and can achieve the same technical effect; to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the embodiment of the image processing method and can achieve the same technical effect; to avoid repetition, details are not repeated here. The computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (10)
1. An image processing method applied to a mobile terminal is characterized by comprising the following steps:
acquiring a user characteristic attribute;
determining shooting parameters and first processing parameters corresponding to the user characteristic attributes;
when the user gesture in the shooting preview interface is matched with the target gesture corresponding to the shooting parameter, shooting is carried out to obtain a target image;
processing the target image according to the first processing parameter;
the step of processing the target image according to the first processing parameter includes:
processing the target image according to the first processing parameter to obtain a first image;
selecting at least one second processing parameter other than the first processing parameter from preset image processing parameters; the first processing parameter is a processing parameter with a higher user score, and the second processing parameter is a processing parameter that the user has not tried or has tried fewer times;
processing the target image according to each second processing parameter to obtain at least one second image;
the first processing parameter comprises a processing parameter acting on a reference image with a first user score exceeding a preset score; after the step of processing the target image according to the first processing parameter, the method further includes:
acquiring a second user score corresponding to the first image and each second image;
determining, from the first image and each of the second images, a high scoring image for which the second user score is greater than the first user score;
updating the shooting parameters according to the user posture in the high-score image;
and updating the first processing parameter according to a second processing parameter corresponding to the high-score image.
2. The method of claim 1, wherein the image processing parameters include a display attribute processing parameter and a portrait processing parameter;
the display attribute processing parameter comprises at least one of an image brightness processing parameter, an image saturation processing parameter and an image tone processing parameter;
the portrait processing parameters include at least one of a body part contour processing parameter and a facial feature processing parameter.
3. The method of claim 1, wherein the step of obtaining the user characteristic attribute comprises:
performing face feature detection in a shooting preview interface to obtain face features of a user; determining a user characteristic attribute according to the user face features; and/or,
collecting user voice information; carrying out voiceprint recognition on the user voice information to obtain voiceprint characteristics; determining a user characteristic attribute according to the voiceprint characteristics;
wherein the user characteristic attribute comprises at least one of a user gender, a user age, a user skin tone, a user height, and a user body type.
4. The method according to claim 1, wherein after the step of determining the shooting parameters and the first processing parameters corresponding to the user characteristic attributes, the method further comprises:
outputting shooting suggestion information according to the shooting parameters;
wherein the shooting parameters comprise at least one of shooting position, shooting angle, shooting posture and shooting expression.
5. An image processing apparatus, characterized in that the apparatus comprises:
the first acquisition module is used for acquiring the characteristic attribute of the user;
the first determining module is used for determining shooting parameters and first processing parameters corresponding to the user characteristic attributes;
the shooting module is used for shooting when the user gesture in the shooting preview interface is matched with the target gesture corresponding to the shooting parameters to obtain a target image;
the processing module is used for processing the target image according to the first processing parameter;
the processing module comprises:
the first processing submodule is used for processing the target image according to the first processing parameter to obtain a first image;
the selection sub-module is used for selecting at least one second processing parameter other than the first processing parameter from preset image processing parameters; the first processing parameter is a processing parameter with a higher user score, and the second processing parameter is a processing parameter that the user has not tried or has tried fewer times;
and the second processing submodule is used for processing the target image according to each second processing parameter to obtain at least one second image.
The first processing parameter comprises a processing parameter acting on a reference image with a first user score exceeding a preset score; the device further comprises:
the second acquisition module is used for acquiring a second user score corresponding to the first image and to each second image;
a second determining module, configured to determine, from the first image and each of the second images, a high-score image with the second user score being greater than the first user score;
the first updating module is used for updating the shooting parameters according to the user posture in the high-score image;
and the second updating module is used for updating the first processing parameter according to a second processing parameter corresponding to the high-score image.
6. The apparatus of claim 5, wherein the image processing parameters comprise a display attribute processing parameter and a portrait processing parameter;
the display attribute processing parameter comprises at least one of an image brightness processing parameter, an image saturation processing parameter and an image tone processing parameter;
the portrait processing parameters include at least one of a body part contour processing parameter and a facial feature processing parameter.
7. The apparatus of claim 5, wherein the first obtaining module comprises:
the detection submodule is used for performing face feature detection in the shooting preview interface to obtain the face features of the user; the first determining submodule is used for determining a user characteristic attribute according to the face features of the user; and/or,
the acquisition submodule is used for acquiring the voice information of the user; the recognition submodule is used for carrying out voiceprint recognition on the user voice information to obtain voiceprint characteristics; the second determining submodule is used for determining the attribute of the user characteristic according to the voiceprint characteristic;
wherein the user characteristic attribute comprises at least one of a user gender, a user age, a user skin tone, a user height, and a user body type.
8. The apparatus of claim 5, further comprising:
the suggestion module is used for outputting shooting suggestion information according to the shooting parameters;
wherein the shooting parameters comprise at least one of shooting position, shooting angle, shooting posture and shooting expression.
9. A mobile terminal, characterized in that it comprises a processor, a memory and a computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, implements the steps of the image processing method according to any one of claims 1 to 4.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the image processing method according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910100756.6A CN109819167B (en) | 2019-01-31 | 2019-01-31 | Image processing method and device and mobile terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109819167A CN109819167A (en) | 2019-05-28 |
CN109819167B true CN109819167B (en) | 2020-11-03 |
Family
ID=66606411
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910100756.6A Active CN109819167B (en) | 2019-01-31 | 2019-01-31 | Image processing method and device and mobile terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109819167B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110418062A (en) * | 2019-08-29 | 2019-11-05 | 上海云从汇临人工智能科技有限公司 | A kind of image pickup method, device, equipment and machine readable media |
US11436470B2 (en) * | 2019-09-13 | 2022-09-06 | Kyndryl, Inc. | Generating simulated image training data |
CN111309423B (en) * | 2020-02-13 | 2023-11-21 | 北京百度网讯科技有限公司 | Terminal interface image configuration method, device, equipment and medium |
CN112511748A (en) * | 2020-11-30 | 2021-03-16 | 努比亚技术有限公司 | Lens target intensified display method and device, mobile terminal and storage medium |
CN112887782A (en) * | 2021-01-19 | 2021-06-01 | 维沃移动通信有限公司 | Image output method and device and electronic equipment |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101655850A (en) * | 2008-08-21 | 2010-02-24 | 日电(中国)有限公司 | Generating equipment for knowledge extraction process, regulating equipment for knowledge extraction process and methods thereof |
CN103413270A (en) * | 2013-08-15 | 2013-11-27 | 北京小米科技有限责任公司 | Method and device for image processing and terminal device |
CN105095241A (en) * | 2014-04-30 | 2015-11-25 | 华为技术有限公司 | Information recommendation method, device and system |
CN107018333A (en) * | 2017-05-27 | 2017-08-04 | 北京小米移动软件有限公司 | Shoot template and recommend method, device and capture apparatus |
CN107770452A (en) * | 2015-05-19 | 2018-03-06 | 广东欧珀移动通信有限公司 | A kind of photographic method and terminal and related media production |
JP2018160200A (en) * | 2017-03-24 | 2018-10-11 | 富士通株式会社 | Method for learning neural network, neural network learning program, and neural network learning program |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TW201021550A (en) * | 2008-11-19 | 2010-06-01 | Altek Corp | Emotion-based image processing apparatus and image processing method |
JP5954750B2 (en) * | 2014-06-30 | 2016-07-20 | インターナショナル・ビジネス・マシーンズ・コーポレーションInternational Business Machines Corporation | Information processing apparatus, information processing method, and program |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109819167B (en) | Image processing method and device and mobile terminal | |
CN110740259B (en) | Video processing method and electronic equipment | |
CN108184050B (en) | Photographing method and mobile terminal | |
CN108062400A (en) | Examination cosmetic method, smart mirror and storage medium based on smart mirror | |
CN108712603B (en) | Image processing method and mobile terminal | |
CN107832784B (en) | Image beautifying method and mobile terminal | |
CN108924412B (en) | Shooting method and terminal equipment | |
CN108683850B (en) | Shooting prompting method and mobile terminal | |
CN109391842B (en) | Dubbing method and mobile terminal | |
CN108881782B (en) | Video call method and terminal equipment | |
CN109065060B (en) | Voice awakening method and terminal | |
CN109788204A (en) | Shoot processing method and terminal device | |
CN109272473B (en) | Image processing method and mobile terminal | |
CN108154121A (en) | Cosmetic auxiliary method, smart mirror and storage medium based on smart mirror | |
EP3340077B1 (en) | Method and apparatus for inputting expression information | |
CN109448069B (en) | Template generation method and mobile terminal | |
CN108984143B (en) | Display control method and terminal equipment | |
CN108495036B (en) | Image processing method and mobile terminal | |
CN110808019A (en) | Song generation method and electronic equipment | |
CN111080747B (en) | Face image processing method and electronic equipment | |
CN107563353B (en) | Image processing method and device and mobile terminal | |
CN108924413B (en) | Shooting method and mobile terminal | |
CN111341317B (en) | Method, device, electronic equipment and medium for evaluating wake-up audio data | |
CN108551562A (en) | A kind of method and mobile terminal of video communication | |
CN112449098B (en) | Shooting method, device, terminal and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||