CN108495036B - Image processing method and mobile terminal - Google Patents

Image processing method and mobile terminal

Info

Publication number
CN108495036B
CN108495036B (application CN201810271337.4A)
Authority
CN
China
Prior art keywords
image
target
determining
audio file
attribute
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810271337.4A
Other languages
Chinese (zh)
Other versions
CN108495036A (en)
Inventor
钟昌勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201810271337.4A priority Critical patent/CN108495036B/en
Publication of CN108495036A publication Critical patent/CN108495036A/en
Application granted granted Critical
Publication of CN108495036B publication Critical patent/CN108495036B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 Camera processing pipelines; Components thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/10 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Telephone Function (AREA)

Abstract

The embodiment of the invention provides an image processing method and a mobile terminal, relating to the field of communication technology and aiming to solve the problem that existing image beautification modes are limited to a single style, so that the beautified image cannot meet the user's actual needs. The method comprises the following steps: acquiring a target parameter of a target audio file; determining an attribute value of an image to be processed according to the target parameter; and generating a target image according to the attribute value. The embodiment of the invention thereby overcomes the single image beautification mode of the prior art, so that the processed image better meets the user's actual needs.

Description

Image processing method and mobile terminal
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to an image processing method and a mobile terminal.
Background
With the development of mobile phone photography, taking photos and beautifying images on a mobile phone has become an everyday part of mobile phone use. Accordingly, a great deal of image processing software for image beautification has been released. Such software uses digital image processing algorithms to adjust image attributes such as brightness, sharpness and hue in order to beautify the image.
However, most image beautification software applies either a single processing method to the image attributes or a simple superposition of several methods. Even software offering intelligent beautification uses a fixed beautification mode, so the results it produces are largely similar rather than personalized. It can therefore be seen that images beautified in the existing ways cannot meet the user's actual needs.
Disclosure of Invention
The embodiment of the invention provides an image processing method and a mobile terminal, and aims to solve the problem that an existing image beautifying mode is single, so that a beautified image cannot meet the actual requirements of a user.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an image processing method, including:
acquiring target parameters of a target audio file;
determining an attribute value of the image to be processed according to the target parameter;
and generating a target image according to the attribute value.
In a second aspect, an embodiment of the present invention further provides a mobile terminal, including:
the acquisition module is used for acquiring target parameters of a target audio file;
the first determining module is used for determining the attribute value of the image to be processed according to the target parameter;
and the processing module is used for generating a target image according to the attribute value.
In a third aspect, an embodiment of the present invention further provides a mobile terminal, including: a memory, a processor and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the image processing method according to the first aspect.
In a fourth aspect, the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the image processing method according to the first aspect.
Thus, in the embodiment of the invention, the attribute to be adjusted in the image to be processed can be set according to the target parameter of the target audio file, which overcomes the single image beautification mode of the prior art and allows the processed image to better meet the user's actual needs.
Drawings
FIG. 1 is a flow chart of an image processing method provided by an embodiment of the invention;
FIG. 2 is a second flowchart of an image processing method according to an embodiment of the present invention;
FIG. 3 is a third flowchart of an image processing method according to an embodiment of the present invention;
FIG. 4 is a fourth flowchart of an image processing method according to an embodiment of the present invention;
fig. 5 is one of the structural diagrams of a mobile terminal according to an embodiment of the present invention;
fig. 6 is a second structural diagram of a mobile terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention. As shown in fig. 1, the method comprises the following steps:
step 101, obtaining target parameters of a target audio file.
Here, the target audio file is an audio file, such as a piece of music, used to perform an adjustment operation (for example, beautification) on the image to be processed.
The target audio file may be any audio file selected by the user, and its target parameters may include the frequency, the number of note changes within a certain sampling period, a frequency histogram, and the like.
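As a rough illustration of how such sampling frequencies might be obtained in practice, the sketch below estimates a dominant frequency at N sampling moments; it is an assumption for illustration only (it presumes a WAV file, NumPy/SciPy, and a fixed analysis window), not the method prescribed by the patent.

```python
# Illustrative sketch (an assumption, not the patent's method): estimate the
# dominant frequency of an audio file at N evenly spaced sampling moments.
import numpy as np
from scipy.io import wavfile

def sample_dominant_frequencies(path, n_samples=10, window_s=0.1):
    rate, data = wavfile.read(path)              # sample rate (Hz), PCM data
    if data.ndim > 1:                            # mix stereo down to mono
        data = data.mean(axis=1)
    window = int(rate * window_s)                # samples per analysis window
    starts = np.linspace(0, len(data) - window, n_samples, dtype=int)
    freqs = []
    for s in starts:
        chunk = data[s:s + window]
        spectrum = np.abs(np.fft.rfft(chunk))
        bins = np.fft.rfftfreq(len(chunk), d=1.0 / rate)
        freqs.append(float(bins[np.argmax(spectrum)]))   # strongest frequency bin
    return freqs                                 # the N "sampling frequencies"
```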
And step 102, determining the attribute value of the image to be processed according to the target parameter.
In the embodiment of the present invention, the image to be processed may be any image selected by the user. The attributes of an image include brightness, hue, saturation, and so on; the attribute value here may therefore be the value of any of these attributes.
According to different contents included in the target parameters, there are the following processing modes:
(1) the target parameter includes a frequency of the target audio file.
In this case, in order to make the obtained image more suitable for the user's requirement, step 101 specifically includes: and acquiring N sampling frequencies of the target audio file at N sampling moments. Step 102 specifically comprises: determining N attribute values corresponding to the N sampling frequencies according to the N sampling frequencies; wherein N is a natural number.
For example, taking brightness as the attribute to be adjusted in this step, a first correspondence between the frequency of the target audio file and the brightness value may be acquired. In the embodiment of the present invention, this first correspondence may be a preset linear relationship between frequency and attribute value. Then, N sampling frequencies of the target audio file at N sampling moments within a sampling time period are obtained. The target audio file has a certain playing length; within that length, any period, or the whole playing time, can be used as the sampling time period. The sampling time period is divided into a plurality of sampling moments, and the sampling frequency is acquired at each of them. Finally, N brightness values are obtained from the first correspondence and the N sampling frequencies.
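A minimal sketch of such a first correspondence follows; the coefficients k and c, the clamping range, and the function name are illustrative assumptions, not values taken from the patent.

```python
# Sketch of a preset first correspondence p = k*f + c between the sampling
# frequency and the brightness value, clamped to a plausible brightness range.
# The coefficients k, c and the range [lo, hi] are illustrative assumptions.
def frequencies_to_brightness(freqs, k=0.002, c=0.6, lo=0.2, hi=1.8):
    return [min(hi, max(lo, k * f + c)) for f in freqs]   # N attribute values
```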
(2) The target parameters include: a frequency histogram of the target audio file.
In this case, in order to make the obtained image more suitable for the user's requirement, step 101 specifically includes: and acquiring a frequency histogram of the target audio file. Step 102 specifically comprises: adjusting the attribute histogram of the image to be processed according to the frequency histogram of the target audio file; and determining the attribute value of the image to be processed according to the attribute histogram.
For example, taking brightness as an attribute to be adjusted as an example, in this step, a histogram of the attribute to be adjusted and a frequency histogram of a target audio file may be obtained. Specifically, the whole piece of music can be traversed, the frequency of the whole piece of music is analyzed by adopting a statistical method, and the frequency histogram of the target audio file is calculated. And then, fusing the frequency histogram and the histogram of the attribute to be adjusted to obtain an image histogram of the image to be processed, and further determining the attribute value of the image to be processed. Specifically, the frequency histogram is fused with the histogram of the attribute to be adjusted by a mapping method. The histogram mapping method may be any histogram mapping method in the prior art.
And 103, generating a target image according to the attribute value.
For case (1) in step 102, this step may specifically be: obtaining M intermediate processed images according to the N attribute values, and generating the target image from the M intermediate processed images. M is a natural number.
As for (2) in step 102, this step may specifically be to generate a target image according to the attribute value of the image to be processed.
In the embodiment of the present invention, the method may be applied to a mobile terminal, such as a mobile phone, a tablet personal computer, a laptop computer, a personal digital assistant (PDA), a mobile Internet device (MID), or a wearable device.
Therefore, in the embodiment of the invention, the attribute to be adjusted in the image to be processed can be set according to the target parameter of the target audio file, which overcomes the single image beautification mode of the prior art and allows the processed image to better meet the user's actual needs.
In this case, to enrich the form of the obtained image, the number of note changes of the target audio file within a sampling period may also be obtained before step 103, and a corresponding image transformation rate may be determined from that number so as to generate images of different forms. Step 103 then specifically includes: obtaining M intermediate processed images according to the attribute values, and generating a target image, which is a dynamic image, according to the image transformation rate and the M intermediate processed images; M is a natural number.
Wherein the dynamic image may be an image in which M intermediate processed images are switched at the image transform rate to generate a dynamic effect.
Specifically, when determining the image transformation rate, a second correspondence between the number of note changes of the target audio file and the image transformation rate may first be obtained. In the embodiment of the present invention, this second correspondence may be a preset linear relationship between the number of note changes and the image transformation rate. Then, the number of note changes of the target audio file within the sampling time period is obtained. The number of note changes within the sampling period directly reflects the speed of the musical rhythm: the more changes, the faster the tempo; the fewer changes, the slower the tempo. Finally, the corresponding image transformation rate is obtained from the number of note changes and the second correspondence.
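A minimal sketch of such a second correspondence, assuming a preset linear relation; the base rate, gain and maximum rate are illustrative assumptions.

```python
# Sketch of a preset second correspondence between the number of note changes
# in the sampling period and the image transform rate (switches per second).
# base_rate, gain and max_rate are illustrative assumptions.
def note_changes_to_rate(n_changes, base_rate=2.0, gain=0.5, max_rate=24.0):
    return min(max_rate, base_rate + gain * n_changes)   # faster tempo -> faster switching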
Optionally, in the above embodiment, in order to make the image processing manner more flexible, after step 101, determining a target area in the image to be processed may be further included; then, step 102 specifically includes: and determining the attribute value corresponding to the target area according to the target parameter. Wherein the target area can be arbitrarily designated.
In addition, on the basis of the above embodiment, in order to facilitate the user to view the processed image, an image saving instruction of the user may be further received, and according to the image saving instruction, the image to be saved is acquired from the processed image, and the image to be saved is stored. The image to be saved may be all processed images or a part of the processed images.
Referring to fig. 2, fig. 2 is a flowchart of an image processing method according to an embodiment of the present invention. In the embodiment shown in fig. 2, the frequency of the music is associated with certain attributes of the image (brightness, hue, saturation, etc.), so that the attributes of the picture change as the music frequency changes, achieving a "magic picture" effect. As shown in fig. 2, the method comprises the following steps:
step 201, obtaining the image to be processed selected by the user and the music for beautifying the image.
Step 202, determining the attribute to be adjusted in the image to be processed.
In this embodiment, a certain dimension of the image to be processed, such as brightness, sharpness, hue or saturation, is selected as the attribute to be adjusted. After the attribute to be adjusted is selected, the average or median frequency of a certain passage of the music (such as the first passage) is calculated, that frequency value is taken as the frequency corresponding to the image to be processed, and the current value of the attribute to be adjusted is set as its initial value.
Step 203, generating an image that changes with the music frequency.
As the music plays, its frequency f changes continuously, and with it the attribute value p (one-dimensional) of the attribute to be adjusted. Images corresponding to the successive music frequencies are then generated from the resulting attribute values, producing different image processing effects.
Since the aim is to make the attribute value of the image to be processed change with the frequency of the music, in the embodiment of the present invention the correspondence between the music frequency and the numerical attribute to be adjusted may be a first-order linear one, that is, p = k·f + c, where p is the attribute value, f is the music frequency, k is a proportionality coefficient controlling the amplitude of the change, and c is an adjustable offset that may be determined from the initial attribute value.
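The following sketch shows how the attribute values obtained from p = k·f + c could be applied to produce the intermediate images, assuming brightness is the attribute to be adjusted and using Pillow; the library choice, coefficients and function names are assumptions for illustration.

```python
# Sketch of step 203 assuming brightness is the attribute to be adjusted:
# each music frequency f yields p = k*f + c, which is applied to the image
# with Pillow to produce one intermediate image per frequency.
from PIL import Image, ImageEnhance

def frames_from_frequencies(image_path, freqs, k=0.002, c=0.6):
    base = Image.open(image_path).convert("RGB")
    frames = []
    for f in freqs:
        p = k * f + c                            # attribute value for this moment
        frames.append(ImageEnhance.Brightness(base).enhance(p))
    return frames                                # the intermediate image sequence
```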
And step 204, storing the generated image.
Since the frequency changes continuously while the music plays, the attribute value changes accordingly, so a plurality of intermediate images can be generated in the above process, forming an image sequence. The generated image sequence can be stored in full, or only the intermediate images corresponding to a certain passage can be selected for storage. Further, to improve the user experience, the image frames generated for a passage the user particularly likes can be captured and stored as a GIF picture.
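A sketch of this saving step under the same assumptions, storing the whole sequence or a chosen slice of it as a GIF with Pillow (path and duration are illustrative):

```python
# Sketch of step 204: store the whole generated sequence, or a chosen slice
# of it, as an animated GIF (duration is per frame, in milliseconds).
def save_as_gif(frames, path="magic_picture.gif", start=0, end=None, duration_ms=100):
    clip = frames[start:end]
    clip[0].save(path, save_all=True, append_images=clip[1:],
                 duration=duration_ms, loop=0)
```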
With this scheme, an image sequence whose processing effect changes with the music frequency is generated, producing a changeable "magic picture". This enriches the available image beautification modes and adds an element of entertainment to image processing.
Referring to fig. 3, fig. 3 is a flowchart of an image processing method according to an embodiment of the present invention. In the embodiment shown in fig. 3, the frequency of music, the change of tempo, etc. are associated with certain properties of the image (brightness, hue, saturation, etc.), achieving the effect of a magic map. As shown in fig. 3, the method comprises the following steps:
step 301, acquiring the image to be processed selected by the user and the music for beautifying the image.
Step 302, determining the attribute to be adjusted in the image to be processed.
In this embodiment, a certain dimension of the image to be processed, such as brightness, sharpness, hue or saturation, is selected as the attribute to be adjusted. After the attribute to be adjusted is selected, the average or median frequency of a certain passage of the music (such as the first passage) is calculated, that frequency value is taken as the frequency corresponding to the image to be processed, and the current value of the attribute to be adjusted is set as its initial value.
Step 303, generating an image that changes with the music frequency.
As the music plays, its frequency f changes continuously, and with it the attribute value p (one-dimensional) of the attribute to be adjusted. Images corresponding to the successive music frequencies are then generated from the resulting attribute values, producing different image processing effects.
Since the aim is to make the attribute value of the image to be processed change with the frequency of the music, in the embodiment of the present invention the correspondence between the music frequency and the numerical attribute to be adjusted may be a first-order linear one, that is, p = k·f + c, where p is the attribute value, f is the music frequency, k is a proportionality coefficient controlling the amplitude of the change, and c is an adjustable offset that may be determined from the initial attribute value.
Step 304, determine the image transform rate.
In general, notes in the high register are associated with a bright visual impression and positive or happy emotions, while notes in the low register are associated with a dim visual impression and dull or sad emotions; a relaxed tempo tends to evoke a sense of open space or a calmer mood, whereas a brisk tempo tends to evoke a sense of confined space and emotional agitation.
Based on this relationship between music and human emotion, the high-pitched passages of the music can be associated with image attributes that feel positive and happy, such as high brightness and warm tones, while the bass passages are associated with attributes that feel dim and sad, such as low brightness and cool tones; a slow rhythm slows down the image processing, creating a calm, unhurried feeling, while a fast rhythm speeds up the image changes, creating an intense atmosphere. For convenience of description, the music tempo is defined here as the number of note changes per unit time: the more note changes, the faster the tempo, and vice versa.
Therefore, for the intermediate processed images whose attribute values change with the music frequency, in the embodiment of the present invention the image transformation rate between these images can also be made to follow the note changes within the sampling time, that is, the continually changing rhythm of the music. The musical rhythm is determined by counting the number N of note changes within a unit sampling time T, and the image transformation rate v is adjusted according to this number, so that the rate of change between the image frames in the GIF varies with the musical rhythm and stays consistent with it.
In particular, there may also be a linear relationship between the change in music tempo and the image conversion rate. The specific linear relationship is not limited in the embodiment of the present invention.
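A sketch of this tempo-driven transformation rate, assuming note-onset timestamps are already available and that the tempo-to-rate relation is linear; the window length, coefficients and function names are illustrative assumptions.

```python
# Sketch of step 304 assuming note onsets are already available as timestamps
# (seconds): count the note changes N in each sampling window T and map the
# count linearly to an image transform rate v, i.e. to a per-frame duration.
def transform_rates(onsets, total_s, window_s=2.0, k=0.5, c=2.0):
    rates, t = [], 0.0
    while t < total_s:
        n = sum(1 for o in onsets if t <= o < t + window_s)  # note changes in T
        rates.append(c + k * n)                  # linear tempo-to-rate relation
        t += window_s
    return rates                                 # frames per second, per window

def frame_durations_ms(rates):
    return [int(1000.0 / v) for v in rates]      # faster tempo -> shorter duration
```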
Step 305, saving the generated image.
Since the frequency changes continuously while the music plays, the attribute value changes accordingly, so a plurality of intermediate images can be generated in the above process, forming an image sequence. The generated image sequence can be stored in full, or only the intermediate images corresponding to a certain passage can be selected for storage. Further, to improve the user experience, the image frames generated for a passage the user particularly likes can be captured and stored as a GIF picture.
With this scheme, an image sequence that changes with the musical rhythm is obtained, and at the same time the images can be processed in a way that reflects the emotion carried by the music, producing a more varied and more expressive picture sequence and further enriching the image beautification modes.
Referring to fig. 4, fig. 4 is a flowchart of an image processing method according to an embodiment of the present invention. As shown in fig. 4, the method comprises the following steps:
step 401, obtaining the image to be processed selected by the user and the music for beautifying the image.
Step 402, determining the attribute to be adjusted in the image to be processed.
In this embodiment, a certain dimension of the image to be processed, such as brightness, sharpness, hue, saturation, etc., is selected as the attribute to be adjusted.
Step 403, obtaining a frequency histogram of the music.
In this step, the selected music is analyzed to obtain its frequency histogram. For example, the frequency histogram T of the music can be calculated by traversing the whole piece and analyzing its frequencies statistically.
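A sketch of how such a frequency histogram T could be built once per-window dominant frequencies have been collected over the whole piece; the bin count and frequency range are illustrative assumptions.

```python
# Sketch of step 403: given dominant frequencies collected over the whole
# piece (e.g. one per short window), build the normalized frequency
# histogram T. Bin count and frequency range are illustrative assumptions.
import numpy as np

def music_frequency_histogram(freqs, bins=256, f_max=4000.0):
    hist, _ = np.histogram(freqs, bins=bins, range=(0.0, f_max))
    return hist / max(hist.sum(), 1)             # normalized histogram T
```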
Step 404, obtaining a histogram of the attribute to be adjusted.
Step 405, fusing the frequency histogram and the histogram of the attribute to be adjusted to obtain an image histogram of the image to be processed.
Here, a histogram modification algorithm is mainly used to modify the histogram of the attribute to be adjusted by means of the frequency histogram.
For a given attribute of the image, its histogram S can be obtained by a simple traversal of the image using basic statistics. A specific mapping method F is then chosen to fuse the frequency histogram with the image histogram, yielding a new image histogram W = F(T, S) and thereby beautifying the image with the music. F is a histogram fusion function, and different forms, such as linear or nonlinear mappings, can be chosen to achieve different beautification effects.
And step 406, adjusting the attribute to be adjusted by using the image histogram to obtain a processed image.
This method enables fine adjustment of a given dimension of the image. An image has a dimension representing brightness as well as other attribute dimensions, and histogram statistics describe the distribution of all pixels (or of a local region) rather than a single value, so this method avoids the drawback of common adjustment methods that apply the same adjustment uniformly to every pixel. At the same time, the frequency histogram embeds information about the music into the image, producing a distinctive beautification effect.
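A sketch of steps 404 to 406 follows, assuming brightness (the V channel of HSV) is the attribute to be adjusted, that F is a simple linear fusion, and that the remapping is a basic histogram-specification step; all of these are illustrative choices, not the patent's prescribed method. T is expected to be a normalized 256-bin music frequency histogram such as the one sketched at step 403.

```python
# Sketch of steps 404-406: build the image attribute histogram S, fuse it
# with the music frequency histogram T via a linear F, and remap pixel
# values so their distribution follows the fused histogram W.
import numpy as np
from PIL import Image

def beautify_with_music(image_path, T, alpha=0.5):
    img = np.array(Image.open(image_path).convert("HSV"))
    v = img[..., 2]                              # brightness channel
    S, _ = np.histogram(v, bins=256, range=(0, 256))
    S = S / max(S.sum(), 1)                      # image attribute histogram S
    W = (1 - alpha) * S + alpha * T              # F(T, S): one possible fusion
    src_cdf, dst_cdf = np.cumsum(S), np.cumsum(W)
    lut = np.searchsorted(dst_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    img[..., 2] = lut[v]                         # remap brightness to follow W
    return Image.fromarray(img, mode="HSV").convert("RGB")
```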
Step 407, the generated image is saved.
With this scheme, personalized beautification of an image can be achieved by using different pieces of music, giving the beautification effect the user expects, which further enriches the image beautification modes and improves the user experience.
Referring to fig. 5, fig. 5 is a structural diagram of a mobile terminal according to an embodiment of the present invention, and as shown in fig. 5, the mobile terminal 500 includes:
an obtaining module 501, configured to obtain a target parameter of a target audio file; a first determining module 502, configured to determine an attribute value of the image to be processed according to the target parameter; and the processing module 503 is configured to generate a target image according to the attribute value.
Optionally, the target parameters include: a frequency of the target audio file; the obtaining module 501 is specifically configured to: acquiring N sampling frequencies of the target audio file at N sampling moments; the first determining module 502 is specifically configured to: determining N attribute values corresponding to the N sampling frequencies according to the N sampling frequencies; wherein N is a natural number.
Optionally, the obtaining module 501 is further configured to obtain the number of times of change of musical notes of the target audio file in a sampling time period; the mobile terminal may further include: a second determining module 504, configured to determine a corresponding image transformation rate according to the note change times; the processing module 503 includes: an obtaining sub-module 5031, configured to obtain M intermediate processing images according to the attribute values; a generating sub-module 5032, configured to generate a target image according to the image transformation rate and the M intermediate processed images, where the target image is a dynamic image; wherein M is a natural number.
Optionally, the target parameters include: a frequency histogram of the target audio file; the first determining module 502 comprises: the adjusting submodule 5021 is used for adjusting the attribute histogram of the image to be processed according to the frequency histogram of the target audio file; the determining submodule 5022 is used for determining the attribute value of the image to be processed according to the attribute histogram.
In order to make the image processing mode more flexible, the mobile terminal further comprises: a third determining module 505, configured to determine a target region in the image to be processed; the first determining module 502 is specifically configured to determine an attribute value corresponding to the target area according to the target parameter.
The mobile terminal 500 can implement each process implemented by the mobile terminal in the method embodiments of fig. 1 to fig. 4, and is not described herein again to avoid repetition.
The mobile terminal 500 of the embodiment of the invention can adjust the attribute to be adjusted of the image to be processed according to the target parameter of the target audio file, thereby solving the problem of single image beautifying mode in the prior art and enabling the processed image to meet the actual requirement of the user.
Fig. 6 is a schematic diagram of a hardware structure of a mobile terminal implementing various embodiments of the present invention. The mobile terminal 6000 includes but is not limited to: a radio frequency unit 6001, a network module 6002, an audio output unit 6003, an input unit 6004, a sensor 6005, a display unit 6006, a user input unit 6007, an interface unit 6008, a memory 6009, a processor 6060, and a power supply 6011. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 6 is not intended to be limiting of mobile terminals, and that a mobile terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
A processor 6060 to: acquiring target parameters of a target audio file; determining an attribute value of the image to be processed according to the target parameter; and generating a target image according to the attribute value.
In the embodiment of the invention, the attribute to be adjusted in the image to be processed can be set according to the target parameter of the target audio file, which overcomes the single image beautification mode of the prior art and allows the processed image to better meet the user's actual needs.
The target parameters include: a frequency of the target audio file; optionally, the processor 6060 is further configured to obtain N sampling frequencies of the target audio file at N sampling moments; determining N attribute values corresponding to the N sampling frequencies according to the N sampling frequencies; wherein N is a natural number.
The target parameters further include: the number of note changes of the target audio file within a sampling time period; optionally, the processor 6060 is further configured to:
determining a corresponding image transformation rate according to the note change times; acquiring the note change times of the target audio file in a sampling time period; determining a corresponding image transformation rate according to the note change times; acquiring M intermediate processing images according to the attribute values; generating a target image according to the image conversion rate and the M intermediate processing images, wherein the target image is a dynamic image; wherein M is a natural number.
The target parameters include: a frequency histogram of the target audio file; optionally, the processor 6060 is further configured to adjust an attribute histogram of the to-be-processed image according to the frequency histogram of the target audio file; and determining the attribute value of the image to be processed according to the attribute histogram.
Optionally, the processor 6060 is further configured to determine a target region in the image to be processed; and determining the attribute value corresponding to the target area according to the target parameter.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 6001 may be configured to receive and transmit signals during a message sending/receiving process or a call process; specifically, it receives downlink data from a base station and forwards it to the processor 6060 for processing, and it transmits uplink data to the base station. Generally, the radio frequency unit 6001 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Further, the radio frequency unit 6001 can also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides wireless broadband internet access to the user through the network module 6002, for example, to help the user send and receive e-mails, browse web pages, and access streaming media.
The audio output unit 6003 can convert audio data received by the radio frequency unit 6001 or the network module 6002 or stored in the memory 6009 into an audio signal and output as sound. Also, the audio output unit 6003 may also provide audio output related to a specific function performed by the mobile terminal 6000 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 6003 includes a speaker, a buzzer, a receiver, and the like.
The input unit 6004 is used to receive audio or video signals. The input unit 6004 may include a Graphics Processing Unit (GPU) 60041 and a microphone 60042; the graphics processor 60041 processes image data of still images or video obtained by an image capturing apparatus (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 6006. The image frames processed by the graphics processor 60041 may be stored in the memory 6009 (or another storage medium) or transmitted via the radio frequency unit 6001 or the network module 6002. The microphone 60042 can receive sound and process it into audio data; in a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 6001.
The mobile terminal 6000 further includes at least one sensor 6005, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 60061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 60061 and/or the backlight when the mobile terminal 6000 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 6005 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., and are not described in detail herein.
The display unit 6006 is used to display information input by the user or information provided to the user. The display unit 6006 may include a display panel 60061, and the display panel 60061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like.
The user input unit 6007 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 6007 includes a touch panel 60071 and other input devices 60072. Touch panel 60071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on touch panel 60071 or near touch panel 60071 using a finger, a stylus, or any suitable object or attachment). The touch panel 60071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 6060, and receives and executes commands sent by the processor 6060. In addition, the touch panel 60071 can be implemented by using various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to touch panel 60071, user input unit 6007 may include other input devices 60072. In particular, other input devices 60072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, touch panel 60071 can overlay display panel 60061 and, when touch panel 60071 detects a touch event at or near display panel 60061, communicate to processor 6060 to determine the type of touch event, and then processor 6060 provides a corresponding visual output on display panel 60061 according to the type of touch event. Although in fig. 6, the touch panel 60071 and the display panel 60061 are two independent components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 60071 and the display panel 60061 may be integrated to implement the input and output functions of the mobile terminal, which is not limited herein.
The interface unit 6008 is an interface for connecting an external device to the mobile terminal 6000. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 6008 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the mobile terminal 6000 or may be used to transmit data between the mobile terminal 6000 and the external device.
The memory 6009 can be used to store software programs as well as various data. The memory 6009 may mainly include a storage program area and a storage data area, where the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, and the like), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 6009 can include high-speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 6060 is a control center of the mobile terminal, connects various parts of the entire mobile terminal by using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 6009 and calling data stored in the memory 6009, thereby monitoring the mobile terminal as a whole. Processor 6060 may include one or more processing units; preferably, the processor 6060 may integrate an application processor, which handles primarily the operating system, user interface, applications, etc., and a modem processor, which handles primarily wireless communications. It is to be appreciated that the modem processor described above may not be integrated into processor 6060.
The mobile terminal 6000 may further include a power supply 6011 (such as a battery) for supplying power to various components, and preferably, the power supply 6011 may be logically connected to the processor 6060 through a power management system, so that functions of managing charging, discharging, and power consumption are implemented through the power management system.
In addition, the mobile terminal 6000 includes some functional modules that are not shown, and thus will not be described in detail herein.
Preferably, an embodiment of the present invention further provides a mobile terminal, which includes a processor, a memory, and a computer program stored in the memory and capable of running on the processor, where the computer program, when executed by the processor, implements each process of the above-mentioned embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. An image processing method, comprising:
acquiring target parameters of a target audio file;
determining an attribute value of the image to be processed according to the target parameter;
generating a target image according to the attribute value;
before generating the target image according to the attribute value, the method further comprises:
acquiring the note change times of the target audio file in a sampling time period;
determining a corresponding image transformation rate according to the note change times;
the generating a target image according to the attribute values includes:
acquiring M intermediate processing images according to the attribute values;
generating a target image according to the image conversion rate and the M intermediate processing images, wherein the target image is a dynamic image;
wherein M is a natural number.
2. The method of claim 1, wherein the target parameters comprise: a frequency of the target audio file;
the acquiring of the target parameter of the target audio file includes:
acquiring N sampling frequencies of the target audio file at N sampling moments;
the determining the attribute value of the image to be processed according to the target parameter comprises the following steps:
determining N attribute values corresponding to the N sampling frequencies according to the N sampling frequencies;
wherein N is a natural number.
3. The method of claim 1, wherein the target parameters comprise: a frequency histogram of the target audio file;
the determining the attribute value of the image to be processed according to the target parameter comprises the following steps:
adjusting the attribute histogram of the image to be processed according to the frequency histogram of the target audio file;
and determining the attribute value of the image to be processed according to the attribute histogram.
4. The method according to claim 1, wherein before determining the attribute value of the image to be processed according to the target parameter, the method further comprises:
determining a target area in the image to be processed;
the determining the attribute value of the image to be processed according to the target parameter comprises the following steps:
and determining the attribute value corresponding to the target area according to the target parameter.
5. A mobile terminal, comprising:
the acquisition module is used for acquiring target parameters of a target audio file;
the first determining module is used for determining the attribute value of the image to be processed according to the target parameter;
the processing module is used for generating a target image according to the attribute value;
the acquisition module is further used for acquiring the note change times of the target audio file in a sampling time period;
the mobile terminal further includes:
the second determining module is used for determining the corresponding image transformation rate according to the note change times;
the processing module comprises:
the acquisition submodule is used for acquiring M intermediate processing images according to the attribute values;
a generation submodule, configured to generate a target image according to the image transformation rate and the M intermediate processed images, where the target image is a dynamic image;
wherein M is a natural number.
6. The mobile terminal of claim 5, wherein the target parameters comprise: a frequency of the target audio file; the acquisition module is specifically configured to:
acquiring N sampling frequencies of the target audio file at N sampling moments;
the first determining module is specifically configured to determine, according to the N sampling frequencies, N attribute values corresponding to the N sampling frequencies;
wherein N is a natural number.
7. The mobile terminal of claim 5, wherein the target parameters comprise: a frequency histogram of the target audio file;
the first determining module includes:
the adjusting submodule is used for adjusting the attribute histogram of the image to be processed according to the frequency histogram of the target audio file;
and the determining submodule is used for determining the attribute value of the image to be processed according to the attribute histogram.
8. The mobile terminal of claim 5, wherein the mobile terminal further comprises:
a third determining module, configured to determine a target region in the image to be processed;
the first determining module is specifically configured to determine an attribute value corresponding to the target area according to the target parameter.
9. A mobile terminal, comprising: memory, processor and computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, carries out the steps of the image processing method according to any one of claims 1 to 4.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon, which computer program, when being executed by a processor, carries out the steps of the image processing method according to any one of claims 1 to 4.
CN201810271337.4A 2018-03-29 2018-03-29 Image processing method and mobile terminal Active CN108495036B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810271337.4A CN108495036B (en) 2018-03-29 2018-03-29 Image processing method and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810271337.4A CN108495036B (en) 2018-03-29 2018-03-29 Image processing method and mobile terminal

Publications (2)

Publication Number Publication Date
CN108495036A (en) 2018-09-04
CN108495036B (en) 2020-07-31

Family

ID=63316909

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810271337.4A Active CN108495036B (en) 2018-03-29 2018-03-29 Image processing method and mobile terminal

Country Status (1)

Country Link
CN (1) CN108495036B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110070896B (en) * 2018-10-19 2020-09-01 北京微播视界科技有限公司 Image processing method, device and hardware device
CN109710827B (en) * 2018-12-13 2021-07-13 百度在线网络技术(北京)有限公司 Picture attribute management method and device, picture server and business processing terminal
CN111489769B (en) * 2019-01-25 2022-07-12 北京字节跳动网络技术有限公司 Image processing method, device and hardware device
CN110766606B (en) * 2019-10-29 2023-09-26 维沃移动通信有限公司 Image processing method and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1503159A (en) * 2002-11-25 2004-06-09 ���µ�����ҵ��ʽ���� Short film generation/reproduction apparatus and method thereof
CN101483055A (en) * 2008-01-11 2009-07-15 慧国(上海)软件科技有限公司 Apparatus and method for arranging and playing a multimedia stream
CN102289778A (en) * 2011-05-10 2011-12-21 南京大学 Method for converting image into music

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101593541B (en) * 2008-05-28 2012-01-04 华为终端有限公司 Method and media player for synchronously playing images and audio file

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1503159A (en) * 2002-11-25 2004-06-09 ���µ�����ҵ��ʽ���� Short film generation/reproduction apparatus and method thereof
CN101483055A (en) * 2008-01-11 2009-07-15 慧国(上海)软件科技有限公司 Apparatus and method for arranging and playing a multimedia stream
CN102289778A (en) * 2011-05-10 2011-12-21 南京大学 Method for converting image into music

Also Published As

Publication number Publication date
CN108495036A (en) 2018-09-04

Similar Documents

Publication Publication Date Title
CN107817939B (en) Image processing method and mobile terminal
CN108495036B (en) Image processing method and mobile terminal
CN109461117B (en) Image processing method and mobile terminal
CN111223143B (en) Key point detection method and device and computer readable storage medium
CN109361867B (en) Filter processing method and mobile terminal
CN109089156B (en) Sound effect adjusting method and device and terminal
CN108668024B (en) Voice processing method and terminal
CN109086027B (en) Audio signal playing method and terminal
CN109819167B (en) Image processing method and device and mobile terminal
CN111445927B (en) Audio processing method and electronic equipment
CN111554321A (en) Noise reduction model training method and device, electronic equipment and storage medium
CN111010608B (en) Video playing method and electronic equipment
CN107644396B (en) Lip color adjusting method and device
CN108881782B (en) Video call method and terminal equipment
CN109656636B (en) Application starting method and device
CN109246474B (en) Video file editing method and mobile terminal
CN110602424A (en) Video processing method and electronic equipment
CN110568926A (en) Sound signal processing method and terminal equipment
CN111182211B (en) Shooting method, image processing method and electronic equipment
CN109462727B (en) Filter adjusting method and mobile terminal
CN107665074A (en) A kind of color temperature adjusting method and mobile terminal
CN108712574B (en) Method and device for playing music based on images
CN107563353B (en) Image processing method and device and mobile terminal
CN110766606B (en) Image processing method and electronic equipment
CN111314639A (en) Video recording method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant