CN109727191B - Image processing method and mobile terminal

Info

Publication number
CN109727191B
Authority
CN
China
Prior art keywords: dimension data, image, data, target object, mobile terminal
Legal status: Active
Application number
CN201811605045.6A
Other languages
Chinese (zh)
Other versions
CN109727191A (en)
Inventor
陈文智
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date: 2018-12-26
Filing date: 2018-12-26
Publication date: 2023-08-08
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201811605045.6A
Publication of CN109727191A (application): 2019-05-07
Application granted; publication of CN109727191B (grant): 2023-08-08

Landscapes

  • Telephone Function (AREA)

Abstract

The embodiment of the invention provides an image processing method and a mobile terminal. The method comprises the following steps: acquiring a target image, wherein the target image comprises a target object; measuring the target object through a sensing module to obtain first spatial dimension data of at least one first part of the target object; and when second spatial dimension data of a second part identical to the first part exists in pre-stored spatial dimension data information, adjusting the first part according to the second spatial dimension data and the first spatial dimension data, wherein the second spatial dimension data of the second part is obtained by measuring the second part of a reference object through the sensing module. In the embodiment of the invention, because the reference object can be any object acquired by the mobile terminal that meets the user's requirements, the adjusted image is no longer limited to a single adjustment mode and can achieve diversified processing effects, thereby meeting the user's diversified image processing needs.

Description

Image processing method and mobile terminal
Technical Field
Embodiments of the present invention relate to the field of communication technologies, and in particular to an image processing method and a mobile terminal.
Background
With the development of mobile terminals, photographing has become one of their most important functions. With the growing number of beautification applications, mobile terminal users can retouch the photos they take through such an application.
Existing beautification applications usually adjust photos according to preset standard beautification parameters. For example, when retouching a chin toward the currently popular pointed face shape, the chin in the photo is usually adjusted to a point according to the standard parameters.
However, because existing beautification methods adjust photos only according to standard parameters, while in practice each person's aesthetics and appearance differ, the current methods cannot achieve a satisfactory beautification effect and cannot meet users' diversified beautification needs.
Disclosure of Invention
The embodiment of the invention provides an image processing method and a mobile terminal, which are used for solving the problem that the actual requirements of a user cannot be met when a target image is processed.
In order to solve the above technical problem, in a first aspect, an embodiment of the present invention provides an image processing method applied to a mobile terminal, the method comprising:
Acquiring a target image, wherein the target image comprises a target object;
measuring the target object through a sensing module to obtain first spatial dimension data of at least one first part of the target object;
when second spatial dimension data of a second part identical to the first part exists in pre-stored spatial dimension data information, adjusting the first part according to the second spatial dimension data and the first spatial dimension data;
wherein the second spatial dimension data of the second part is obtained by measuring the second part of a reference object through the sensing module.
In a second aspect, an embodiment of the present invention further provides a mobile terminal, including:
a target image acquisition module, used for acquiring a target image, wherein the target image comprises a target object;
a first spatial dimension data acquisition module, used for measuring the target object through a sensing module to obtain first spatial dimension data of at least one first part of the target object;
an adjustment module, configured to adjust the first part according to the second spatial dimension data and the first spatial dimension data when second spatial dimension data of a second part identical to the first part exists in pre-stored spatial dimension data information; the second spatial dimension data of the second part is obtained by measuring the second part of a reference object through the sensing module.
In a third aspect, an embodiment of the present invention further provides a mobile terminal, including a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the foregoing image processing method.
In a fourth aspect, embodiments of the present invention additionally provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the aforementioned image processing method.
In the embodiment of the invention, when a target image including a target object is acquired, the sensing module measures the target object to obtain first spatial dimension data of at least one first part of the target object. The mobile terminal may pre-store spatial dimension data information, which is obtained by measuring, through the sensing module, second spatial dimension data of second parts of a reference object that correspond to the first parts. When second spatial dimension data of a second part identical to the first part exists in the pre-stored spatial dimension data information, the first part in the target image can be adjusted according to the second spatial dimension data and the first spatial dimension data, so that the adjusted image is related to the second part serving as a reference. Because the second part can be chosen and captured by the mobile terminal user, its second spatial dimension data constitutes reference data that conforms to the user's own aesthetic requirements.
Drawings
FIG. 1 is a flow chart of steps of an image processing method according to an embodiment of the present invention;
FIG. 2 is a flowchart showing the steps of an image processing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a display portion selection control according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of another display portion selection control according to an embodiment of the present invention;
FIG. 5 is a block diagram of an image processing mobile terminal according to an embodiment of the present invention;
FIG. 6 is a detailed block diagram of an image processing mobile terminal according to an embodiment of the present invention;
FIG. 7 is a block diagram of a mobile terminal according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to FIG. 1, a flowchart of the steps of an image processing method in an embodiment of the present invention is shown.
The method comprises the following specific steps:
step 101: a target image is acquired, the target image comprising a target object.
The embodiment of the invention can be applied to a mobile terminal, and the mobile terminal may be a mobile phone, a computer, an electronic reader, or the like, which is not specifically limited in the embodiment of the present invention.
In a specific application, the target image may be an image to be subjected to image processing, and the target object may be a face, a human body or the like.
In a specific application, the mobile terminal may acquire the target image through its camera. For example, after a user opens a beautification application that adopts the image processing method of the embodiment of the present invention, the application may turn on the camera to acquire the target image.
Step 102: the target object is measured through a sensing module to obtain first spatial dimension data of at least one first part of the target object.
In the embodiment of the invention, when the target image is acquired, the sensing module may also be invoked to measure the target object in the target image, thereby obtaining the first spatial dimension data of at least one first part of the target object.
In a specific application, the sensing module can provide hardware support for an AR-ruler application, with which a mobile terminal user can accurately measure a real object to obtain the measured object's spatial dimension data.
The sensing module may be a structured light module or a Time of Flight (TOF) module.
The structured light module comprises a light projection unit, a camera and an image processing unit. The light projection unit may be an infrared laser, and the camera an infrared camera. Light with certain structural characteristics is projected onto the target object through the infrared laser, the infrared camera collects the three-dimensional light image formed on the target object, and the image processing unit processes the collected image to obtain depth data of the target object. When the relative positions of the light projection unit and the camera are fixed, the degree of distortion of the light projected onto the target object depends on the depth of the target object's surface, so the three-dimensional light image collected by the infrared camera carries depth information.
The TOF module comprises a light emitting unit, an optical lens, an optical filter and an image sensor. The light emitting unit emits infrared light of a specific wavelength; the emitted light is reflected when it encounters the target object, and the reflected light enters the image sensor through the optical lens and the optical filter. The image sensor measures, for each pixel, the time taken for light to travel from the light emitting unit to the target object and back, from which the distance between the mobile terminal and the target object is calculated, thereby generating depth information. Because the optical filter only allows infrared light of the specific wavelength to pass and blocks light of other wavelengths from entering the image sensor, the accuracy of the depth calculation can be improved.
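As an illustrative sketch (not part of the patent text), the per-pixel TOF depth calculation described above reduces to d = c·t/2 for a measured round-trip time t; the function name and the raw round-trip-time input are assumptions for illustration:

```python
# Minimal sketch of the per-pixel TOF depth calculation described above.
# Assumption: `round_trip_times` is a 2D array of measured round-trip times
# (in seconds) for each pixel; the names are illustrative, not from the patent.
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_depth_map(round_trip_times: np.ndarray) -> np.ndarray:
    """Convert per-pixel round-trip times into a depth map in meters.

    Light travels to the object and back, so the one-way distance is
    d = c * t / 2 for a measured round-trip time t.
    """
    return SPEED_OF_LIGHT * round_trip_times / 2.0

# Example: a 2x2 sensor where every pixel measured a ~6.67 ns round trip
times = np.full((2, 2), 6.67e-9)
print(tof_depth_map(times))  # ~1.0 m for each pixel
```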
In a specific embodiment, the first part may correspond to any one of the following parts of the target object: eyebrows, eyes, nose, mouth, facial outline, head, shoulders, chest, waist, buttocks, knees and feet, and the first spatial dimension data of the first part may be parameters such as length, width and height. Because these parts are common targets in face and body retouching functions, letting the first part correspond to any of them conforms to the usage habits of mobile terminal users.
As a preferred solution of the embodiment of the present invention, step 102 may comprise the following sub-steps:
Sub-step A1: at least two target images are acquired through the sensing module.
In the embodiment of the invention, when the first spatial dimension of the first part is measured through the sensing module, a plurality of target images can be acquired, and the corresponding spatial dimension data can then be determined from the spatial coordinates that the acceleration sensor in the mobile terminal records for the same part across the target images. The specific measurement procedure comprises sub-steps A2 to A5.
Sub-step A2: a frame of at least one first part of the target object is determined in each target image, wherein the frame is the smallest frame surrounding the first part.
In the embodiment of the invention, at least one first part of the target object in each target image can be determined through technologies such as image recognition, and the smallest frame capable of surrounding each first part is determined around it.
Sub-step A3: a pair of marker points is determined on each set of opposite edges of the frame.
In the embodiment of the invention, the frame generally has four edges, and the two opposite pairs of edges determine the length and the width of the frame respectively, so a pair of marker points can be determined on each pair of opposite edges in order to measure the length, width and the like of the frame through these marker points.
Sub-step A4: the spatial coordinates of the marker points in each image are determined.
In the embodiment of the invention, the spatial coordinates of each marker point in each image can be determined through motion-sensing devices in the mobile terminal, such as a gyroscope.
Sub-step A5: the first spatial dimension data of the first part is determined according to the spatial coordinates of each pair of marker points in each target image.
In the embodiment of the invention, after the spatial coordinates of each pair of marker points in each target image are determined, the first spatial dimension data of the first part can be obtained from the relative movement distance of the first part between any two target images and the spatial distance between each pair of marker points; the first spatial dimension data may specifically be the length data, width data and the like of the first part.
In the embodiment of the invention, the sensing module is used to measure the first spatial dimension data of at least one first part of the target object. Because the sensing module measures efficiently and accurately, accurate first spatial dimension data can be obtained, which in turn yields a good processing effect when the target image is subsequently processed with the second spatial dimension data.
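As an illustrative sketch of sub-steps A3 to A5 (the helper names, coordinate values and data layout are assumptions, not taken from the patent), the frame's length and width can be derived from the spatial coordinates of its two pairs of marker points:

```python
# Minimal sketch of deriving a part's dimensions from marker-point
# coordinates (sub-steps A3-A5). All values and names are illustrative.
import math

def distance(p: tuple[float, float, float],
             q: tuple[float, float, float]) -> float:
    """Euclidean distance between two spatial coordinates."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def frame_dimensions(length_pair, width_pair):
    """Length and width of the bounding frame from its two marker pairs.

    `length_pair` holds the marker points on the edges bounding the length,
    `width_pair` the marker points on the edges bounding the width.
    """
    return distance(*length_pair), distance(*width_pair)

# Example: marker points (in meters) around a face-sized frame
length_markers = ((0.00, 0.00, 1.20), (0.00, 0.22, 1.20))  # top/bottom edges
width_markers = ((-0.08, 0.11, 1.20), (0.08, 0.11, 1.20))  # left/right edges
length_m, width_m = frame_dimensions(length_markers, width_markers)
print(f"length={length_m:.2f} m, width={width_m:.2f} m")  # 0.22 m, 0.16 m
```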
As another preferred solution of the embodiment of the present invention, step 102 may comprise the following sub-steps:
Sub-step B1: at least one first part of the target object is identified.
In the embodiment of the invention, at least one first part of the target object can be identified through technologies such as image recognition.
Sub-step B2: the depth information of the at least one first part is measured through the sensing module.
In the embodiment of the invention, the light emitting unit in the sensing module emits infrared light of a specific wavelength; the emitted light is reflected when it encounters the first part, and the reflected light enters the image sensor through the optical lens and the optical filter. The image sensor measures, for each pixel, the time taken for light to travel from the light emitting unit to the first part and back, from which the distance between the mobile terminal and the first part is calculated, thereby measuring the depth information of the at least one first part.
Sub-step B3: the first spatial dimension data of the at least one first part is calculated according to the depth information of the at least one first part.
In the embodiment of the invention, for example, if the first part is the shoulders, the distance between the mobile terminal and the left shoulder is detected to be d1 and the distance to the right shoulder to be d2, and the shoulder width data can then be calculated from the distances d1 and d2 together with the angle between the two lines of sight.
In the embodiment of the invention, the sensing module obtains the first spatial dimension data of at least one first part of the target object by measuring the depth information of that part, so the measurement process is simpler and the first spatial dimension data of the first part can be measured more efficiently.
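As an illustrative sketch of the shoulder example in sub-step B3 (the angle input and function name are assumptions; the patent does not spell out the exact geometry), the width follows from the law of cosines:

```python
# Minimal sketch of sub-step B3 for the shoulder example: given the depths
# d1 and d2 to the left and right shoulder and the angle between the two
# lines of sight (derivable in practice from the pixel offset and the
# camera's focal length), the shoulder width follows from the law of
# cosines. The angle input is an illustrative assumption.
import math

def width_from_depths(d1: float, d2: float, angle_rad: float) -> float:
    """Distance between two points at depths d1, d2 separated by angle_rad."""
    return math.sqrt(d1 ** 2 + d2 ** 2 - 2 * d1 * d2 * math.cos(angle_rad))

# Example: both shoulders 1.0 m away, ~23 degrees apart as seen by the camera
shoulder_width = width_from_depths(1.0, 1.0, math.radians(23))
print(f"{shoulder_width:.2f} m")  # ~0.40 m
```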
Step 106: when second spatial dimension data of a second part identical to the first part exists in the pre-stored spatial dimension data information, the first part is adjusted according to the second spatial dimension data and the first spatial dimension data; the second spatial dimension data of the second part is obtained by measuring the second part of the reference object through the sensing module.
In the embodiment of the invention, the spatial dimension information of a reference object that meets the user's requirements can be stored in the mobile terminal in advance. Specifically, the second spatial dimension data of the second part of the reference object, which is the same part as the first part, can be obtained by measuring the second part of the reference object through the sensing module in a manner similar to step 102.
In practical applications, the mobile terminal user may know a reference object that better matches his or her aesthetics, such as a good-looking friend. After the user selects the reference object, the mobile terminal can measure it through the sensing module to obtain second spatial dimension data of at least one second part of the reference object that is the same as the at least one first part. For example, the mobile terminal may measure second parts of the reference person such as the eyebrows, eyes, nose, mouth, facial outline, head, shoulders, chest, waist, buttocks, knees and feet, where the second spatial dimension data of the second part may be parameters such as length, width and height.
In the embodiment of the invention, when second spatial dimension data of a second part identical to the first part exists in the pre-stored spatial dimension data information, the first part in the target image can be adjusted according to that second spatial dimension data, thereby achieving the effect of beautifying the target image with reference to the reference object.
As a preferred solution of the embodiment of the present invention, the adjusting of the first part according to the second spatial dimension data and the first spatial dimension data in step 106 may comprise the following sub-steps:
Sub-step C1: the image size data of the first part in the target image is acquired.
Sub-step C2: a first ratio of the image size data to the first spatial dimension data is calculated.
Sub-step C3: the first part is scaled according to the product of the second spatial dimension data and the first ratio.
In the embodiment of the invention, when the first part is adjusted using the second spatial dimension data and the first spatial dimension data, the image size data of the first part in the target image, i.e. the size of the first part as it appears in the target image, can be acquired first. It can be understood that because a photo is usually a proportional reduction of the real object, the first spatial dimension data and the image size data of the first part have a corresponding reduction ratio, so the first ratio can be obtained by dividing the image size data by the first spatial dimension data. Multiplying the second spatial dimension data by the first ratio then gives the target image data to which the first part in the target image should be adjusted: if the image size data of the first part in the target image is smaller than the target image data, the first part can be stretched; if it is larger than the target image data, the first part can be shortened.
For example, if the first part is a leg, the first spatial dimension data of the leg, i.e. the true leg length, is 80 cm, and the image size data of the leg in the target image is 2 cm, giving a first ratio of 1:40. If the second spatial dimension data of the reference object's leg is 100 cm, the target image data is 100 cm × (1/40) = 2.5 cm, so the leg in the target image can be lengthened by 0.5 cm.
The embodiment of the invention provides a specific implementation of adjusting the first part according to the second spatial dimension data and the first spatial dimension data, by which the first part can be adjusted efficiently and quickly. It can be understood that, in practical applications, a person skilled in the art can choose a corresponding implementation according to the actual application scenario, and the embodiment of the present invention is not specifically limited in this respect.
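As an illustrative sketch of sub-steps C1 to C3 using the leg example above (function and variable names are assumptions, not from the patent), the adjustment reduces to one ratio and one multiplication:

```python
# Minimal sketch of sub-steps C1-C3, using the leg example from the text.
# Units are centimeters; the names are illustrative assumptions.
def target_image_size(image_size: float,
                      first_spatial: float,
                      second_spatial: float) -> float:
    """Size the first part should have in the image after adjustment.

    first_ratio = image_size / first_spatial   (sub-step C2)
    target      = second_spatial * first_ratio (sub-step C3)
    """
    first_ratio = image_size / first_spatial
    return second_spatial * first_ratio

# Leg example: real leg 80 cm, 2 cm in the image, reference leg 100 cm
target = target_image_size(image_size=2.0, first_spatial=80.0,
                           second_spatial=100.0)
scale_factor = target / 2.0
print(target, scale_factor)  # 2.5 cm -> stretch the leg region by 1.25x
```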
As a preferred solution of the embodiment of the present invention, steps 103 to 105 may further be performed before step 106, as shown in FIG. 2.
Step 103: a part selection control corresponding to the at least one first part is displayed.
Step 104: a first input from the user to the part selection control is received.
Step 105: at least one first part of the target object is determined in response to the first input.
In the embodiment of the present invention, a part selection control corresponding to the at least one first part may be displayed in the user interface of the mobile terminal, and the mobile terminal user may perform a first input on the first part to be processed as needed. The first input may be a click input, a long-press input, a voice input, or the like, or a preset gesture operation. It can be understood that a person skilled in the art can determine the specific form of the first input according to the actual application scenario.
Upon receiving the first input from the user to the part selection control, the mobile terminal may determine, in response to the input, at least one first part to be processed, and perform the image processing operation of step 106 on the at least one first part to be processed.
For example, as shown in FIG. 3, the target object included in the target image is a human face 10. A part selection control 20 corresponding to at least one first part, such as a chin selection control, a nose selection control and an eye selection control, may be displayed on the user interface of the mobile terminal. When the mobile terminal user clicks the chin selection control among the part selection controls, the mobile terminal processes the chin through the above image processing method in response to the click.
For another example, as shown in FIG. 4, the target object included in the target image is a human body 30. A part selection control 40 corresponding to at least one first part, such as a shoulder selection control, a waist selection control and a leg selection control, may be displayed on the user interface of the mobile terminal. When the mobile terminal user clicks the waist selection control among the part selection controls, the mobile terminal processes the waist through the above image processing method in response to the click.
In the embodiment of the invention, the mobile terminal provides a part selection control for at least one first part. The mobile terminal user can perform the first input on the part selection control and flexibly select the parts he or she wants to beautify, obtaining a beautification effect that meets the user's own requirements with very simple operations.
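As an illustrative sketch of the part-selection flow in steps 103 to 105 (all names and the stored data below are assumptions, not from the patent), a clicked control simply selects which part's data feeds the adjustment of step 106:

```python
# Minimal sketch of the part-selection flow (steps 103-105): a clicked
# control name selects which first part's data is adjusted. The names and
# stored values are illustrative assumptions.
first_spatial = {"chin": 6.0, "nose": 5.0, "waist": 70.0}    # measured (cm)
second_spatial = {"chin": 5.5, "nose": 4.8, "waist": 62.0}   # reference (cm)
image_size = {"chin": 0.6, "nose": 0.5, "waist": 7.0}        # in-image (cm)

def on_control_clicked(part: str) -> float:
    """Return the scale factor for the selected part (step 106)."""
    if part not in second_spatial:
        raise KeyError(f"no reference data stored for part {part!r}")
    ratio = image_size[part] / first_spatial[part]      # sub-step C2
    target = second_spatial[part] * ratio               # sub-step C3
    return target / image_size[part]

print(on_control_clicked("waist"))  # ~0.886 -> narrow the waist region
```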
In summary, in the embodiment of the present invention, when a target image including a target object is acquired, the sensing module measures the target object to obtain first spatial dimension data of at least one first part of the target object. The mobile terminal may pre-store spatial dimension data information obtained by measuring, through the sensing module, the second parts of a reference object that correspond to the first parts. When the pre-stored spatial dimension data information includes second spatial dimension data of a second part identical to the first part, the first part in the target image can be adjusted according to the second spatial dimension data and the first spatial dimension data, so that the adjusted image is related to the second part serving as a reference. Because the second part can be chosen and captured by the mobile terminal user, its second spatial dimension data constitutes reference data that conforms to the user's aesthetic requirements.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present invention is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred embodiments, and that the acts are not necessarily required for the present invention.
Referring to FIG. 5, a block diagram of an image processing mobile terminal 300 in an embodiment of the present invention is shown. It comprises:
a target image acquisition module 310, configured to acquire a target image, where the target image includes a target object;
a first spatial dimension data acquisition module 320, configured to measure the target object through a sensing module to obtain first spatial dimension data of at least one first part of the target object;
an adjustment module 330, configured to, when second spatial dimension data of a second part identical to the first part exists in the pre-stored spatial dimension data information, adjust the first part according to the second spatial dimension data and the first spatial dimension data; the second spatial dimension data of the second part is obtained by measuring the second part of the reference object through the sensing module.
Preferably, referring to FIG. 6, on the basis of FIG. 5 the image processing mobile terminal 300 may further include:
a display module 340, configured to display a part selection control corresponding to the at least one first part;
a receiving module 350, configured to receive a first input from the user to the part selection control;
a first part determination module 360, configured to determine at least one first part of the target object in response to the first input.
Preferably, the first spatial dimension data acquisition module 320 includes:
a target image acquisition sub-module, configured to acquire at least two target images through the sensing module;
a frame determination sub-module, configured to determine a frame of at least one first part of the target object in each target image, wherein the frame is the smallest frame surrounding the first part;
a marker point determination sub-module, configured to determine a pair of marker points on each set of opposite edges of the frame;
a spatial coordinate determination sub-module, configured to determine the spatial coordinates of the marker points in each image;
a first spatial dimension data acquisition sub-module, configured to determine the first spatial dimension data of the first part according to the spatial coordinates of each pair of marker points in each target image.
Alternatively, the first spatial dimension data acquisition module 320 includes:
an identification sub-module, configured to identify at least one first part of the target object;
a measurement sub-module, configured to measure the depth information of the at least one first part through the sensing module;
a calculation sub-module, configured to calculate the first spatial dimension data of the at least one first part according to the depth information of the at least one first part.
Preferably, the adjustment module 330 includes:
an image size data acquisition sub-module, configured to acquire image size data of the first part in the target image;
a first ratio calculation sub-module, configured to calculate a first ratio of the image size data to the first spatial dimension data;
a processing sub-module, configured to scale the first part according to the product of the second spatial dimension data and the first ratio.
In summary, in the embodiment of the present invention, when a target image including a target object is acquired, the sensing module measures the target object to obtain first spatial dimension data of at least one first part of the target object. The mobile terminal may pre-store spatial dimension data information obtained by measuring, through the sensing module, the second parts of a reference object that correspond to the first parts. When the pre-stored spatial dimension data information includes second spatial dimension data of a second part identical to the first part, the first part in the target image can be adjusted according to the second spatial dimension data and the first spatial dimension data, so that the adjusted image is related to the second part serving as a reference. Because the second part can be chosen and captured by the mobile terminal user, its second spatial dimension data constitutes reference data that conforms to the user's aesthetic requirements.
The above mobile terminal can implement each process implemented by the mobile terminal in the method embodiments of FIG. 1 to FIG. 4; details are not repeated here to avoid repetition.
FIG. 7 is a schematic diagram of the hardware structure of a mobile terminal implementing various embodiments of the present invention.
The mobile terminal 500 includes, but is not limited to: a radio frequency unit 501, a network module 502, an audio output unit 503, an input unit 504, a sensor 505, a display unit 506, a user input unit 507, an interface unit 508, a memory 509, a processor 510, and a power source 511. Those skilled in the art will appreciate that the mobile terminal structure shown in FIG. 7 does not limit the mobile terminal; the mobile terminal may include more or fewer components than shown, combine certain components, or arrange the components differently. In the embodiment of the invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, a pedometer and the like.
The processor 510 is configured to: acquire a target image, where the target image includes a target object; measure the target object through a sensing module to obtain first spatial dimension data of at least one first part of the target object; and when second spatial dimension data of a second part identical to the first part exists in the pre-stored spatial dimension data information, adjust the first part according to the second spatial dimension data and the first spatial dimension data; the second spatial dimension data of the second part is obtained by measuring the second part of the reference object through the sensing module.
In the embodiment of the invention, when a target image including a target object is acquired, the sensing module measures the target object to obtain first spatial dimension data of at least one first part of the target object. The mobile terminal may pre-store spatial dimension data information obtained by measuring, through the sensing module, the second parts of a reference object that correspond to the first parts. When second spatial dimension data of a second part identical to the first part exists in the pre-stored spatial dimension data information, the first part in the target image can be adjusted according to the second spatial dimension data and the first spatial dimension data, so that the adjusted image is related to the second part serving as a reference. Because the second part can be chosen and captured by the mobile terminal user, its second spatial dimension data constitutes reference data that conforms to the user's aesthetic requirements.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 501 may be used to receive and send signals during information transmission or a call; specifically, it receives downlink data from a base station and forwards it to the processor 510 for processing, and it sends uplink data to the base station. Typically, the radio frequency unit 501 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 501 may also communicate with networks and other devices through a wireless communication system.
The mobile terminal provides wireless broadband internet access to the user through the network module 502, such as helping the user to send and receive e-mail, browse web pages, access streaming media, etc.
The audio output unit 503 may convert audio data received by the radio frequency unit 501 or the network module 502, or stored in the memory 509, into an audio signal and output it as sound. The audio output unit 503 may also provide audio output related to a specific function performed by the mobile terminal 500 (e.g., a call signal reception sound or a message reception sound). The audio output unit 503 includes a speaker, a buzzer, a receiver, and the like.
The input unit 504 is used to receive audio or video signals. The input unit 504 may include a graphics processor (Graphics Processing Unit, GPU) 5041 and a microphone 5042. The graphics processor 5041 processes image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 506. The image frames processed by the graphics processor 5041 may be stored in the memory 509 (or other storage medium) or transmitted via the radio frequency unit 501 or the network module 502. The microphone 5042 may receive sound and process it into audio data. In the case of a phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 501 and then output.
The mobile terminal 500 also includes at least one sensor 505, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 5061 according to the brightness of ambient light, and the proximity sensor can turn off the display panel 5061 and/or the backlight when the mobile terminal 500 is moved to the ear. As one kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes) and the magnitude and direction of gravity when stationary, and can be used for recognizing the posture of the mobile terminal (such as switching between landscape and portrait modes, related games, and magnetometer posture calibration) and for vibration-recognition related functions (such as a pedometer and tapping). The sensor 505 may further include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described here.
The display unit 506 is used to display information input by a user or information provided to the user. The display unit 506 may include a display panel 5061, and the display panel 5061 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 507 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 507 includes a touch panel 5071 and other input devices 5072. The touch panel 5071, also referred to as a touch screen, may collect touch operations on or near it by a user (e.g., operations performed on or near the touch panel 5071 using any suitable object or accessory such as a finger or a stylus). The touch panel 5071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 510, and receives and executes commands sent by the processor 510. In addition, the touch panel 5071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 5071, the user input unit 507 may include other input devices 5072. Specifically, the other input devices 5072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here.
Further, the touch panel 5071 may be overlaid on the display panel 5061, and when the touch panel 5071 detects a touch operation thereon or thereabout, the touch operation is transmitted to the processor 510 to determine a type of touch event, and then the processor 510 provides a corresponding visual output on the display panel 5061 according to the type of touch event. Although in fig. 7, the touch panel 5071 and the display panel 5061 are two independent components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 5071 and the display panel 5061 may be integrated to implement the input and output functions of the mobile terminal, which is not limited herein.
The interface unit 508 is an interface through which an external device is connected to the mobile terminal 500. For example, the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 508 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the mobile terminal 500, or may be used to transmit data between the mobile terminal 500 and an external device.
The memory 509 may be used to store software programs as well as various data. The memory 509 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data and a phonebook), and the like. In addition, the memory 509 may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 510 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by running or executing software programs and/or modules stored in the memory 509, and calling data stored in the memory 509, thereby performing overall monitoring of the mobile terminal. Processor 510 may include one or more processing units; preferably, the processor 510 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 510.
The mobile terminal 500 may further include a power source 511 (e.g., a battery) for powering the various components, and preferably the power source 511 may be logically connected to the processor 510 via a power management system that performs functions such as managing charging, discharging, and power consumption.
In addition, the mobile terminal 500 includes some functional modules, which are not shown, and will not be described herein.
Preferably, the embodiment of the present invention further provides a mobile terminal, which includes a processor 510, a memory 509, and a computer program stored in the memory 509 and executable on the processor 510. When executed by the processor 510, the computer program implements each process of the above image processing method embodiment and can achieve the same technical effects, which are not repeated here to avoid repetition.
The embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements each process of the above image processing method embodiment and can achieve the same technical effects, which are not repeated here to avoid repetition. The computer-readable storage medium may be, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus comprising that element.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above-described embodiments, which are merely illustrative and not restrictive. Enlightened by the present invention, those of ordinary skill in the art can make many further forms without departing from the spirit of the present invention and the scope of the claims, all of which fall within the protection of the present invention.

Claims (9)

1. An image processing method applied to a mobile terminal, comprising:
acquiring a target image, wherein the target image comprises a target object;
measuring the target object through a sensing module to obtain first spatial dimension data of at least one first part of the target object; and
when second spatial dimension data of a second part identical to the first part exists in pre-stored spatial dimension data information, adjusting the first part according to the second spatial dimension data and the first spatial dimension data;
wherein the second spatial dimension data of the second part is obtained by measuring the second part of a reference object through the sensing module;
wherein the adjusting the first part according to the second spatial dimension data and the first spatial dimension data comprises:
acquiring image size data of the first part in the target image;
calculating a first ratio of the image size data to the first spatial dimension data; and
scaling the first part according to the product of the second spatial dimension data and the first ratio.
2. The method of claim 1, wherein the measuring the target object through a sensing module to obtain first spatial dimension data of at least one first part of the target object comprises:
acquiring at least two target images through the sensing module;
determining a frame of at least one first part of the target object in each target image, wherein the frame is the smallest frame surrounding the first part;
determining a pair of marker points on each set of opposite edges of the frame;
determining the spatial coordinates of the marker points in each image; and
determining the first spatial dimension data of the first part according to the spatial coordinates of each pair of marker points in each target image.
3. The method of claim 1, wherein the measuring the target object through a sensing module to obtain first spatial dimension data of at least one first part of the target object comprises:
identifying at least one first part of the target object;
measuring depth information of the at least one first part through the sensing module; and
calculating the first spatial dimension data of the at least one first part according to the depth information of the at least one first part.
4. The method of claim 1, wherein, before the adjusting the first part according to the second spatial dimension data and the first spatial dimension data when second spatial dimension data of a second part identical to the first part exists, the method further comprises:
displaying a part selection control corresponding to the at least one first part;
receiving a first input from a user to the part selection control; and
determining at least one first part of the target object in response to the first input.
5. The method of any one of claims 1 to 4, wherein the first part comprises any one of: eyebrows, eyes, nose, mouth, facial contour, head, shoulders, chest, waist, buttocks, knees, and feet.
6. A mobile terminal, comprising:
a target image acquisition module, configured to acquire a target image, wherein the target image comprises a target object;
a first spatial dimension data acquisition module, configured to measure the target object through a sensing module to obtain first spatial dimension data of at least one first part of the target object; and
an adjustment module, configured to, when second spatial dimension data of a second part identical to the first part exists in pre-stored spatial dimension data information, adjust the first part according to the second spatial dimension data and the first spatial dimension data; wherein the second spatial dimension data of the second part is obtained by measuring the second part of a reference object through the sensing module;
wherein the adjustment module comprises:
an image size data acquisition sub-module, configured to acquire image size data of the first part in the target image;
a first ratio calculation sub-module, configured to calculate a first ratio of the image size data to the first spatial dimension data; and
a processing sub-module, configured to scale the first part according to the product of the second spatial dimension data and the first ratio.
7. The mobile terminal of claim 6, further comprising:
a display module, configured to display a part selection control corresponding to the at least one first part;
a receiving module, configured to receive a first input from a user to the part selection control; and
a first part determination module, configured to determine at least one first part of the target object in response to the first input.
8. A mobile terminal comprising a processor, a memory and a computer program stored on the memory and executable on the processor, which when executed by the processor implements the steps of the image processing method according to any one of claims 1 to 5.
9. A computer-readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, implements the steps of the image processing method according to any one of claims 1 to 5.
CN201811605045.6A (priority 2018-12-26, filed 2018-12-26): Image processing method and mobile terminal. Granted as CN109727191B (Active).

Priority Applications (1)

CN201811605045.6A (priority date 2018-12-26, filing date 2018-12-26): Image processing method and mobile terminal

Publications (2)

CN109727191A (application publication): 2019-05-07
CN109727191B (grant publication): 2023-08-08

Family

ID=66296495

Family Applications (1)

CN201811605045.6A (filed 2018-12-26): Image processing method and mobile terminal (Active)

Country Status (1)

CN: CN109727191B

Families Citing this family (1)

CN112529770B * (Vivo Mobile Communication Co Ltd; filed 2020-12-07, granted 2024-01-26): Image processing method, device, electronic equipment and readable storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party

US8908928B1 * (priority 2010-05-31, published 2014-12-09): Body modeling and garment fitting using an electronic device
CN107786812A * (priority 2017-10-31, published 2018-03-09): Shooting method, mobile terminal and computer-readable storage medium

Non-Patent Citations (1)

Real-time image processing and display based on moving-target size detection; Zhai Yayu et al.; Computer Measurement & Control, No. 11, 2014-11-25 (full text) *

Also Published As

CN109727191A 2019-05-07

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant