CN107707818B - Image processing method, image processing apparatus, and computer-readable storage medium - Google Patents


Publication number
CN107707818B
Authority
CN
China
Prior art keywords
image
processed
image processing
area
acquiring
Prior art date
Legal status
Active
Application number
CN201710895943.9A
Other languages
Chinese (zh)
Other versions
CN107707818A (en)
Inventor
吴智聪
Current Assignee
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Nubia Technology Co Ltd
Priority to CN201710895943.9A
Publication of CN107707818A
Application granted
Publication of CN107707818B


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image processing method, an image processing apparatus, and a computer-readable storage medium. The image processing method comprises the following steps: acquiring environment information collected in a terminal viewfinder frame, and determining an object to be processed and a background image according to the environment information; acquiring an image to be processed corresponding to the object to be processed, and performing image processing on the image to be processed to obtain an intermediate image; and integrating the intermediate image and the background image to form a target image. In this scheme, the background image of the environment in which the image to be processed is located is obtained before the image to be processed is acquired. After the image to be processed has been processed into the intermediate image, the intermediate image and the background image are integrated, so that any information lost while turning the image to be processed into the intermediate image is filled in from the background image to form the target image. The target image is thus optimized while remaining realistic.

Description

Image processing method, image processing apparatus, and computer-readable storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, and a computer-readable storage medium.
Background
At present, mobile terminals such as mobile phones and tablet computers offer increasingly powerful functions. Among them, beautification is almost a standard feature: it is mainly used to optimize facial details in a portrait photograph so that the face in the picture reaches the state the user desires. Although functions for optimizing facial details abound, optimization of the human figure is rare; the figure is generally retouched with picture-processing software such as Adobe Photoshop, whose editing traces are too obvious and which demands a high level of skill from the operator. Moreover, after part of the information in a single picture is modified, some of the unmodified information is inevitably lost and cannot be supplemented, producing phenomena such as distorted lines and deformed objects at the boundary between the modified and unmodified regions.
Disclosure of Invention
The present invention mainly aims to provide an image processing method, an image processing apparatus, and a computer-readable storage medium, to solve the prior-art problem that modifying part of the information in a single picture causes unmodified information to be lost with no way to supplement it.
In order to achieve the above object, the present invention provides an image processing method, including:
acquiring environment information acquired in a terminal viewing frame, and determining an object to be processed and a background image according to the environment information;
acquiring a to-be-processed image corresponding to a to-be-processed object, and performing image processing on the to-be-processed image to obtain an intermediate image;
and integrating the intermediate image and the background image to form a target image.
Optionally, the step of integrating the intermediate image and the background image to form the target image includes:
acquiring a processing area subjected to image processing in the intermediate image, comparing the processing area with a corresponding area in the background image, and determining an area to be integrated;
and integrating the areas to be integrated in the intermediate image and the background image to form a target image.
Optionally, the step of acquiring environment information acquired in a terminal viewfinder frame and determining the object to be processed and the background image according to the environment information includes:
determining environment information previewed in a terminal viewing frame, and collecting the environment information;
and acquiring the environmental information collected by the terminal viewing frame within a preset time.
Optionally, the step of obtaining a to-be-processed image corresponding to the to-be-processed object, and performing image processing on the to-be-processed image to obtain an intermediate image includes:
acquiring a to-be-processed image corresponding to the to-be-processed object, and determining a contour region of the to-be-processed object in a background image;
and acquiring a region to be processed in the contour region, and performing image processing on the region to be processed to obtain an intermediate image.
Optionally, the step of performing image processing on the region to be processed to obtain an intermediate image includes:
and receiving attribute information of an object to be processed, and performing image processing on the area to be processed according to the attribute information to obtain an intermediate image.
Optionally, the step of performing image processing on the region to be processed according to the attribute information to obtain an intermediate image includes:
and acquiring a preprocessing scheme corresponding to the attribute information, and performing image processing on the area to be processed according to the preprocessing scheme to obtain an intermediate image corresponding to the attribute information.
Optionally, the step of obtaining an intermediate image corresponding to the attribute information includes:
when an image processing operation is received, processing the intermediate image according to the image processing operation to update the intermediate image.
Optionally, the step of integrating the intermediate image and the background image to form the target image includes:
and previewing and displaying the target image, and storing the target image when receiving a target image storage instruction.
Further, to achieve the above object, the present invention also proposes an image processing apparatus comprising: a memory, a processor, a communication bus, and an image processing program stored on the memory, wherein:
the communication bus is used for realizing connection communication between the processor and the memory;
the processor is used for executing the image processing program to realize the following steps:
acquiring environment information acquired in a terminal viewing frame, and determining an object to be processed and a background image according to the environment information;
acquiring a to-be-processed image corresponding to a to-be-processed object, and performing image processing on the to-be-processed image to obtain an intermediate image;
and integrating the intermediate image and the background image to form a target image.
Further, to achieve the above object, the present invention also provides a computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors for:
acquiring environment information acquired in a terminal viewing frame, and determining an object to be processed and a background image according to the environment information;
acquiring a to-be-processed image corresponding to a to-be-processed object, and performing image processing on the to-be-processed image to obtain an intermediate image;
and integrating the intermediate image and the background image to form a target image.
According to the image processing method of the invention, environment information collected in the terminal viewfinder frame is acquired, and the object to be processed and the background image are determined according to the environment information; the image to be processed corresponding to the object to be processed is acquired and processed to obtain an intermediate image; and the intermediate image and the background image are integrated to form a target image. In this scheme, the background image of the environment in which the image to be processed is located is obtained before the image to be processed is acquired. After the image to be processed has been processed into the intermediate image, the intermediate image and the background image are integrated, so that any information lost while turning the image to be processed into the intermediate image is filled in from the background image to form the target image. The target image is thus optimized while remaining realistic.
Drawings
Fig. 1 is a schematic diagram of a hardware structure of an alternative mobile terminal for implementing various embodiments of the present invention;
FIG. 2 is a diagram illustrating a wireless communication system of the mobile terminal shown in FIG. 1;
FIG. 3 is a flowchart illustrating a first embodiment of an image processing method according to the present invention;
FIG. 4 is a flowchart illustrating a second embodiment of an image processing method according to the present invention;
FIG. 5 is a schematic diagram of an apparatus structure of a hardware operating environment according to a method of an embodiment of the present invention;
FIG. 6 is a diagram illustrating a first scenario of an image processing method according to the present invention;
FIG. 7 is a diagram illustrating a second scenario of the image processing method according to the present invention;
fig. 8 is a schematic diagram of a third scenario of the image processing method according to the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are adopted only to facilitate the description of the present invention and have no specific meaning in themselves. Thus, "module", "component", and "unit" may be used interchangeably.
The terminal may be implemented in various forms. For example, the terminal described in the present invention may include a mobile terminal such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a smart band, a pedometer, and the like, and a fixed terminal such as a Digital TV, a desktop computer, and the like.
The following description will be given by way of example of a mobile terminal, and it will be understood by those skilled in the art that the construction according to the embodiment of the present invention can be applied to a fixed type terminal, in addition to elements particularly used for mobile purposes.
Referring to fig. 1, which is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present invention, the mobile terminal 100 may include: RF (Radio Frequency) unit 101, WiFi module 102, audio output unit 103, a/V (audio/video) input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 1 is not intended to be limiting of mobile terminals, which may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile terminal in detail with reference to fig. 1:
The radio frequency unit 101 may be configured to receive and transmit signals during information transmission and reception or during a call; specifically, it receives downlink information from a base station and forwards it to the processor 110 for processing, and transmits uplink data to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplex-Long Term Evolution), and TDD-LTE (Time Division Duplex-Long Term Evolution).
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the mobile terminal can help the user receive and send e-mail, browse web pages, access streaming media, and so on, providing the user with wireless broadband Internet access. Although fig. 1 shows the WiFi module 102, it is understood that it is not an essential component of the mobile terminal and may be omitted as needed within a scope that does not change the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The a/V input unit 104 is used to receive audio or video signals. The a/V input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or other storage medium), or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 may receive sound (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and can process such sound into audio data. In the phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated while receiving and transmitting audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or a backlight when the mobile terminal 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect a touch operation performed by a user on or near the touch panel 1071 (e.g., an operation performed by the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory), and drive a corresponding connection device according to a predetermined program. The touch panel 1071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and can receive and execute commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. In particular, other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like, and are not limited to these specific examples.
Further, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although the touch panel 1071 and the display panel 1061 are shown in fig. 1 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 108 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and external devices.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the mobile terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The mobile terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to various components, and preferably, the power supply 111 may be logically connected to the processor 110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system.
Although not shown in fig. 1, the mobile terminal 100 may further include a bluetooth module or the like, which is not described in detail herein.
In order to facilitate understanding of the embodiments of the present invention, a communication network system on which the mobile terminal of the present invention is based is described below.
Referring to fig. 2, fig. 2 is an architecture diagram of a communication Network system according to an embodiment of the present invention, where the communication Network system is an LTE system of a universal mobile telecommunications technology, and the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an IP service 204 of an operator, which are in communication connection in sequence.
Specifically, the UE201 may be the terminal 100 described above, and is not described herein again.
The E-UTRAN202 includes eNodeB2021 and other eNodeBs 2022, among others. Among them, the eNodeB2021 may be connected with other eNodeB2022 through backhaul (e.g., X2 interface), the eNodeB2021 is connected to the EPC203, and the eNodeB2021 may provide the UE201 access to the EPC 203.
The EPC203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036, and the like. The MME2031 is a control node that handles signaling between the UE201 and the EPC203 and provides bearer and connection management. The HSS2032 provides registers such as the home location register (not shown) and holds subscriber-specific information about service characteristics, data rates, etc. All user data may be sent through the SGW2034; the PGW2035 may provide IP address assignment for the UE201, among other functions; and the PCRF2036 is the policy and charging control decision point for service data flows and IP bearer resources, selecting and providing available policy and charging control decisions for a policy and charging enforcement function (not shown).
The IP services 204 may include the internet, intranets, IMS (IP Multimedia Subsystem), or other IP services, among others.
Although the LTE system is described as an example, it should be understood by those skilled in the art that the present invention is not limited to the LTE system, but may also be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems.
Based on the above mobile terminal hardware structure and communication device structure, embodiments of the image processing method of the present invention are provided.
Referring to fig. 3, the present invention provides an image processing method, which, in a first embodiment thereof, includes:
step S10, acquiring environment information collected in a terminal viewing frame, and determining an object to be processed and a background image according to the environment information;
With the continuing development of image processing technology, more and more image processing techniques have entered the daily life of the public, such as the various beauty cameras popular with their users. A beauty camera mainly beautifies the facial details of a person in a picture, whereas the image processing method of this embodiment mainly beautifies the person's posture when the person is photographed with a mobile terminal. The mobile terminal may be a smartphone, a tablet computer, or another terminal with a shooting function and a viewfinder frame.
Understandably, when a user shoots a subject with such a terminal, the user frames the shot through the terminal's viewfinder to preview how the subject looks in its environment; here the subject is a person. Previewing the shooting effect of the person in the environment through the viewfinder is precisely the process of collecting environment information through the terminal's viewfinder frame, and other people and objects in the environment are inevitably collected along with it. In this embodiment, the subject is taken as the object to be processed, i.e., the object that requires image processing, and the other objects in the environment form the background image. The environment information collected in the terminal viewfinder frame, i.e., information including the person and the other objects in the environment where the person is located, is therefore acquired, and the object to be processed and the background image are determined from it. The object to be processed can be determined by a face recognition technique, and the background image is the image excluding the object to be processed.
The background image is original image information that has not undergone image processing, so the processed image information can be patched and supplemented from the background image without deformation or distortion.
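As a minimal, hypothetical sketch of this separation step (the patent itself specifies no code), the object to be processed and the background can be modeled as two complementary layers cut from the captured frame by a binary person mask. The function name `split_layers` and the list-of-lists grayscale representation are illustrative assumptions; the mask stands in for the output of the face/person recognition described above.

```python
def split_layers(image, mask, hole=None):
    """Split a captured frame into (foreground, background) layers.

    Pixels where the mask is True go to the foreground layer (the object
    to be processed); the background layer keeps everything else. `hole`
    marks the missing pixels in each layer.
    """
    fg = [[px if m else hole for px, m in zip(irow, mrow)]
          for irow, mrow in zip(image, mask)]
    bg = [[hole if m else px for px, m in zip(irow, mrow)]
          for irow, mrow in zip(image, mask)]
    return fg, bg
```

In a real pipeline the mask would come from a person detector; here it is supplied directly so the splitting logic stays self-contained.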
Step S20, acquiring a to-be-processed image corresponding to a to-be-processed object, and performing image processing on the to-be-processed image to obtain an intermediate image;
Further, after the object to be processed and the background image are determined, the image to be processed corresponding to the object to be processed is acquired; that is, upon receiving a shooting instruction on the mobile terminal, the object to be processed is photographed as the image to be processed. The captured image to be processed contains both an image of the object to be processed (the figure image) and an image of the environment it is in (the figure's background), and it is the image of the object to be processed that needs processing. The image processing optimizes the posture of the figure image, for example lengthening body proportions, slimming the waist, or adding muscle definition; face processing technology can also be integrated to beautify facial details in the image of the object to be processed, such as skin smoothing, face thinning, and whitening. Understandably, to optimize the figure's posture, the person's position must first be identified, i.e., where the person is in the captured image; once the person is located, the body parts (torso, arms, legs, and so on) are then determined. Person-position recognition can be performed with a Face Detection algorithm, i.e., an algorithm that automatically locates faces in any given image; such algorithms include template matching, traditional machine-learning methods, and deep-learning methods. The position of the person in the captured image is determined from the detected face position.
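Of the face-detection families named above, template matching is the simplest to illustrate. The following is a hedged sketch, not the patent's algorithm: a brute-force sum-of-absolute-differences search that returns where a small face template best matches a grayscale image, both given as 2D lists; `match_template` is a hypothetical name.

```python
def match_template(image, template):
    """Return the (row, col) of the window in `image` with the minimum
    sum of absolute differences (SAD) against `template`."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    # Slide the template over every valid window and keep the best SAD.
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            sad = sum(abs(image[r + dr][c + dc] - template[dr][dc])
                      for dr in range(th) for dc in range(tw))
            if best is None or sad < best:
                best, best_pos = sad, (r, c)
    return best_pos
```

Production detectors use far more robust features than raw pixel differences; this only shows the matching idea.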
The body information can be determined by a matching algorithm: an approximate layout follows from the logical relationship that the torso lies below the face position, the arms lie to the left and right above the torso, and the legs lie below the torso. Matching standards are preset by collecting the shapes of the arms, torsos, and legs of many different people, and the body information is determined against them; when some part of the body matches the legs in the preset standards, that part is identified as the legs, so that image processing can be applied to the parts identified in the body information. When a waist-slimming operation is required, the waist contour is in effect moved toward the center of the waist, and the pixels P(x, y) around the waist contour are mapped to new positions P'(u, v) in the target image according to a fixed relationship; this can generally be realized with a homography matrix. Because the mapping of each point in the image to be processed is determined by the relative position between the pixel and the moved feature points, once the mapping is given, the existing pixels can be mapped to their new positions in the target image, yielding the slim-waist image. Alternatively, the position in the original image of each pixel of the target image can be computed, the actual pixel value obtained by interpolation, and that value backfilled into the target image, likewise yielding the slim-waist image. The posture-optimized figure image obtained through this processing is the intermediate image; according to what information is missing from the intermediate image, the intermediate image is completed with the background image.
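The interpolation-and-backfill variant described above (for each target pixel, compute its position in the original image, interpolate, and backfill) can be sketched as an inverse homography warp. This is a minimal illustration under the assumption of a grayscale NumPy image, not the patent's implementation:

```python
import numpy as np

def warp_with_homography(img, H):
    """Inverse-map each target pixel through H and bilinearly sample the
    source image, i.e. compute each target pixel's position in the original
    image, interpolate its value, and backfill it into the target.

    img: 2-D grayscale array; H: 3x3 homography mapping source -> target.
    """
    h, w = img.shape
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # homogeneous coords
    src = Hinv @ pts
    u = np.clip(src[0] / src[2], 0, w - 1.001)  # source x, clamped to the image
    v = np.clip(src[1] / src[2], 0, h - 1.001)  # source y, clamped to the image
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    du, dv = u - u0, v - v0
    # bilinear interpolation between the four neighbouring source pixels
    out = (img[v0, u0] * (1 - du) * (1 - dv)
           + img[v0, u0 + 1] * du * (1 - dv)
           + img[v0 + 1, u0] * (1 - du) * dv
           + img[v0 + 1, u0 + 1] * du * dv)
    return out.reshape(h, w)
```

With H the identity matrix the output reproduces the input; a waist-slimming warp would use a homography (or a piecewise mesh of them) fitted to the moved waist-contour feature points.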
Step S30, the intermediate image and the background image are integrated to form a target image.
Further, after the intermediate image and the background image are acquired, they are integrated to form the target image. Because the image to be processed includes both the image of the object to be processed and the image of the environment in which the object is located, processing the former inevitably affects the latter. Referring to fig. 6, the acquired environment information includes a person and a cube: the person is the object to be processed and the cube belongs to the background image, so the image to be processed corresponding to the object includes an image of the person and an image of the cube. Lengthening the leg lines in the image of the person amounts to stretching the leg contours, so the cube's lines adjacent to the leg contours are dragged downward during the stretch; that is, the image around the processed area of the image to be processed becomes distorted. The processed leg lines together with the distorted image near the leg contours constitute the intermediate image produced by the processing. In this embodiment, the distorted part of the intermediate image is determined by comparing the processed intermediate image with the unprocessed background image, and the distorted part is completed by integration with the unprocessed background image. In essence, the processed region of the intermediate image is compared with the corresponding part of the background image to check whether distortion has occurred; if it has, undistorted region information is extracted from the background image and supplemented into the intermediate image to form an undistorted target image.
In the intermediate image with the lengthened leg lines described above, the cube adjacent to the leg contour is compared with the same cube in the unprocessed background image, and it is checked whether the pixel difference between the two exceeds the allowable pixel variation range. When the difference does not exceed that range, the two images differ little, no distortion has occurred in the intermediate image, and no completion is needed. When the difference exceeds the range, the two differ significantly and distortion has occurred in the intermediate image; the cube adjacent to the leg contour in the background image is then integrated into the corresponding area of the intermediate image to complete it and form a more realistic target image.
The image processing method thus acquires the environment information collected in the terminal viewfinder and determines the object to be processed and the background image from it; acquires the image to be processed corresponding to the object to be processed and performs image processing on it to obtain an intermediate image; and then integrates the intermediate image with the background image to form the target image. In this scheme, the background image of the environment is obtained before the image to be processed is captured, so that after the processing produces the intermediate image, the information lost in turning the image to be processed into the intermediate image is completed from the background image by integrating the two, forming a target image that is both optimized and more realistic.
Further, in another embodiment of the image processing method of the present invention, the step of integrating the intermediate image and the background image to form the target image includes:
step S31, acquiring a processing area subjected to image processing in the intermediate image, comparing the processing area with a corresponding area in the background image, and determining an area to be integrated;
and step S32, integrating the to-be-integrated areas in the intermediate image and the background image to form a target image.
Further, since the image of the object to be processed may occupy multiple regions of the image to be processed, image processing of those regions yields the intermediate image. Correspondingly, during integration, every processed area in the intermediate image must be acquired so that all of them can be compared and integrated. For example, if the object to be processed is a boy and the image to be processed shows the boy standing beside an amusement-park ride, then the image of the object to be processed is the image of the boy. Suppose the processed regions are the boy's legs, abdomen, and arms; the leg, abdomen, and arm areas of the intermediate image are then acquired as the processed areas. At the same time the background image is acquired, that is, the leg, abdomen, and arm areas of the boy are compared against the unprocessed original image of the boy standing beside the ride, and the area to be integrated is determined from the comparison result. Specifically, a processed area in fact comprises the processed part plus a region within a certain range around it; if the processed part is the leg, the processed area includes the leg and the region within a certain range around the leg, so that whether information around the processed part has been lost can be judged from that surrounding region.
After a processed area is acquired, it can be divided into a number of small square blocks, and the corresponding area of the background image is divided likewise; the difference between the processed area and the corresponding area of the background image is then determined by comparing the blocks. A preset value is set for judging the size of the difference: when a block's difference exceeds the preset value, the processed area differs substantially from the corresponding background area and requires integration and completion, so it is determined to be an area to be integrated. Conversely, when no block's difference exceeds the preset value, the difference between the processed area and the corresponding background area is small and no integration or completion is required. When a processed area is determined to be an area to be integrated, the corresponding area of the background image is acquired, and the area to be integrated is integrated with it to form the target image. Specifically, the blocks of the processed area can be compared one-to-one with the blocks of the corresponding background area, with integration performed directly after each single-block comparison.
When the difference between a block in the processed area and the corresponding block in the background image is large enough to require integration, that block of the processed area is determined to be an area to be integrated and the corresponding block of the background image to be the integration area; the integration area is extracted and merged into the area to be integrated, forming an undistorted target image.
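The block-wise comparison and backfilling described above can be sketched as follows, assuming grayscale images stored as NumPy arrays; the block size and threshold are illustrative choices, not values from the patent:

```python
import numpy as np

def integrate_blocks(intermediate, background, block=8, threshold=0.1):
    """Compare the intermediate and background images block by block; where
    the mean absolute difference of a block exceeds the preset threshold,
    treat that block as an area to be integrated and backfill it with the
    undistorted background block."""
    out = intermediate.copy()
    h, w = intermediate.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            a = intermediate[y:y + block, x:x + block]
            b = background[y:y + block, x:x + block]
            if np.mean(np.abs(a - b)) > threshold:
                out[y:y + block, x:x + block] = b  # backfill from background
    return out
```

In a real pipeline this comparison would be restricted to the blocks around the processed contour rather than run over the whole frame, and a per-pixel blend at block borders would hide seams.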
Further, in another embodiment of the image processing method of the present invention, the step of acquiring environment information collected in a terminal viewfinder frame and determining the object to be processed and the background image according to the environment information includes:
step S11, determining the environmental information previewed in the terminal viewfinder, and collecting the environmental information;
and step S12, acquiring the environmental information collected by the terminal viewfinder in a preset time.
Understandably, to complete the processed target image with the unprocessed original image and thereby avoid distortion, the unprocessed original image must be acquired before the image is processed. When shooting with the terminal, the environment information is previewed through the terminal viewfinder; it includes the object to be processed and the background image. Background image and object to be processed are relative concepts: in an image containing a person and scenery, the boundary between the person and the scenery is the boundary between the object to be processed and the background image, i.e., everything outside the person's contour is environment information. When the person's posture is processed, the scenery around the posture contour is affected, and the scenery outside the contour serves as environment information. Moreover, in some cases the person's clothing may itself be environment information; for example, when a bare arm in the image is processed, the sleeve is affected, so the sleeve counts as environment information. Before shooting, the environment information is previewed through the viewfinder so that the relative positions of the object to be processed and the background objects reach their best state. When the previewed environment information is determined to be at its best, the environment information is collected, and the collected environment information is obtained within a preset time; that is, the amount of environment information acquired is determined by the preset time. In this embodiment the preset time is set in advance; it may be the interval between the focused preview and the shot, or may be set to 1 s or 2 s, without limitation here.
The environment information collected in the terminal viewfinder within the preset time period is then acquired. During preview, the object to be processed in the collected environment information may lie in the middle, lower left, lower right, and so on of the previewed image; that position is essentially the position of the photographed subject within the environment, so the acquired environment information meets the user's shooting requirements.
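One way to realize "collect within a preset time" is to keep only the preview frames captured inside a sliding time window. This is a minimal sketch under stated assumptions: the class name, API, and 2-second default are invented for illustration:

```python
import time
from collections import deque

class PreviewBuffer:
    """Retain the preview frames captured within the last `window` seconds,
    the 'preset time' of the text (e.g. 1 s or 2 s)."""

    def __init__(self, window=2.0):
        self.window = window
        self.frames = deque()  # (timestamp, frame) pairs, oldest first

    def add(self, frame, now=None):
        now = time.monotonic() if now is None else now
        self.frames.append((now, frame))
        # drop frames that fell out of the preset window
        while self.frames and now - self.frames[0][0] > self.window:
            self.frames.popleft()

    def collected(self):
        return [frame for _, frame in self.frames]
```

When the shooting instruction arrives, `collected()` would supply the unprocessed frames from which the background image is taken.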
Further, in another embodiment of the image processing method of the present invention, the step of obtaining an image to be processed corresponding to an object to be processed, and performing image processing on the image to be processed to obtain an intermediate image includes:
step S21, acquiring a to-be-processed image corresponding to the to-be-processed object, and determining a contour region of the to-be-processed object in the background image;
and step S22, acquiring a region to be processed in the contour region, and performing image processing on the region to be processed to obtain an intermediate image.
Further, the acquired image to be processed corresponding to the object to be processed includes, on the one hand, an image of the object to be processed and, on the other, an image of the environment in which the object is located. During processing, only the image of the object needs to be handled, and the boundary between it and the image of the environment is the contour of the object's image. Therefore, after the image to be processed is acquired, the contour region of the object to be processed within the background image is determined; this contour region can represent the image of the object to be processed. Not all of the object's image requires processing, only part of it, so the part requiring processing in the image to be processed is determined: the region to be processed within the contour region is acquired, and image processing of that region yields the intermediate image. The region to be processed can be determined from preset parameters, such as an ideal upper-body to lower-body ratio, waist-hip ratio, and so on. After the contour region is obtained, body information such as the arms, torso, and legs within it is determined and compared with the preset parameters. When a piece of body information deviates from the preset parameters by too much to match, it is determined to be a region to be processed; otherwise it is not taken as a region to be processed.
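The preset-parameter comparison just described can be sketched as a simple threshold test. The parameter names, numeric values, and tolerance below are hypothetical; the text only speaks of preset parameters such as the upper/lower body ratio and the waist-hip ratio:

```python
# Hypothetical preset parameters; the patent names the quantities but
# gives no numeric values.
PRESETS = {"waist_hip_ratio": 0.70, "upper_lower_ratio": 0.80, "arm_thickness": 0.12}

def regions_to_process(measured, presets=PRESETS, tolerance=0.10):
    """Return the names of body measurements that deviate from the preset
    parameters by more than the tolerance; these become the regions to be
    processed within the contour region."""
    return {name for name, value in measured.items()
            if name in presets and abs(value - presets[name]) > tolerance}
```

A measurement close enough to its preset is left alone, so only parts that fail the comparison are queued for image processing.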
Specifically, the step of performing image processing on the region to be processed to obtain an intermediate image includes:
and receiving attribute information of an object to be processed, and performing image processing on the area to be processed according to the attribute information to obtain an intermediate image.
Understandably, persons are classified by sex into male and female, and the criteria for judging male and female postures differ considerably: a male posture is characterized mainly by body proportions and musculature, while a female posture is characterized mainly by a graceful curve and body proportions. Therefore, if after the region to be processed is determined no distinction is made between male and female and the same scheme is applied, the processed intermediate image may differ greatly from what the user wants. This embodiment therefore provides a sex-discrimination mechanism and receives attribute information of the object to be processed, i.e., information on whether the object to be processed is male or female. Targeted image processing of the region to be processed according to the attribute information of the object to be processed yields an intermediate image that better meets the user's needs. The step of performing image processing on the region to be processed according to the attribute information to obtain an intermediate image comprises the following step:
step S221, a preprocessing scheme corresponding to the attribute information is obtained, and image processing is performed on the area to be processed according to the preprocessing scheme to obtain an intermediate image corresponding to the attribute information.
Further, this embodiment sets a corresponding preprocessing scheme for each kind of attribute information. If the attribute information of the object to be processed is female, the corresponding scheme mainly expresses a graceful feminine curve, for example setting the waist-hip ratio to A, the upper-body to lower-body ratio to B, and the arm thickness to C. If the attribute information is male, the corresponding scheme mainly expresses masculine muscularity, for example setting the waist-hip ratio to a, the upper-body to lower-body ratio to b, and the arm thickness to c. Specifically, referring to fig. 7, when the received attribute information of the object to be processed is female, it is determined from the contour area in the image that the abdomen is too fleshy and does not meet the waist-hip ratio, and that the arm thickness also fails the set requirement; the abdomen and arms are therefore taken as regions to be processed, and the preprocessing scheme corresponding to those regions is acquired so that they can be processed to meet the set requirements. Processing the regions to be processed according to their corresponding preprocessing scheme brings the figure's posture in the image to an ideal state and yields the intermediate image corresponding to the attribute information of the object to be processed. In addition, considering that the intermediate image obtained through this processing may still not meet the user's needs, the user is allowed to modify the image as required. Specifically, after the step of obtaining the intermediate image corresponding to the attribute information, the method includes:
step S222, when receiving the image processing operation, processing the intermediate image according to the image processing operation to update the intermediate image.
When the intermediate image obtained through image processing cannot meet the user's needs, the user may perform an image processing operation on it. The image processing operation issued according to the user's needs is received, and the intermediate image is processed accordingly. If, for instance, the operation is to enhance the abdominal-muscle definition, that definition is added to the figure's abdomen in the intermediate image. The intermediate image processed by the image processing operation is updated as the new intermediate image, which is subsequently integrated and completed to form a target image that meets the user's needs.
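The attribute-to-scheme selection of step S221 above can be sketched as a lookup table. The keys and numeric values are illustrative placeholders; the patent only names the female parameters A/B/C and the male parameters a/b/c:

```python
# Placeholder parameter values; the text names them A/B/C (female) and
# a/b/c (male) without giving numbers.
SCHEMES = {
    "female": {"waist_hip_ratio": 0.70, "upper_lower_ratio": 0.80, "arm_thickness": 0.10},
    "male":   {"waist_hip_ratio": 0.85, "upper_lower_ratio": 0.85, "arm_thickness": 0.14},
}

def preprocessing_scheme(attribute):
    """Select the preprocessing scheme matching the received attribute
    information of the object to be processed."""
    if attribute not in SCHEMES:
        raise ValueError("unknown attribute: %s" % attribute)
    return SCHEMES[attribute]
```

The selected scheme then supplies the target parameters against which each region to be processed is adjusted.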
Further, referring to fig. 4, a second embodiment of the image processing method of the present invention is proposed based on the first embodiment; in the second embodiment, after the step of integrating the intermediate image and the background image to form the target image, the method includes:
in step S40, the target image is displayed as a preview, and when the target image storing instruction is received, the target image is stored.
Further, referring to fig. 8, after the intermediate image and the background image are integrated to form the target image, the formed target image is preview-displayed on the mobile terminal so that the user can view the effect of the image processing and the integration completion. Save and cancel virtual keys are displayed on the preview interface at the same time: when the target image meets the user's needs, the user can tap the save key to store it, and after storage succeeds an "image saved" prompt is shown on the terminal's display interface. When the target image does not meet the user's needs, the user can tap the cancel key to discard it and jump back to the shooting interface so that the shot can be taken again. Receipt of a target-image storage instruction indicates that the user is satisfied with the target image; the target image is then stored in the mobile terminal for subsequent use, completing the image processing of the object to be processed.
Referring to fig. 5, fig. 5 is a schematic device structure diagram of a hardware operating environment related to a method according to an embodiment of the present invention.
The image processing apparatus comprises an interactive mobile terminal, which may be a mobile terminal device with a display function such as a smartphone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, or a portable computer.
As shown in fig. 5, the image processing apparatus may include: a processor 1001, such as a CPU, a memory 1005, and a communication bus 1002, where the communication bus 1002 implements connection and communication between the processor 1001 and the memory 1005. The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a disk memory); the memory 1005 may alternatively be a storage device separate from the processor 1001.
Optionally, the image processing apparatus may further include a user interface, a network interface, a camera, RF (radio frequency) circuits, a sensor, an audio circuit, a WiFi module, and the like. The user interface may comprise a Display screen (Display), an input unit such as a Keyboard (Keyboard), and the optional user interface may also comprise a standard wired interface, a wireless interface. The network interface may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface).
Those skilled in the art will appreciate that the configuration shown in fig. 5 does not constitute a limitation of the image processing apparatus, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
As shown in fig. 5, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, and an image processing program. The operating system is a program that manages and controls hardware and software resources of the image processing apparatus, and supports the operation of the image processing program as well as other software and/or programs. The network communication module is used to implement communication between components within the memory 1005 and with other hardware and software in the image processing apparatus.
In the image processing apparatus shown in fig. 5, the image processing program is applicable to a mobile terminal as a transmitting end, and the processor 1001 is configured to execute the image processing program stored in the memory 1005, and implement the steps of:
acquiring environment information acquired in a terminal viewing frame, and determining an object to be processed and a background image according to the environment information;
acquiring a to-be-processed image corresponding to a to-be-processed object, and performing image processing on the to-be-processed image to obtain an intermediate image;
and integrating the intermediate image and the background image to form a target image.
Further, the step of integrating the intermediate image and the background image to form the target image includes:
acquiring a processing area subjected to image processing in the intermediate image, comparing the processing area with a corresponding area in the background image, and determining an area to be integrated;
and integrating the areas to be integrated in the intermediate image and the background image to form a target image.
Further, the step of acquiring environment information acquired in a terminal viewfinder and determining the object to be processed and the background image according to the environment information includes:
determining environment information previewed in a terminal viewing frame, and collecting the environment information;
and acquiring the environmental information collected by the terminal viewing frame within a preset time.
Further, the step of obtaining a to-be-processed image corresponding to the to-be-processed object and performing image processing on the to-be-processed image to obtain an intermediate image includes:
acquiring a to-be-processed image corresponding to the to-be-processed object, and determining a contour region of the to-be-processed object in a background image;
and acquiring a region to be processed in the contour region, and performing image processing on the region to be processed to obtain an intermediate image.
Further, the step of performing image processing on the region to be processed to obtain an intermediate image includes:
and receiving attribute information of an object to be processed, and performing image processing on the area to be processed according to the attribute information to obtain an intermediate image.
Further, the step of performing image processing on the region to be processed according to the attribute information to obtain an intermediate image includes:
and acquiring a preprocessing scheme corresponding to the attribute information, and performing image processing on the area to be processed according to the preprocessing scheme to obtain an intermediate image corresponding to the attribute information.
Further, after the step of obtaining the intermediate image corresponding to the attribute information, the processor 1001 is configured to execute an image processing program stored in the memory 1005, and implement the following steps:
when an image processing operation is received, processing the intermediate image according to the image processing operation to update the intermediate image.
Further, after the step of integrating the intermediate image and the background image to form the target image, the processor 1001 is configured to execute the image processing program stored in the memory 1005, and implement the following steps:
and previewing and displaying the target image, and storing the target image when receiving a target image storage instruction.
The specific implementation of the image processing apparatus of the present invention is substantially the same as the embodiments of the image processing method described above, and will not be described herein again.
The present invention provides a computer readable storage medium storing one or more programs, the one or more programs further executable by one or more processors for:
acquiring environment information acquired in a terminal viewing frame, and determining an object to be processed and a background image according to the environment information;
acquiring a to-be-processed image corresponding to a to-be-processed object, and performing image processing on the to-be-processed image to obtain an intermediate image;
and integrating the intermediate image and the background image to form a target image.
Further, the step of integrating the intermediate image and the background image to form the target image includes:
acquiring a processing area subjected to image processing in the intermediate image, comparing the processing area with a corresponding area in the background image, and determining an area to be integrated;
and integrating the areas to be integrated in the intermediate image and the background image to form a target image.
Further, the step of acquiring environment information acquired in a terminal viewfinder and determining the object to be processed and the background image according to the environment information includes:
determining environment information previewed in a terminal viewing frame, and collecting the environment information;
and acquiring the environmental information collected by the terminal viewing frame within a preset time.
Further, the step of obtaining a to-be-processed image corresponding to the to-be-processed object and performing image processing on the to-be-processed image to obtain an intermediate image includes:
acquiring a to-be-processed image corresponding to the to-be-processed object, and determining a contour region of the to-be-processed object in a background image;
and acquiring a region to be processed in the contour region, and performing image processing on the region to be processed to obtain an intermediate image.
Further, the step of performing image processing on the region to be processed to obtain an intermediate image includes:
and receiving attribute information of an object to be processed, and performing image processing on the area to be processed according to the attribute information to obtain an intermediate image.
Further, the step of performing image processing on the region to be processed according to the attribute information to obtain an intermediate image includes:
and acquiring a preprocessing scheme corresponding to the attribute information, and performing image processing on the area to be processed according to the preprocessing scheme to obtain an intermediate image corresponding to the attribute information.
Further, after the step of obtaining the intermediate image corresponding to the attribute information, the one or more programs may be further executable by one or more processors for:
when an image processing operation is received, processing the intermediate image according to the image processing operation to update the intermediate image.
Further, after the step of integrating the intermediate image and the background image to form the target image, the one or more programs may be further executable by the one or more processors for:
previewing and displaying the target image, and storing the target image when a target image storage instruction is received.
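The processing pipeline described above (locate a contour region for the object, take the region to be processed within it, then apply a preprocessing scheme selected by attribute information) can be sketched roughly as follows. The attribute names, the schemes themselves, and the boolean-mask representation of the contour region are illustrative assumptions, not details given in the specification:

```python
import numpy as np

# Hypothetical schemes: the patent does not define concrete attributes or
# preprocessing operations, so "face"/"sky" and the pixel math are placeholders.
PREPROCESSING_SCHEMES = {
    "face": lambda pixels: np.clip(pixels * 1.1 + 10, 0, 255),  # e.g. brighten
    "sky": lambda pixels: np.clip(pixels * 0.9, 0, 255),        # e.g. darken
}

def process_region(image, contour_mask, attribute):
    """Apply the preprocessing scheme selected by `attribute` inside the
    contour region (given here as a boolean mask) and return the
    intermediate image; pixels outside the region are left unchanged."""
    scheme = PREPROCESSING_SCHEMES.get(attribute, lambda p: p)  # fallback: no-op
    intermediate = image.astype(np.float64)  # astype copies, original untouched
    intermediate[contour_mask] = scheme(intermediate[contour_mask])
    return intermediate.astype(np.uint8)
```

A subsequent "image processing operation" (claim 6 style) would simply re-run `process_region`, or a further scheme, on the stored intermediate image to update it.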
The specific implementation of the computer-readable storage medium of the present invention is substantially the same as the embodiments of the image processing method described above, and is not described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware alone, although in many cases the former is the better implementation. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc), which includes instructions for enabling a terminal (such as a mobile phone, computer, server, air conditioner, or network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, these embodiments are illustrative rather than restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (9)

1. An image processing method, characterized by comprising the steps of:
acquiring environment information collected in a terminal viewfinder frame, and determining an object to be processed and a background image according to the environment information;
acquiring a to-be-processed image corresponding to a to-be-processed object, and performing image processing on the to-be-processed image to obtain an intermediate image;
integrating the intermediate image and the background image to form a target image;
wherein the step of integrating the intermediate image and the background image to form the target image comprises:
acquiring a processing area subjected to image processing in the intermediate image, and dividing the processing area and a corresponding area in the background image into a plurality of squares respectively;
comparing the plurality of squares of the processing area with the plurality of squares of the corresponding area in the background image, and determining the difference value between the processing area and the corresponding area in the background image;
if the difference value is larger than a preset value, determining the processing area as an area to be integrated;
and integrating the area to be integrated in the intermediate image and the background image to form a target image.
2. The image processing method according to claim 1, wherein the step of acquiring the environment information collected in the terminal viewfinder frame and determining the object to be processed and the background image based on the environment information comprises:
determining environment information previewed in a terminal viewfinder frame, and collecting the environment information;
and acquiring the environment information collected by the terminal viewfinder frame within a preset time period.
3. The image processing method according to claim 1, wherein the step of acquiring the image to be processed corresponding to the object to be processed and performing image processing on the image to be processed to obtain the intermediate image comprises:
acquiring a to-be-processed image corresponding to the to-be-processed object, and determining a contour region of the to-be-processed object in a background image;
and acquiring a region to be processed in the contour region, and performing image processing on the region to be processed to obtain an intermediate image.
4. The image processing method according to claim 3, wherein the step of performing image processing on the region to be processed to obtain an intermediate image comprises:
receiving attribute information of the object to be processed, and performing image processing on the region to be processed according to the attribute information to obtain an intermediate image.
5. The image processing method according to claim 4, wherein the step of performing image processing on the region to be processed according to the attribute information to obtain an intermediate image comprises:
acquiring a preprocessing scheme corresponding to the attribute information, and performing image processing on the region to be processed according to the preprocessing scheme to obtain an intermediate image corresponding to the attribute information.
6. The image processing method according to claim 5, wherein the step of obtaining the intermediate image corresponding to the attribute information is followed by:
when an image processing operation is received, processing the intermediate image according to the image processing operation to update the intermediate image.
7. The image processing method according to any one of claims 1 to 6, wherein the step of integrating the intermediate image and the background image to form the target image is followed by:
previewing and displaying the target image, and storing the target image when a target image storage instruction is received.
8. An image processing apparatus, characterized in that the image processing apparatus comprises: a memory, a processor, a communication bus, and an image processing program stored on the memory; wherein
the communication bus is used for realizing connection communication between the processor and the memory;
the processor is configured to execute the image processing program to implement the steps of the image processing method according to any one of claims 1 to 7.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon an image processing program which, when executed by a processor, implements the steps of the image processing method according to any one of claims 1 to 7.
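The block-wise integration test recited in claim 1 can be sketched as follows. The claim leaves the difference measure open, so the mean-absolute-difference metric, the block size, and the threshold below are illustrative assumptions only:

```python
import numpy as np

def integrate(intermediate, background, region, block=8, threshold=10.0):
    """Split the processed region and the corresponding background region into
    `block` x `block` squares, compare them to get a difference value, and
    integrate the processed region into the background only if the difference
    exceeds `threshold` (otherwise the background is returned unchanged)."""
    ys, xs = region                        # region given as a pair of slices
    proc = intermediate[ys, xs].astype(np.float64)
    back = background[ys, xs].astype(np.float64)
    h, w = proc.shape[:2]
    # Per-square mean absolute difference between processed and background.
    diffs = [
        np.abs(proc[y:y + block, x:x + block] - back[y:y + block, x:x + block]).mean()
        for y in range(0, h, block)
        for x in range(0, w, block)
    ]
    target = background.copy()
    if np.mean(diffs) > threshold:         # region changed enough: integrate it
        target[ys, xs] = intermediate[ys, xs]
    return target
```

In a full implementation, squares could also be accepted or rejected individually rather than averaging over the whole region; the claim's wording admits either reading.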
CN201710895943.9A 2017-09-27 2017-09-27 Image processing method, image processing apparatus, and computer-readable storage medium Active CN107707818B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710895943.9A CN107707818B (en) 2017-09-27 2017-09-27 Image processing method, image processing apparatus, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710895943.9A CN107707818B (en) 2017-09-27 2017-09-27 Image processing method, image processing apparatus, and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN107707818A CN107707818A (en) 2018-02-16
CN107707818B true CN107707818B (en) 2020-09-29

Family

ID=61175891

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710895943.9A Active CN107707818B (en) 2017-09-27 2017-09-27 Image processing method, image processing apparatus, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN107707818B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108989839A (en) * 2018-08-27 2018-12-11 深圳艺达文化传媒有限公司 Protagonist selection method for promotional video and related product
CN110084766A (en) * 2019-05-08 2019-08-02 北京市商汤科技开发有限公司 Image processing method and device, and electronic equipment
CN112598678A (en) * 2020-11-27 2021-04-02 努比亚技术有限公司 Image processing method, terminal and computer readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003281563A (en) * 2002-03-22 2003-10-03 Minolta Co Ltd Facial expression generating device, facial expression generating method and facial expression generating program
CN101489147A (en) * 2009-01-16 2009-07-22 西安电子科技大学 Width/height ratio conversion method based on interested region
CN103810674A (en) * 2012-11-13 2014-05-21 清华大学 Dependency perception object position reconfiguration based image enhancing method
CN105335451A (en) * 2014-08-15 2016-02-17 宇龙计算机通信科技(深圳)有限公司 Processing method and apparatus for display data in finder frame, shooting method and terminal
CN107154030A (en) * 2017-05-17 2017-09-12 腾讯科技(上海)有限公司 Image processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN107707818A (en) 2018-02-16

Similar Documents

Publication Publication Date Title
CN107093418B (en) Screen display method, computer equipment and storage medium
CN106558025B (en) Picture processing method and device
CN107767333B (en) Method and equipment for beautifying and photographing and computer storage medium
CN107835464B (en) Video call window picture processing method, terminal and computer readable storage medium
CN107731199B (en) Screen color temperature adjusting method, terminal and computer readable storage medium
CN107231470B (en) Image processing method, mobile terminal and computer readable storage medium
CN108459799B (en) Picture processing method, mobile terminal and computer readable storage medium
CN108053371B (en) Image processing method, terminal and computer readable storage medium
CN108200421B (en) White balance processing method, terminal and computer readable storage medium
CN107644396B (en) Lip color adjusting method and device
CN107707818B (en) Image processing method, image processing apparatus, and computer-readable storage medium
CN112995467A (en) Image processing method, mobile terminal and storage medium
CN108200332A Pattern splicing method, mobile terminal and computer readable storage medium
CN110069122B (en) Screen control method, terminal and computer readable storage medium
CN108848298B (en) Picture shooting method, flexible terminal and computer readable storage medium
CN107241504B (en) Image processing method, mobile terminal and computer readable storage medium
CN112000410A (en) Screen projection control method and device and computer readable storage medium
CN109683797B (en) Display area control method and device and computer readable storage medium
CN113301252B (en) Image photographing method, mobile terminal and computer readable storage medium
CN109859115A Image processing method, terminal and computer readable storage medium
CN108830901B (en) Image processing method and electronic equipment
CN108196924B (en) Brightness adjusting method, terminal and computer readable storage medium
CN108171652A Method for improving image stylistic effects, mobile terminal and storage medium
CN112532838B (en) Image processing method, mobile terminal and computer storage medium
CN114900613A (en) Control method, intelligent terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant