CN113592983A - Image processing method and device and computer readable storage medium - Google Patents


Info

Publication number
CN113592983A
CN113592983A (application CN202010368185.7A)
Authority
CN
China
Prior art keywords
image, management component, area, cover, target image
Prior art date
Legal status
Pending
Application number
CN202010368185.7A
Other languages
Chinese (zh)
Inventor
吴歆婉
宋睿
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Co., Ltd.
Priority to CN202010368185.7A
Publication of CN113592983A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/80 Creating or modifying a manually drawn or painted image using a manual input device, e.g. mouse, light pen, direction keys on keyboard
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20132 Image cropping

Abstract

Embodiments of the invention provide an image processing method, an image processing apparatus, and a computer-readable storage medium. The method comprises: acquiring a target image; detecting an editing operation on the target image and/or on management components in an image editing interface of an application, the management components comprising a cover area management component and an avatar area management component; and determining, from the target image according to the editing operation, a first image area to serve as a cover and a second image area to serve as an avatar. The cover and the avatar are thus cropped simultaneously from a single image submitted by the user, which effectively simplifies the operation of setting the cover and the avatar, offers strong flexibility, and fully meets the user's personalized setting requirements.

Description

Image processing method and device and computer readable storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to an image processing method and apparatus, and a computer-readable storage medium.
Background
Social applications such as instant messaging apps are currently very popular and are essential on user terminals such as smartphones and tablet computers. To meet users' personalization needs, these applications usually allow users to freely set a cover and an avatar. In practice, however, setting the cover and the avatar usually requires uploading and cropping an image separately for each, making the setting operations tedious. How to simplify the operations of setting the cover and the avatar while meeting users' personalization needs has therefore become an urgent problem.
Disclosure of Invention
Embodiments of the invention provide an image processing method, an image processing apparatus, and a computer-readable storage medium that can effectively simplify the operations of setting a cover and an avatar, offer strong flexibility, and fully meet users' personalized setting requirements.
In a first aspect, an embodiment of the present invention provides an image processing method, where the method includes:
acquiring a target image;
detecting an editing operation on the target image and/or on management components in an image editing interface of an application, wherein the management components comprise a cover area management component and an avatar area management component; and
determining, from the target image according to the editing operation, a first image area to serve as the cover and a second image area to serve as the avatar.
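As a rough, non-authoritative sketch of the three steps above (the names `Rect` and `determine_regions` are illustrative, not from the patent), the method reduces to computing two crop boxes over a single source image:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    """Axis-aligned rectangle in image pixel coordinates."""
    x: int
    y: int
    w: int
    h: int

def determine_regions(cover_rect, avatar_rect):
    """Return (first_image_area, second_image_area) as (left, top, right,
    bottom) crop boxes.

    The cover marquee and the avatar marquee each enclose one region of
    the target image; both crops come from the same source image, so a
    single upload yields both the cover and the avatar.
    """
    first = (cover_rect.x, cover_rect.y,
             cover_rect.x + cover_rect.w, cover_rect.y + cover_rect.h)
    second = (avatar_rect.x, avatar_rect.y,
              avatar_rect.x + avatar_rect.w, avatar_rect.y + avatar_rect.h)
    return first, second
```

These boxes could then be fed to any image library's crop routine (e.g. Pillow's `Image.crop`, which takes the same (left, top, right, bottom) tuple).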
In a second aspect, an embodiment of the present invention provides an image processing apparatus, including:
an acquisition module configured to acquire a target image;
a detection module configured to detect an editing operation on the target image and/or on management components in an image editing interface of an application, wherein the management components comprise a cover area management component and an avatar area management component; and
a processing module configured to determine, from the target image according to the editing operation, a first image area to serve as the cover and a second image area to serve as the avatar.
In a third aspect, an embodiment of the present invention provides a user terminal comprising a processor, a storage device, a network interface, and an input/output device, which are connected to one another. The network interface is controlled by the processor to send and receive data; the input/output device is configured to detect touch operations and output images; the storage device is configured to store a computer program comprising program instructions; and the processor is configured to invoke the program instructions to perform the image processing method according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program. The computer program comprises program instructions which, when executed by a processor, perform the image processing method according to the first aspect.
In the embodiments of the invention, the user terminal acquires a target image and detects the user's editing operation on the target image and/or on management components in an image editing interface of an application, the management components comprising a cover area management component and an avatar area management component. According to the editing operation, it determines from the target image a first image area to serve as the cover and a second image area to serve as the avatar. The cover and the avatar are thus cropped simultaneously from a single image submitted by the user, which reduces the user's upload-and-crop effort, lets the user freely crop the desired cover and avatar, effectively simplifies the operations of setting the cover and the avatar, offers strong flexibility, and fully meets the user's personalized setting requirements.
Drawings
To illustrate the embodiments of the present invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1a is a schematic view of changing a cover in the prior art;
FIG. 1b is a schematic diagram of changing an avatar in the prior art;
FIG. 1c is a schematic view of setting a cover and an avatar in the prior art;
FIG. 1d is a schematic diagram illustrating the display effect of a cover and an avatar in the prior art;
FIG. 2 is a schematic flowchart of an image processing method according to an embodiment of the present invention;
FIG. 3a is a diagram illustrating function options for setting a cover and an avatar according to an embodiment of the present invention;
FIG. 3b is a schematic diagram of selecting a target image according to an embodiment of the present invention;
FIG. 3c is a schematic diagram of an image selection interface according to an embodiment of the present invention;
FIG. 3d is a schematic view of another image selection interface according to an embodiment of the present invention;
FIG. 3e is a diagram of an image editing interface according to an embodiment of the present invention;
FIG. 3f is a diagram of another image editing interface according to an embodiment of the present invention;
FIG. 3g is a diagram of yet another image editing interface according to an embodiment of the present invention;
FIG. 3h is a schematic diagram illustrating the display effect of the cover and the avatar according to an embodiment of the present invention;
FIG. 4 is a schematic flowchart of another image processing method according to an embodiment of the present invention;
FIG. 5a is a schematic diagram illustrating a zoom operation performed on a target image according to an embodiment of the present invention;
FIG. 5b is a diagram illustrating an editing operation performed on the avatar area management component according to an embodiment of the present invention;
FIG. 5c is a diagram illustrating another editing operation performed on the avatar area management component according to an embodiment of the present invention;
FIG. 5d is a diagram illustrating display hierarchy partitioning according to an embodiment of the present invention;
FIG. 5e is a schematic diagram of a management component according to an embodiment of the present invention;
FIG. 5f is a schematic diagram of another management component according to an embodiment of the present invention;
FIG. 5g is a schematic diagram of a cover area image and an avatar area image according to an embodiment of the present invention;
FIG. 5h is a diagram of a cover area management component and an avatar area management component according to an embodiment of the present invention;
FIG. 5i is a schematic diagram of another cover area management component and avatar area management component according to an embodiment of the present invention;
FIG. 5j is a diagram illustrating yet another cover area management component and avatar area management component according to an embodiment of the present invention;
FIG. 5k is a diagram illustrating another image editing interface according to an embodiment of the present invention;
FIG. 5l is a schematic diagram of a fused display of the cover and the avatar according to an embodiment of the present invention;
FIG. 5m is a diagram illustrating yet another editing operation performed on the avatar area management component according to an embodiment of the present invention;
FIG. 5n is a schematic view of another fused display of the cover and the avatar according to an embodiment of the present invention;
FIG. 6 is a flowchart of setting a cover and an avatar according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a user terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the invention.
Currently, users generally set a cover and an avatar in an application in one of the following two ways:
The first way: as shown in FIGS. 1a and 1b, the user must upload and crop an image separately for the cover and for the avatar. In FIG. 1a, after tapping "change cover", the user uploads an image, crops it, and sets it as the cover; in FIG. 1b, after tapping "change avatar", the user uploads an image, crops it, and sets it as the avatar.
The second way: as shown in FIGS. 1c and 1d, the user uploads an image as the cover, and an image area at a fixed position in the cover image is then taken as the avatar. In FIG. 1c, after the user uploads the cover image, a portion at its center is used directly as the avatar; the resulting effect is shown in FIG. 1d.
In practice, the first way requires the user to upload and crop images multiple times, making the cover and avatar setting operations tedious and inefficient, while the second way offers poor flexibility in setting the avatar and can hardly meet users' personalized aesthetic requirements.
To address these problems of tedious operation, low efficiency, and poor flexibility in setting a cover and an avatar, an embodiment of the present invention provides an image processing method that effectively simplifies the setting operations, offers strong flexibility, and fully meets users' personalized setting requirements.
The implementation details of the technical scheme of the embodiment of the invention are explained in detail as follows:
fig. 2 is a schematic flow chart of an image processing method according to an embodiment of the present invention, where the image processing method includes the following steps:
201. The user terminal acquires a target image.
The user terminal can comprise electronic equipment such as a smart phone, a tablet computer, a notebook computer, a desktop computer and a vehicle-mounted intelligent terminal.
Specifically, three function options, "set cover", "set avatar", and "set cover and avatar", may be provided to the user through the display interface of the application, as shown in FIG. 3a. After detecting that the user has triggered the "set cover and avatar" option, the user terminal may prompt the user to upload an image, for example by opening an album manager (local or cloud). The user selects a target image from the images displayed in the album manager, and the user terminal obtains the selected image. As shown in FIG. 3b, the user selects a person image dated June 27, 2016 as the target image, which is used to set both the cover and the avatar.
In some possible embodiments, the user terminal may detect a touch operation of a user on a current cover or avatar in the display interface of the application, and when the touch operation is detected, three function options of "set cover", "set avatar", "set cover and avatar" are provided to the user through the display interface of the application, where the touch operation may be specifically a click operation, a long-press operation, a slide operation, and the like.
In some possible embodiments, the user terminal may output an image selection interface after detecting that the user has triggered the "set cover and avatar" option, as shown in FIG. 3c. After detecting that the user has tapped the "please select an image" button, it may present options for how to select the image, such as "select an image from the gallery" or "take a photo", as shown in FIG. 3d. When "select an image from the gallery" is chosen, the album manager may be opened as shown in FIG. 3b and the user selects the target image from it. When "take a photo" is chosen, the user terminal may start the camera, for example for the user to take a selfie, and use the captured photo as the target image.
In some possible embodiments, after selecting the target image from the album manager or capturing it with the camera, the user can edit the target image in a personalized way, for example by adding text or decorations to it, or by applying beautification or doodling operations, and so on.
202. The user terminal detects editing operation on the target image and/or the management components in the image editing interface of the application, wherein the management components comprise a cover area management component and a head portrait area management component.
Specifically, the user terminal may provide the user with an image editing interface that displays the target image and the management components, namely a cover area management component and an avatar area management component. The cover area management component is configured to adjust the image area serving as the cover; the avatar area management component is configured to adjust the image area serving as the avatar. The cover area management component is specifically a cover marquee with a default position and size; the avatar area management component is specifically an avatar marquee with a default position and size, as shown in FIG. 3e.
In some possible embodiments, the editing operation may be one or both of position adjustment and resizing, that is, the user may adjust the image area as the front cover and the image area as the avatar in the target image by editing the position and size of the target image and/or the management component. The editing operation may be triggered by clicking, sliding, long pressing, etc.
203. And the user terminal determines a first image area serving as a cover and a second image area serving as a head portrait from the target image according to the editing operation.
Specifically, when the user terminal detects that the user performs an editing operation on the target image and/or the management components in the image editing interface, it edits the target image and/or the management components accordingly. After the editing is completed, it determines the first image area (the cover) and the second image area (the avatar) from the target image according to the positions of the cover area management component and the avatar area management component in the target image: the image area enclosed by the frame of the cover area management component is the cover, and the image area enclosed by the frame of the avatar area management component is the avatar. The first image area is then saved as the cover and the second image area as the avatar.
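The patent does not specify how a marquee's on-screen position is mapped back to source-image pixels before cropping. One plausible sketch, with all names hypothetical, undoes the image's display offset and zoom first:

```python
def marquee_to_image_box(marquee, image_offset, scale):
    """Map a marquee rectangle from interface coordinates to source-image
    pixel coordinates.

    marquee      : (x, y, w, h) of the marquee in interface coordinates
    image_offset : (ox, oy) top-left of the displayed image in the interface
    scale        : display scale of the image (1.0 = native size)

    Returns a (left, top, right, bottom) crop box in image pixels.
    """
    x, y, w, h = marquee
    ox, oy = image_offset
    # Undo the display transform: subtract the offset, divide by the zoom.
    left = (x - ox) / scale
    top = (y - oy) / scale
    return (left, top, left + w / scale, top + h / scale)
```

The same mapping serves both marquees, which is what lets one edited image yield the cover crop and the avatar crop at once.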
In some possible embodiments, the positions and sizes of the target image, the cover area management component and the avatar area management component are editable, and when the user crops the target image, one or more of the target image, the cover area management component and the avatar area management component may be edited according to personal habits and actual requirements, which is not limited by the embodiments of the present invention.
In some possible embodiments, the function description information of a management component may be shown in the image editing interface. For example, when the user terminal detects that the user is editing the circular avatar area management component, it may show the component's function description "avatar" at the component's position, as shown in FIG. 3f.
In some possible embodiments, taking the avatar area management component as an example, when it is detected that the cursor moves to an edge of the avatar area management component or an inner area of the avatar area management component, the functional description information "avatar" is exposed at the position of the avatar area management component.
In some possible embodiments, the user terminal may also directly display the function description information of each management component in the image editing interface, specifically, the function description information "front cover" may be displayed at the position of the front cover area management component, and the function description information "head portrait" may be displayed at the position of the head portrait area management component, as shown in fig. 3g, so that the user may intuitively know the function of each management component.
In some possible embodiments, after the user finishes editing the target image and/or the management component, the user may click the "finish" option shown in fig. 3e, and the user terminal may save the setting result of the front cover and the avatar shown in fig. 3h, that is, the user only needs to submit an image once to complete the setting of the front cover and the avatar at the same time.
In the embodiments of the invention, the user terminal acquires a target image submitted by the user and displays the target image and the management components, namely a cover area management component and an avatar area management component, in an image editing interface. It detects the user's editing operation on the target image and/or the management components and, according to that operation, determines from the target image a first image area to serve as the cover and a second image area to serve as the avatar. The cover and the avatar are thus cropped simultaneously from a single submitted image, which reduces the user's upload-and-crop effort, lets the user freely crop the desired cover and avatar, effectively simplifies the setting operations, offers strong flexibility, and fully meets the user's personalized aesthetic requirements.
Fig. 4 is a schematic flow chart of an image processing method according to an embodiment of the present invention, where the image processing method includes the following steps:
401. The user terminal acquires a target image.
The specific implementation manner of step 401 may refer to the related description in step 201, and is not described herein again.
402. And the user terminal outputs an applied image editing interface through the display screen.
403. And the user terminal displays the target image and the management component in the image editing interface, wherein the management component comprises a cover area management component and a head portrait area management component.
Specifically, after acquiring a target image submitted by a user, the user terminal may output an image editing interface of an application through the display screen, and display the target image, the cover area management component, and the avatar area management component in the image editing interface, as shown in fig. 3 e.
In some possible embodiments, the user terminal may instead output the image editing interface through the display screen first, prompt the user to submit the target image in that interface, and then, after obtaining the target image, display it together with the cover area management component and the avatar area management component; that is, the image editing interface may double as the image selection interface of step 201. Specifically: the user terminal outputs the application's image editing interface through the display screen, detects the image selection operation input by the user in the interface, and determines the target image (an image from the gallery or one captured by the terminal's camera) according to that operation; after the target image is obtained, the target image, the cover area management component, and the avatar area management component are displayed in the image editing interface.
404. The user terminal detects an editing operation on the target image in the application's image editing interface and edits the target image accordingly, so as to adjust the first image area serving as the cover in the target image.
Specifically, the position and size of the cover area management component may be fixed, and the user adjusts the cover by editing the target image, i.e. adjusting its position and size; after the target image is edited, the image area enclosed by the frame of the cover area management component changes accordingly. As shown in FIG. 5a, the user may resize the target image by sliding two fingers toward or away from each other in the image editing interface. After detecting such an editing operation, the user terminal edits the target image: when the two fingers slide toward each other, the target image is reduced; when they slide apart, it is enlarged; and when the user drags the target image, its display position in the interface is adjusted. In this way, the image area serving as the cover (i.e. the first image area corresponding to the cover area management component) is adjusted through editing operations on the target image. The reduction or enlargement factor may depend on the sliding distance and speed.
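A minimal sketch of this pinch gesture, assuming the zoom factor is simply the ratio of finger distances (the clamp bounds are illustrative, not from the patent):

```python
import math

def pinch_scale(p1_start, p2_start, p1_end, p2_end,
                min_scale=0.5, max_scale=4.0):
    """Compute a zoom factor from a two-finger pinch gesture.

    Fingers moving apart give a factor > 1 (enlarge the target image);
    fingers moving together give a factor < 1 (reduce it). The result
    is clamped so the image cannot be scaled arbitrarily small or large.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    factor = dist(p1_end, p2_end) / dist(p1_start, p2_start)
    return max(min_scale, min(max_scale, factor))
```

A real implementation would typically feed this factor into the image's display transform each frame rather than once per gesture.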
405. The user terminal detects an editing operation on the avatar area management component in the image editing interface and edits the component accordingly, so as to adjust the second image area serving as the avatar in the target image.
Specifically, the user adjusts the avatar by editing the avatar area management component, i.e. adjusting its position and/or size; after the component is edited, the image area enclosed by its frame changes accordingly. When the user terminal detects such an editing operation, it edits the avatar area management component, thereby adjusting the image area serving as the avatar (i.e. the second image area corresponding to the avatar area management component).
In some possible embodiments, as shown in fig. 5b, the user may drag the avatar area management component within the area of the avatar area management component with a single finger, so as to adjust the position of the avatar area management component.
In some possible implementations, the avatar area management component may include a stretch button for adjusting its size. As shown in FIG. 5c, the user may drag the stretch button with one or more fingers: dragging toward the outside of the component's area enlarges it, and dragging toward the inside shrinks it.
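One way this drag-to-resize behavior could be modeled (a sketch under the assumption that the resize amount equals the change in the button's distance from the marquee center, applied symmetrically; the size limits are illustrative):

```python
def resize_by_stretch(size, center, drag_start, drag_end,
                      min_size=30, max_size=1000):
    """Resize a square avatar marquee by dragging its stretch button.

    Dragging away from the marquee center enlarges it; dragging toward
    the center shrinks it. The change equals the difference in distance
    from the center, applied to both sides (hence the factor of 2),
    and the result is clamped to [min_size, max_size].
    """
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    delta = dist(drag_end, center) - dist(drag_start, center)
    new_size = size + 2 * delta  # both sides grow/shrink about the center
    return max(min_size, min(max_size, new_size))
```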
In some possible embodiments, the avatar area management component may not include a stretch button, and the size of the avatar area management component may be adjusted by dragging a frame of the avatar area management component through touch operation.
In some possible embodiments, the size of the avatar region management component may also be adjusted through a non-contact gesture operation, for example, when a gesture operation that the user inputs two fingers (such as a thumb and an index finger) to merge is detected by the camera, the avatar region management component is reduced, and when a gesture operation that the user inputs two fingers to separate is detected by the camera, the avatar region management component is enlarged.
406. And after the user terminal edits the target image and/or the management component according to the editing operation, determining a first image area corresponding to the cover area management component from the target image, and determining a second image area corresponding to the head portrait area management component from the first image area.
407. The user terminal takes the first image area as a front cover and the second image area as a head portrait.
Specifically, after the user finishes editing the target image and the avatar area management component, the user terminal acquires the image area enclosed by the frame of the cover area management component in the target image (denoted the first image area) and takes it as the cover, then acquires the image area enclosed by the frame of the avatar area management component within the first image area (denoted the second image area) and takes it as the avatar.
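Since the second image area is taken from inside the first, the avatar box is naturally expressed relative to the cover's origin. A small sketch of that coordinate bookkeeping (names hypothetical):

```python
def avatar_box_in_target(cover_box, avatar_box_in_cover):
    """Convert the avatar crop box, given relative to the cover's top-left
    corner, into target-image coordinates.

    cover_box           : (left, top, right, bottom) in target-image pixels
    avatar_box_in_cover : (left, top, right, bottom) relative to the cover
    """
    cl, ct, cr, cb = cover_box
    al, at, ar, ab = avatar_box_in_cover
    # The avatar lies inside the first image area, so its absolute
    # position is the cover origin plus the relative offsets.
    return (cl + al, ct + at, cl + ar, ct + ab)
```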
In some possible embodiments, the position and size of the cover area management component are also adjustable, and a user may adjust the cover by performing an editing operation on the cover area management component, where the editing operation may specifically refer to the aforementioned manner of adjusting the position and/or size of the avatar area management component, that is, the position and/or size of the cover area management component may be adjusted by the editing operation to change the image area surrounded by the frame of the cover area management component in the target image, so as to implement adjustment on the image area corresponding to the cover.
In some possible embodiments, when adjusting the image area serving as the cover, the user may also select to perform an editing operation on both the target image and the cover area management component, and the embodiments of the present invention are not limited thereto.
In some possible embodiments, the cover area management component may include a stretch button for adjusting the size of the cover area management component, which may be distributed on a border of the cover area management component. The form of the stretch button may refer to the stretch button of the avatar region management component shown in fig. 5 c.
In some possible embodiments, the user may edit only the avatar area management component, without performing any scaling or editing operation on the target image or the cover area management component; that is, after uploading the target image, the first image area enclosed by the frame of the cover area management component at its default position and size is used as the cover.
In some possible embodiments, the user terminal may save the cover and the avatar as follows: the user terminal calls a background saving method, such as saveBackgroundImage(), and passes the first image area as a parameter, thereby saving the first image area as the cover (or background); the user terminal calls an avatar saving method, such as saveAvatar(), and passes the second image area as a parameter, thereby saving the second image area as the avatar.
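A minimal sketch of the two saving methods named above; the storage layer (a dict standing in for the terminal's persistent storage) and the function bodies are assumptions, since the disclosure only names the methods:

```python
saved = {}  # stands in for the user terminal's persistent storage

def save_background_image(image_area):
    """Persist the first image area as the cover/background;
    mirrors the saveBackgroundImage() method named in the text."""
    saved["cover"] = image_area

def save_avatar(image_area):
    """Persist the second image area as the avatar;
    mirrors the saveAvatar() method named in the text."""
    saved["avatar"] = image_area

first_image_area = "<pixels of cover crop>"    # placeholder image data
second_image_area = "<pixels of avatar crop>"  # placeholder image data
save_background_image(first_image_area)
save_avatar(second_image_area)
```

The key point is that each crop is passed as a parameter to its respective saving method, exactly as the text describes.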
In some possible embodiments, as shown in fig. 5d, the image editing interface is divided into layers: the bottom layer may be the image uploaded by the user (i.e., the target image), which serves as the background; the middle layer may be the cover marquee (i.e., the cover area management component); and the top layer may be the avatar marquee (i.e., the avatar area management component).
In some possible embodiments, the shapes of the cover area management component and the avatar area management component may be any combination of shapes such as a rectangle, a square, and a circle, and the two components may of course share the same shape; the embodiments of the present invention are not limited in this regard. As shown in fig. 5e, the cover area management component may be a rectangle and the avatar area management component a circle. As shown in fig. 5f, both the cover area management component and the avatar area management component may be squares.
In some possible embodiments, as shown in fig. 5g, after the user inputs the editing operation and clicks the "complete" option, the user terminal obtains the cover area image and the avatar area image of the target image.
In some possible embodiments, the position and size of the avatar area management component can be adjusted only within the inner area of the cover area management component, i.e., they cannot exceed the boundary of the cover area management component, as shown in fig. 5 h. Fig. 5i shows the maximum size of the avatar area management component. Of course, a minimum size of the avatar area management component may also be specified, e.g., 30 × 30 pixels, as shown in fig. 5 j.
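The constraint above can be enforced by clamping the avatar marquee against the cover marquee; this is one possible implementation, with the `Rect` type and the helper name as assumptions (the 30 × 30 minimum comes from the example in the text):

```python
from collections import namedtuple

Rect = namedtuple("Rect", "x y w h")

MIN_SIDE = 30  # example minimum size from the text: 30 x 30 pixels

def clamp_avatar(avatar, cover):
    """Keep the avatar marquee inside the cover marquee and within size limits.
    The maximum size is the cover frame itself (figs. 5h-5j)."""
    w = max(MIN_SIDE, min(avatar.w, cover.w))
    h = max(MIN_SIDE, min(avatar.h, cover.h))
    # After sizing, clamp the position so the frame never crosses the cover boundary.
    x = max(cover.x, min(avatar.x, cover.x + cover.w - w))
    y = max(cover.y, min(avatar.y, cover.y + cover.h - h))
    return Rect(x, y, w, h)

cover = Rect(10, 10, 300, 200)
dragged = Rect(290, 5, 400, 20)  # user dragged and stretched past the cover frame
clamped = clamp_avatar(dragged, cover)
```

Calling `clamp_avatar` after every move or stretch keeps the avatar marquee legal no matter what gesture the user performs.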
In some possible embodiments, the cover and the avatar may be displayed on the personal homepage interface in a fused manner. The user terminal may obtain the position information of the second image area (the avatar) within the first image area (the cover), and then display the cover and the avatar in the personal homepage interface of the application according to that position information; specifically, the avatar is displayed at the position in the cover indicated by the position information, so that the cover and the avatar are displayed as a fused whole. As shown in fig. 5k, the second image area serving as the avatar is located at the center of the first image area serving as the cover, so when the cover and the avatar are displayed in the personal homepage interface of the application, the avatar can be displayed at the center of the cover, as shown in fig. 5l; the avatar and the cover thus appear integrated, which makes setting the cover and the avatar more engaging. As shown in fig. 5m, the avatar area management component defaults to the middle position. If the user moves the avatar area management component, for example to the upper left, the avatar area at the original position is indicated by a dotted line while the component is being moved; after the user finishes operating the avatar area management component, the avatar area corresponding to it can be displayed at the middle position of the default cover to make setting the cover and the avatar more engaging, with the setting result shown in fig. 5 n.
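One way to realize the fused display is to record the avatar's position as fractions of the cover's size when the crops are saved, then reproject it when the homepage is drawn; the function names, the `Rect` type, and the fractional encoding are illustrative assumptions:

```python
from collections import namedtuple

Rect = namedtuple("Rect", "x y w h")

def avatar_position_in_cover(cover, avatar):
    """Position information of the second image area within the first:
    the avatar center as fractions of the cover's width and height."""
    return ((avatar.x + avatar.w / 2 - cover.x) / cover.w,
            (avatar.y + avatar.h / 2 - cover.y) / cover.h)

def place_avatar_on_cover(cover_size, avatar_size, rel_pos):
    """Top-left pixel at which the personal homepage draws the avatar over
    the cover, so the two are displayed as a fused whole."""
    cw, ch = cover_size
    aw, ah = avatar_size
    rx, ry = rel_pos
    return (round(rx * cw - aw / 2), round(ry * ch - ah / 2))

cover = Rect(0, 0, 400, 200)
avatar = Rect(150, 50, 100, 100)  # centered in the cover, as in fig. 5k
rel = avatar_position_in_cover(cover, avatar)
```

Because the position is stored relative to the cover, the homepage can redraw the pair at any display size and the avatar still lands on the same spot of the cover.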
In the embodiment of the invention, the user terminal acquires a target image submitted by the user, outputs an image editing interface through the display screen, and displays the target image and the management components on the image editing interface, where the management components include a cover area management component and an avatar area management component. When the user terminal detects an editing operation on the target image, it edits the target image according to that operation so as to adjust the first image area corresponding to the cover area management component in the target image; when it detects an editing operation on the avatar area management component, it edits that component so as to adjust the second image area corresponding to it in the target image. After editing is finished, the user terminal determines the first image area corresponding to the cover area management component from the target image, determines the second image area corresponding to the avatar area management component from the first image area, and then uses the first image area as the cover and the second image area as the avatar. The cover and the avatar can thus be cropped simultaneously (with position adjustments, size adjustments, and the like) from a single image submitted by the user, which reduces the user's uploading and cropping cost, lets the user freely crop the cover and avatar he or she wants, effectively simplifies the operations for setting the cover and the avatar, offers high flexibility, and fully meets the user's personalized aesthetic requirements.
In some possible embodiments, as shown in fig. 6, a flow for setting a cover and an avatar provided by an embodiment of the present invention specifically includes: the user terminal obtains the initial sizes and positions of a cover marquee (i.e., the cover area management component) and an avatar marquee (i.e., the avatar area management component) and displays them through the image editing interface; it then detects the user's adjustments to the image and/or the marquees, including but not limited to enlargement, reduction, and movement. After the user finishes adjusting, the terminal obtains the image area within the cover marquee and the image area within the avatar marquee; when it detects that the user clicks "complete" or "save", it automatically saves the image area within the cover marquee as the cover and the image area within the avatar marquee as the avatar. This effectively simplifies the operations for setting the cover and the avatar, offers high flexibility, and fully meets the user's personalized aesthetic requirements.
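The fig. 6 flow can be sketched end to end as an event loop; the function name, event format, and default marquee geometry are assumptions made for illustration:

```python
def run_setup_flow(target_image, user_events):
    """Sketch of the fig. 6 flow: start from default marquee geometry,
    apply user adjustments, and save both crops on completion."""
    marquees = {
        "cover":  {"x": 0, "y": 0, "w": 4, "h": 4},  # default cover marquee
        "avatar": {"x": 1, "y": 1, "w": 2, "h": 2},  # default avatar marquee
    }
    crop = lambda r: [row[r["x"]:r["x"] + r["w"]]
                      for row in target_image[r["y"]:r["y"] + r["h"]]]
    for ev in user_events:
        if ev["op"] == "move":        # enlargement/reduction handled analogously
            m = marquees[ev["marquee"]]
            m["x"] += ev["dx"]
            m["y"] += ev["dy"]
        elif ev["op"] == "complete":  # user clicked "complete" or "save"
            return crop(marquees["cover"]), crop(marquees["avatar"])
    return None  # user never completed the flow

image = [[10 * y + x for x in range(6)] for y in range(6)]  # toy 6x6 image
cover, avatar = run_setup_flow(image, [
    {"op": "move", "marquee": "avatar", "dx": 1, "dy": 1},
    {"op": "complete"},
])
```

A single pass over the event stream reproduces the flow's three stages: display defaults, apply adjustments, save on completion.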
Referring to fig. 7, a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention is shown, where the image processing apparatus includes:
an obtaining module 701, configured to obtain a target image.
A detecting module 702, configured to detect an editing operation on the target image and/or management components in an image editing interface of an application, where the management components include a cover area management component and a head portrait area management component.
A processing module 703, configured to determine, according to the editing operation, a first image area as a front cover and a second image area as a head portrait from the target image.
Optionally, the cover area management component is configured to adjust an image area serving as a cover, and/or the avatar area management component is configured to adjust an image area serving as an avatar.
Optionally, the detecting module 702 is specifically configured to:
an editing operation on the target image in an image editing interface of an application is detected.
And editing the target image according to the editing operation so as to adjust a first image area serving as a cover in the target image.
And detecting editing operation on an avatar area management component included in a management component in the image editing interface.
And editing the avatar area management component according to the editing operation so as to adjust a second image area serving as an avatar in the target image.
Wherein the editing operation comprises one or both of a position adjustment and a size adjustment.
Optionally, the detecting module 702 is specifically configured to:
an editing operation on a cover region management component included in a management component in an image editing interface of an application is detected.
And editing the cover area management component according to the editing operation so as to adjust a first image area serving as a cover in the target image.
An editing operation on an avatar area management component included by the management component in the image editing interface is detected.
And editing the avatar area management component according to the editing operation so as to adjust a second image area serving as an avatar in the target image.
Wherein the editing operation comprises one or both of a position adjustment and a size adjustment.
Optionally, the processing module 703 is specifically configured to:
and after the target image and/or the management component are edited according to the editing operation, determining a first image area corresponding to the cover area management component from the target image.
And determining a second image area corresponding to the head portrait area management component from the first image area.
The first image area is used as a cover, and the second image area is used as a head portrait.
Optionally, the processing module 703 is specifically configured to:
and inputting the first image area as a parameter into a background saving method so as to save the first image area as a cover.
Inputting the second image area as a parameter into a head portrait storage method to store the second image area as a head portrait.
Optionally, the image processing apparatus further includes a display module 704, wherein:
the obtaining module 701 is further configured to obtain position information of the second image area in the first image area.
The display module 704 is configured to display the cover and the avatar in the personal homepage interface of the application according to the location information, so as to perform fusion display on the cover and the avatar.
Optionally, the avatar area management component includes a stretch button, and the stretch button is used to adjust the size of the avatar area management component.
Optionally, the cover area management component comprises a stretch button, and the stretch button is used for adjusting the size of the cover area management component.
Optionally, the obtaining module 701 is specifically configured to:
and outputting an applied image editing interface through a display screen of the user terminal.
And detecting the image selection operation input by the user on the image editing interface.
And determining a target image according to the image selection operation, wherein the target image comprises an image in a gallery or an image shot by a camera of the user terminal.
Optionally, the displaying module 704 is further configured to display the target image and the management component in the image editing interface, where the management component includes a cover area management component and a head portrait area management component.
Optionally, the display module 704 is further configured to:
and outputting an applied image editing interface through a display screen of the user terminal.
And displaying the target image and the management component in the image editing interface, wherein the management component comprises a cover area management component and a head portrait area management component.
Optionally, the target image is displayed on a bottom layer of the image editing interface, the cover area management component is displayed on a middle layer of the image editing interface, and the head portrait area management component is displayed on a top layer of the image editing interface.
It should be noted that the functions of each functional module of the image processing apparatus according to the embodiment of the present invention may be specifically implemented according to the method in the foregoing method embodiment, and the specific implementation process may refer to the related description of the foregoing method embodiment, which is not described herein again.
Referring to fig. 8, which is a schematic structural diagram of a user terminal according to an embodiment of the present invention, the user terminal includes, in addition to a power supply module and the like, a processor 801, a storage device 802, a network interface 803, and an input/output device 804. Data may be exchanged among the processor 801, the storage device 802, the network interface 803, and the input/output device 804.
The storage device 802 may include a volatile memory (volatile memory), such as a random-access memory (RAM); the storage device 802 may also include a non-volatile memory (non-volatile memory), such as a flash memory (flash memory), a solid-state drive (SSD), or the like; the storage means 802 may also comprise a combination of memories of the kind described above.
The network interface 803 is used for transceiving data.
The input/output device 804 may be a display screen, a touch screen, a microphone, a speaker, etc., for detecting a touch operation and outputting text, images, sounds, etc.
The processor 801 may be a Central Processing Unit (CPU). In one embodiment, the processor 801 may also be a Graphics Processing Unit (GPU), or a combination of a CPU and a GPU. In one embodiment, the storage device 802 is used to store program instructions. The processor 801 may invoke the program instructions to perform the following operations:
and acquiring a target image.
An editing operation on the target image and/or management components in an image editing interface of an application is detected through the input/output device 804, wherein the management components comprise a cover area management component and a head portrait area management component.
And determining a first image area as a front cover and a second image area as a head portrait from the target image according to the editing operation.
Optionally, the cover area management component is configured to adjust an image area serving as a cover, and/or the avatar area management component is configured to adjust an image area serving as an avatar.
Optionally, the processor 801 is specifically configured to:
an editing operation on the target image in an image editing interface of an application is detected.
And editing the target image according to the editing operation so as to adjust a first image area serving as a cover in the target image.
And detecting editing operation on an avatar area management component included in a management component in the image editing interface.
And editing the avatar area management component according to the editing operation so as to adjust a second image area serving as an avatar in the target image.
Wherein the editing operation comprises one or both of a position adjustment and a size adjustment.
Optionally, the processor 801 is specifically configured to:
an editing operation on a cover region management component included in a management component in an image editing interface of an application is detected.
And editing the cover area management component according to the editing operation so as to adjust a first image area serving as a cover in the target image.
An editing operation on an avatar area management component included by the management component in the image editing interface is detected.
And editing the avatar area management component according to the editing operation so as to adjust a second image area serving as an avatar in the target image.
Wherein the editing operation comprises one or both of a position adjustment and a size adjustment.
Optionally, the processor 801 is specifically configured to:
and after the target image and/or the management component are edited according to the editing operation, determining a first image area corresponding to the cover area management component from the target image.
And determining a second image area corresponding to the head portrait area management component from the first image area.
The first image area is used as a cover, and the second image area is used as a head portrait.
Optionally, the processor 801 is specifically configured to:
and inputting the first image area as a parameter into a background saving method so as to save the first image area as a cover.
Inputting the second image area as a parameter into a head portrait storage method to store the second image area as a head portrait.
Optionally, the processor 801 is further configured to obtain location information of the second image area in the first image area, and display the front cover and the avatar in a personal homepage interface of the application according to the location information by using the input/output device 804, so as to perform fusion display on the front cover and the avatar.
Optionally, the avatar area management component includes a stretch button, and the stretch button is used to adjust the size of the avatar area management component.
Optionally, the cover area management component comprises a stretch button, and the stretch button is used for adjusting the size of the cover area management component.
Optionally, the processor 801 is specifically configured to:
and outputting an applied image editing interface through a display screen of the user terminal.
And detecting the image selection operation input by the user on the image editing interface.
And determining a target image according to the image selection operation, wherein the target image comprises an image in a gallery or an image shot by a camera of the user terminal.
Optionally, the processor 801 is further configured to display the target image and the management components in the image editing interface by using the input/output device 804, where the management components include a cover area management component and a portrait area management component.
Optionally, the processor 801 is further configured to output an image editing interface of an application through a display screen of the user terminal, and display the target image and the management component in the image editing interface by using the input/output device 804, where the management component includes a cover area management component and a portrait area management component.
Optionally, the target image is displayed on a bottom layer of the image editing interface, the cover area management component is displayed on a middle layer of the image editing interface, and the head portrait area management component is displayed on a top layer of the image editing interface.
In a specific implementation, the processor 801, the storage device 802, the network interface 803, and the input/output device 804 described in this embodiment of the present invention may execute the implementation described in the related embodiment of the image processing method provided in fig. 2 or fig. 4 in this embodiment of the present invention, or may execute the implementation described in the related embodiment of the image processing device provided in fig. 7 in this embodiment of the present invention, which is not described herein again.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; the division into units is only one way of dividing logical functions, and other divisions may be used in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a separate product, it may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like, and specifically a processor in the computer device) to execute all or part of the steps of the methods according to the embodiments of the present invention. The storage medium may include: a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), and other media capable of storing program code.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (15)

1. An image processing method applied to a user terminal, the method comprising:
acquiring a target image;
detecting an editing operation on the target image and/or management components in an image editing interface of an application, wherein the management components comprise a cover area management component and a head portrait area management component;
and determining a first image area as a front cover and a second image area as a head portrait from the target image according to the editing operation.
2. The method of claim 1, wherein the cover area management component is configured to adjust an image area as a cover and/or the avatar area management component is configured to adjust an image area as an avatar.
3. The method of claim 1 or 2, wherein the detecting an editing operation on the target image and/or a management component in an image editing interface of an application comprises:
detecting an editing operation on the target image in an applied image editing interface;
editing the target image according to the editing operation so as to adjust a first image area serving as a cover in the target image;
detecting an editing operation on an avatar area management component included in a management component in the image editing interface;
editing the avatar region management component according to the editing operation to adjust a second image region serving as an avatar in the target image;
wherein the editing operation comprises one or both of a position adjustment and a size adjustment.
4. The method of claim 1 or 2, wherein the detecting an editing operation on the target image and/or a management component in an image editing interface of an application comprises:
detecting an editing operation on a cover area management component included in a management component in an image editing interface of an application;
editing the cover area management component according to the editing operation so as to adjust a first image area serving as a cover in the target image;
detecting an editing operation on an avatar area management component included by the management component in the image editing interface;
editing the avatar region management component according to the editing operation to adjust a second image region serving as an avatar in the target image;
wherein the editing operation comprises one or both of a position adjustment and a size adjustment.
5. The method of claim 1, wherein the determining a first image area as a cover and a second image area as a head portrait from the target image according to the editing operation comprises:
after the target image and/or the management component are edited according to the editing operation, determining a first image area corresponding to the cover area management component from the target image;
determining a second image area corresponding to the head portrait area management component from the first image area;
the first image area is used as a cover, and the second image area is used as a head portrait.
6. The method of claim 5, wherein the using the first image area as a cover and the second image area as a head portrait comprises:
inputting the first image area as a parameter into a background saving method so as to save the first image area as a cover;
inputting the second image area as a parameter into a head portrait storage method to store the second image area as a head portrait.
7. The method of claim 5 or 6, wherein after the first image area is used as a cover and the second image area is used as a head portrait, the method further comprises:
acquiring the position information of the second image area in the first image area;
and displaying the cover and the head portrait in a personal homepage interface of the application according to the position information so as to perform fusion display on the cover and the head portrait.
8. The method of claim 2, wherein the avatar region management component comprises a stretch button for adjusting a size of the avatar region management component.
9. The method of claim 2, wherein the cover area management component includes a stretch button for resizing the cover area management component.
10. The method of claim 1, wherein the acquiring a target image comprises:
outputting an applied image editing interface through a display screen of the user terminal;
detecting an image selection operation input by a user on the image editing interface;
and determining a target image according to the image selection operation, wherein the target image comprises an image in a gallery or an image shot by a camera of the user terminal.
11. The method of claim 10, wherein after determining a target image according to the image selection operation, the method further comprises:
and displaying the target image and the management component in the image editing interface, wherein the management component comprises a cover area management component and a head portrait area management component.
12. The method of claim 1, wherein after the obtaining of the target image and before the detecting of the editing operation on the target image and/or a management component in the image editing interface of the application, the method further comprises:
outputting an applied image editing interface through a display screen of the user terminal;
and displaying the target image and the management component in the image editing interface, wherein the management component comprises a cover area management component and a head portrait area management component.
13. The method of claim 1, wherein the target image is displayed on a bottom layer of the image editing interface, the cover area management component is displayed on a middle layer of the image editing interface, and the avatar area management component is displayed on a top layer of the image editing interface.
14. An image processing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring a target image;
the system comprises a detection module, a display module and a display module, wherein the detection module is used for detecting the editing operation of the target image and/or the management component in the image editing interface of the application, and the management component comprises a cover area management component and a head portrait area management component;
and the processing module is used for determining a first image area serving as a cover and a second image area serving as a head portrait from the target image according to the editing operation.
15. A computer-readable storage medium, characterized in that the computer storage medium stores a computer program comprising program instructions which are executed by a processor for performing the image processing method according to any one of claims 1 to 13.
CN202010368185.7A 2020-04-30 2020-04-30 Image processing method and device and computer readable storage medium Pending CN113592983A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010368185.7A CN113592983A (en) 2020-04-30 2020-04-30 Image processing method and device and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN113592983A true CN113592983A (en) 2021-11-02

Family

ID=78237722

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010368185.7A Pending CN113592983A (en) 2020-04-30 2020-04-30 Image processing method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113592983A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023134568A1 (en) * 2022-01-14 2023-07-20 北京字跳网络技术有限公司 Display method and apparatus, electronic device, and storage medium


Similar Documents

Publication Publication Date Title
US11706521B2 (en) User interfaces for capturing and managing visual media
DK180452B1 (en) USER INTERFACES FOR RECEIVING AND HANDLING VISUAL MEDIA
US20230319394A1 (en) User interfaces for capturing and managing visual media
JP7033152B2 (en) User interface camera effect
EP3792738B1 (en) User interfaces for capturing and managing visual media
EP3661187A1 (en) Photography method and mobile terminal
US9560414B1 (en) Method, apparatus and system for dynamic content
US9058095B2 (en) Method for displaying data in mobile terminal having touch screen and mobile terminal thereof
CN108334371B (en) Method and device for editing object
KR102368385B1 (en) User interfaces for capturing and managing visual media
CN113592983A (en) Image processing method and device and computer readable storage medium
WO2016188199A1 (en) Method and device for clipping pictures

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40055393

Country of ref document: HK

SE01 Entry into force of request for substantive examination