CN113822899A - Image processing method, image processing device, computer equipment and storage medium


Info

Publication number: CN113822899A
Application number: CN202110627369.5A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 卢佳琪, 梁治刚
Current Assignee: Tencent Technology Shenzhen Co Ltd
Original Assignee: Tencent Technology Shenzhen Co Ltd
Application filed by: Tencent Technology Shenzhen Co Ltd
Priority to: CN202110627369.5A
Publication of: CN113822899A
Legal status: Pending
Prior art keywords: image, picture, target, target picture, area

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20112 Image segmentation details
    • G06T 2207/20132 Image cropping

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present application relates to an image processing method and apparatus, a computer device, and a storage medium, and involves the image recognition technology of artificial intelligence. The method includes: displaying a picture acquisition entry that includes an album entry, and displaying at least one candidate picture in response to a trigger operation on the album entry; selecting a target picture from the displayed candidate pictures in response to a picture selection operation, the attribute information of the target picture including cutting position information that marks an image area containing a target object in the target picture; and displaying a cut image cut out from the target picture, the cut image matching the image area that contains the target object. With this method, both the cutting effect and the cutting efficiency for the target object in a picture can be improved.

Description

Image processing method, image processing device, computer equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, a computer device, and a storage medium.
Background
With the development of computer technology, images have become an increasingly rich carrier of information. Because the information an image displays is intuitive and easy to understand, conveying information through images has become popular, and images are displayed in many scenarios to transmit information and express emotion. For example, a user may share pictures of daily life through a social application, set the avatar of a social account with an image, verify an identity with a certificate image, and so on.
Because pictures are generated at different original sizes, they need to be cut before display so that the target object in each picture is highlighted.
However, pictures are usually cut by taking the central area, by having the user select the cutting area manually, or by detecting the target object in real time. These approaches suffer from poor cutting effect, for example part of the target object being lost, and from low cutting efficiency, so the displayed cut image cannot effectively convey the key information in the picture.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an image processing method, an image processing apparatus, a computer device, and a storage medium that can improve the cutting effect and the cutting efficiency for a target object in a picture.
A method of image processing, the method comprising:
displaying an album entry, and responding to a triggering operation on the album entry to display at least one candidate picture;
responding to picture selection operation, and selecting a target picture from the displayed candidate pictures; the attribute information of the target picture comprises cutting position information, and the cutting position information marks an image area including a target object in the target picture;
displaying a cut image cut out from the target picture; and the cutting image is matched with the image area comprising the target object.
An image processing apparatus, the apparatus comprising:
the display module is used for displaying an album entry and responding to the triggering operation of the album entry to display at least one candidate picture;
the picture selection module is used for responding to picture selection operation and selecting a target picture from the displayed candidate pictures; the attribute information of the target picture comprises cutting position information, and the cutting position information marks an image area including a target object in the target picture;
the display module is also used for displaying a cut image cut out from the target picture; and the cutting image is matched with the image area comprising the target object.
In one embodiment, the display module is further configured to display an image display interface, where the image display interface includes the album entry;
responding to the triggering operation of the album entrance, and entering a picture browsing interface;
and displaying at least one candidate picture in the picture browsing interface.
In the above embodiment, the display module is further configured to, in response to a picture selection operation in the picture browsing interface, select a target picture from the displayed candidate pictures and then return from the picture browsing interface to the image display interface; and to display, in a preview area of the image display interface, a cut image cut out from the target picture.
In one embodiment, the display module is further configured to display, in a preview area of an image presentation interface, a cropped image cropped from the target picture and adapted to the size of the preview area.
In the above embodiment, the image processing apparatus further includes a preview area adapting module, configured to cut out an image area including the target object from the target picture according to the cutting position information; when the size of the preview area is smaller than that of the image area, redraw the image area in the preview area according to the size of the preview area to obtain the cut image; and when the size of the preview area is larger than that of the image area, scale the image area according to a scaling ratio determined from the size of the preview area and the size of the image area to obtain the cut image.
In the above embodiment, the image processing apparatus further includes:
the direction correction module is used for reading the attribute information of the target picture; when the attribute information comprises the cutting position information, cutting an image area from the corrected image according to the cutting position information after correcting the direction of the target image;
and the preview area adapting module is further configured to, when the attribute information does not include the clipping position information, directly clip the target picture according to the size of the preview area to obtain the clipped image.
In one embodiment, the picture acquisition entry comprises a camera entry, and the image processing apparatus further comprises a picture acquisition module;
the display module is further configured to display a real-time shooting picture in response to a trigger operation on the camera entry, and to display a prompt mark for a target object in the real-time shooting picture;
the picture acquisition module is used for responding to picture acquisition operation and acquiring a target picture comprising the target object;
the display module is also used for displaying a cut image cut out from the target picture; and the cutting image is matched with the image area which is prompted by the prompt mark of the target object and comprises the target object in the target picture.
In the above embodiment, the display module is further configured to display an image display interface, where the image display interface includes the camera entry; enter a picture acquisition interface in response to a trigger operation on the camera entry; and display a real-time shooting picture in the picture acquisition interface.
In the above embodiment, the picture acquisition module is configured to acquire a target picture including the target object in response to a picture acquisition operation in the picture acquisition interface, and return, with the target picture, from the picture acquisition interface to the image display interface;
the display module is further used for displaying the cut image cut out from the target picture in a preview area of the image display interface.
In one embodiment, the image processing apparatus further includes a position optimization module, configured to determine a camera type when a target picture including the target object is acquired; and carrying out fault-tolerant optimization processing on the position of the target object marked by the prompt mark when the target picture is acquired according to a preset size corresponding to the type of the camera, and obtaining cutting position information.
In the above embodiment, the position optimization module is further configured to: when the camera type is a front-facing camera, take the target area marked by the prompt mark when the target picture is acquired as a central area, expand the central area according to a first preset size, and obtain the cutting position information from the expanded first area; and when the camera type is a rear camera, take the target area marked by the prompt mark when the target picture is acquired as a central area, expand the central area according to a second preset size, and obtain the cutting position information from the expanded second area; where the first preset size is larger than the second preset size.
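As an illustration only, the fault-tolerant expansion described above can be sketched in Swift as follows; the concrete margin values and the helper name are assumptions made for this sketch, not values disclosed in the application:

```swift
import CoreGraphics

// Illustrative sketch: take the area marked by the prompt mark as the centre,
// expand it outwards by a preset margin that depends on the camera type, and
// clamp the result to the picture bounds. The margin values are placeholder
// assumptions; the application only states that the first preset size (front
// camera) is larger than the second preset size (rear camera).
func expandedCutRegion(markedArea: CGRect, pictureSize: CGSize, isFrontCamera: Bool) -> CGRect {
    let margin: CGFloat = isFrontCamera ? 120 : 60
    return markedArea
        .insetBy(dx: -margin, dy: -margin)                      // expand around the centre
        .intersection(CGRect(origin: .zero, size: pictureSize)) // stay inside the picture
}
```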
In one embodiment, the image processing device further comprises a picture storage module, which is used for determining the cutting position information of the target picture based on the prompt mark of the target object; writing the cutting position information into attribute information of the target picture; and storing the target picture carrying the cutting position information.
In one embodiment, the display module is further configured to display a content sharing interface of a social application;
and displaying a thumbnail which is cut out from the target picture and is matched with the size of the content editing area in the content editing area of the content sharing interface.
In the above embodiment, the display module is further configured to display the target picture corresponding to the thumbnail when a viewing operation on the thumbnail is triggered; and, when a sharing operation on the content to be shared in the content editing area is triggered, share the target picture corresponding to the thumbnail in the manner in which the thumbnail is displayed in the content sharing interface.
In one embodiment, the target picture is a certificate picture, and the display module is further configured to display a certificate image uploading interface; and displaying a cut-out image which is cut out from the certificate picture and is matched with the size of the preview area in the preview area of the certificate image uploading interface.
In one embodiment, the display module is further configured to display an avatar setting interface; and displaying a cut image which is cut from the target picture and is matched with the size of the preview area in the preview area of the head portrait setting interface.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
displaying an album entry, and responding to a triggering operation on the album entry to display at least one candidate picture;
responding to picture selection operation, and selecting a target picture from the displayed candidate pictures; the attribute information of the target picture comprises cutting position information, and the cutting position information marks an image area including a target object in the target picture;
displaying a cut image cut out from the target picture; and the cutting image is matched with the image area comprising the target object.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
displaying an album entry, and responding to a triggering operation on the album entry to display at least one candidate picture;
responding to picture selection operation, and selecting a target picture from the displayed candidate pictures; the attribute information of the target picture comprises cutting position information, and the cutting position information marks an image area including a target object in the target picture;
displaying a cut image cut out from the target picture; and the cutting image is matched with the image area comprising the target object.
A computer program comprising computer instructions stored in a computer readable storage medium, the computer instructions being read from the computer readable storage medium by a processor of a computer device, the processor executing the computer instructions to cause the computer device to perform the steps of the image processing method described above.
According to the image processing method, the image processing device, the computer equipment and the storage medium, the album entry is displayed, at least one candidate picture is displayed in response to the triggering operation of the album entry, when the picture selecting operation is triggered, a target picture is selected from the candidate pictures, the attribute information of the selected target picture comprises accurate cutting position information, and the cutting position information marks an image area including a target object in the target picture. Therefore, after the target picture is selected, the target picture can be directly cut according to the accurate cutting position information, and after a cutting image matched with the image area marked by the cutting position information is obtained, the cutting image is directly displayed. On one hand, compared with directly cutting the central area from the target picture, the cutting position information can accurately position the image area where the target object is located in the target picture, and the target object in the cut image is prevented from being partially lost; on the other hand, compared with the manual selection of the cutting area or the positioning of the cutting area after the real-time detection, the cutting is carried out only according to the cutting position information in the attribute information of the target picture, so that the problem of time delay caused by the manual selection of the cutting area or the real-time detection during each display is solved, and the generation efficiency and the display effect of the cutting image are improved.
Drawings
FIG. 1 is a diagram of an exemplary embodiment of an image processing method;
FIG. 2 is a flow diagram illustrating a method for image processing according to one embodiment;
FIG. 3 is a diagram illustrating a target picture in one embodiment;
FIG. 4 is a flow diagram illustrating displaying candidate pictures according to one embodiment;
FIG. 5 is an interface diagram of an image presentation interface in accordance with an embodiment;
FIG. 6 is a flowchart illustrating an image processing method according to another embodiment;
FIG. 7 is an interface diagram that illustrates a content sharing interface, in accordance with one embodiment;
FIG. 8 is a schematic interface diagram of a credential image upload interface in one embodiment;
FIG. 9 is a flowchart illustrating an image processing method according to another embodiment;
FIG. 10 is an interface diagram of an image presentation interface in accordance with yet another embodiment;
FIG. 11 is a flowchart showing an image processing method according to still another embodiment;
FIG. 12 is a flow diagram illustrating a method for image processing in accordance with one illustrative embodiment;
FIG. 13 is a block diagram showing the configuration of an image processing apparatus according to an embodiment;
FIG. 14 is a block diagram showing a configuration of an image processing apparatus according to another embodiment;
FIG. 15 is a diagram showing an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The image processing method provided by the application realizes image display including a target object by using a computer vision technology in an Artificial Intelligence (AI) technology.
Computer Vision (CV) technology is a science that studies how to make machines "see": cameras and computers are used in place of human eyes to identify, track, and measure targets, and further image processing is performed so that the processed image is more suitable for human observation or for transmission to an instrument for detection. As a scientific discipline, computer vision studies related theories and techniques in an attempt to build artificial intelligence systems that can capture information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technologies, virtual reality, augmented reality, simultaneous localization and mapping, and other technologies, and also include common biometric technologies such as face recognition and fingerprint recognition. It can be understood that detecting the target object in the target picture when the target picture is acquired belongs to the image recognition technology within computer vision.
Machine Learning (ML) is a multi-disciplinary field involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory, and other disciplines. It studies how computers can simulate or implement human learning behavior to acquire new knowledge or skills, and how they can reorganize existing knowledge structures to continuously improve their performance. Machine learning is the core of artificial intelligence, is the fundamental way to make computers intelligent, and is applied in all fields of artificial intelligence. The artificial neural network is an important machine learning technique with broad application prospects in system identification, pattern recognition, intelligent control, and other fields.
It can be understood that the present application can train and use an image recognition model through machine learning to detect the position of the target object in the target picture, so that the cutting position information of the target picture can be determined from this position; when the picture needs to be displayed, the cutting position information is used directly to obtain and display the cut image. In some embodiments, the target picture and its cutting position information can be stored in a data block of a blockchain, thereby guaranteeing the safety of the picture data.
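By way of illustration, the detection step can be sketched as follows. This is not the image recognition model described in the present application; it uses the Vision framework's built-in face detector as a stand-in for detecting the position of a target object and converts the detected rectangle into pixel coordinates that could then be recorded as cutting position information:

```swift
import Vision
import UIKit

// Illustrative sketch: detect a face as the target object and return its
// bounding rectangle in pixel coordinates (origin at the top-left).
func detectCutRegion(in picture: UIImage, completion: @escaping (CGRect?) -> Void) {
    guard let cg = picture.cgImage else { completion(nil); return }
    let request = VNDetectFaceRectanglesRequest { request, _ in
        guard let face = (request.results as? [VNFaceObservation])?.first else {
            completion(nil)   // no face: fall back to salient-object detection, or report no region
            return
        }
        // Vision returns a normalized rect with origin at the bottom-left;
        // convert it to pixel coordinates with origin at the top-left.
        let w = CGFloat(cg.width)
        let h = CGFloat(cg.height)
        let b = face.boundingBox
        completion(CGRect(x: b.minX * w, y: (1 - b.maxY) * h,
                          width: b.width * w, height: b.height * h))
    }
    let handler = VNImageRequestHandler(cgImage: cg, options: [:])
    try? handler.perform([request])
}
```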
The image processing method provided by the application can be applied to the application environment shown in fig. 1. Wherein the terminal 102 communicates with the server 104 via a network. The terminal 102 may display an album entry, and in response to a trigger operation on the album entry, display at least one candidate picture; responding to a picture selection operation, selecting a target picture from the displayed candidate pictures, wherein the attribute information of the target picture comprises cutting position information, and the cutting position information marks an image area comprising a target object in the target picture; and then displaying a cut image cut out from the target picture, wherein the cut image is matched with the image area marked by the cut position information in the target picture.
In one embodiment, the picture obtaining entry includes a camera entry, and the terminal 102 may display a real-time shot picture in which a prompt mark for a target object in the real-time shot picture is displayed in response to a trigger operation on the camera entry, capture a target picture including the target object in response to a picture capturing operation, and then display a cropped image cropped from the target picture, the cropped image matching an image area marked in the target picture by the cropping position information determined by the prompt mark of the target object. The terminal 102 may also write the cropping position information into the attribute information of the collected original target picture. The terminal may also upload the original picture written with the clipping position information to the server 104 for cloud storage.
The terminal 102 may be, but not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices, and the server 104 may be implemented by an independent server or a server cluster formed by a plurality of servers.
In one embodiment, as shown in fig. 2, an image processing method is provided, which is described by taking the method as an example applied to the terminal 102 in fig. 1, and includes the following steps:
step 202, displaying a picture acquisition entry, wherein the picture acquisition entry comprises an album entry, and responding to a triggering operation of the album entry to display at least one candidate picture.
The picture acquisition entry is an entry through which pictures are obtained, and comprises an album entry and a camera entry. The album entry is an entry provided by the user interaction interface for selecting pictures; it can be arranged at any position in the user interaction interface in the form of a touch control. The triggering operation on the album entry may be any one of a click operation, a slide operation, or a double-click operation. The candidate pictures are a set of pictures for the user to select from, and the number of candidate pictures may be one or more. A candidate picture may be a picture stored locally on the terminal or a picture stored on a cloud server.
Specifically, the terminal may display an album entry in the user interaction interface, and display at least one candidate picture in response to a trigger operation on the album entry. In some embodiments, the user interaction interface where the album entry is located and the user interaction interface where the candidate picture is located may be the same interface or different interfaces.
Step 204, responding to picture selection operation, and selecting a target picture from the displayed candidate pictures; the attribute information of the target picture includes clipping position information that marks an image area including the target object in the target picture.
The picture selection operation is an operation of selecting one or more target pictures from the displayed at least one candidate picture. The picture selection operation may be any one of a click operation, a slide operation, or a double click operation on the candidate picture. The picture selection operation may be a confirmation operation after selecting a plurality of candidate pictures. And the terminal responds to the picture selection operation, and selects a picture from the candidate pictures as a target picture to be displayed.
The attribute information of the target picture is information that records the attributes of the target picture. In some embodiments, the attributes of the target picture may be classified into picture attributes and photographing attributes, where the picture attributes include at least one of the image direction, image resolution, resolution unit, image capturing time, last editing time, image gamut space, and the like, and the photographing attributes include at least one of the camera model, exposure time, aperture value, photographing mode, camera type, photographic value, lens focal length, image size, and the like. The cutting position information may be classified under either the picture attributes or the photographing attributes; of course, the attribute information of the target picture may also not be divided into picture attributes and photographing attributes. The attribute information of the target picture may be, for example, EXIF (Exchangeable image file format) information.
The cutting position information is position information that specifies an image area in the target picture, the image area including the target object. The cutting position information may include the width and height of the image area to be cut, together with the pixel coordinate (posX, posY) of the upper-left corner of the image area; it may instead use the pixel coordinate of the upper-right corner or of the center of the image area. The cutting position information may be written into the attribute information of the target picture when the target picture is captured.
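Since neither EXIF nor the text above fixes a concrete field for the cutting position information, the following sketch simply assumes it is serialized as JSON into the EXIF UserComment tag; the structure and key names are illustrative assumptions:

```swift
import Foundation
import ImageIO

/// Cutting position information as described above: the size of the image
/// area plus the pixel coordinate of its upper-left corner.
struct CutPosition: Codable {
    let posX: Int
    let posY: Int
    let width: Int
    let height: Int
}

/// Reads the picture's attribute (EXIF) dictionary and tries to recover the
/// cutting position information. Storing it as JSON in the UserComment tag is
/// an assumption made only for this sketch.
func readCutPosition(from url: URL) -> CutPosition? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let props = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any],
          let exif = props[kCGImagePropertyExifDictionary] as? [CFString: Any],
          let comment = exif[kCGImagePropertyExifUserComment] as? String,
          let data = comment.data(using: .utf8) else {
        return nil   // no attribute information, or no cutting position recorded
    }
    return try? JSONDecoder().decode(CutPosition.self, from: data)
}
```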
The target object in the target picture is an object that highlights the content the target picture is intended to express. For example, the target object may be a human face; a daily item such as a cup, a car, clothes, food, or a bag; or a natural landscape element such as a tree, a flower, grass, a blue sky, or a white cloud. In some embodiments, the target objects in the target picture may be prioritized. For example, when the target picture includes a human face, the target object is the human face, and when it does not, the target object is a salient object in the target picture; alternatively, when the target picture includes a human face, the target object is determined to be the human face, and when it does not, it is determined that no target object exists. The terminal can detect the target object in the target picture when the target picture is acquired.
The image area marked by the cutting position information in the target picture is an image area including the target object. In some embodiments, the target object may be in a central position in the image area. In some embodiments, in order to fully display the target object and improve the display effect of the target object, the range of the image area is larger than the area where the target object is located, that is, the image area may further include other content in the target picture as background content for highlighting the target object.
Fig. 3 is a schematic diagram of a target picture in an embodiment. Referring to fig. 3, the target picture 30 includes a target object 302, the target object is a human face, the attribute information of the target picture 30 includes cropping position information, the cropping position information marks an image region 304 in the target picture 30, and the image region 304 includes the target object 302.
Specifically, the terminal may take the selected candidate picture as the target picture in response to a picture selection operation on the candidate picture.
Step 206, displaying a cut image cut out from the target picture; the cut image is matched with an image area including the object.
The cutting image is an image cut from the target picture, and the cutting image is matched with an image area which is marked by the cutting position information and comprises the target object.
In one embodiment, the cut image may be the image area in the target picture marked by the cutting position information. Specifically, after the target picture is selected, the terminal can read the attribute information of the target picture to obtain the cutting position information, locate the image area in the target picture directly according to that information, and then cut the image area out of the target picture and display it as the cut image.
In one embodiment, the cut image may be an image obtained by scaling the image area in the target picture marked by the cutting position information. Specifically, after the target picture is selected, the terminal can read the attribute information of the target picture to obtain the cutting position information, locate the image area in the target picture directly according to that information, cut the image area out of the target picture, and then display an image obtained by scaling the image area as the cut image.
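For illustration, the cutting step common to both embodiments can be sketched as follows; the function name and parameters are assumptions, and the coordinates are assumed to already refer to the corrected picture:

```swift
import UIKit

/// Illustrative sketch: cut the image area marked by the cutting position
/// information (posX, posY, width, height in pixel coordinates) out of the
/// target picture.
func cutImage(from picture: UIImage, posX: Int, posY: Int, width: Int, height: Int) -> UIImage? {
    guard let cg = picture.cgImage else { return nil }
    let rect = CGRect(x: posX, y: posY, width: width, height: height)
    guard let cut = cg.cropping(to: rect) else { return nil }   // nil if the rect lies outside the picture
    return UIImage(cgImage: cut, scale: picture.scale, orientation: picture.imageOrientation)
}
```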
The image processing method comprises the steps of displaying an album entrance, responding to a trigger operation on the album entrance, displaying at least one candidate picture, and selecting a target picture from the candidate pictures when the picture selection operation is triggered, wherein the attribute information of the selected target picture comprises accurate cutting position information, and the cutting position information marks an image area of the target picture, which comprises a target object. Therefore, after the target picture is selected, the target picture can be directly cut according to the accurate cutting position information, and after a cutting image matched with the image area marked by the cutting position information is obtained, the cutting image is directly displayed. On one hand, compared with directly cutting the central area from the target picture, the cutting position information can accurately position the image area where the target object is located in the target picture, and the target object in the cut image is prevented from being partially lost; on the other hand, compared with the manual selection of the cutting area or the positioning of the cutting area after the real-time detection, the cutting is carried out only according to the cutting position information in the attribute information of the target picture, so that the problem of time delay caused by the manual selection of the cutting area or the real-time detection during each display is solved, and the generation efficiency and the display effect of the cutting image are improved.
In one embodiment, as shown in FIG. 4, step 202 comprises:
step 402, displaying an image display interface, wherein the image display interface comprises an album entrance.
The image display interface is a user interaction interface used for displaying the cutting image. The image display interface is provided with an album entrance, and when a user needs to display a target picture on the image display interface, a cut image obtained by cutting the target picture is displayed on the image display interface after the target picture is selected through the album entrance.
And step 404, responding to the triggering operation of the album entrance, and entering a picture browsing interface.
The triggering operation on the album entry may be any one of a click operation, a slide operation, or a double-click operation. The picture browsing interface is a user interaction interface for displaying the candidate pictures for the user to browse; it may be a local album browsing interface or a cloud picture browsing interface. The picture browsing interface may be displayed floating over the image display interface, or the terminal may jump from the image display interface to the picture browsing interface.
Specifically, after the operation on the album entry is triggered, the picture browsing interface is accessed from the displayed image display interface.
And 406, displaying at least one candidate picture in the picture browsing interface.
In this embodiment, when a target picture needs to be displayed in an image display interface, after entering the picture browsing interface through a set album entry, a user selects one or more target pictures from the picture browsing interface.
In one embodiment, step 204 includes: responding to picture selection operation in the picture browsing interface, and returning to an image display interface from the picture browsing interface after selecting a target picture from the displayed candidate pictures; step 206 comprises: and displaying a cut image cut out from the target picture in a preview area of the image display interface.
The preview area of the image display interface is an area used for previewing the target picture. Besides the album entrance, the image display interface is also provided with a preview area. Because the cutting image is matched with the image area of the target picture including the target object, the cutting image displayed in the preview area can directly highlight the target object in the target picture, and the key content of the target picture is perfectly transferred, so that the target picture can be accurately and effectively previewed. And moreover, the cutting is carried out according to the cutting position information in the attribute information of the target picture, so that the problem of time delay caused by manual selection of a cutting area or real-time detection in each display is avoided, and the generation efficiency and the display effect of the cut image are improved.
In some embodiments, a cut image cut from the target picture is displayed in a preview area of the image presentation interface, and after a user views key content in the target picture through the cut image displayed in the preview area, the user can select further processing of the target picture. For example, in some scenes, the user considers that the target picture is not perfect enough, and may enter the picture browsing interface again through the album portal to select a new target picture, and in other scenes, the user considers that the target picture is perfect and may share the target picture to other users.
Fig. 5 is an interface diagram of an image presentation interface according to an embodiment. Referring to fig. 5, an album entry 504 and a preview area 506 are provided in an image display interface 502, and a user can enter a picture browsing interface 508 by triggering an operation on the album entry 504, and a plurality of candidate pictures 510 are displayed in the picture browsing interface 508. The user may return to the image display interface 502 by triggering an operation on the candidate picture 510, and display a cut image 512 cut out from the target picture according to the cutting position information in the attribute information of the target picture in the preview area 506 of the image display interface 502.
In this embodiment, a preview area is set in the image display interface, and a cut image including a target object cut out from the target picture is displayed in the preview area, so that accurate and effective preview of the target picture can be realized.
In one embodiment, step 206 includes: and displaying a cut image which is cut out from the target picture and is matched with the size of the preview area in the preview area of the image display interface.
The preview area of the image display interface is an area used for previewing the target picture. The preview area is usually fixedly arranged in the image presentation interface, and the size of the preview area is fixed. When the terminal displays the cut image through the preview area, the displayed cut image is adapted to the size of the preview area.
In one embodiment, the method further comprises:
cutting out an image area including a target object from the target picture according to the cutting position information;
and when the size of the preview area is smaller than that of the image area, redrawing the image area in the preview area according to the size of the preview area to obtain a cut image.
In one embodiment, when the size of the preview area is larger than the size of the image area, the image area is scaled according to a scaling ratio determined according to the size of the preview area and the size of the image area, and a cropping image is obtained.
Specifically, after the terminal selects the target picture according to the operation of the user, the cutting position information in the attribute information of the target picture is read, and the image area including the target object is cut out from the target picture according to the cutting position information. In order to improve the display effect, the terminal can also perform size adaptation processing on the image area, and then the image area can be finally displayed in the preview area.
The terminal can obtain the size of the preview area, determine the size of the image area according to the cutting position information, and redraw the image area according to the size of the preview area to obtain a cut image when the size of the image area is larger than that of the preview area, and the cut image is displayed in the preview area. When the size of the image area is smaller than the size of the preview area, in order to avoid distortion of the image area due to stretching, the terminal performs scaling processing on the image area to obtain a cropped image, and displays the cropped image in the preview area.
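As a minimal sketch of the adaptation described above (the aspect-ratio handling and the helper name are simplifying assumptions, not details given in this description):

```swift
import UIKit

/// Adapts the cut-out image area to a fixed-size preview area: when the area
/// is larger than the preview area it is redrawn at the preview size, and when
/// it is smaller it is enlarged by a ratio derived from the two sizes so that
/// it is not distorted by stretching.
func adapt(region: UIImage, toPreview previewSize: CGSize) -> UIImage {
    let regionSize = region.size
    let targetSize: CGSize
    if regionSize.width > previewSize.width || regionSize.height > previewSize.height {
        targetSize = previewSize            // image area larger: redraw at the preview size
    } else {
        let ratio = min(previewSize.width / regionSize.width,
                        previewSize.height / regionSize.height)
        targetSize = CGSize(width: regionSize.width * ratio,
                            height: regionSize.height * ratio)
    }
    let renderer = UIGraphicsImageRenderer(size: targetSize)
    return renderer.image { _ in
        region.draw(in: CGRect(origin: .zero, size: targetSize))
    }
}
```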
In one embodiment, cutting out an image area including the target object from the target picture according to the cutting position information includes: reading the attribute information of the target picture; and, when the attribute information includes the cutting position information, correcting the direction of the target picture and then cutting the image area out of the corrected image according to the cutting position information.
Specifically, when the attribute information includes complete cutting position information, the read cutting position information is determined to be valid. The target picture is an original shot picture and usually carries image direction information. To avoid direction correction failing because the image direction information is lost after cutting, the terminal corrects the direction according to the image direction information first and then cuts the target picture to obtain the image area.
In one embodiment, the terminal performs direction correction on the target picture, including mirroring the target picture shot by the front camera. And the terminal reads the camera type in the attribute information of the target picture, performs mirror image processing on the target picture when the target picture is a picture shot by a front camera, and cuts out an image area from the picture after the mirror image processing according to the cutting position information.
In one embodiment, the terminal performs direction correction on the target picture, including direction rotation on the target picture shot by the rear camera. Specifically, the terminal may read direction angle information in the attribute information of the target picture, for example, the rotation angle information is 1, which indicates that the target picture does not need to be rotated, the rotation angle information is 2, which indicates that the target picture needs to be rotated by 90 degrees clockwise, the rotation angle information is 3, which indicates that the target picture needs to be rotated by 90 degrees counterclockwise, and the rotation angle information is 4, which indicates that the target picture needs to be rotated by 180 degrees. And after the terminal rotates the target picture shot by the rear camera, cutting out an image area from the rotated picture according to the cutting position information.
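The direction-angle codes above can be mapped to a rotation as in the following sketch; the codes follow the text above and are not the standard EXIF orientation values:

```swift
import CoreGraphics

/// Rotation, in degrees clockwise, to apply before cutting:
/// 1: none, 2: 90° clockwise, 3: 90° counterclockwise, 4: 180°.
func rotationDegrees(forDirectionCode code: Int) -> CGFloat? {
    switch code {
    case 1: return 0
    case 2: return 90
    case 3: return -90
    case 4: return 180
    default: return nil   // unknown code: leave the picture as it is
    }
}
```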
In one embodiment, the method further comprises: and when the attribute information does not comprise the cutting position information, directly cutting the target picture according to the size of the preview area to obtain a cut image.
Specifically, when the terminal does not read the cutting position information from the attribute information of the target picture, the terminal directly cuts the target picture according to the size of the preview area, obtains a cut image, and displays the cut image in the preview area.
Fig. 6 is a schematic flowchart of an image processing method in an embodiment. Referring to fig. 6, the process begins with step 602: the user opens the terminal album and selects a target picture. Then step 604: read the attribute information of the selected target picture. Step 606: determine whether the attribute information includes cutting position information. If yes, execute step 608: correct the direction of the target picture. After step 608, execute step 610: cut the corrected picture according to the read cutting position information. Step 612: obtain the cut image and adapt it to the size of the preview area for preview display. If the attribute information of the target picture does not include the cutting position information, go to step 614: directly adapt the target picture to the size of the preview area for preview display.
In the embodiment, the cut image area is adapted to the preview area and then displayed, so that the display effect of the target object in the target picture can be improved.
In one embodiment, step 206 includes: displaying a content sharing interface of a social application; and displaying the thumbnail which is cut out from the target picture and is matched with the size of the content editing area in the content editing area of the content sharing interface.
In this embodiment, the interface to be displayed for the clip image is a content sharing interface of the social application. The content sharing interface of the social application is provided with a content editing area, the content editing area is used for editing the content to be shared, the target picture is used as the content to be shared, and the thumbnail of the target picture is displayed in the content editing area so as to be previewed by a user.
Specifically, after the terminal selects the target picture, an image area including the target object is cut out from the target picture according to the cutting position information in the attribute information of the target picture, and then the image area is zoomed according to the size of the preview area to obtain a thumbnail, and the thumbnail is displayed in a content editing area of the content sharing interface.
Due to the fact that the thumbnail is matched with the image area, including the target object, of the target picture, the thumbnail displayed in the content editing area can directly highlight the target object in the target picture, and key content of the target picture is completely transmitted, so that accurate and effective previewing of the target picture is achieved. The user can know the subject content of the target picture only through the thumbnail, and the target picture does not need to be confirmed by clicking the thumbnail, so that the picture analysis operation is simplified to a certain extent, and the picture sharing efficiency is improved. And moreover, the target picture is cut according to the cutting position information in the attribute information of the target picture, so that the problem of time delay caused by manual selection of a cutting area or real-time detection when the target picture is shared every time is solved, and the picture sharing efficiency is improved.
In one embodiment, the method further comprises: when the viewing operation of the thumbnail is triggered, displaying a target picture corresponding to the thumbnail; and when the sharing operation of the content to be shared in the content editing area is triggered, sharing the target picture corresponding to the thumbnail according to the mode of displaying the thumbnail in the content sharing interface.
Because the original target picture is shared in the content sharing interface, and the thumbnail displayed in the content editing area is only used for helping the user to preview the target object needing to be highlighted in the target picture, in this embodiment, the user can still display the original target picture by triggering the viewing operation of the thumbnail in the content editing area, so that the user can further confirm the target picture.
In addition, the content sharing interface can also be provided with a control for one-key sharing of the content to be shared in the content editing area, and when a user confirms that the target picture can be shared through the thumbnail displayed in the content editing area, the target picture corresponding to the thumbnail can be shared in a mode of displaying the thumbnail in the content sharing interface by triggering the sharing operation of the control, so that the target picture can be shared to the shared object.
Specifically, the terminal can display the thumbnail of the shared target picture through the content sharing interface. The thumbnail directly highlights the target object in the target picture, so both the user and the shared object can learn the content of the shared target picture from the thumbnail and pay attention to it, and can conveniently decide, based on the thumbnail, whether to click it to view the original target picture, which avoids the resource consumption caused by clicking and viewing invalid pictures.
In an embodiment, an album entry is further arranged in the content sharing interface of the social application, the terminal can enter the picture browsing interface in response to a trigger operation on the album entry in the content sharing interface, return to the content sharing interface after selecting a target picture carrying the clipping position information from the picture browsing interface, and display a thumbnail which is clipped from the target picture and is adapted to the size of the content editing area in the content editing area of the content sharing interface.
FIG. 7 is an interface diagram that illustrates a content sharing interface in one embodiment. Referring to part (a) of fig. 7, in the content sharing interface 702, an album entry 704 is set, and the user can enter the picture browsing interface 706 by triggering an operation on the album entry 704, as shown in part (b) of fig. 7. In the picture browsing interface 706, the target picture 708 is selected and then returned to the content sharing interface 702, and at this time, a content editing area 710 is displayed in the content sharing interface 702, as shown in part (c) of fig. 7. In the content edit area 710, a thumbnail 712 corresponding to the target picture 708 obtained from the trimming position information in the attribute information of the target picture 708 is displayed. Further, as shown in part (d) of fig. 7, the terminal may also share the target picture 708 with the shared object in a manner of displaying a thumbnail 712 in the content sharing interface 702.
In the embodiment, the thumbnail including the target object cut out from the target picture is displayed in the content editing area in the content sharing interface of the social application, so that the target picture can be accurately and effectively previewed, and the editing efficiency of the picture to be shared is improved.
In one embodiment, the target picture is a certificate picture, and step 206 includes: displaying a certificate image uploading interface; and displaying a cut image which is cut from the certificate picture and is matched with the size of the preview area in the preview area of the certificate image uploading interface.
In this embodiment, the interface where the cut image is to be displayed is a document image uploading interface for uploading a document image, where the document is, for example, an identity card, a passport, a pass, or the like. A preview area is arranged in the certificate image uploading interface and used for previewing the certificate image. The target object in the certificate picture can be the face, the certificate number or the certificate validity period in the certificate, and can also be the whole certificate in the certificate picture.
Specifically, after the certificate picture is selected by the terminal, an image area including a target object is cut out from the certificate picture according to cutting position information in attribute information of the certificate picture, and then the image area is zoomed according to the size of a preview area to obtain a cut image which is displayed in the preview area of the certificate image uploading interface.
Due to the fact that the cutting image is matched with the image area, including the target object, of the certificate picture, the cutting image displayed in the preview area can directly highlight the target object in the certificate picture, and the key content of the certificate picture is perfectly transferred, so that the certificate picture can be accurately and effectively previewed, and the uploading efficiency of the certificate picture is improved. Moreover, the certificate image is cut according to the cutting position information in the attribute information of the certificate image, so that the problem of time delay caused by manual selection of a cutting area or real-time detection during uploading at each time is avoided, and the uploading efficiency of the certificate image is improved.
In one embodiment, the certificate image uploading interface is further provided with an album entry, the terminal can enter the image browsing interface in response to the triggering operation of the album entry in the certificate image uploading interface, returns to the certificate image uploading interface after selecting the certificate image carrying the cutting position information from the image browsing interface, and displays the cut image which is cut from the certificate image and is matched with the size of the preview area in the preview area of the certificate image uploading interface.
FIG. 8 is an interface diagram of a certificate image uploading interface in one embodiment. Referring to part (a) of fig. 8, an album entry 804 and a preview area 806 are provided in the certificate image uploading interface 802, and the user can enter a picture browsing interface 808 by triggering an operation on the album entry 804, as shown in part (b) of fig. 8. In the picture browsing interface 808, after the certificate picture 810 is selected, the interface returns to the certificate image uploading interface 802, and in the preview area 806, a cut image 812 corresponding to the certificate picture 810, obtained according to the cutting position information in the attribute information of the certificate picture 810, is displayed.
In this embodiment, by displaying, in the preview area of the certificate image uploading interface, a cut image that is cut directly from the certificate picture according to the cutting position information, the certificate picture can be previewed accurately and effectively, and the efficiency of uploading the certificate image is improved.
In one embodiment, step 206 includes: displaying a head portrait setting interface; and displaying a cut image which is cut out from the target picture and is matched with the size of the preview area in the preview area of the head portrait setting interface.
In this embodiment, the interface on which the cut-out image is to be displayed is an avatar setting interface for setting an avatar of the user. The head portrait setting interface is provided with a preview area used for previewing pictures to be used as head portraits. Specifically, after the terminal selects the target picture, an image area including the target object is cut out from the target picture according to the cutting position information in the attribute information of the target picture, and then the image area is zoomed according to the size of the preview area to obtain a cut image, and the cut image is displayed in the preview area of the head portrait setting interface.
Because the cut image matches the image area where the target object is located in the target picture, the cut image displayed in the preview area directly highlights the target object in the target picture and fully conveys the key content of the target picture, so the target picture can be previewed accurately and effectively, and the efficiency of setting the head portrait is improved. Moreover, the target picture is cut according to the cutting position information in its attribute information, which avoids the time delay caused by manually selecting a cutting area or detecting in real time every time the head portrait is set, and improves the efficiency of setting the head portrait.
In a specific application scenario, for a large number of pictures of different sizes in a web page, cutting position information can be generated according to the position of the key target object when each picture is generated and written into the attribute information of that picture. When these pictures are displayed to the user, each picture, regardless of its size, can be cut according to its cutting position information to obtain the image area where the target object is located, and that area is then adapted to the uniform size of the preview area to obtain a cut image for preview display. When the user triggers a viewing operation on the cut image, the original picture can be displayed, and when the user triggers a downloading operation on the cut image, the original picture can be downloaded.
In another specific application scenario, when a poster in news information is generated, cutting position information is generated according to the position of the key target object in the poster and written into the attribute information of the poster image. When the poster is displayed through a news client, a cut image highlighting the key target object can be displayed in the news browsing page according to the cutting position information, and when the user views the cut image, the original poster image is displayed.
In the following embodiments, the picture acquisition entry includes a camera entry, and the target picture is captured in real time through the camera entry after the camera is started.
Fig. 9 is a schematic flowchart of an image processing method in an embodiment.
Step 902, displaying a camera entry, and displaying a real-time shooting picture in response to a trigger operation on the camera entry.
The camera entry is an entry provided by the user interaction interface for taking pictures. The camera entry can be arranged at any position in the user interaction interface in the form of a touch control. The trigger operation on the camera entry may be any one of a click operation, a slide operation or a double-click operation. The real-time shooting picture is the real-time viewfinder picture captured by the camera after shooting is started.
Specifically, the terminal may display a camera entry in the user interaction interface, start a camera to acquire an image in real time in response to a trigger operation on the camera entry, and display a real-time shooting picture. An album entrance can be arranged in the user interaction interface. In some scenes, when a user needs to select a target picture through an album, the target picture can be selected through the album entrance, and when the user needs to acquire the target picture in real time, the target picture can be acquired through the camera entrance.
In one embodiment, to capture pictures in real time, the terminal may create a media management session used to manage the data objects involved in capturing pictures with the camera. Specifically, the terminal creates an initialized media management session AVCaptureSession using AVFoundation (a framework for processing audio and video); creates a device object AVCaptureDevice representing a camera that can be switched between front and rear; creates, from the device object, an input object AVCaptureDeviceInput representing the input data, an output object AVCapturePhotoOutput representing the output picture data, and an output object AVCaptureMetadataOutput representing the output target position data; and adds the input object and the output objects to the media management session AVCaptureSession, which manages the data streams obtained from the physical device and outputs them to one or more destinations. In addition, the terminal creates a real-time preview layer object AVCaptureVideoPreviewLayer for displaying the real-time shooting picture.
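The capture pipeline described above can be sketched in Swift as follows. This is a minimal illustration, not the patented implementation: only the AVFoundation types named in this embodiment are real API, while the variable names, camera position and error handling are assumptions.

```swift
import AVFoundation

// Minimal sketch of the capture pipeline described above (assumes iOS with
// camera permission already granted).
let session = AVCaptureSession()

// Device object representing a camera that can be switched between front and rear.
guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                           for: .video,
                                           position: .back),
      let input = try? AVCaptureDeviceInput(device: device) else {
    fatalError("camera unavailable")
}

let photoOutput = AVCapturePhotoOutput()        // outputs the captured picture data
let metadataOutput = AVCaptureMetadataOutput()  // outputs the target (face) position data

session.beginConfiguration()
if session.canAddInput(input) { session.addInput(input) }
if session.canAddOutput(photoOutput) { session.addOutput(photoOutput) }
if session.canAddOutput(metadataOutput) {
    session.addOutput(metadataOutput)
    metadataOutput.metadataObjectTypes = [.face]  // track faces in the live frame
}
session.commitConfiguration()

// Real-time preview layer object for displaying the live shooting picture.
let previewLayer = AVCaptureVideoPreviewLayer(session: session)
previewLayer.videoGravity = .resizeAspectFill

session.startRunning()
```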
Step 904, displaying a prompt mark for the target object in the real-time shooting picture.
Specifically, the terminal tracks the target object in the captured real-time shooting picture. The terminal detects whether a target object exists in the current real-time shooting picture; if so, a prompt mark for the target object is displayed in the real-time shooting picture according to the position of the target object; if not, the terminal can prompt the user in real time to change the shooting angle.
In one embodiment, step 904 comprises: detecting a target object in the real-time shooting picture to obtain a detection result of the target object; and displaying a prompt mark in real time according to the detection result of the target object in the real-time shooting picture.
The prompt mark of the target object may be a rectangular frame enclosing the target object, or may be four vertices for positioning the target object.
Optionally, the terminal may detect a face in the real-time shooting picture; when a face is detected, the detected face is used as the target object, and a face prompt mark is displayed in the real-time shooting picture according to the detected face position. When no face is detected, the terminal may perform salient-object detection on the real-time shooting picture; when a salient object is detected, the salient object is used as the target object and a corresponding prompt mark is displayed. Alternatively, when no face is detected, the terminal may determine that no target object exists in the current shooting picture and not display a prompt mark.
Optionally, the target object is a human face; when the terminal detects a face, it extracts metadata related to the face and derives the face position information from that metadata. For example, the terminal may obtain a metadata array related to the faces using the AVFoundation library. The metadata array contains a number of AVMetadataObject objects, and each object includes a faceID identifying each detected face; a rollAngle identifying the tilt angle of the face, that is, the angle at which the head leans toward the shoulder; a yawAngle identifying the deflection angle, that is, the angle of rotation of the face around the y-axis; and bounds identifying the position of the detected face region. The terminal can determine the face position information from this face-related metadata and draw, based on the face position information, a prompt mark used to track the face in the real-time picture, assisting the user in taking the photo. In addition, the terminal may also use frameworks such as OpenCV or Face++ to obtain the position information of the target object in the real-time shooting picture.
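A hedged sketch of consuming the face-related metadata described above is given below. The AVMetadataFaceObject fields (faceID, rollAngle, yawAngle, bounds) and the coordinate conversion via the preview layer are standard AVFoundation API; the FaceTracker class, its overlay handling and its styling are illustrative assumptions.

```swift
import AVFoundation
import UIKit

// Sketch of receiving face metadata and drawing prompt marks over the live preview.
// `previewLayer` is the AVCaptureVideoPreviewLayer from the session setup sketch.
final class FaceTracker: NSObject, AVCaptureMetadataOutputObjectsDelegate {
    let previewLayer: AVCaptureVideoPreviewLayer
    private var markLayers: [CALayer] = []

    init(previewLayer: AVCaptureVideoPreviewLayer) {
        self.previewLayer = previewLayer
        super.init()
    }

    func metadataOutput(_ output: AVCaptureMetadataOutput,
                        didOutput metadataObjects: [AVMetadataObject],
                        from connection: AVCaptureConnection) {
        markLayers.forEach { $0.removeFromSuperlayer() }
        markLayers.removeAll()

        for case let face as AVMetadataFaceObject in metadataObjects {
            // faceID identifies the detected face; rollAngle is the tilt of the head
            // toward the shoulder; yawAngle is the rotation around the y-axis.
            _ = (face.faceID, face.rollAngle, face.yawAngle)

            // bounds is in metadata (device) coordinates; convert to layer coordinates
            // before drawing the rectangular prompt mark.
            guard let converted = previewLayer.transformedMetadataObject(for: face) else { continue }
            let mark = CALayer()
            mark.frame = converted.bounds
            mark.borderColor = UIColor.yellow.cgColor
            mark.borderWidth = 2
            previewLayer.addSublayer(mark)
            markLayers.append(mark)
        }
    }
}
```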
In this way, marking the target object in the real-time shooting picture guides the user to compose the photo more reasonably, reduces the risk that target object detection on the captured picture is inaccurate or fails, and to some extent improves the preview effect and the cutting effect displayed after the target picture is cut.
Step 906, in response to the picture collecting operation, collecting a target picture including the target object.
The terminal responds to the picture acquisition operation and takes the current real-time shooting picture as a target picture.
In an embodiment, after the terminal captures the target picture, the position of the target object is expressed in the device (pixel) coordinate system; for example, if the camera resolution is 1280 × 720 when the target picture is captured, the captured target picture is 1280 × 720 px. To avoid errors in the cutting position, the resolution-based coordinates need to be converted into physical-size coordinates. The ratio of resolution to physical size may be 2:1 or 3:1 and can be set according to design requirements. After the conversion into physical size, the target object can be accurately cut out of the target picture.
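A minimal sketch of this resolution-to-physical-size conversion, assuming the ratio equals the device screen scale, might look like this; the function name and default argument are illustrative.

```swift
import UIKit

// Sketch of converting a rectangle from pixel (device) coordinates to the point-based
// physical size used for cutting. The 2:1 / 3:1 ratio corresponds to the screen scale
// here, which is an assumption for illustration.
func convertToPhysicalSize(_ pixelRect: CGRect,
                           scale: CGFloat = UIScreen.main.scale) -> CGRect {
    return CGRect(x: pixelRect.origin.x / scale,
                  y: pixelRect.origin.y / scale,
                  width: pixelRect.width / scale,
                  height: pixelRect.height / scale)
}
```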
Step 908, displaying a cut image cut out from the target picture, the cut image matching the image area in the target picture that includes the target object and is indicated by the prompt mark of the target object.
The cutting image is an image cut from the target picture, the cutting image is matched with the image area in the target picture marked by the cutting position information, and the cutting position information is determined according to the prompt mark displayed when the target picture is collected. In some embodiments, after the terminal acquires the target picture, the direction of the target picture is corrected, and then the target object region determined by the prompt mark in the picture after the direction is corrected is optimized to obtain an image region, and the position information of the image region is used as the cutting position information, so that the image region is cut out from the target picture.
In one embodiment, the cropping image may be an image area in the target picture marked by the cropping position information. Specifically, after the target picture is collected, the terminal may determine an image area including the target object according to a prompt mark of the target object in the target picture, then generate clipping position information for marking the image area, clip the image area including the target object from the target picture according to the clipping position information, and then take the image area as a clipped image.
In one embodiment, the cropping image may be an image obtained by scaling an image area in the target picture marked by the cropping position information. Specifically, after the target picture is acquired, the terminal may determine an image region including the target object according to a prompt mark of the target object in the target picture, then generate clipping position information for marking the image region, clip the image region including the target object from the target picture according to the clipping position information, and then use an image obtained by scaling the image region as a clipped image to be displayed.
In one embodiment, step 902 comprises: displaying an image display interface, wherein the image display interface comprises a camera inlet; responding to the trigger operation of the camera entrance, and entering a picture acquisition interface; and displaying the real-time shooting picture in a picture acquisition interface.
The image display interface is a user interaction interface used for displaying the cut image. The image display interface is provided with a camera entry and an album entry. When a user wants to display a target picture on the image display interface, the target picture can either be captured through the camera entry, after which the cut image obtained by cutting it is displayed in the image display interface, or be selected through the album entry, after which the cut image obtained by cutting it is likewise displayed in the image display interface.
The trigger operation on the camera entry may be any one of a click operation, a slide operation or a double-click operation. The picture acquisition interface is a user interaction interface for displaying the real-time shooting picture. The picture acquisition interface may be displayed floating over the image display interface, or the terminal may jump from the image display interface to the picture acquisition interface.
Specifically, after the trigger operation on the camera entry is performed, the picture acquisition interface is entered from the displayed image display interface.
In one embodiment, step 906 includes: responding to picture acquisition operation in the picture acquisition interface, acquiring a target picture comprising a target object, and returning to an image display interface from the picture acquisition interface; step 908 comprises: and displaying a cut image cut out from the target picture in a preview area of the image display interface.
The preview area of the image display interface is an area used for previewing the target picture. In addition to the picture acquisition entry and the album entry, the image display interface is also provided with the preview area. Because the cut image matches the image area of the target picture that includes the target object, the cut image displayed in the preview area directly highlights the target object in the target picture and conveys its key content, so the target picture can be previewed accurately and effectively.
In some embodiments, a cut image cut from the target picture is displayed in a preview area of the image presentation interface, and after a user views key content in the target picture through the cut image displayed in the preview area, the user can select further processing of the target picture. For example, in some scenarios, the user considers that the target picture is not perfect enough, and may enter the picture capturing interface again through the picture capturing entrance to capture a new target picture, and in other scenarios, the user considers that the target picture is perfect, and may share the target picture with other users.
In one embodiment, step 908 comprises: and displaying a cut image which is cut out from the target picture and is matched with the size of the preview area in the preview area of the image display interface.
The preview area of the image display interface is an area used for previewing the target picture. The preview area is usually fixedly arranged in the image presentation interface, and the size of the preview area is fixed. When the terminal displays the cut image through the preview area, the displayed cut image is adapted to the size of the preview area.
In one embodiment, the method further comprises: cutting out an image area including a target object from the target picture according to the cutting position information; and when the size of the preview area is smaller than that of the image area, redrawing the image area in the preview area according to the size of the preview area to obtain a cut image.
In one embodiment, when the size of the preview area is larger than the size of the image area, the image area is scaled according to a scaling ratio determined according to the size of the preview area and the size of the image area, and a cropping image is obtained.
Specifically, after the target picture is captured, the terminal cuts out the image area including the target object from the target picture according to the cutting position information determined based on the prompt mark of the target object. To improve the display effect, the terminal may also perform size-adaptation processing on the image area before it is finally displayed in the preview area.
The terminal can obtain the size of the preview area and determine the size of the image area according to the cutting position information. When the size of the image area is larger than the size of the preview area, the terminal redraws the image area according to the size of the preview area to obtain the cut image and displays it in the preview area. When the size of the image area is smaller than the size of the preview area, in order to avoid distortion caused by stretching, the terminal scales the image area to obtain the cut image and displays it in the preview area.
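A sketch of this preview-size adaptation is shown below, assuming UIKit image drawing; the function name and the use of UIGraphicsImageRenderer are illustrative rather than mandated by this embodiment.

```swift
import UIKit

// Sketch of adapting the cut image area to the fixed preview size described above.
func adaptToPreview(_ cropped: UIImage, previewSize: CGSize) -> UIImage {
    let imageSize = cropped.size
    if imageSize.width > previewSize.width || imageSize.height > previewSize.height {
        // Image area larger than the preview area: redraw it at the preview size.
        let renderer = UIGraphicsImageRenderer(size: previewSize)
        return renderer.image { _ in
            cropped.draw(in: CGRect(origin: .zero, size: previewSize))
        }
    } else {
        // Image area smaller than the preview area: scale proportionally to avoid
        // the distortion that stretching would cause.
        let ratio = min(previewSize.width / imageSize.width,
                        previewSize.height / imageSize.height)
        let target = CGSize(width: imageSize.width * ratio,
                            height: imageSize.height * ratio)
        let renderer = UIGraphicsImageRenderer(size: target)
        return renderer.image { _ in
            cropped.draw(in: CGRect(origin: .zero, size: target))
        }
    }
}
```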
Fig. 10 is an interface diagram of an image display interface according to an embodiment. Referring to fig. 10, the image display interface 1002 is provided with a picture acquisition entry 1004, an album entry 1006 and a preview area 1008. The user can enter the picture acquisition interface 1010 by triggering an operation on the picture acquisition entry 1004; a real-time shooting picture is displayed in the picture acquisition interface 1010, and a prompt mark 1012 of the target object is displayed in the real-time shooting picture. When capture of the target picture is confirmed, the terminal returns to the image display interface 1002, and a cut image 1012 of the target picture is displayed in the preview area 1008 of the image display interface 1002. The user may also enter the picture browsing interface 1014 by triggering an operation on the album entry 1006; a plurality of candidate pictures are displayed in the picture browsing interface 1014, the user may select a target picture by triggering a selection operation on any one or more candidate pictures, and the terminal then returns to the image display interface 1002 and displays, in the preview area 1008, the cut image 1012 cut from the target picture according to the cutting position information in the attribute information of the target picture.
In an embodiment, the captured target picture is the original shot picture and usually carries image direction information. To avoid the direction correction failing because this image direction information would be lost after cutting, the terminal may first correct the direction of the target picture according to the image direction information and then perform the cutting to obtain the image area including the target object.
In one embodiment, the terminal's direction correction of the target picture includes mirroring a target picture shot by the front camera. By reading the camera type in the attribute information of the target picture, when the target picture was shot by the front camera, the terminal mirrors the target picture and then cuts the image area out of the mirrored picture according to the cutting position information determined based on the prompt mark.
In one embodiment, the terminal's direction correction of the target picture includes rotating a target picture shot by the rear camera. Specifically, the terminal may read direction angle information from the attribute information of the target picture: for example, a value of 1 indicates that the target picture does not need to be rotated, 2 indicates that it needs to be rotated 90 degrees clockwise, 3 indicates that it needs to be rotated 90 degrees counterclockwise, and 4 indicates that it needs to be rotated 180 degrees. After rotating the target picture shot by the rear camera, the terminal cuts the image area out of the rotated picture according to the cutting position information determined based on the prompt mark.
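The direction correction described in the two embodiments above might be sketched as follows. The numeric direction codes follow the mapping given here (they are not standard EXIF orientation values), and the rotation helper and mirroring approach are illustrative assumptions.

```swift
import UIKit

// Sketch of direction correction: rotate according to the embodiment's direction codes
// (1: none, 2: 90° clockwise, 3: 90° counterclockwise, 4: 180°), then mirror
// front-camera pictures.
func correctDirection(of image: UIImage, directionCode: Int, isFrontCamera: Bool) -> UIImage {
    var corrected = image
    switch directionCode {
    case 2: corrected = rotated(corrected, by: .pi / 2)   // 90° clockwise
    case 3: corrected = rotated(corrected, by: -.pi / 2)  // 90° counterclockwise
    case 4: corrected = rotated(corrected, by: .pi)       // 180°
    default: break                                        // 1: no rotation needed
    }
    if isFrontCamera, let cg = corrected.cgImage {
        // Mirror pictures shot with the front camera.
        corrected = UIImage(cgImage: cg, scale: corrected.scale, orientation: .upMirrored)
    }
    return corrected
}

private func rotated(_ image: UIImage, by radians: CGFloat) -> UIImage {
    // Bounding size of the rotated image.
    let newSize = CGRect(origin: .zero, size: image.size)
        .applying(CGAffineTransform(rotationAngle: radians)).size
    let renderer = UIGraphicsImageRenderer(size: newSize)
    return renderer.image { ctx in
        ctx.cgContext.translateBy(x: newSize.width / 2, y: newSize.height / 2)
        ctx.cgContext.rotate(by: radians)
        image.draw(in: CGRect(x: -image.size.width / 2, y: -image.size.height / 2,
                              width: image.size.width, height: image.size.height))
    }
}
```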
In addition, so that the cut image completely presents the target object in the target picture, the terminal may also optimize the target object position determined by the prompt mark, which gives the processing a certain fault tolerance. The target object in a target picture captured by the front camera is generally larger, while the target object in a target picture captured by the rear camera is generally smaller, so fault-tolerant optimization processing is further performed on the target object position determined by the prompt mark. The method further includes: determining the camera type used when the target picture including the target object was captured; and performing fault-tolerant optimization processing on the target object position marked by the prompt mark when the target picture was captured, according to a preset size corresponding to the camera type, to obtain the cutting position information.
In one embodiment, performing fault-tolerant optimization processing on the target object position marked by the prompt mark when the target picture is captured, according to a preset size corresponding to the camera type, to obtain the cutting position information includes: when the camera type is the front camera, taking the target object area marked by the prompt mark when the target picture was captured as a central area, expanding the central area according to a first preset size, and obtaining the cutting position information from the expanded first area; and when the camera type is the rear camera, taking the target object area marked by the prompt mark when the target picture was captured as a central area, expanding the central area according to a second preset size, and obtaining the cutting position information from the expanded second area; wherein the first preset size is larger than the second preset size.
For example, suppose the target object is a face and the face region marked by the prompt mark covers only the facial features; if the terminal cut directly along the prompt mark, the cut image obviously could not present the whole portrait accurately, and part of the face would be lost. Therefore, the terminal can take the face region as the central area, expand that central area, treat the expanded area as the area to be cut, and use the position information of the expanded area as the cutting position information, so that a complete portrait is cut out. In addition, because the face in a target picture captured by the front camera is generally larger, the preset size used for expansion is larger for a front-camera picture than for a target picture captured by the rear camera.
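A sketch of this fault-tolerant expansion is given below; the concrete margin values are assumptions chosen only to illustrate that the first (front-camera) preset size is larger than the second (rear-camera) one.

```swift
import UIKit

// Sketch of the fault-tolerant expansion: the face region marked by the prompt mark is
// taken as the central area and expanded outward, and the expanded area (clamped to the
// picture bounds) gives the cutting position.
func cuttingPosition(for faceRect: CGRect, in pictureSize: CGSize, isFrontCamera: Bool) -> CGRect {
    let margin: CGFloat = isFrontCamera ? 200 : 120   // points; illustrative preset sizes
    let expanded = faceRect.insetBy(dx: -margin, dy: -margin)
    return expanded.intersection(CGRect(origin: .zero, size: pictureSize))
}
```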
In one embodiment, the method further comprises: determining cutting position information of the target picture based on the prompt mark of the target object; writing the cutting position information into the attribute information of the target picture; and storing the target picture carrying the cutting position information.
In this embodiment, for the original target picture, after the cutting position information marking the image area including the target object is determined, the cutting position information is written into the attribute information of the original target picture, and the original target picture is then stored locally.
In one embodiment, after capturing the target picture, the terminal converts the target picture from resolution size to physical size and corrects its direction; determines the prompt mark of the target object after the direction correction from the prompt mark displayed when the target picture was captured; expands the target object region marked by the prompt mark to obtain an optimized image area; uses the position information of that image area as the final cutting position information; writes the cutting position information into the attribute information of the original target picture; and then stores the original target picture.
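One way to write the cutting position information into the picture's attribute information, sketched with ImageIO, is shown below. Encoding the rectangle in the EXIF user-comment field is an assumption made only for illustration; the embodiment requires only that the information be carried in the picture's attribute (EXIF) information.

```swift
import Foundation
import CoreGraphics
import ImageIO

// Sketch of writing the cutting position information into the picture's EXIF data
// before saving. The "crop=x,y,w,h" user-comment encoding is an illustrative choice.
func writeCuttingPosition(_ crop: CGRect, into imageData: Data) -> Data? {
    guard let source = CGImageSourceCreateWithData(imageData as CFData, nil),
          let type = CGImageSourceGetType(source) else { return nil }

    let output = NSMutableData()
    guard let destination = CGImageDestinationCreateWithData(output as CFMutableData, type, 1, nil)
    else { return nil }

    let exif: [String: Any] = [
        kCGImagePropertyExifUserComment as String:
            "crop=\(Int(crop.minX)),\(Int(crop.minY)),\(Int(crop.width)),\(Int(crop.height))"
    ]
    let properties: [String: Any] = [kCGImagePropertyExifDictionary as String: exif]

    // Copy the original image and merge the cutting position into its metadata.
    CGImageDestinationAddImageFromSource(destination, source, 0, properties as CFDictionary)
    return CGImageDestinationFinalize(destination) ? output as Data : nil
}
```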
Fig. 11 is a schematic flowchart of an image processing method in an embodiment. Referring to FIG. 11, the process begins and step 1102: starting a camera on the terminal to start to collect pictures; next, step 1104, determining whether a target object is detected; if yes, go to step 1106: displaying a prompt mark of the target object in the real-time shooting picture; if not, go to step 1108: prompting a user to change a shooting angle in real time; after step 1106, step 1110 is performed: the user confirms shooting and acquires a target picture; step 1112: correcting the direction of the target picture; step 1114: expanding the position of the target object determined by the prompt mark when the target picture is collected to obtain optimized cutting position information, and cutting the corrected target picture according to the cutting position information to obtain a cut image; step 1116, storing the cutting position information into the attribute information of the original target picture; step 1118, adapting the size of the preview area to display the cut image; the flow ends.
In an embodiment, the target picture may be the video cover of a video, and the terminal may cut and preview the video cover using the cutting method provided in the embodiments of the present application. For example, after video recording is completed, target object detection is performed on the video cover, and the cutting position information corresponding to the detected target object in the video cover is written into the attribute information of the video. In a user interaction interface that needs to display the video, the video cover is cut according to the cutting position information in the attribute information of the video to obtain a cut image, which is then displayed in the preview area of that interface. The video cover may be the first video frame of the video. In this way, the cut image directly highlights the target object in the video cover and conveys its key content, so the video cover can be previewed accurately and effectively and is more likely to attract other users' attention to the video.
Fig. 12 is a schematic flowchart of an image processing method in a specific embodiment. Referring to fig. 12, to capture a target picture, the terminal creates a media management session, creates a device object representing a camera that can be switched between front and rear, creates from the device object an input object representing the input data, an output object representing the output picture data, and an output object representing the output face position data, adds the input object and the output objects to the media management session, and finally creates a real-time preview layer object and acquires the face-related metadata of the captured picture. Next, the prompt marks of the face are drawn based on the face-related metadata (face roll angle, face yaw angle and face region position). Then, shooting is confirmed: the terminal acquires the target picture together with the prompt mark of the face in it, converts the target picture from resolution size to physical size, corrects the picture direction, corrects the face region position according to the prompt mark, and obtains the cutting position information from the coordinates and size of the corrected region. The corrected picture is then cut according to the cutting position information to obtain the cut image, which is displayed adapted to the size of the preview area. In addition, the cutting position information is written into the EXIF information of the originally captured target picture, and the target picture is stored in the local album.
When a target picture stored in the local album needs to be cut and previewed, the target picture is selected from the local album, its EXIF information is obtained, and the cutting position information in the EXIF information is read and checked for validity. If the cutting position information is valid, the direction of the target picture is corrected according to its direction angle information, the picture is cut according to the cutting position information to obtain the cut image, and the cut image is displayed adapted to the size of the preview area. If the cutting position information is invalid, the selected target picture itself is adapted to the size of the preview area and fills the preview area.
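The corresponding read-and-cut branch can be sketched as follows, matching the illustrative write format above: the stored cutting position information is parsed and validated, and the whole picture is used as a fallback when it is missing or invalid.

```swift
import UIKit
import ImageIO

// Sketch of reading the cutting position back out of the EXIF information
// (matching the illustrative "crop=x,y,w,h" user-comment format) and cutting.
func cutUsingStoredPosition(imageData: Data) -> UIImage? {
    guard let image = UIImage(data: imageData),
          let source = CGImageSourceCreateWithData(imageData as CFData, nil),
          let props = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [String: Any]
    else { return nil }

    let exif = props[kCGImagePropertyExifDictionary as String] as? [String: Any]
    let comment = exif?[kCGImagePropertyExifUserComment as String] as? String ?? ""
    let values = comment.replacingOccurrences(of: "crop=", with: "")
        .split(separator: ",").compactMap { Double(String($0)) }

    guard values.count == 4 else { return image }   // invalid: use the whole picture
    let rect = CGRect(x: values[0], y: values[1], width: values[2], height: values[3])
    guard let cropped = image.cgImage?.cropping(to: rect) else { return image }
    return UIImage(cgImage: cropped, scale: image.scale, orientation: image.imageOrientation)
}
```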
It should be understood that, although the steps in the above flowcharts are shown in sequence as indicated by the arrows, they are not necessarily performed in that sequence. Unless explicitly stated otherwise herein, these steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in the above flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and whose order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 13, there is provided an image processing apparatus 1300, which may be a part of a computer device using a software module or a hardware module, or a combination of the two, the apparatus specifically includes: a display module 1302 and a picture selection module 1304, wherein:
a display module 1302, configured to display an album entry, and in response to a trigger operation on the album entry, display at least one candidate picture;
a picture selection module 1304, configured to select a target picture from the displayed candidate pictures in response to a picture selection operation; the attribute information of the target picture comprises cutting position information, and the cutting position information marks an image area including the target object in the target picture;
a display module 1302, configured to display a cut image cut out from the target picture, the cut image matching the image area marked by the cutting position information in the target picture.
In one embodiment, the display module 1302 is further configured to display an image display interface, where the image display interface includes an album entry;
responding to triggering operation of an album entrance, and entering a picture browsing interface;
and displaying at least one candidate picture in the picture browsing interface.
In the above embodiment, the display module 1302 is further configured to, in response to a picture selection operation in the picture browsing interface, return to the image display interface from the picture browsing interface after selecting a target picture from the displayed candidate pictures; and displaying a cut image cut out from the target picture in a preview area of the image display interface.
In one embodiment, the display module 1302 is further configured to display, in the preview area of the image presentation interface, a cropped image cropped from the target picture and adapted to the size of the preview area.
In the above embodiment, the image processing apparatus 1300 further includes a preview area adapting module, configured to cut out an image area including the target object from the target picture according to the cutting position information; when the size of the preview area is smaller than that of the image area, redrawing the image area in the preview area according to the size of the preview area to obtain a cut image; and when the size of the preview area is larger than that of the image area, carrying out scaling processing on the image area according to the scaling ratio determined by the size of the preview area and the size of the image area to obtain a cut image.
In the above embodiment, the image processing apparatus 1300 further includes:
the direction correction module is used for reading the attribute information of the target picture; when the attribute information comprises cutting position information, cutting an image area from the corrected image according to the cutting position information after correcting the direction of the target image;
and the preview area adapting module is also used for directly cutting the target picture according to the size of the preview area to obtain a cut image when the attribute information does not include the cutting position information.
In one embodiment, the picture taking portal includes a camera portal, as shown in fig. 14, the image processing apparatus 1300 further includes a picture taking module 1306;
the display module 1302 is further configured to display a real-time shooting picture in response to a trigger operation on the camera entry; displaying a prompt mark for a target object in the real-time shooting picture;
a picture collecting module 1306, configured to collect a target picture including a target object in response to a picture collecting operation;
the display module 1302 is further configured to display a cut image cut out from the target picture, the cut image matching the image area in the target picture that includes the target object and is indicated by the prompt mark of the target object.
In the above embodiment, the display module 1302 is further configured to display an image presentation interface, where the image presentation interface includes a camera inlet; responding to the trigger operation of the camera entrance, and entering a picture acquisition interface; and displaying the real-time shooting picture in a picture acquisition interface.
In the above embodiment, the image capturing module 1306 is configured to capture a target image including a target object in response to an image capturing operation in the image capturing interface, and return the target image to the image display interface from the image capturing interface;
the display module 1302 is further configured to display a cut image cut out from the target picture in a preview area of the image display interface.
In one embodiment, the image processing apparatus 1300 further includes a position optimization module, configured to determine a camera type when acquiring a target picture including a target object; and carrying out fault-tolerant optimization processing on the position of the target object marked by the prompt mark when the target picture is collected according to the preset size corresponding to the type of the camera, and obtaining cutting position information.
In the above embodiment, the position optimization module is further configured to: when the camera type is the front camera, take the target object area marked by the prompt mark when the target picture was captured as a central area, expand the central area according to a first preset size, and obtain the cutting position information from the expanded first area; and when the camera type is the rear camera, take the target object area marked by the prompt mark when the target picture was captured as a central area, expand the central area according to a second preset size, and obtain the cutting position information from the expanded second area; wherein the first preset size is larger than the second preset size.
In one embodiment, referring to fig. 14, the image processing apparatus 1300 further includes a picture storage module 1308, configured to determine cropping position information of the target picture based on the cue mark of the target object; writing the cutting position information into the attribute information of the target picture; and storing the target picture carrying the cutting position information.
In one embodiment, the display module 1302 is further configured to display a content sharing interface of a social application; and displaying the thumbnail which is cut out from the target picture and is matched with the size of the content editing area in the content editing area of the content sharing interface.
In the above embodiment, the display module 1302 is further configured to display a target picture corresponding to the thumbnail when a viewing operation on the thumbnail is triggered; and when the sharing operation of the content to be shared in the content editing area is triggered, sharing the target picture corresponding to the thumbnail according to the mode of displaying the thumbnail in the content sharing interface.
In one embodiment, the target picture is a certificate picture, and the display module 1302 is further configured to display a certificate image uploading interface; and displaying a cut image which is cut from the certificate picture and is matched with the size of the preview area in the preview area of the certificate image uploading interface.
In one embodiment, the display module 1302 is further configured to display an avatar setting interface; and displaying a cut image which is cut out from the target picture and is matched with the size of the preview area in the preview area of the head portrait setting interface.
The image processing apparatus 1300 displays the album entry, displays at least one candidate picture in response to a trigger operation on the album entry, and selects a target picture from the candidate pictures when a picture selection operation is triggered, wherein attribute information of the selected target picture includes accurate clipping position information, and the clipping position information marks an image area including a target object in the target picture. Therefore, after the target picture is selected, the target picture can be directly cut according to the accurate cutting position information, and after a cutting image matched with the image area marked by the cutting position information is obtained, the cutting image is directly displayed. On one hand, compared with directly cutting the central area from the target picture, the cutting position information can accurately position the image area where the target object is located in the target picture, and the target object in the cut image is prevented from being partially lost; on the other hand, compared with the manual selection of the cutting area or the positioning of the cutting area after the real-time detection, the cutting is carried out only according to the cutting position information in the attribute information of the target picture, so that the problem of time delay caused by the manual selection of the cutting area or the real-time detection during each display is solved, and the generation efficiency and the display effect of the cutting image are improved.
For specific limitations of the image processing apparatus 1300, reference may be made to the above limitations of the image processing method, which are not described herein again. The respective modules in the image processing apparatus 1300 described above may be wholly or partially implemented by software, hardware, and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 15. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement an image processing method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 15 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is further provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, in which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
In one embodiment, a computer program product or computer program is provided that includes computer instructions stored in a computer-readable storage medium. The computer instructions are read by a processor of a computer device from a computer-readable storage medium, and the computer instructions are executed by the processor to cause the computer device to perform the steps in the above-mentioned method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical storage, or the like. Volatile Memory can include Random Access Memory (RAM) or external cache Memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above embodiments express only several implementations of the present application, and their descriptions are relatively specific and detailed, but they should not be construed as limiting the scope of the invention patent. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (17)

1. An image processing method, characterized in that the method comprises:
displaying a picture acquisition entry, wherein the picture acquisition entry comprises an album entry;
responding to a triggering operation of the album entrance, and displaying at least one candidate picture;
responding to picture selection operation, and selecting a target picture from the displayed candidate pictures; the attribute information of the target picture comprises cutting position information, and the cutting position information marks an image area including a target object in the target picture;
displaying a cut image cut out from the target picture; and the cutting image is matched with the image area comprising the target object.
2. The method according to claim 1, wherein the displaying the cropped image cropped from the target picture comprises:
and displaying a cut image which is cut out from the target picture and is matched with the size of the preview area in the preview area of the image display interface.
3. The method of claim 2, further comprising:
cutting out an image area including a target object from the target picture according to the cutting position information;
when the size of the preview area is smaller than that of the image area, redrawing the image area in the preview area according to the size of the preview area to obtain the cut image;
and when the size of the preview area is larger than that of the image area, carrying out scaling processing on the image area according to the scaling ratio determined according to the size of the preview area and the size of the image area to obtain the cutting image.
4. The method according to claim 3, wherein the cropping out of the target picture according to the cropping position information to obtain an image area including a target object comprises:
reading attribute information of the target picture;
and when the attribute information comprises the cutting position information, cutting out an image area from the corrected image according to the cutting position information after correcting the direction of the target image.
5. The method of claim 4, further comprising:
and when the attribute information does not comprise the cutting position information, directly cutting the target picture according to the size of the preview area to obtain the cutting image.
6. The method of any of claims 1 to 5, wherein the picture acquisition portal comprises a camera portal, the method further comprising:
responding to the trigger operation of the camera inlet, and displaying a real-time shooting picture;
displaying a prompt mark for a target object in the real-time shooting picture;
acquiring a target picture including the target object in response to a picture acquisition operation;
displaying a cut image cut out from the target picture; and the cutting image is matched with the image area which is prompted by the prompt mark of the target object and comprises the target object in the target picture.
7. The method according to claim 6, wherein displaying, in the live-action picture, a prompt mark for a target object in the live-action picture comprises:
detecting a target object in the real-time shooting picture to obtain a detection result of the target object;
and displaying a prompt mark in real time in the real-time shooting picture according to the detection result of the target object.
8. The method of claim 6, further comprising:
determining a camera type when a target picture including the target object is acquired;
and carrying out fault-tolerant optimization processing on the position of the target object marked by the prompt mark when the target picture is acquired according to a preset size corresponding to the type of the camera, and obtaining cutting position information.
9. The method according to claim 8, wherein the performing fault-tolerant optimization processing on the target object position marked by the prompt mark when the target picture is collected according to a preset size corresponding to the camera type to obtain the cutting position information comprises:
when the camera type is a front camera, taking a target area marked by the prompt mark when the target picture is collected as a central area, expanding the central area according to a first preset size, and then obtaining cutting position information according to a first area obtained through expansion;
when the camera type is a rear camera, taking a target area marked by the prompt mark when the target picture is collected as a central area, expanding the central area according to a second preset size, and then obtaining cutting position information according to a second area obtained through expansion;
wherein the first preset size is larger than the second preset size.
10. The method of claim 6, further comprising:
determining cutting position information of the target picture based on the prompt mark of the target object;
writing the cutting position information into attribute information of the target picture;
and storing the target picture carrying the cutting position information.
11. The method according to claim 1, wherein the displaying the cropped image cropped from the target picture comprises:
displaying a content sharing interface of a social application;
and displaying a thumbnail which is cut out from the target picture and is matched with the size of the content editing area in the content editing area of the content sharing interface.
12. The method of claim 11, further comprising:
when the viewing operation of the thumbnail is triggered, displaying the target picture corresponding to the thumbnail;
and when the sharing operation of the content to be shared in the content editing area is triggered, sharing the target picture corresponding to the thumbnail according to the mode of displaying the thumbnail in the content sharing interface.
13. The method of claim 1, wherein the target picture is a document picture, and wherein displaying the cropped image cropped from the target picture comprises:
displaying a certificate image uploading interface;
and displaying a cut-out image which is cut out from the certificate picture and is matched with the size of the preview area in the preview area of the certificate image uploading interface.
14. The method according to claim 1, wherein the displaying the cropped image cropped from the target picture comprises:
displaying a head portrait setting interface;
and displaying a cut image which is cut from the target picture and is matched with the size of the preview area in the preview area of the head portrait setting interface.
15. An image processing apparatus, characterized in that the apparatus comprises:
the display module is used for displaying an album entry and responding to the triggering operation of the album entry to display at least one candidate picture;
the picture selection module is used for responding to picture selection operation and selecting a target picture from the displayed candidate pictures; the attribute information of the target picture comprises cutting position information, and the cutting position information marks an image area including a target object in the target picture;
the display module is also used for displaying a cut image cut out from the target picture; and the cutting image is matched with the image area comprising the target object.
16. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 14.
17. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 14.
CN202110627369.5A 2021-06-04 2021-06-04 Image processing method, image processing device, computer equipment and storage medium Pending CN113822899A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110627369.5A CN113822899A (en) 2021-06-04 2021-06-04 Image processing method, image processing device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110627369.5A CN113822899A (en) 2021-06-04 2021-06-04 Image processing method, image processing device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113822899A true CN113822899A (en) 2021-12-21

Family

ID=78912501

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110627369.5A Pending CN113822899A (en) 2021-06-04 2021-06-04 Image processing method, image processing device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113822899A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114697386A (en) * 2022-02-24 2022-07-01 深圳绿米联创科技有限公司 Information notification method, device, terminal and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination