CN111432122B - Image processing method and electronic equipment - Google Patents

Image processing method and electronic equipment

Info

Publication number
CN111432122B
CN111432122B
Authority
CN
China
Prior art keywords
information
target
auxiliary line
line
target object
Prior art date
Legal status
Active
Application number
CN202010237683.8A
Other languages
Chinese (zh)
Other versions
CN111432122A (en)
Inventor
泮婕
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202010237683.8A
Publication of CN111432122A
Application granted
Publication of CN111432122B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632 Graphical user interfaces [GUI] specially adapted for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N23/80 Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The embodiment of the invention discloses an image processing method and an electronic device, relates to the field of communication technologies, and can solve the problem of the poor shooting effect of an electronic device. The method includes: in a case that a shot image is displayed on a shooting interface, acquiring target picture information of the shot image through a semantic segmentation network, where the target picture information includes at least one of the following: first information and second information, the first information being information of a target object in the shot image, and the second information being information of a background picture in the shot image; determining a target composition method corresponding to the shot image according to the target picture information; and processing the shot image by using the target composition method to obtain a processed image. The embodiment of the invention is applied to the process of processing a shot image through a composition method.

Description

Image processing method and electronic equipment
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to an image processing method and electronic equipment.
Background
Generally, with the development of electronic devices, more and more electronic devices have a photographing function. After a user opens a camera of an electronic device (for example, a wide-angle camera with an ultra-wide viewing angle), the user can perform a shooting input on the electronic device, so that the electronic device collects an image through the camera and shoots the picture the user wants.
However, during shooting, the obtained picture may fail to meet the user's requirement for various reasons, for example, the user's hand shakes while holding the electronic device, or the user does not know how to compose the picture; that is, the shooting effect of the electronic device is poor.
Disclosure of Invention
The embodiment of the invention provides an image processing method and electronic equipment, which can solve the problem of poor shooting effect of the electronic equipment.
In order to solve the technical problem, the embodiment of the invention adopts the following technical scheme:
in a first aspect of the embodiments of the present invention, an image processing method is provided, applied to an electronic device, and including: in a case that a shot image is displayed on a shooting interface, acquiring target picture information of the shot image through a semantic segmentation network, where the target picture information includes at least one of the following: first information and second information, the first information being information of a target object in the shot image, and the second information being information of a background picture in the shot image; determining a target composition method corresponding to the shot image according to the target picture information; and processing the shot image by using the target composition method to obtain a processed image.
In a second aspect of the embodiments of the present invention, an electronic device is provided, including an acquisition module, a determination module, and a processing module. The acquisition module is configured to acquire target picture information of a shot image through a semantic segmentation network in a case that the shot image is displayed on a shooting interface, where the target picture information includes at least one of the following: first information and second information, the first information being information of a target object in the shot image, and the second information being information of a background picture in the shot image. The determination module is configured to determine, according to the target picture information acquired by the acquisition module, a target composition method corresponding to the shot image. The processing module is configured to process the shot image by using the target composition method determined by the determination module, to obtain a processed image.
In a third aspect of the embodiments of the present invention, an electronic device is provided, including a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the image processing method in the first aspect.
In a fourth aspect of the embodiments of the present invention, a computer-readable storage medium is provided, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the image processing method in the first aspect.
In the embodiments of the present invention, the electronic device may acquire target picture information of a shot image (including information of a target object in the shot image and/or information of a background picture) through a semantic segmentation network, and process the shot image by using the corresponding target composition method determined according to the target picture information, to obtain a processed image. With this solution, after obtaining a shot image, the electronic device can acquire the information of the target object and/or the information of the background picture in the shot image, determine an appropriate composition method according to the information, and then process the shot image by using the composition method to obtain a processed image. That is, the electronic device can flexibly process each shot image based on its picture information to obtain an image with a better effect, so that the shooting effect of the electronic device is better.
Drawings
Fig. 1 is a schematic structural diagram of an android operating system according to an embodiment of the present invention;
fig. 2 is a schematic diagram of an image processing method according to an embodiment of the present invention;
fig. 3 is a second schematic diagram of an image processing method according to an embodiment of the present invention;
fig. 4 is a third schematic diagram of an image processing method according to an embodiment of the present invention;
FIG. 5 is a first diagram of an example of image processing according to an embodiment of the present invention;
FIG. 6 is a second diagram of an example of image processing according to an embodiment of the present invention;
FIG. 7 is a third diagram of an example of image processing according to an embodiment of the present invention;
FIG. 8 is a fourth diagram of an example of image processing according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
fig. 10 is a second schematic structural diagram of an electronic device according to an embodiment of the present invention;
fig. 11 is a hardware schematic diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first" and "second," and the like, in the description and in the claims of embodiments of the present invention are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first information, the second information, and the like are for distinguishing different information, not for describing a specific order of information.
In the description of the embodiments of the present invention, the meaning of "a plurality" means two or more unless otherwise specified. For example, a plurality of elements refers to two elements or more.
The term "and/or" herein is an association relationship describing an associated object, and means that there may be three relationships, for example, a display panel and/or a backlight, which may mean: there are three cases of a display panel alone, a display panel and a backlight at the same time, and a backlight alone. The symbol "/" herein denotes a relationship in which the associated object is or, for example, input/output denotes input or output.
In the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate an example, an illustration, or a description. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention should not be construed as being preferred or more advantageous than other embodiments or designs. Rather, the words "exemplary" and "for example" are intended to present a concept in a concrete fashion.
The embodiments of the present invention provide an image processing method and an electronic device. The electronic device can acquire target picture information of a shot image (including information of a target object in the shot image and/or information of a background picture) through a semantic segmentation network, and process the shot image by using the corresponding target composition method determined according to the target picture information, to obtain a processed image. After obtaining a shot image, the electronic device can acquire the information of the target object and/or the information of the background picture in the shot image, determine an appropriate composition method according to the information, and then process the shot image by using the composition method to obtain a processed image. That is, the electronic device can flexibly process each shot image based on its picture information to obtain an image with a better effect, so that the shooting effect of the electronic device is better.
The image processing method and the electronic device provided by the embodiments of the present invention can be applied to a process in which the electronic device processes a shot image through a composition method, and specifically to a process in which the electronic device performs cropping processing or rotation processing on the shot image by using the corresponding composition method according to the picture information of the shot image.
The electronic device in the embodiment of the present invention may be an electronic device having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present invention are not limited in particular.
The following describes a software environment to which the image processing method provided by the embodiment of the present invention is applied, by taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system.
The application framework layer is a framework of the application, and a developer can develop some applications based on the application framework layer under the condition of complying with the development principle of the framework of the application.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the image processing method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 1, so that the image processing method may operate based on the android operating system shown in fig. 1. Namely, the processor or the electronic device can implement the image processing method provided by the embodiment of the invention by running the software program in the android operating system.
The electronic device in the embodiments of the present invention may be a mobile electronic device or a non-mobile electronic device. For example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the non-mobile electronic device may be a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like; the embodiments of the present invention are not particularly limited thereto.
In the embodiments of the present invention, after the user triggers the electronic device to start a camera (for example, a wide-angle camera/wide-angle lens), the electronic device can display a shooting interface, and the user can then hold the electronic device and perform a shooting input, so that the electronic device shoots a subject through the wide-angle camera and displays the obtained shot picture on the shooting interface. At this time, the electronic device may perform image recognition and detection on the shot picture to determine the target object (the main scene picture/foreground picture) and the background picture in it. The electronic device may then acquire the picture information of the shot picture, that is, the information of the target object and the information of the background picture, and determine the corresponding composition method according to this information, so that the electronic device can add corresponding auxiliary lines in the shooting interface according to the composition method (different composition methods correspond to different auxiliary lines) and process the shot picture based on the auxiliary lines (for example, cropping processing or rotation processing) to obtain a processed image. Therefore, with this solution, the electronic device processes the shot picture based on the picture information of the shot picture, instead of the user manually cropping the shot picture or performing other operations on it, which simplifies the user's operations and saves time.
It should be noted that, in the embodiments of the present invention, the wide-angle camera may be a camera of the electronic device, or may be an external camera connected to the electronic device. A wide-angle camera has a wide viewing angle range and can cover a large range of scenery, so more scenery can be captured from top to bottom and from left to right when shooting/framing through the wide-angle camera. For example, in a group-photo scene, a picture including all the subjects can be taken thanks to the wide viewing angle of the wide-angle camera.
An image processing method and an electronic device provided by the embodiments of the present invention are described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Fig. 2 shows a flowchart of an image processing method provided in an embodiment of the present invention, and the method may be applied to an electronic device having an android operating system as shown in fig. 1. As shown in fig. 2, the image processing method provided by the embodiment of the present invention may include steps 201 to 203 described below.
Step 201, in the case that the shot image is displayed on the shooting interface, the electronic device acquires the target picture information of the shot image through a semantic segmentation network.
In an embodiment of the present invention, the target screen information includes at least one of the following items: the first information is information of a target object in the photographed image, and the second information is information of a background picture in the photographed image.
In the embodiments of the present invention, the user can trigger the electronic device to start a camera (for example, a wide-angle camera) so that the electronic device displays a shooting interface. The user can then perform a shooting input so that the electronic device shoots and displays a shot image on the shooting interface. The electronic device can then input the shot image into a semantic segmentation network to perform semantic segmentation on the shot image and determine the target object and the background picture in it, thereby acquiring the target picture information.
It should be noted that the target object may be understood as a foreground object/foreground picture/main scene object in the captured image, that is, an image portion of the captured image other than the background picture.
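The patent does not name a specific segmentation network, so the following is only a minimal sketch of step 201; the choice of a pretrained DeepLabV3 model from torchvision, and the convention that class 0 is background, are assumptions made purely for illustration:

```python
import torch
import torchvision
from torchvision import transforms
from PIL import Image

# Assumption: the patent does not name a network; a pretrained DeepLabV3
# is used here only to illustrate the semantic segmentation step.
model = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT")
model.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def segment(image: Image.Image):
    """Return a per-pixel class map and a foreground (target object) mask."""
    x = preprocess(image).unsqueeze(0)
    with torch.no_grad():
        out = model(x)["out"][0]      # (num_classes, H, W) logits
    labels = out.argmax(0).numpy()    # (H, W) class index per pixel
    foreground_mask = labels != 0     # assumed convention: class 0 = background
    return labels, foreground_mask
```

The class map plays the role of the object category information of the background picture, while the foreground mask delimits the target object for the analysis in steps 201a and 201b.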
Optionally, in the embodiments of the present invention, the first information includes center-of-gravity position information of the target object.
Optionally, in an embodiment of the present invention, the second information includes position information of a target background boundary line of the background picture, where the target background boundary line is a background boundary line corresponding to a target object type in the background picture.
Optionally, in the embodiments of the present invention, the electronic device determines the center-of-gravity position information of the target object and the target background boundary line of the background picture based on the semantic segmentation network. Specifically, referring to fig. 2, as shown in fig. 3, step 201 may be implemented by steps 201a and 201b described below.
Step 201a, in the case that the shooting interface displays the shot image, the electronic device acquires third information of the target object and object category information of the background picture through a semantic segmentation network.
In an embodiment of the present invention, the third information includes at least one of the following: size information of the target object, shape information of the target object, color information of the target object, position information of the target object, category information of the target object, and the like.
In the embodiment of the invention, the electronic equipment can perform semantic segmentation on the shot image by a semantic segmentation network to determine the target object and the background picture, and then analyze the target object and the background picture to acquire the third information of the target object and the object category information of the background picture.
It should be noted that the size information may be understood as the size of the area occupied by the target object in the shot image; the position information may be used to indicate the position of the target object in the shot image, for example, the position coordinates corresponding to the target object; the category information may be used to indicate the image category to which the target object belongs, such as a person picture type (e.g., a multi-person picture type), a landscape picture type, or a specific-object picture type; and the object category information may be used to indicate the categories of the objects in the background picture, for example, the background picture may include objects such as sky, clouds, and trees, which belong to different categories.
Optionally, in the embodiment of the present invention, the electronic device may further obtain at least one of the following items through a semantic segmentation network: size information of the background screen, size information of the object in the background screen, position information of the object in the background screen, shape information of the object in the background screen, color information of the object in the background screen, and the like.
Step 201b, the electronic device determines the center-of-gravity position information according to the third information, and determines, based on a preset model, the target background boundary line from at least one background boundary line of the background picture according to the category information of the objects in the background picture.
Optionally, in the embodiments of the present invention, the center-of-gravity position information may be the coordinate values of the center of gravity of the target object.
Optionally, in the embodiment of the present invention, the electronic device may use a vertex of the shooting interface (for example, a vertex of an upper left corner of the shooting interface in a landscape state) as an origin, use one edge line of the shooting interface as a horizontal axis (X axis), and use another edge line perpendicular to the one edge line as a vertical axis (Y axis), establish a coordinate system, obtain the coordinate value of at least one pixel point in the target object according to the third information, and then determine the coordinate value of the center of gravity of the target object according to the coordinate value of the at least one pixel point.
Optionally, in this embodiment of the present invention, if the shape of the target object is a regular shape (for example, a triangle, a rectangle, or a circle), the electronic device may obtain coordinate values of at least one feature pixel in the target object (for example, three vertices of the target object in the shape of the triangle, four vertices of the target object in the shape of the rectangle, or a center of the target object in the shape of the circle), and then calculate an average value of the coordinate values of the at least one feature pixel, so as to determine the average value as a coordinate value of a center of gravity of the target object.
Optionally, in this embodiment of the present invention, if the shape of the target object is a regular shape or an irregular shape, the electronic device may obtain coordinate values of all pixel points in the target object, and then calculate an average value of the coordinate values of all pixel points, so as to determine the average value as a coordinate value of the center of gravity of the target object, for example, the coordinate value of the center of gravity is (x0, y 0).
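Expressed in the coordinate system described above, the general case reduces to averaging the coordinates of all pixels belonging to the target object. A minimal sketch (the mask array is assumed to come from the segmentation step):

```python
import numpy as np

def center_of_gravity(foreground_mask: np.ndarray) -> tuple[float, float]:
    """Mean coordinate (x0, y0) of all pixels belonging to the target object.

    foreground_mask: boolean (H, W) array, e.g. from the segmentation sketch.
    The origin is the top-left corner of the shooting interface.
    """
    ys, xs = np.nonzero(foreground_mask)  # row (y) and column (x) indices
    return float(xs.mean()), float(ys.mean())
```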
In an embodiment of the invention, the at least one background boundary line includes a boundary line between every two adjacent objects in the plurality of objects in the background picture.
It can be understood that, when the background picture includes a plurality of objects, a boundary line exists between adjacent objects; that is, the boundary line between every two adjacent objects may serve as a background boundary line, and one or more background boundary lines may exist in the background picture.
Optionally, in the embodiment of the present invention, the background boundary line may also be understood as a boundary line of different picture areas in the background picture, and a difference between image parameters of two adjacent picture areas is greater than or equal to a preset value.
Optionally, in an embodiment of the present invention, the image parameter may include at least one of: brightness information of an image, color information of an image, category information of an image, and the like.
Optionally, in this embodiment of the present invention, the electronic device may use a vertex of the shooting interface (e.g., a vertex at the top left corner of the shooting interface in the landscape state) as an origin, use one edge line of the shooting interface as a horizontal axis (X axis), and use another edge line perpendicular to the one edge line as a vertical axis (Y axis), establish a coordinate system, and then determine the position information of the background boundary line based on the coordinate system, for example, the position information of the background boundary line may be a set of two-dimensional array [ xi, yi ].
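One way to realize the boundary line's position information as a set of two-dimensional points [xi, yi] is to scan each image column for the transition between two class regions. The following is a sketch under that assumption; the sky class id is hypothetical:

```python
import numpy as np

SKY_CLASS = 3  # hypothetical label id for "sky" in the segmentation output

def background_boundary(labels: np.ndarray) -> list[tuple[int, int]]:
    """Boundary between the sky region and what lies below, as [(xi, yi), ...].

    labels: (H, W) integer class map. For each column, the lowest sky pixel
    is taken as a boundary point.
    """
    points = []
    for x in range(labels.shape[1]):
        rows = np.nonzero(labels[:, x] == SKY_CLASS)[0]
        if rows.size:
            points.append((x, int(rows.max())))
    return points
```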
Optionally, in the embodiments of the present invention, a data set of pictures (for example, a plurality of landscape pictures and a plurality of person pictures) may be selected and calibrated in advance, and then input into the semantic segmentation network for training to obtain a preset model, that is, a model that can output information such as the positions, shapes, and categories of the foreground and the background, so that the electronic device can subsequently determine the information of the target object and the information of the background picture (e.g., the position information of the target background boundary line) based on the model.
It can be understood that, for the target background boundary line, the electronic device may obtain, through the preset model, which types of objects are included in the background picture of the shot image, and then select the background boundary line corresponding to the object of the target object type as the target background boundary line, that is, determine the target background boundary line from the at least one background boundary line (for example, when the background picture includes objects of types such as sky, mountain peak, and cloud, the electronic device may determine the boundary line corresponding to the sky as the target background boundary line).
In the embodiments of the present invention, the electronic device can accurately and quickly acquire the third information of the target object and the object category information of the background picture through the semantic segmentation network, so as to accurately determine the center-of-gravity position information of the target object and the target background boundary line of the background picture.
Step 202, the electronic device determines, according to the target picture information, a target composition method corresponding to the shot image.
Optionally, in this embodiment of the present invention, as shown in fig. 4 in combination with fig. 2, the step 202 may be specifically implemented by a step 202a described below.
Step 202a, the electronic device determines the target composition method according to the target picture information through a logistic regression classification model.
In the embodiments of the present invention, the target composition method may be any one of the following: a center composition method, a horizontal line composition method, a vertical line composition method, a rule-of-thirds composition method, and a symmetric composition method.
Optionally, in the embodiments of the present invention, the electronic device may input the target picture information into the logistic regression classification model to perform logistic regression classification on it, so as to determine the target composition method from a plurality of composition methods pre-stored in the electronic device (that is, the output of the logistic regression classification model is the target composition method).
Optionally, in the embodiments of the present invention, the electronic device may input the category information of the target object, the position information of the target object, the category information of the background picture, the position information of the target background boundary line, and the like into the logistic regression classification model to determine the target composition method.
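As a sketch of step 202a, the picture information can be flattened into a feature vector and fed to a multiclass logistic regression classifier. The patent names only the model family; the feature layout, file names, and training data below are assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

COMPOSITIONS = ["center", "horizontal_line", "vertical_line",
                "rule_of_thirds", "symmetric"]

# Hypothetical training data; each feature row might hold, for example,
# [object class id, x0, y0, background class id, mean boundary y, boundary slope].
X_train = np.load("features.npy")   # assumed file of labeled picture information
y_train = np.load("labels.npy")     # assumed indices into COMPOSITIONS

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

def choose_composition(features: np.ndarray) -> str:
    """Map one picture-information vector to a composition method name."""
    return COMPOSITIONS[int(clf.predict(features.reshape(1, -1))[0])]
```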
It should be noted that the target composition methods are described in the following embodiments and are not repeated here.
It can be understood that, in the embodiments of the present invention, different picture information corresponds to different composition methods; that is, the electronic device may determine, from the plurality of composition methods and according to the acquired picture information of the shot image, the appropriate composition method corresponding to that picture information, so as to process the shot image reasonably and obtain an image with a better effect.
Step 203, the electronic device processes the shot image by using the target composition method to obtain a processed image.
Optionally, in the embodiments of the present invention, the electronic device may add/draw corresponding auxiliary lines according to the matched composition method (i.e., the target composition method), so as to perform cropping processing or rotation processing on the shot image.
Optionally, in an implementation manner of the embodiment of the present invention, the step 203 may be specifically implemented by a step 203a described below.
Step 203a, the electronic device adds a horizontal auxiliary line and a vertical auxiliary line based on the center of gravity of the target object, and performs cropping processing on the shot image according to the vertical distances from the horizontal auxiliary line and the vertical auxiliary line to the corresponding edge lines of the display screen, to obtain a cropped image.
Wherein, after the cropping processing, the center of gravity of the target object is located in the central region of the image after the cropping processing.
It can be understood that, in the case where the target composition method is the center composition method, the electronic device may place the subject (the main scene/target object) at the center of the picture for composition; that is, it adds corresponding auxiliary lines based on the center-of-gravity position information of the subject and crops the shot image so that the target object in the cropped image is located at the center.
Optionally, in the embodiment of the present invention, the electronic device may add a horizontal auxiliary line and a vertical auxiliary line (i.e., a vertical line) on the captured image according to the target pixel point (i.e., the center of gravity point) indicated by the center of gravity position information, where the horizontal auxiliary line and the vertical auxiliary line intersect at the center of gravity point and are perpendicular to each other; then, the electronic device may acquire a vertical distance between the horizontal auxiliary line and two edge lines of the display screen (both of which are parallel to the horizontal auxiliary line), and a vertical distance between the vertical auxiliary line and the other two edge lines of the display screen (both of which are parallel to the vertical auxiliary line), to perform a cropping process on the captured image according to the four vertical distances.
Optionally, in this embodiment of the present invention, if the first distance (the vertical distance between the horizontal auxiliary line and the first edge line of the display screen) is greater than the second distance (the vertical distance between the horizontal auxiliary line and the second edge line of the display screen), the electronic device may perform a horizontal cropping process on a portion of the picture close to the first edge line in the captured image, so that a difference between the vertical distance between the horizontal auxiliary line and the first edge line of the display screen and the second distance is within a preset range (e.g., equal). Similarly, if the first distance is smaller than the second distance, the electronic device may perform horizontal cropping on the portion of the picture near the second edge line in the captured image, so that the difference between the vertical distance between the horizontal auxiliary line and the second edge line of the display screen and the first distance is within a preset range (e.g., equal). If the first distance is equal to the second distance, the electronic device may not perform the cropping processing in the horizontal direction on the captured image.
Optionally, in this embodiment of the present invention, if the third distance (the vertical distance between the vertical auxiliary line and the third edge line of the display screen) is greater than the fourth distance (the vertical distance between the vertical auxiliary line and the fourth edge line of the display screen), the electronic device may perform a vertical cropping process on a portion of the picture close to the third edge line in the captured image, so that a difference between the vertical distance between the vertical auxiliary line and the third edge line of the display screen and the fourth distance is within a preset range (e.g., equal). Similarly, if the third distance is smaller than the fourth distance, the electronic device may perform a cropping process in the vertical direction on the portion of the picture near the fourth edge line in the captured image, so that the difference between the vertical distance between the vertical auxiliary line and the fourth edge line of the display screen and the third distance is within a preset range (e.g., equal). If the third distance is equal to the fourth distance, the electronic device may not perform the cropping processing in the vertical direction on the captured image.
It should be noted that, in the embodiment of the present invention, in the horizontal screen state, the horizontal auxiliary line is a straight line parallel to a long edge line of the display screen of the electronic device, and in the vertical screen state, the horizontal auxiliary line is a straight line parallel to a short edge line of the display screen of the electronic device; in the horizontal screen state, the vertical auxiliary line is a straight line perpendicular to a long edge line of the display screen of the electronic device, and in the vertical screen state, the vertical auxiliary line is a straight line perpendicular to a short edge line of the display screen of the electronic device.
The following takes a mobile phone as an example of the electronic device. As shown in (A) of fig. 5, the mobile phone in the landscape state displays a shot image including a target object 10 and a background picture 11, and the center of gravity of the target object 10 is point a. The mobile phone can add a horizontal auxiliary line 12 and a vertical auxiliary line 13 based on the center of gravity, where the two lines intersect at point a and are perpendicular to each other. As shown in (B) of fig. 5, the mobile phone may acquire the vertical distance d1 between the horizontal auxiliary line 12 and the first edge line of the display screen, the vertical distance d2 between the horizontal auxiliary line 12 and the second edge line, the vertical distance d3 between the vertical auxiliary line 13 and the third edge line, and the vertical distance d4 between the vertical auxiliary line 13 and the fourth edge line. As shown in (C) of fig. 5, if d1 is greater than d2 and d4 is greater than d3, the mobile phone may crop, in the horizontal direction, the picture portion near the first edge line (e.g., the L1 portion) and crop, in the vertical direction, the picture portion near the fourth edge line (e.g., the L2 portion), to obtain the shot image shown in (D) of fig. 5.
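Putting the distance rules of step 203a into code, the following sketch crops the largest window that is symmetric about the center of gravity, which makes the paired vertical distances equal (NumPy array conventions are assumptions):

```python
import numpy as np

def center_crop(image: np.ndarray, x0: float, y0: float) -> np.ndarray:
    """Crop so the target object's center of gravity sits at the image center.

    Keeps the largest window symmetric about (x0, y0), which makes the
    distances from the auxiliary lines to the opposite edges equal
    (the d1 = d2 and d3 = d4 condition described above).
    """
    h, w = image.shape[:2]
    half_w = min(x0, w - x0)  # limited by the nearer vertical edge
    half_h = min(y0, h - y0)  # limited by the nearer horizontal edge
    top, bottom = int(y0 - half_h), int(y0 + half_h)
    left, right = int(x0 - half_w), int(x0 + half_w)
    return image[top:bottom, left:right]
```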
Optionally, in another implementation manner of the embodiment of the present invention, the step 203 may be specifically implemented by a step 203b described below.
Step 203b, the electronic device adds a horizontal auxiliary line based on the target background boundary line, and performs rotation processing on the shot image according to the value of the included angle between the horizontal auxiliary line and the target background boundary line, to obtain a rotated image.
In an embodiment of the present invention, the horizontal auxiliary line in step 203b is a straight line drawn by using an intersection point of the target background boundary line and an edge line of the display screen as a reference point, and the horizontal auxiliary line is perpendicular to the edge line. After the rotation process, the target background boundary line is parallel to the horizontal auxiliary line.
It can be understood that, in the case where the target composition method is the horizontal line composition method, the electronic device may add the corresponding auxiliary line according to the position information of the target background boundary line and rotate the shot image so that the target object and/or the background picture in the rotated image is in a horizontal state (e.g., the target background boundary line is parallel to the horizontal auxiliary line).
In the embodiment of the invention, the electronic device can draw a horizontal auxiliary line based on the intersection point of the target background boundary line and an edge line of the display screen as a reference point, and then rotate the shot image along the clockwise or counterclockwise direction by taking the included angle value between the horizontal auxiliary line and the target background boundary line as a rotation angle value.
The electronic device may determine a direction in which the target object is deflected, using the target object as a reference object, and then perform rotation processing on the captured image in a direction opposite to the direction.
Illustratively, as shown in fig. 6 (a), the mobile phone in the landscape state displays a shot image, the shot image includes the target object 10 and the background picture 11, an intersection point of the background boundary line 14 of the background picture 11 and the third edge line is a point B, the mobile phone may draw a horizontal auxiliary line 15 perpendicular to the third edge line with the intersection point as a reference point, an included angle between the horizontal auxiliary line 15 and the background boundary line 14 is β, and the mobile phone may perform rotation processing on the shot image along the direction 16 based on the included angle β to obtain the shot image shown in fig. 6 (B).
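A sketch of the rotation in step 203b: fit the boundary points to a line, take its angle beta to the horizontal, and rotate the image back by that angle. The least-squares fit and the OpenCV calls are assumptions, not the patent's prescription:

```python
import cv2
import numpy as np

def level_horizon(image: np.ndarray,
                  boundary: list[tuple[int, int]]) -> np.ndarray:
    """Rotate the image so the fitted background boundary line becomes horizontal."""
    pts = np.asarray(boundary, dtype=np.float64)
    slope, _ = np.polyfit(pts[:, 0], pts[:, 1], 1)  # fit y = slope * x + b
    beta = np.degrees(np.arctan(slope))             # angle to the horizontal
    h, w = image.shape[:2]
    # In OpenCV a positive angle rotates counter-clockwise on screen, which
    # levels a boundary that descends to the right (positive slope, y down).
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), beta, 1.0)
    return cv2.warpAffine(image, rot, (w, h))
```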
Optionally, in another implementation manner of the embodiment of the present invention, the step 203 may be specifically implemented by a step 203c described below.
Step 203c, the electronic device adds a first auxiliary line, a horizontal auxiliary line, a second auxiliary line, and a vertical auxiliary line based on the center of gravity of the target object and the shape information of the target object, and performs rotation processing on the shot image according to the value of the included angle between the first auxiliary line and the horizontal auxiliary line, to obtain a rotated image.
The first auxiliary line is perpendicular to the second auxiliary line, and the horizontal auxiliary line is perpendicular to the vertical auxiliary line. After the rotation processing, the value of the included angle between the second auxiliary line and the vertical auxiliary line is smaller than or equal to a preset threshold value. The first auxiliary line, the horizontal auxiliary line, the second auxiliary line, and the vertical auxiliary line intersect at the center of gravity of the target object.
It should be noted that an included angle between the first auxiliary line and the horizontal auxiliary line is an acute angle, an included angle between the second auxiliary line and the vertical auxiliary line is an acute angle, and in the counterclockwise direction, an included angle value from the horizontal auxiliary line to the first auxiliary line is smaller than an included angle value from the horizontal auxiliary line to the second auxiliary line, that is, the first auxiliary line may be understood as an auxiliary line in the horizontal direction of the target object (that is, the first auxiliary line is parallel to one characteristic edge line of the target object), and the second auxiliary line may be understood as an auxiliary line in the vertical direction of the target object (that is, the second auxiliary line is perpendicular to one characteristic edge line of the target object). Under the condition that an included angle exists between the first auxiliary line and the horizontal auxiliary line, an included angle also exists between the second auxiliary line and the vertical auxiliary line, and the value of the included angle between the second auxiliary line and the vertical auxiliary line is equal to the value of the included angle between the first auxiliary line and the horizontal auxiliary line.
It can be understood that, in the case where the target composition method is the vertical line composition method, the electronic device may add corresponding auxiliary lines according to the center-of-gravity point and the shape information of the subject (i.e., the target object), and rotate the shot image so that the target object and/or the background picture in the rotated image is in a vertical state (e.g., one characteristic edge line of the target object is perpendicular to the vertical auxiliary line, that is, the value of the included angle between the second auxiliary line and the vertical auxiliary line is 0, and/or the target background boundary line of the background picture is perpendicular to the vertical auxiliary line).
Alternatively, in this embodiment of the present invention, the electronic device may draw a horizontal auxiliary line and a vertical auxiliary line based on the center of gravity of the target object, determine a characteristic edge line of the target object according to the shape information of the target object, draw an auxiliary line (i.e., a first auxiliary line) parallel to the characteristic edge line and an auxiliary line (i.e., a second auxiliary line) perpendicular to the characteristic edge line based on the characteristic edge line of the target object, calculate an included angle value between the horizontal auxiliary line and the first auxiliary line (and/or an included angle value between the vertical auxiliary line and the second auxiliary line), and rotate the captured image based on the included angle value.
It should be noted that the feature edge line can be understood as: one edge line of the target object may reflect the shape of the target object and the current state (e.g., horizontal state, inclined state, or vertical state), for example, when the shape of the target object is a triangle, a feature edge line of the target object is the base of the triangle. For the description of the rotation direction, reference may be made to the description in step 203b in the above embodiment, and details are not repeated here.
Illustratively, as shown in (a) of fig. 7, the mobile phone in the landscape state displays a captured image including a target object 10 and a background picture 11, the center of gravity of the target object 10 is a point a, the mobile phone may add a horizontal auxiliary line 12 and a vertical auxiliary line 13 based on the center of gravity, and add a first auxiliary line 17 and a second auxiliary line 18 based on the shape information of the target object 10, an included angle value between the horizontal auxiliary line 12 and the first auxiliary line 17 is α (an included angle value between the vertical auxiliary line 13 and the second auxiliary line 18 is α), and the mobile phone may perform rotation processing on the captured image along a direction 16 based on the included angle value α to obtain the captured image shown in (B) of fig. 7.
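For step 203c, the tilt angle alpha of the characteristic edge line can be estimated directly from the object's mask, for example via the minimum-area bounding rectangle; this estimator is an assumed implementation detail, and the image is then rotated by alpha as in the previous sketch:

```python
import cv2
import numpy as np

def object_tilt(foreground_mask: np.ndarray) -> float:
    """Angle alpha (degrees) between the object's characteristic edge and the horizontal."""
    contours, _ = cv2.findContours(foreground_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    _, _, angle = cv2.minAreaRect(largest)  # angle of the min-area rectangle
    # Fold into [-45, 45] so we report the smaller deviation from horizontal
    # (minAreaRect's angle convention varies slightly across OpenCV versions).
    return angle if angle <= 45 else angle - 90
```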
Optionally, in another implementation manner of the embodiment of the present invention, the step 203 may be specifically implemented by a step 203d described below.
Step 203d, the electronic device adds a plurality of horizontal auxiliary lines and a plurality of vertical auxiliary lines in the shooting interface to divide the shooting interface into a plurality of areas, and performs cropping processing on the shot image according to the distances between the center of gravity of the target object and the vertices of the central area, to obtain a cropped image.
In this embodiment of the present invention, the central area in step 203d is an area located at a central position among the plurality of areas. After the clipping process, the center of gravity of the target object is located in the center region.
It can be understood that, in the case where the target composition method is the rule-of-thirds composition method, the electronic device may add corresponding auxiliary lines in the shooting interface to divide the shooting interface into a plurality of areas and crop the shot image so that the target object in the cropped image is located in the central area.
Optionally, in the embodiments of the present invention, the electronic device may add two horizontal auxiliary lines and two vertical auxiliary lines to divide the shooting interface into nine areas (for example, rectangular areas), and then determine whether the center-of-gravity point of the target object is within the central area. If the center-of-gravity point is not within the central area, the electronic device obtains the distances between the center-of-gravity point and the vertices of the central area (for example, each of the four vertices), determines the region to be cropped in the shot image according to these distances, and crops the shot image so that the center-of-gravity point of the target object in the cropped image is within the central area. Of course, if the center of gravity of the target object is already within the central area, the electronic device may not crop the shot image.
Optionally, in the embodiments of the present invention, the electronic device may scale the thirds grid along the direction of the longest line connecting the center-of-gravity point and a vertex of the central area, so as to implement the cropping processing on the shot image.
Illustratively, as shown in (A) of fig. 8, the mobile phone in the landscape state displays a shot image that includes a target object 10 and a background picture 11, and the center of gravity of the target object 10 is point a. The mobile phone may add a horizontal auxiliary line 19, a horizontal auxiliary line 20, a vertical auxiliary line 21, and a vertical auxiliary line 22 to divide the shooting interface into nine areas. Then, in the case that point a is not in the central area, the mobile phone may obtain the distances d5, d6, d7, and d8 (not shown in the figure) between point a and each vertex of the central area (i.e., point c, point d, point e, and point f), where d8 > d7 > d5 > d6, and determine the regions to be cropped as the L3 portion and the L4 portion shown in (B) of fig. 8 based on these distances. The mobile phone may then perform cropping processing on the L3 portion and the L4 portion to obtain the shot image shown in (C) of fig. 8.
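A simplified sketch of step 203d: divide the frame into nine equal regions and, if the center of gravity falls outside the central one, trim the far side(s) until it falls inside. The equal-thirds grid and the one-sided trimming rule are assumptions that simplify the vertex-distance procedure above:

```python
import numpy as np

def thirds_crop(image: np.ndarray, x0: float, y0: float) -> np.ndarray:
    """Trim the frame until the center of gravity lies in the central ninth."""
    h, w = image.shape[:2]
    left = top = 0
    right, bottom = w, h
    if x0 < w / 3:            # too far left: trim the right side
        right = int(3 * x0)
    elif x0 > 2 * w / 3:      # too far right: trim the left side
        left = int(3 * x0 - 2 * w)
    if y0 < h / 3:            # too high: trim the bottom
        bottom = int(3 * y0)
    elif y0 > 2 * h / 3:      # too low: trim the top
        top = int(3 * y0 - 2 * h)
    return image[top:bottom, left:right]
```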
Optionally, in another implementation manner of the embodiment of the present invention, the step 203 may be specifically implemented by a step 203e described below.
Step 203e, the electronic device adds a vertical auxiliary line based on the center of gravity of the target object, and performs cropping processing on the shot image according to the vertical distances between the vertical auxiliary line and the edge lines of the display screen, to obtain a cropped image.
After the cropping processing, the target object is symmetric about the vertical auxiliary line.
It can be understood that, in the case where the target composition method is the symmetric composition method, the electronic device may draw the corresponding vertical auxiliary line based on the center-of-gravity point of the subject and crop the shot image so that the cropped image is bilaterally symmetric about the vertical auxiliary line.
It should be noted that, for the method of the clipping processing in step 203e, reference may be made to the description in step 203a in the foregoing embodiment, and details are not repeated here.
Optionally, in the embodiments of the present invention, after cropping the shot image, the electronic device may determine whether the target objects and/or background pictures on both sides of the vertical auxiliary line are symmetric; if they are asymmetric, the electronic device mirrors the uncropped side using the vertical auxiliary line as the reference line, to obtain a symmetric shot image.
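For the symmetric composition of step 203e, the following sketch crops equal widths about the vertical auxiliary line and, if the two halves still differ noticeably, mirrors one half onto the other; the pixel-difference test and its threshold are assumptions:

```python
import numpy as np

def symmetric_crop(image: np.ndarray, x0: float, tol: float = 12.0) -> np.ndarray:
    """Crop equal widths about the vertical line x = x0, then enforce symmetry."""
    h, w = image.shape[:2]
    half = int(min(x0, w - x0))
    out = image[:, int(x0) - half:int(x0) + half]
    left, right = out[:, :half], out[:, half:]
    # Assumed symmetry test: mean absolute pixel difference between the left
    # half and the mirrored right half; mirror the left half if it fails.
    if np.abs(left.astype(np.int16) - right[:, ::-1].astype(np.int16)).mean() > tol:
        out = np.concatenate([left, left[:, ::-1]], axis=1)
    return out
```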
In the embodiments of the present invention, after the electronic device shoots through the wide-angle camera, it can determine the main scene and the background picture in the shot picture, including their shapes, colors, categories, and the like, according to the semantic segmentation network, and match the most appropriate composition method, so as to assist in composing the obtained shot picture and automatically crop or rotationally adjust it, providing the adjusted result to the user and thereby improving the user's shooting experience with the wide-angle lens.
The embodiments of the present invention provide an image processing method in which the electronic device can acquire target picture information of a shot image (including information of a target object in the shot image and/or information of a background picture) through a semantic segmentation network, and process the shot image by using the corresponding target composition method determined according to the target picture information, to obtain a processed image. After obtaining a shot image, the electronic device can acquire the information of the target object and/or the information of the background picture in the shot image, determine an appropriate composition method according to the information, and then process the shot image by using the composition method to obtain a processed image. That is, the electronic device can flexibly process each shot image based on its picture information to obtain an image with a better effect, so that the shooting effect of the electronic device is better.
Optionally, in the embodiments of the present invention, after step 203a, step 203b, step 203c, step 203d, or step 203e, the image processing method provided by the embodiments of the present invention further includes steps 301 and 302 described below.
Step 301, the electronic device receives a target input of a user when the target auxiliary line is displayed on the shooting interface.
In the embodiments of the present invention, the target auxiliary line includes at least one of the following: a horizontal auxiliary line and a vertical auxiliary line, and the target input is an input performed by the user on the shot image based on the target auxiliary line.
In the embodiment of the invention, after the electronic device processes the shot image, the added auxiliary lines can be reserved, so that the user can perform adjustment operation on the shot image based on the auxiliary lines, so that the electronic device processes the shot image again.
Optionally, in an embodiment of the present invention, the target input may be a drag input, an interactive input, a rotation input, or the like of the user on the captured image.
Step 302, the electronic device, in response to the target input, performs cropping processing or rotation processing on the shot image again to obtain a processed image.
It should be noted that, for the method for performing the cropping processing or the rotation processing on the shot image by the electronic device, reference may be made to the description in the foregoing embodiment, and details are not described here again.
In the embodiment of the invention, the electronic equipment can reserve the added auxiliary line, so that a user can flexibly adjust the shot image according to the use requirement, and the image with better effect is obtained.
Fig. 9 shows a schematic diagram of a possible structure of an electronic device involved in the embodiment of the present invention. As shown in fig. 9, the electronic device 90 may include: an acquisition module 91, a determination module 92 and a processing module 93.
The acquiring module 91 is configured to acquire target picture information of the shot image through a semantic segmentation network when the shot image is displayed on the shooting interface, where the target picture information includes at least one of the following items: first information and second information, the first information being information of a target object in the shot image and the second information being information of a background picture in the shot image. The determining module 92 is configured to determine a target mapping method corresponding to the shot image according to the target picture information acquired by the acquiring module 91. The processing module 93 is configured to process the shot image by using the target mapping method determined by the determining module 92 to obtain a processed image.
In one possible implementation, the first information includes center-of-gravity position information of the target object, and the second information includes position information of a target background boundary line of the background picture, the target background boundary line being the background boundary line corresponding to the target object type in the background picture.
In a possible implementation manner, the determining module 92 is specifically configured to determine the target mapping method according to the target picture information through a logistic regression classification model, where the target mapping method is any one of: a center mapping method, a horizontal line mapping method, a vertical line mapping method, a trisection mapping method, and a symmetrical mapping method.
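For concreteness, such a logistic regression classifier might be trained and queried as in the scikit-learn sketch below; the feature encoding and the training rows are invented assumptions, not data from the disclosure:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Assumed feature layout: [normalized cog_x, normalized cog_y,
    # background category id, normalized boundary-line height].
    # Labels index the five mapping methods: 0=center, 1=horizontal line,
    # 2=vertical line, 3=trisection, 4=symmetrical.
    X = np.array([[0.50, 0.50, 0, 0.00],
                  [0.40, 0.60, 2, 0.65],
                  [0.52, 0.45, 1, 0.30],
                  [0.30, 0.35, 3, 0.70],
                  [0.50, 0.55, 4, 0.50]])
    y = np.array([0, 1, 2, 3, 4])

    clf = LogisticRegression(max_iter=1000).fit(X, y)
    method = clf.predict([[0.48, 0.52, 2, 0.60]])[0]  # chosen mapping method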
In a possible implementation manner, the processing module 93 is specifically configured to add a horizontal auxiliary line and a vertical auxiliary line based on the center of gravity of the target object, and to crop the shot image according to the vertical distances from the horizontal auxiliary line and the vertical auxiliary line to the corresponding edge lines of the display screen, so as to obtain a cropped image; after the cropping processing, the center of gravity of the target object is located in the central region of the cropped image.
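Read literally, this step trims each axis so that the center of gravity ends up centered; a minimal NumPy sketch under that assumption (cog is taken to be in pixel coordinates):

    def center_crop(image, cog):
        # cog = (cx, cy): center of gravity of the target object, in pixels.
        h, w = image.shape[:2]
        cx, cy = cog
        # Keep the largest window centered on the center of gravity: on
        # each axis the usable half-extent is the shorter distance from
        # the auxiliary line to the screen edge.
        half_w, half_h = min(cx, w - cx), min(cy, h - cy)
        return image[int(cy - half_h):int(cy + half_h),
                     int(cx - half_w):int(cx + half_w)]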
In a possible implementation manner, the processing module 93 is specifically configured to add a horizontal auxiliary line based on the target background boundary line, and to rotate the shot image according to the included angle between the horizontal auxiliary line and the target background boundary line, so as to obtain a rotated image. The horizontal auxiliary line is a straight line drawn through the intersection point of the target background boundary line and one edge line of the display screen, perpendicular to that edge line; after the rotation processing, the target background boundary line is parallel to the horizontal auxiliary line.
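One plausible OpenCV rendering of this rotation step; the two points on the boundary line are assumed to come from the segmentation result:

    import cv2
    import numpy as np

    def level_boundary(image, p1, p2):
        # p1, p2: two points on the target background boundary line.
        (x1, y1), (x2, y2) = p1, p2
        # Included angle between the boundary line and the horizontal
        # auxiliary line, in degrees.
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
        h, w = image.shape[:2]
        # Rotate about the image center so that, after rotation, the
        # boundary line is parallel to the horizontal auxiliary line.
        m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        return cv2.warpAffine(image, m, (w, h))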
In a possible implementation manner, the processing module 93 is specifically configured to add a first auxiliary line, a horizontal auxiliary line, a second auxiliary line, and a vertical auxiliary line based on the center of gravity of the target object and the shape information of the target object, and to rotate the shot image according to the included angle between the first auxiliary line and the horizontal auxiliary line, so as to obtain a rotated image. The first auxiliary line is perpendicular to the second auxiliary line, and the horizontal auxiliary line is perpendicular to the vertical auxiliary line; after the rotation processing, the included angle between the second auxiliary line and the vertical auxiliary line is smaller than or equal to a preset threshold.
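The vertical correction can be sketched the same way; here the tilt angle between the object's second auxiliary line and the vertical auxiliary line is assumed to have been measured already:

    import cv2

    def align_vertical(image, tilt_deg, threshold_deg=1.0):
        # tilt_deg: included angle between the second auxiliary line and
        # the vertical auxiliary line; its sign gives the tilt direction.
        if abs(tilt_deg) <= threshold_deg:
            return image  # already within the preset threshold
        h, w = image.shape[:2]
        m = cv2.getRotationMatrix2D((w / 2, h / 2), tilt_deg, 1.0)
        return cv2.warpAffine(image, m, (w, h))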
In a possible implementation manner, the processing module 93 is specifically configured to add a plurality of horizontal auxiliary lines and a plurality of vertical auxiliary lines in the shooting interface to divide the shooting interface into a plurality of regions, and to crop the shot image according to the distance between the center of gravity of the target object and each vertex of the central region, so as to obtain a cropped image. The central region is the region located at the central position among the plurality of regions; after the cropping processing, the center of gravity of the target object is located in the central region.
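Under one reading of the trisection step, the frame is trimmed just enough that the 3x3 grid recomputed on the cropped frame contains the center of gravity in its central cell; the crop arithmetic below is an assumption:

    def thirds_crop(image, cog):
        # cog = (cx, cy): center of gravity of the target object, in pixels.
        h, w = image.shape[:2]
        cx, cy = cog
        x0, x1, y0, y1 = 0, w, 0, h
        if cx < w / 3:           # too far left: trim the right edge
            x1 = int(3 * cx)
        elif cx > 2 * w / 3:     # too far right: trim the left edge
            x0 = int(3 * cx - 2 * w)
        if cy < h / 3:           # too high: trim the bottom edge
            y1 = int(3 * cy)
        elif cy > 2 * h / 3:     # too low: trim the top edge
            y0 = int(3 * cy - 2 * h)
        # After the crop, w' = x1 - x0 and cx - x0 lies in [w'/3, 2w'/3],
        # i.e. the center of gravity falls in the central region.
        return image[y0:y1, x0:x1]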
In a possible implementation manner, the processing module 93 is specifically configured to add a vertical auxiliary line based on the center of gravity of the target object, and to crop the shot image according to the vertical distance between the vertical auxiliary line and an edge line of the display screen, so as to obtain a cropped image; after the cropping processing, the target object is symmetric about the vertical auxiliary line.
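A minimal sketch of the symmetric crop, assuming the vertical auxiliary line passes through the center of gravity:

    def symmetric_crop(image, cog):
        # Trim the wider side so that the vertical auxiliary line through
        # the center of gravity bisects the cropped frame.
        h, w = image.shape[:2]
        cx = cog[0]
        half = int(min(cx, w - cx))  # shorter horizontal distance to an edge
        return image[:, int(cx) - half:int(cx) + half]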
In a possible implementation manner, the target picture information includes at least the first information. The acquiring module 91 is specifically configured to acquire third information of the target object and object category information of the background picture through a semantic segmentation network, where the third information includes at least one of the following: size information, shape information, color information, position information, and category information of the target object; to determine the center-of-gravity position information according to the third information; and to determine the target background boundary line from at least one background boundary line of the background picture according to the category information of the objects in the background picture based on a preset model, where the at least one background boundary line includes the boundary line between every two adjacent objects among the plurality of objects in the background picture.
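The gravity-center computation itself could be as simple as a mask centroid; treating the center of gravity as the centroid of the object's segmented pixels is an assumption about the disclosure:

    import numpy as np

    def gravity_center(mask, target_class):
        # mask: HxW array of per-pixel class ids produced by the semantic
        # segmentation network.
        ys, xs = np.nonzero(mask == target_class)
        if xs.size == 0:
            return None  # target class absent from the frame
        return float(xs.mean()), float(ys.mean())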
In a possible implementation manner, referring to fig. 9 and as shown in fig. 10, the electronic device 90 provided in an embodiment of the present invention further includes a receiving module 94. The receiving module 94 is configured to receive a target input from the user when a target auxiliary line is displayed on the shooting interface after the processing module 93 processes the shot image to obtain a processed image, where the target auxiliary line includes at least one of: a horizontal auxiliary line and a vertical auxiliary line, and the target input is an input performed by the user on the shot image based on the target auxiliary line. The processing module 93 is further configured to perform cropping processing or rotation processing on the shot image again in response to the target input received by the receiving module 94, so as to obtain a processed image.
The electronic device provided by the embodiment of the present invention can implement each process implemented by the electronic device in the above method embodiments, and for avoiding repetition, detailed descriptions are not repeated here.
The embodiment of the invention provides an electronic device. After a shot image is obtained, the electronic device can first acquire the information of the target object and/or the information of the background picture in the shot image, then determine a suitable mapping method according to that information, and process the shot image with that method to obtain a processed image. That is, the electronic device can flexibly process each shot image based on its own picture information to obtain a shot image with a better effect, so the shooting effect of the electronic device is improved.
Fig. 11 is a hardware schematic diagram of an electronic device implementing various embodiments of the invention. As shown in fig. 11, electronic device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111.
It should be noted that the electronic device structure shown in fig. 11 does not constitute a limitation of the electronic device; as will be understood by those skilled in the art, the electronic device may include more or fewer components than those shown in fig. 11, combine some components, or arrange the components differently. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 110 is configured to, in a case that the captured image is displayed on the capturing interface, obtain target picture information of the captured image through the semantic segmentation network, where the target picture information includes at least one of: first information and second information, the first information is the information of the target object in the shot image, the second information is the information of the background picture in the shot image; determining a target mapping method corresponding to the shot image according to the target picture information; and processing the shot image by adopting a target mapping method to obtain a processed image.
The embodiment of the invention provides an electronic device. After a shot image is obtained, the electronic device can first acquire the information of the target object and/or the information of the background picture in the shot image, then determine a suitable mapping method according to that information, and process the shot image with that method to obtain a processed image. That is, the electronic device can flexibly process each shot image based on its own picture information to obtain a shot image with a better effect, so the shooting effect of the electronic device is improved.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during message transmission or a call; specifically, downlink data received from a base station is forwarded to the processor 110 for processing, and uplink data is transmitted to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 102, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102, or stored in the memory 109, into an audio signal and output it as sound. The audio output unit 103 may also provide audio output related to a specific function performed by the electronic device 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive an audio or video signal. The input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode, and the processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or another storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and process it into audio data; in a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101 and output.
The electronic device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor, which can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor, which can turn off the display panel 1061 and/or the backlight when the electronic device 100 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally along three axes) and the magnitude and direction of gravity when stationary, and can be used to identify the posture of the electronic device (such as switching between landscape and portrait modes, related games, and magnetometer posture calibration) and for vibration-identification-related functions (such as a pedometer and tapping detection). The sensors 105 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described in detail here.
The display unit 106 is used to display information input by a user or information provided to the user. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations performed by a user on or near it (e.g., operations performed on or near the touch panel 1071 using a finger, a stylus, or any suitable object or attachment). The touch panel 1071 may include a touch detection device and a touch controller. The touch detection device detects the position and orientation of the user's touch, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 110, and receives and executes commands sent by the processor 110. The touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072, which may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and switch keys), a trackball, a mouse, and a joystick; these are not described in detail here.
Further, the touch panel 1071 may be overlaid on the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, the touch operation is transmitted to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 11 the touch panel 1071 and the display panel 1061 are shown as two independent components implementing the input and output functions of the electronic device, in some embodiments the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the electronic device, which is not limited here.
The interface unit 108 is an interface for connecting an external device to the electronic apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 100 or may be used to transmit data between the electronic apparatus 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is the control center of the electronic device. It connects the various parts of the entire electronic device using various interfaces and lines, and performs the various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby monitoring the electronic device as a whole. The processor 110 may include one or more processing units. Optionally, the processor 110 may integrate an application processor, which mainly handles the operating system, user interfaces, and application programs, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may alternatively not be integrated into the processor 110.
The electronic device 100 may further include a power supply 111 (e.g., a battery) for supplying power to each component, and optionally, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the electronic device 100 includes some functional modules that are not shown, and are not described in detail herein.
Optionally, an embodiment of the present invention further provides an electronic device, which includes the processor 110 shown in fig. 11, the memory 109, and a computer program stored in the memory 109 and capable of running on the processor 110, where the computer program, when executed by the processor 110, implements the processes of the foregoing method embodiment, and can achieve the same technical effect, and details are not described here to avoid repetition.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the processes of the method embodiments, and can achieve the same technical effects, and in order to avoid repetition, the details are not repeated here. The computer-readable storage medium may be, for example, a read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling an electronic device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. An image processing method applied to an electronic device, the method comprising:
under the condition that a shot image is displayed on a shooting interface, acquiring target picture information of the shot image through a semantic segmentation network, wherein the target picture information comprises at least one of the following items: first information and second information, wherein the first information is information of a target object in the shot image, and the second information is information of a background picture in the shot image;
determining a target mapping method corresponding to the shot image according to the target picture information;
processing the shot image by adopting the target mapping method to obtain a processed image;
the first information comprises gravity center position information of the target object and category information of the target object, and the second information comprises category information of a background picture and position information of a target background boundary line of the background picture;
the target background boundary line is a background boundary line corresponding to a target object type in the background picture;
wherein the determining a target mapping method corresponding to the shot image according to the target picture information comprises:
inputting the gravity center position information of the target object, the category information of the background picture, and the position information of the target background boundary line into a logistic regression classification model to determine the target mapping method.
2. The method of claim 1, wherein the target mapping method is any one of: a center mapping method, a horizontal line mapping method, a vertical line mapping method, a trisection mapping method, and a symmetrical mapping method.
3. The method according to claim 1, wherein the processing the captured image using the object mapping method to obtain a processed image comprises:
adding a horizontal auxiliary line and a vertical auxiliary line based on the gravity center of the target object, and cropping the shot image according to the horizontal auxiliary line and the vertical auxiliary line to obtain a cropped image; wherein, after the cropping processing, the center of gravity of the target object is located in the central region of the cropped image;
or,
adding a horizontal auxiliary line based on the target background boundary line, and performing rotation processing on the shot image according to an included angle value between the horizontal auxiliary line and the target background boundary line to obtain a rotated image; the horizontal auxiliary line is a straight line drawn by taking the intersection point of the target background boundary line and one edge line of the display screen as a reference point, and the horizontal auxiliary line is perpendicular to the edge line; after the rotation processing, the target background boundary line is parallel to the horizontal auxiliary line;
or,
adding a first auxiliary line, a horizontal auxiliary line, a second auxiliary line and a vertical auxiliary line based on the gravity center of the target object and the shape information of the target object, and performing rotation processing on the shot image according to an included angle value between the first auxiliary line and the horizontal auxiliary line to obtain a rotated image; wherein the first auxiliary line is perpendicular to the second auxiliary line, and the horizontal auxiliary line is perpendicular to the vertical auxiliary line; after the rotation processing, the included angle value between the second auxiliary line and the vertical auxiliary line is smaller than or equal to a preset threshold value;
or,
adding a plurality of horizontal auxiliary lines and a plurality of vertical auxiliary lines in the shooting interface to divide the shooting interface into a plurality of areas, and cropping the shot image according to the distance between the center of gravity of the target object and each vertex of the central area to obtain a cropped image; wherein the central area is the area located at the central position among the plurality of areas; and wherein, after the cropping processing, the center of gravity of the target object is located in the central area;
or,
adding a vertical auxiliary line based on the gravity center of the target object, and cropping the shot image according to the vertical distance between the vertical auxiliary line and the edge line of the display screen to obtain a cropped image; wherein, after the cropping processing, the target object is symmetric about the vertical auxiliary line.
4. The method according to claim 1, wherein the obtaining of the target picture information of the captured image through the semantic segmentation network comprises:
acquiring third information of the target object and object category information of the background picture through the semantic segmentation network, wherein the third information comprises at least one of the following items: size information of the target object, shape information of the target object, color information of the target object, position information of the target object, and category information of the target object;
determining the gravity center position information according to the third information, and determining, based on a preset model, the target background boundary line from at least one background boundary line of the background picture according to the category information of the objects in the background picture;
wherein the at least one background boundary line comprises a boundary line between every two adjacent objects among the plurality of objects in the background picture.
5. An electronic device, characterized in that the electronic device comprises: the device comprises an acquisition module, a determination module and a processing module;
the acquisition module is used for acquiring target picture information of the shot image through a semantic segmentation network under the condition that the shot image is displayed on a shooting interface, wherein the target picture information comprises at least one of the following items: first information and second information, wherein the first information is information of a target object in the shot image, and the second information is information of a background picture in the shot image;
the determining module is used for determining a target mapping method corresponding to the shot image according to the target picture information acquired by the acquiring module;
the processing module is used for processing the shot image by adopting the target mapping method determined by the determining module to obtain a processed image;
the first information comprises gravity center position information of the target object and category information of the target object, and the second information comprises category information of a background picture and position information of a target background boundary line of the background picture;
the target background boundary line is a background boundary line corresponding to a target object type in the background picture; the determining module is specifically configured to input the barycentric location information of the target object, the category information of the background picture, and the location information of the target background boundary line into a logistic regression classification model to determine the target mapping method.
6. The electronic device of claim 5, wherein the target mapping method is any one of: a center mapping method, a horizontal line mapping method, a vertical line mapping method, a trisection mapping method, and a symmetrical mapping method.
7. The electronic device of claim 5, wherein the processing module is specifically configured to:
adding a horizontal auxiliary line and a vertical auxiliary line based on the gravity center of the target object, and cropping the shot image according to the vertical distances from the horizontal auxiliary line and the vertical auxiliary line to the corresponding edge lines of the display screen, so as to obtain a cropped image; wherein, after the cropping processing, the center of gravity of the target object is located in the central region of the cropped image;
or,
adding a horizontal auxiliary line based on the target background boundary line, and performing rotation processing on the shot image according to an included angle value between the horizontal auxiliary line and the target background boundary line to obtain a rotated image; the horizontal auxiliary line is a straight line drawn by taking the intersection point of the target background boundary line and one edge line of the display screen as a reference point, and the horizontal auxiliary line is perpendicular to the edge line; after the rotation processing, the target background boundary line is parallel to the horizontal auxiliary line;
or,
adding a first auxiliary line, a horizontal auxiliary line, a second auxiliary line and a vertical auxiliary line based on the gravity center of the target object and the shape information of the target object, and performing rotation processing on the shot image according to an included angle value between the first auxiliary line and the horizontal auxiliary line to obtain a rotated image; wherein the first auxiliary line is perpendicular to the second auxiliary line, and the horizontal auxiliary line is perpendicular to the vertical auxiliary line; after the rotation processing, the included angle value between the second auxiliary line and the vertical auxiliary line is smaller than or equal to a preset threshold value;
or,
adding a plurality of horizontal auxiliary lines and a plurality of vertical auxiliary lines in the shooting interface to divide the shooting interface into a plurality of areas, and cropping the shot image according to the distance between the center of gravity of the target object and each vertex of the central area to obtain a cropped image; wherein the central area is the area located at the central position among the plurality of areas; and wherein, after the cropping processing, the center of gravity of the target object is located in the central area;
or,
adding a vertical auxiliary line based on the gravity center of the target object, and cropping the shot image according to the vertical distance between the vertical auxiliary line and the edge line of the display screen to obtain a cropped image; wherein, after the cropping processing, the target object is symmetric about the vertical auxiliary line.
8. The electronic device according to claim 5, wherein the target screen information includes at least the first information;
the obtaining module is specifically configured to obtain, through the semantic segmentation network, third information of the target object and object category information of the background picture, where the third information includes at least one of the following information: size information of the target object, shape information of the target object, color information of the target object, position information of the target object, and category information of the target object; determining the gravity center position information according to the third information, and determining the target background boundary line from at least one background boundary line of the background picture according to the class information of the object in the background picture based on a preset model;
wherein the at least one background boundary line comprises a boundary line between every two adjacent objects among the plurality of objects in the background picture.
9. An electronic device, comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the image processing method according to any one of claims 1 to 4.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the image processing method according to any one of claims 1 to 4.


