CN115358919A - Image processing method, device, equipment and storage medium - Google Patents


Info

Publication number
CN115358919A
CN115358919A (application CN202210988925.6A)
Authority
CN
China
Prior art keywords
image
target object
dividing line
pixel value
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210988925.6A
Other languages
Chinese (zh)
Inventor
谢一杰
周栩彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202210988925.6A priority Critical patent/CN115358919A/en
Publication of CN115358919A publication Critical patent/CN115358919A/en
Priority to PCT/CN2023/113234 priority patent/WO2024037556A1/en
Pending legal-status Critical Current

Classifications

    • G06T3/04
    • G — PHYSICS
      • G06 — COMPUTING; CALCULATING OR COUNTING
        • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V10/00 — Arrangements for image or video recognition or understanding
            • G06V10/20 — Image preprocessing
              • G06V10/26 — Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
          • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
            • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
              • G06V40/16 — Human faces, e.g. facial parts, sketches or expressions
                • G06V40/161 — Detection; Localisation; Normalisation
                • G06V40/168 — Feature extraction; Face representation
                  • G06V40/171 — Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Abstract

The embodiments of the present disclosure provide an image processing method, apparatus, device, and storage medium. A target object in an original image is identified to obtain a target object image and a dividing line of the target object; the target object image is divided by the dividing line, with the image on either side of the dividing line serving as a first side image and the image on the other side serving as a second side image. The pixel values of the first side image are replaced with a set pixel value, and the second side image is mirrored based on the dividing line to obtain a first side mirror image; the first side mirror image and the second side image are mirror images of each other. According to the image processing method provided by the embodiments of the present disclosure, the target object on one side of the dividing line is mirrored based on the dividing line of the target object to obtain a mirror image of the target object, so that an image with a mirror special effect can be generated and the display content of the image is enriched.

Description

Image processing method, device, equipment and storage medium
Technical Field
The embodiments of the present disclosure relate to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, an image processing device, and a storage medium.
Background
In recent years, image processing applications (APPs) have developed rapidly, entering users' daily lives and gradually enriching their leisure time. Users can record their lives through videos, photos, and the like, and can reprocess images through the special-effect technologies provided by image processing APPs, so that the images are presented in richer forms, such as beautification, stylization, and expression editing.
Disclosure of Invention
The embodiment of the disclosure provides an image processing method, an image processing device, an image processing apparatus and a storage medium, which can generate an image with a special mirror image effect, enrich the display content of the image, and improve the interestingness of the image.
In a first aspect, an embodiment of the present disclosure provides an image processing method, including:
identifying a target object in an original image to obtain a target object image and a dividing line of the target object; wherein the target object image is divided into a first side image and a second side image by the dividing line;
replacing the pixel values of the first side image with a set pixel value;
carrying out mirror image processing on the second side image based on the dividing line to obtain a first side mirror image; wherein the first side mirror image and the second side image are mirror images of each other.
In a second aspect, an embodiment of the present disclosure further provides an image processing apparatus, including:
the target object identification module is used for identifying a target object in an original image to obtain a target object image and a dividing line of the target object; wherein the target object image is divided into a first side image and a second side image by the dividing line;
the pixel value replacing module is used for replacing the pixel value of the first side image with a set pixel value;
the mirror image processing module is used for carrying out mirror image processing on the second side image based on the dividing line to obtain a first side mirror image; the first side mirror image and the second side image are mirror images of each other.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, where the electronic device includes:
one or more processors;
a storage device for storing one or more programs, which,
when executed by the one or more processors, cause the one or more processors to implement the image processing method according to the embodiments of the present disclosure.
In a fourth aspect, the disclosed embodiments also provide a storage medium containing computer-executable instructions, which when executed by a computer processor, are used to perform the image processing method according to the disclosed embodiments.
The embodiment of the disclosure discloses an image processing method, an image processing device, an image processing apparatus and a storage medium, wherein a target object in an original image is identified to obtain a target object image and a dividing line of the target object; wherein the target object image is divided into a first side image and a second side image by a dividing line; replacing the pixel value of the first side image with a set pixel value; carrying out mirror image processing on the second side image based on the dividing line to obtain a first side mirror image; the first side mirror image and the second side image are mirror images. According to the image processing method provided by the embodiment of the disclosure, the target object at one side of the dividing line is subjected to mirror image processing based on the dividing line of the target object, so that the mirror image of the target object is obtained, an image with a mirror image special effect can be generated, and the display content of the image is enriched.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a schematic flowchart of an image processing method provided in an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of determining a segmentation line of a target object according to an embodiment of the disclosure;
FIG. 3a is a schematic diagram of a mirror target object provided by an embodiment of the present disclosure;
FIG. 3b is a schematic diagram of a mirror target object provided by an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the disclosure;
fig. 5 is a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein is intended to be open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence of the functions performed by the devices, modules or units.
It is noted that the modifiers "a" and "an" in this disclosure are intended to be illustrative rather than limiting, and those skilled in the art should understand them as "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
It should be understood that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed, in an appropriate manner in accordance with relevant laws and regulations, of the type, scope of use, and usage scenarios of the personal information involved, and the user's authorization should be obtained.
For example, in response to receiving an active request from a user, prompt information is sent to the user to explicitly prompt the user that the requested operation will require the acquisition and use of the user's personal information. The user can thus autonomously choose, according to the prompt information, whether to provide personal information to the software or hardware, such as an electronic device, application program, server, or storage medium, that performs the operations of the disclosed technical solution.
As an alternative but non-limiting implementation manner, in response to receiving an active request from the user, the manner of sending the prompt information to the user may be, for example, a pop-up window manner, and the prompt information may be presented in a text manner in the pop-up window. In addition, a selection control for providing personal information to the electronic device by the user's selection of "agreeing" or "disagreeing" can be carried in the popup.
It is understood that the above notification and user authorization process is only illustrative and not limiting, and other ways of satisfying relevant laws and regulations may be applied to the implementation of the present disclosure.
It will be appreciated that the data involved in the subject technology, including but not limited to the data itself, the acquisition or use of the data, should comply with the requirements of the corresponding laws and regulations and related regulations.
Fig. 1 is a schematic flowchart of an image processing method provided in an embodiment of the present disclosure, where the embodiment of the present disclosure is applicable to a situation where a target object in an image is subjected to mirroring, and the method may be executed by an image processing apparatus, where the apparatus may be implemented in the form of software and/or hardware, and optionally implemented by an electronic device, where the electronic device may be a mobile terminal, a PC terminal, or a server.
As shown in fig. 1, the method includes:
s110, identifying the target object in the original image to obtain the target object image and the segmentation line of the target object.
The target object image is divided by the dividing line: the image on either side of the dividing line serves as the first side image, and the image on the other side serves as the second side image. The target object may be any object, such as a portrait, an animal, or a building. The target object image can be understood as an image obtained by extracting the target object from the original image. The dividing line of the target object may be the central axis of the target object, serving as the line of symmetry for the subsequent mirroring process.
In this embodiment, the target object in the original image is identified, and the manner of obtaining the target object image may be: identifying a target object in the original image to obtain a target object mask image; and fusing the target object mask image and the original image to obtain a target object image.
The pixel value of each pixel point in the target object mask image represents the confidence that the pixel point belongs to the target object, and may be any value between 0 and 1, where 1 corresponds to white and 0 to black. The mask may be stored in a set color channel of the target object mask image, for example the red channel (R), the green channel (G), or the blue channel (B). Specifically, the manner of identifying the target object in the original image to obtain the target object mask image may be: inputting the original image into a target object identification model to identify whether each pixel belongs to the target object, obtaining the confidence that each pixel belongs to the target object, and thereby obtaining the target object mask image. The manner of fusing the target object mask image and the original image may be: multiplying the pixel values of the target object mask image by the color values (the RGBA four-channel values) of the original image to obtain the target object image. In this embodiment, the target object image is acquired based on the target object mask image, which can improve the accuracy of extracting the target object from the original image.
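The fusion step described above — multiplying each mask confidence by the corresponding color values of the original image — can be sketched as follows. This is an illustrative pure-Python sketch; the function name `fuse_mask` and the nested-list image representation are assumptions, not part of the disclosure:

```python
def fuse_mask(mask, image):
    """Fuse a confidence mask with an RGBA image.

    mask:  rows of per-pixel confidences in [0, 1] (1 = target object).
    image: rows of [R, G, B, A] pixels.
    Each channel of each pixel is scaled by that pixel's mask confidence,
    so background pixels (confidence 0) become fully transparent black.
    """
    return [
        [[m * c for c in px] for m, px in zip(mask_row, img_row)]
        for mask_row, img_row in zip(mask, image)
    ]
```

A foreground pixel (confidence 1.0) keeps its color; a background pixel (confidence 0.0) is zeroed out, which is exactly the "deduct the target object from the original image" behavior the embodiment describes.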
Optionally, identifying the target object in the original image, and obtaining the dividing line of the target object may be: determining two set key points of a target object in an original image; and determining a connecting line of the two set key points as a dividing line of the target object.
The set key points may be two points located on the central axis of the target object. The manner of determining the two set key points of the target object in the original image may be: inputting the original image into a central axis determining module and outputting the coordinate information of the two set key points. The manner of determining the line connecting the two set key points as the dividing line of the target object may be: determining the dividing line expression based on the coordinate information of the two set key points. For example, assuming that the coordinate information of one set key point is (x1, y1) and that of the other is (x2, y2), the expression of the dividing line may be: (x - x1)/(x2 - x1) = (y - y1)/(y2 - y1). In this embodiment, determining the dividing line based on the two set key points can reduce the amount of calculation and improve calculation efficiency.
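The two-point expression above can equivalently be rearranged into the general form a*x + b*y + c = 0 used later for side tests. A minimal sketch of deriving the coefficients from the two set key points (the helper name `line_coeffs` is an assumption):

```python
def line_coeffs(p1, p2):
    """Return (a, b, c) such that a*x + b*y + c = 0 passes through p1 and p2.

    Derived by cross-multiplying (x - x1)/(x2 - x1) = (y - y1)/(y2 - y1).
    """
    (x1, y1), (x2, y2) = p1, p2
    a = y2 - y1
    b = x1 - x2
    c = x2 * y1 - x1 * y2
    return a, b, c
```

Unlike the slope form, this representation also handles a vertical dividing line (x1 == x2) without division by zero.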
Optionally, if the target object is a portrait, and the portrait includes a face and/or a human body, the manner of determining the two set key points of the target object in the original image may be: determining an eyebrow center key point and a chin key point on the face of the portrait in the original image. Correspondingly, the manner of determining the line connecting the two set key points as the dividing line of the target object may be: determining the line connecting the eyebrow center key point and the chin key point as the dividing line of the portrait.
Specifically, 68 face key points are detected on the face of the portrait, an eyebrow key point (key point No. 49) and a chin key point (key point No. 16) are extracted from the 68 face key points, and finally, a connecting line of the eyebrow key point and the chin key point is determined as a segmentation line of the portrait. In the embodiment, the dividing line of the portrait is determined through the key points of the eyebrow center and the key points of the chin, so that the accuracy of determining the dividing line of the portrait can be improved.
Optionally, identifying the target object in the original image, and obtaining the dividing line of the target object may be: determining a detection frame of a target object in an original image; and determining a dividing line of the target object according to the central axis of the detection frame.
The manner of determining the detection frame of the target object in the original image may be: inputting the original image into a target detection model and outputting the detection frame of the target object. The manner of determining the central axis of the detection frame as the dividing line of the target object may be: acquiring the horizontal central axis or the vertical central axis of the detection frame, acquiring the posture information of the target object in the original image, rotating the horizontal or vertical central axis according to the posture information, and determining the rotated axis as the dividing line of the target object. For example, Fig. 2 is a schematic diagram of determining the dividing line of the target object in this embodiment; as shown in Fig. 2, the dividing line of the target object is determined according to the vertical central axis of the detection frame. In this embodiment, determining the dividing line of the target object according to the central axis of the detection frame can improve the efficiency of determining the dividing line.
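As one illustration of the detection-frame approach, the vertical central axis of an axis-aligned detection frame can be expressed in the same a*x + b*y + c = 0 line form. The box format (x0, y0, x1, y1) and the helper name are assumptions, and the posture-based rotation step is omitted for brevity:

```python
def vertical_axis(box):
    """Vertical central axis of a detection box (x0, y0, x1, y1).

    The axis is the line x = xc, i.e. 1*x + 0*y - xc = 0, returned
    as coefficients (a, b, c).
    """
    x0, y0, x1, y1 = box
    xc = (x0 + x1) / 2
    return (1.0, 0.0, -xc)
```

Rotating this axis by the target object's posture angle would then yield the final dividing line described in the embodiment.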
And S120, replacing the pixel value of the first side image with the set pixel value.
Wherein the first side image may be an image of either side of the dividing line. For example: assuming that the dividing line is a vertical dividing line, the first side image may be a right side image or a left side image of the dividing line; assuming that the dividing line is a horizontal dividing line, the first side image may be an upper side image or a lower side image of the dividing line. The set pixel value may be a pixel value arbitrarily set by a user or a pixel value selected from a background region of the original image.
In this embodiment, the process of replacing the pixel value of the first side image with the set pixel value may be: firstly, pixel points belonging to a first side image in a target object image are extracted, and then pixel values of the pixel points belonging to the first side image are replaced by set pixel values.
Optionally, the manner of replacing the pixel value of the first side image with the set pixel value may be: traversing pixel points of the target object image, and inputting coordinate information of the traversed pixel points into an expression of a segmentation line to obtain a result value; and if the result value is greater than the set value, replacing the pixel value of the traversed pixel point with the set pixel value.
The expression of the dividing line may be understood as a functional expression of the dividing line, which is a linear equation in two variables and may be expressed as f(x, y) = a*x + b*y + c, where a, b, and c are constants. The set value may be 0. Inputting the coordinate information of a traversed pixel point into the expression of the dividing line means substituting the coordinate information into the functional expression: if the result value is greater than 0, the traversed pixel point belongs to the first side image; if the result value is equal to 0, the pixel point is located on the dividing line; and if the result value is less than 0, the pixel point belongs to the second side image. For example, assuming that the coordinate information of the traversed pixel point is (x0, y0), the result value is f(x0, y0) = a*x0 + b*y0 + c; if f(x0, y0) > 0, the traversed pixel point belongs to the first side image, and its pixel value is replaced with the set pixel value.
Optionally, the manner of replacing the pixel value of the first side image with the set pixel value may be: traversing pixel points of the target object image, and inputting coordinate information of the traversed pixel points into an expression of the partition line to obtain a result value; and if the result value is smaller than the set value, replacing the pixel value of the traversed pixel point with the set pixel value.
The expression of the dividing line may be understood as a functional expression of the dividing line, which is a linear equation in two variables and may be expressed as f(x, y) = a*x + b*y + c, where a, b, and c are constants. The set value may be 0. Inputting the coordinate information of a traversed pixel point into the expression of the dividing line means substituting the coordinate information into the functional expression: if the result value is less than 0, the traversed pixel point belongs to the first side image; if the result value is equal to 0, the pixel point is located on the dividing line; and if the result value is greater than 0, the pixel point belongs to the second side image. For example, assuming that the coordinate information of the traversed pixel point is (x0, y0), the result value is f(x0, y0) = a*x0 + b*y0 + c; if f(x0, y0) < 0, the traversed pixel point belongs to the first side image, and its pixel value is replaced with the set pixel value. In this embodiment, whether a pixel point belongs to the first side image is determined by the result value obtained from the expression of the dividing line, which can improve the accuracy of pixel value replacement.
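The traversal-and-replace step described in the two variants above can be sketched as follows, using the sign of f(x, y) = a*x + b*y + c to decide which side a pixel belongs to. The sparse dict image representation and the function name are illustrative assumptions:

```python
def replace_first_side(image, line, set_value, first_side_positive=True):
    """Replace pixels on one side of the line a*x + b*y + c = 0.

    image: dict mapping (x, y) -> pixel value (a sketch of an image).
    line:  coefficients (a, b, c) of the dividing line.
    Pixels whose result value f(x, y) lies on the chosen side of the
    line get set_value; all other pixels keep their original value.
    """
    a, b, c = line
    out = {}
    for (x, y), value in image.items():
        f = a * x + b * y + c
        on_first_side = f > 0 if first_side_positive else f < 0
        out[(x, y)] = set_value if on_first_side else value
    return out
```

The `first_side_positive` flag covers both embodiment variants (result value greater than, or less than, the set value 0).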
Optionally, the manner of replacing the pixel value of the first side image with the set pixel value may be: selecting a pixel value from a background area of an original image as a set pixel value; the pixel values of the first-side image are replaced with the set pixel values.
The manner of selecting a pixel value from the background area of the original image as the set pixel value may be: randomly selecting a pixel point from the background area and taking its pixel value as the set pixel value; or calculating the average of the pixel values of the pixel points in the background area and taking the average pixel value as the set pixel value; or selecting a pixel point located at the edge of the target object and within the background area and taking its pixel value as the set pixel value; or averaging the pixel values of the pixel points located at the edge of the target object and within the background area and taking the average pixel value as the set pixel value. In this embodiment, the pixel values of the first side image are replaced with a pixel value selected from the background area of the original image, so that the subsequently generated first side mirror image transitions smoothly into the background area, improving the display effect of the mirror image target object.
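The averaging variants above reduce to a per-channel mean over the sampled background pixels; a minimal sketch (the function name is an assumption):

```python
def average_pixel(pixels):
    """Per-channel average of a list of same-length pixel tuples.

    pixels: e.g. [(R, G, B), ...] sampled from the background area
    (or from the target-object edge within the background).
    """
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(len(pixels[0])))
```

The same helper works whether the sample set is the whole background area or only the edge pixels, since only the sampling strategy differs between the variants.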
Optionally, after replacing the pixel value of the first side image with the set pixel value, the method further includes the following steps: and carrying out blurring processing on the first side image after the pixel value replacement.
The blurring may be Gaussian blur. In this embodiment, the manner of blurring the first side image after pixel value replacement may be: performing Gaussian blur multiple times on the first side image after pixel value replacement to obtain the blurred first side image. Blurring the first side image after pixel value replacement hides the first side image and prevents it from affecting the display effect of the first side mirror image.
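The disclosure specifies repeated Gaussian blur; as an illustrative stand-in, repeated box blurs are a standard approximation of a Gaussian blur (by the central limit theorem). A one-dimensional sketch with edge replication — the function name, kernel size, and padding choice are all assumptions, not the disclosed implementation:

```python
def box_blur_1d(row, passes=3):
    """Approximate a Gaussian blur by repeated 3-tap box blurs.

    Each pass replaces every sample with the mean of itself and its
    two neighbours, replicating the edge samples as padding.
    """
    for _ in range(passes):
        padded = [row[0]] + row + [row[-1]]
        row = [(padded[i] + padded[i + 1] + padded[i + 2]) / 3
               for i in range(len(row))]
    return row
```

Applying the 1D blur along rows and then columns gives the separable 2D blur typically used in practice; more passes widen the effective kernel, matching the "multiple Gaussian blur" step.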
And S130, carrying out mirror image processing on the second side image based on the dividing line to obtain a first side mirror image.
The first side mirror image and the second side image are mirror images, namely the second side image and the first side mirror image are symmetrical relative to the dividing line.
In this embodiment, the manner of mirroring the second side image based on the dividing line may be: determining, for each pixel point of the second side image, the pixel point symmetric to it relative to the dividing line; taking the pixel value of the second side pixel point as the pixel value of its symmetric pixel point; and rendering the first side mirror image based on the determined pixel values.
Optionally, the manner of mirroring the second side image based on the dividing line to obtain the first side mirror image may be: for each pixel point of the second side image, determining the first side pixel point symmetric to the second side pixel point relative to the dividing line; determining the pixel value of the second side pixel point as the target pixel value of the first side pixel point; and rendering the first side image based on the target pixel values to obtain the first side mirror image.
The manner of determining the first side pixel point symmetric to a second side pixel point may be: acquiring the coordinate information of the second side pixel point and the functional expression of the dividing line, and determining, based on the principle of symmetry, the coordinate information of the first side pixel point symmetric to the second side pixel point relative to the dividing line. The specific implementation of the symmetry principle is not described here again. For example, Figs. 3a-3b are schematic diagrams of the mirror image target object generated in this embodiment; as shown in Figs. 3a-3b, the target object is a portrait, the right side of the portrait is the generated first side mirror image, and the left side is the portrait of the original image. In this embodiment, mirroring the second side image based on the dividing line can improve the accuracy of the obtained mirror image.
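The symmetry principle referenced above reduces to reflecting a point across the line a*x + b*y + c = 0; a minimal sketch (the function name is an assumption):

```python
def reflect(point, line):
    """Reflect (x, y) across the line a*x + b*y + c = 0.

    d is the signed distance of the point from the line, scaled by the
    squared normal length; subtracting twice its projection along the
    normal (a, b) gives the mirrored coordinates.
    """
    a, b, c = line
    x, y = point
    d = (a * x + b * y + c) / (a * a + b * b)
    return (x - 2 * a * d, y - 2 * b * d)
```

Rendering the first side mirror image then amounts to copying each second-side pixel value to the reflected coordinates, as described in the embodiment.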
Optionally, before mirroring the second side image based on the dividing line, the method further includes: acquiring the screen ratio; and transforming the dividing line based on the screen ratio. Correspondingly, mirroring the second side image based on the dividing line includes: mirroring the second side image based on the transformed dividing line.
The screen ratio may be understood as the ratio between the height and the width of the current terminal device screen, for example: 16:9, 4:3, and the like. In this embodiment, the manner of transforming the dividing line based on the screen ratio may be: transforming the slope of the dividing line based on the screen ratio. For example, assuming that the slope of the dividing line is k and the screen ratio is 16:9, the transformed slope is obtained by scaling k according to the screen ratio. Optionally, another way of transforming the dividing line based on the screen ratio may be: transforming the coordinates of two points on the dividing line based on the screen ratio and generating a new dividing line based on the two coordinate-transformed points. For example, assuming that the coordinates of two points on the dividing line are A(x1, y1) and B(x2, y2) and the screen ratio is 16:9, the coordinates are scaled according to the screen ratio and the new dividing line is generated from the transformed points. In this embodiment, transforming the dividing line based on the screen ratio can prevent the mirror image target object from being stretched when displayed on the current screen.
According to the technical solution of the embodiments of the present disclosure, a target object in an original image is identified to obtain a target object image and a dividing line of the target object; the target object image is divided into a first side image and a second side image by the dividing line; the pixel values of the first side image are replaced with a set pixel value; and the second side image is mirrored based on the dividing line to obtain a first side mirror image, where the first side mirror image and the second side image are mirror images of each other. According to the image processing method provided by the embodiments of the present disclosure, the target object on one side of the dividing line is mirrored based on the dividing line of the target object to obtain a mirror image of the target object, so that an image with a mirror special effect can be generated and the display content of the image is enriched.
Fig. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the disclosure, and as shown in fig. 4, the apparatus includes:
a target object identification module 410, configured to identify a target object in an original image, to obtain a target object image and a dividing line of the target object; wherein the target object image is divided into a first side image and a second side image by the dividing line;
a pixel value replacing module 420, configured to replace a pixel value of the first side image with a set pixel value;
a mirror image processing module 430, configured to perform mirror image processing on the second side image based on the dividing line to obtain a first side mirror image; the first side mirror image and the second side image are mirror images of each other.
Optionally, the target object identifying module 410 is further configured to:
identifying a target object in the original image to obtain a target object mask image;
and fusing the target object mask image and the original image to obtain a target object image.
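A minimal sketch of the mask-fusion step, assuming the common convention that the mask holds per-pixel weights in [0, 1] and that fusion is per-pixel multiplication (the patent does not specify the fusion operation):

```python
import numpy as np

def fuse_mask(original, mask):
    """Fuse a target-object mask with the original image.

    original: H x W x 3 array; mask: H x W array with values in [0, 1].
    Pixels where the mask is 1 are kept; pixels where it is 0 are zeroed,
    leaving only the target object.
    """
    return original * mask[..., None]
```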
Optionally, the target object identifying module 410 is further configured to:
determining two set key points of a target object in an original image;
and determining a connecting line of the two set key points as a dividing line of the target object.
Optionally, if the target object is a portrait, the portrait including a face and/or a human body, the target object identifying module 410 is further configured to:
determining an eyebrow-center key point and a chin key point in the face of the portrait in the original image;
determining a connecting line of two set key points as a dividing line of a target object, comprising the following steps:
and determining a connecting line of the eyebrow-center key point and the chin key point as a segmentation line of the portrait.
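The connecting line of two set key points (for a portrait, the eyebrow-center and chin key points) can be represented as an implicit line a·x + b·y + c = 0; this representation is an assumption for illustration, chosen because it feeds directly into the line-expression test described later:

```python
def line_through(p1, p2):
    """Return coefficients (a, b, c) of the line a*x + b*y + c = 0 passing
    through key points p1 and p2, e.g. the eyebrow-center and chin points."""
    (x1, y1), (x2, y2) = p1, p2
    a = y2 - y1
    b = x1 - x2
    c = x2 * y1 - x1 * y2
    return a, b, c
```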
Optionally, the target object identifying module 410 is further configured to:
determining a detection frame of a target object in an original image;
and determining a dividing line of the target object according to the central axis of the detection frame.
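A sketch of deriving the dividing line from the central axis of a detection frame; whether the vertical or the horizontal axis is used is an assumption, since the patent only says the dividing line is determined according to the central axis:

```python
def axis_from_box(x_min, y_min, x_max, y_max, vertical=True):
    """Return the central axis of a detection box as (a, b, c) with
    a*x + b*y + c = 0: the vertical axis x = cx, or the horizontal y = cy."""
    if vertical:
        cx = (x_min + x_max) / 2.0
        return 1.0, 0.0, -cx   # line x = cx
    cy = (y_min + y_max) / 2.0
    return 0.0, 1.0, -cy       # line y = cy
```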
Optionally, the pixel value replacing module 420 is further configured to:
traversing pixel points of the target object image, and inputting coordinate information of the traversed pixel points into an expression of a segmentation line to obtain a result value;
if the result value is greater than the set value, replacing the pixel value of the traversed pixel point with the set pixel value; or
if the result value is smaller than the set value, replacing the pixel value of the traversed pixel point with the set pixel value.
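The traversal above can be vectorized: the "result value" is the line expression a·x + b·y + c evaluated at each pixel coordinate, and its sign relative to the set value (assumed here to be 0) selects the side whose pixels are overwritten. The implicit-line form is an assumption for illustration:

```python
import numpy as np

def replace_side(image, a, b, c, set_pixel, replace_greater=True):
    """Replace the pixels on one side of the line a*x + b*y + c = 0.

    Evaluates the line expression at every pixel coordinate; pixels whose
    result value is greater (or smaller) than 0 are replaced with set_pixel.
    """
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]      # per-pixel coordinates
    result = a * xs + b * ys + c     # the "result value" per pixel
    side = result > 0 if replace_greater else result < 0
    out = image.copy()
    out[side] = set_pixel
    return out
```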
Optionally, the pixel value replacing module 420 is further configured to:
selecting a pixel value from a background area of an original image as a set pixel value;
the pixel values of the first-side image are replaced with the set pixel values.
Optionally, the method further includes: a blur processing module to:
and carrying out blurring processing on the first side image after the pixel value replacement.
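The patent does not name a particular blur; a simple box blur (moving average) stands in here for whatever blurring is applied to the first side image after pixel-value replacement:

```python
import numpy as np

def box_blur(image, k=3):
    """Per-channel k x k moving-average blur with edge padding, a stand-in
    for the (unspecified) blurring of the replaced first side image."""
    pad = k // 2
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(image, dtype=float)
    # Sum the k*k shifted copies, then divide to get the window average.
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / (k * k)
```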
Optionally, the mirror processing module 430 is further configured to:
for the pixel points of the second side image, determining the first side pixel points that are symmetric to the second side pixel points relative to the dividing line;
determining the pixel value of the second side pixel point as the target pixel value of the first side pixel point;
and rendering the first side image based on the target pixel value to obtain a first side mirror image.
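The key geometric step is finding, for each second side pixel, its symmetric first side position relative to the dividing line. For a line in the implicit form a·x + b·y + c = 0 (an assumed representation), the standard point-reflection formula gives that position:

```python
def reflect(x, y, a, b, c):
    """Reflect the point (x, y) across the line a*x + b*y + c = 0, i.e. find
    the first side position symmetric to a second side pixel."""
    d = (a * x + b * y + c) / (a * a + b * b)
    return x - 2 * a * d, y - 2 * b * d
```

Copying each second side pixel's value to its reflected coordinate (with rounding or interpolation, which the patent leaves open) yields the first side mirror image.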
Optionally, the method further includes: a transformation module to:
acquiring a screen ratio;
transforming the dividing line based on the screen ratio;
Optionally, the mirror processing module 430 is further configured to:
and carrying out mirror image processing on the second side image based on the converted segmentation line.
The image processing device provided by the embodiment of the disclosure can execute the image processing method provided by any embodiment of the disclosure, and has corresponding functional modules and beneficial effects of the execution method.
It should be noted that, the units and modules included in the apparatus are merely divided according to functional logic, but are not limited to the above division as long as the corresponding functions can be implemented; in addition, specific names of the functional units are only used for distinguishing one functional unit from another, and are not used for limiting the protection scope of the embodiments of the present disclosure.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure. Referring now to fig. 5, a schematic diagram of an electronic device (e.g., the terminal device or the server in fig. 5) 500 suitable for implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a fixed terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 5, electronic device 500 may include a processing means (e.g., central processing unit, graphics processor, etc.) 501 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data necessary for the operation of the electronic apparatus 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 illustrates an electronic device 500 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may be alternatively implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing device 501.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The electronic device provided by the embodiment of the present disclosure and the image processing method provided by the above embodiment belong to the same inventive concept, and technical details that are not described in detail in the embodiment can be referred to the above embodiment, and the embodiment has the same beneficial effects as the above embodiment.
The disclosed embodiments provide a computer storage medium having stored thereon a computer program that, when executed by a processor, implements the image processing method provided by the above-described embodiments.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may be separate and not incorporated into the electronic device.
the computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: identifying a target object in an original image to obtain a target object image and a dividing line of the target object; the target object image is divided by the dividing line, and an image on any one side of the dividing line is used as a first side image, and an image on the other side of the dividing line is used as a second side image; replacing the pixel values of the first side image with set pixel values; carrying out mirror image processing on the second side image based on the segmentation line to obtain a first side mirror image; the first side mirror image and the second side image are mirror images.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, the first obtaining unit may also be described as a "unit obtaining at least two internet protocol addresses".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), application Specific Integrated Circuits (ASICs), application Specific Standard Products (ASSPs), systems on a chip (SOCs), complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided an image processing method including:
identifying a target object in an original image to obtain a target object image and a dividing line of the target object; the target object image is divided by the dividing line, and an image on any one side of the dividing line is used as a first side image, and an image on the other side of the dividing line is used as a second side image;
replacing the pixel value of the first side image with a set pixel value;
carrying out mirror image processing on the second side image based on the dividing line to obtain a first side mirror image; the first side mirror image and the second side image are mirror images of each other.
Further, identifying the target object in the original image to obtain a target object image, including:
identifying a target object in the original image to obtain a target object mask image;
and fusing the target object mask image and the original image to obtain a target object image.
Further, identifying a target object in the original image to obtain a segmentation line of the target object, including:
determining two set key points of a target object in the original image;
and determining a connecting line of the two set key points as a dividing line of the target object.
Further, if the target object is a portrait, determining two set key points of the target object in the original image, including:
determining an eyebrow-center key point and a chin key point in the face of the portrait in the original image;
determining a connecting line of the two set key points as a dividing line of the target object, wherein the step of determining the connecting line of the two set key points as the dividing line of the target object comprises the following steps:
and determining a connecting line of the key point of the eyebrow center and the key point of the chin as a segmentation line of the portrait.
Further, identifying a target object in the original image to obtain a segmentation line of the target object, including:
determining a detection frame of a target object in the original image;
and determining a dividing line of the target object according to the central axis of the detection frame.
Further, replacing the pixel values of the first side image with set pixel values includes:
traversing pixel points of the target object image, and inputting coordinate information of the traversed pixel points into the expression of the dividing line to obtain a result value;
if the result value is greater than the set value, replacing the pixel value of the traversed pixel point with the set pixel value; or
if the result value is smaller than the set value, replacing the pixel value of the traversed pixel point with the set pixel value.
Further, replacing the pixel values of the first side image with set pixel values includes:
selecting a pixel value from a background area of the original image as a set pixel value;
and replacing the pixel value of the first side image with a set pixel value.
Further, after replacing the pixel value of the first side image with a set pixel value, the method further includes:
and carrying out blurring processing on the first side image after the pixel value replacement.
Further, the mirroring processing on the second side image based on the dividing line to obtain a first side mirror image includes:
for the pixel points of the second side image, determining the first side pixel points that are symmetric to the second side pixel points relative to the dividing line;
determining the pixel value of the second side pixel point as the target pixel value of the first side pixel point;
rendering the first side image based on the target pixel value to obtain a first side mirror image.
Further, before the mirroring process is performed on the second side image based on the dividing line, the method further includes:
acquiring a screen ratio;
transforming the dividing line based on the screen scale;
performing mirroring on the second-side image based on the dividing line, including:
and carrying out mirror image processing on the second side image based on the converted segmentation line.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other embodiments in which any combination of the features described above or their equivalents does not depart from the spirit of the disclosure. For example, the above features and (but not limited to) the features disclosed in this disclosure having similar functions are replaced with each other to form the technical solution.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (13)

1. An image processing method, characterized by comprising:
identifying a target object in an original image to obtain a target object image and a segmentation line of the target object; the target object image is divided by the dividing line, and an image on any one side of the dividing line is used as a first side image, and an image on the other side of the dividing line is used as a second side image;
replacing the pixel value of the first side image with a set pixel value;
carrying out mirror image processing on the second side image based on the dividing line to obtain a first side mirror image; the first side mirror image and the second side image are mirror images of each other.
2. The method of claim 1, wherein identifying the target object in the original image to obtain the target object image comprises:
identifying a target object in the original image to obtain a target object mask image;
and fusing the target object mask image and the original image to obtain a target object image.
3. The method of claim 1, wherein identifying a target object in an original image and obtaining a segmentation line of the target object comprises:
determining two set key points of a target object in the original image;
and determining a connecting line of the two set key points as a dividing line of the target object.
4. The method of claim 3, wherein, if the target object is a portrait, the portrait including a face and/or a human body, determining two set key points of the target object in the original image comprises:
determining an eyebrow-center key point and a chin key point in the face of the portrait in the original image;
determining a connecting line of the two set key points as a dividing line of the target object, wherein the method comprises the following steps:
and determining a connecting line of the key points of the eyebrow center and the chin as a segmentation line of the portrait.
5. The method of claim 1, wherein identifying a target object in an original image and obtaining a segmentation line of the target object comprises:
determining a detection frame of a target object in the original image;
and determining a dividing line of the target object according to the central axis of the detection frame.
6. The method of claim 1, wherein replacing the pixel values of the first side image with set pixel values comprises:
traversing pixel points of the target object image, and inputting coordinate information of the traversed pixel points into an expression of the partition line to obtain a result value;
if the result value is greater than the set value, replacing the pixel value of the traversed pixel point with the set pixel value; or
if the result value is smaller than the set value, replacing the pixel value of the traversed pixel point with the set pixel value.
7. The method of claim 1, wherein replacing the pixel values of the first side image with set pixel values comprises:
selecting a pixel value from a background area of the original image as a set pixel value;
and replacing the pixel value of the first side image with a set pixel value.
8. The method according to claim 1 or 7, further comprising, after replacing the pixel values of the first side image with set pixel values:
and carrying out blurring processing on the first side image after the pixel value replacement.
9. The method of claim 1, wherein mirroring the second-side image based on the split line to obtain a first-side mirrored image comprises:
for the pixel points of the second side image, determining first side pixel points of the second side pixel points which are symmetrical relative to the dividing line;
determining the pixel value of the second side pixel point as the target pixel value of the first side pixel point;
rendering the first side image based on the target pixel value to obtain a first side mirror image.
10. The method of claim 1, further comprising, prior to mirroring the second side image based on the split line:
acquiring a screen ratio;
transforming the dividing line based on the screen scale;
performing mirror image processing on the second side image based on the dividing line, including:
and carrying out mirror image processing on the second side image based on the converted segmentation line.
11. An image processing apparatus characterized by comprising:
the target object identification module is used for identifying a target object in an original image to obtain a target object image and a dividing line of the target object; wherein the target object image is divided into a first side image and a second side image by the dividing line;
the pixel value replacing module is used for replacing the pixel value of the first side image with a set pixel value;
the mirror image processing module is used for carrying out mirror image processing on the second side image based on the dividing line to obtain a first side mirror image; the first side mirror image and the second side image are mirror images of each other.
12. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a storage device for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the image processing method of any one of claims 1-10.
13. A storage medium containing computer-executable instructions for performing the image processing method of any one of claims 1-10 when executed by a computer processor.
CN202210988925.6A 2022-08-17 2022-08-17 Image processing method, device, equipment and storage medium Pending CN115358919A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210988925.6A CN115358919A (en) 2022-08-17 2022-08-17 Image processing method, device, equipment and storage medium
PCT/CN2023/113234 WO2024037556A1 (en) 2022-08-17 2023-08-16 Image processing method and apparatus, and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210988925.6A CN115358919A (en) 2022-08-17 2022-08-17 Image processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115358919A (en) 2022-11-18

Family

ID=84001852

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210988925.6A Pending CN115358919A (en) 2022-08-17 2022-08-17 Image processing method, device, equipment and storage medium

Country Status (2)

Country Link
CN (1) CN115358919A (en)
WO (1) WO2024037556A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024037556A1 (en) * 2022-08-17 2024-02-22 北京字跳网络技术有限公司 Image processing method and apparatus, and device and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105959564B (en) * 2016-06-15 2018-11-30 维沃移动通信有限公司 A kind of photographic method and mobile terminal
CN111145189B (en) * 2019-12-26 2023-08-08 成都市喜爱科技有限公司 Image processing method, apparatus, electronic device, and computer-readable storage medium
CN111754528A (en) * 2020-06-24 2020-10-09 Oppo广东移动通信有限公司 Portrait segmentation method, portrait segmentation device, electronic equipment and computer-readable storage medium
EP3982288A1 (en) * 2020-10-09 2022-04-13 Fresenius Medical Care Deutschland GmbH Method for identifying an object, computer program for carrying out the method and system
CN115358919A (en) * 2022-08-17 2022-11-18 北京字跳网络技术有限公司 Image processing method, device, equipment and storage medium


Also Published As

Publication number Publication date
WO2024037556A1 (en) 2024-02-22

Similar Documents

Publication Publication Date Title
CN111242881B (en) Method, device, storage medium and electronic equipment for displaying special effects
CN111243049B (en) Face image processing method and device, readable medium and electronic equipment
CN110062157B (en) Method and device for rendering image, electronic equipment and computer readable storage medium
CN110991373A (en) Image processing method, image processing apparatus, electronic device, and medium
CN112381717A (en) Image processing method, model training method, device, medium, and apparatus
CN111784712A (en) Image processing method, device, equipment and computer readable medium
CN115311178A (en) Image splicing method, device, equipment and medium
WO2024037556A1 (en) Image processing method and apparatus, and device and storage medium
CN114842120A (en) Image rendering processing method, device, equipment and medium
CN114913061A (en) Image processing method and device, storage medium and electronic equipment
CN114742856A (en) Video processing method, device, equipment and medium
CN114782659A (en) Image processing method, device, equipment and storage medium
CN114339447B (en) Method, device and equipment for converting picture into video and storage medium
CN114331823A (en) Image processing method, image processing device, electronic equipment and storage medium
WO2024016923A1 (en) Method and apparatus for generating special effect graph, and device and storage medium
CN116596748A (en) Image stylization processing method, apparatus, device, storage medium, and program product
CN115358958A (en) Special effect graph generation method, device and equipment and storage medium
CN111833459A (en) Image processing method and device, electronic equipment and storage medium
CN115526796A (en) Image processing method, device, equipment and storage medium
CN115272060A (en) Transition special effect diagram generation method, device, equipment and storage medium
CN115358959A (en) Generation method, device and equipment of special effect graph and storage medium
CN115454306A (en) Display effect processing method and device, electronic equipment and storage medium
CN114866706A (en) Image processing method, image processing device, electronic equipment and storage medium
CN114422698A (en) Video generation method, device, equipment and storage medium
CN115082368A (en) Image processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination