WO2024037556A1 - Image processing method and apparatus, device, and storage medium - Google Patents

Image processing method and apparatus, device, and storage medium

Info

Publication number
WO2024037556A1
WO2024037556A1 PCT/CN2023/113234
Authority
WO
WIPO (PCT)
Prior art keywords
image
target object
dividing line
pixel value
pixel
Prior art date
Application number
PCT/CN2023/113234
Other languages
English (en)
Chinese (zh)
Inventor
谢一杰
周栩彬
Original Assignee
北京字跳网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司
Publication of WO2024037556A1

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/04 Context-preserving transformations, e.g. by using an importance map
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Definitions

  • the embodiments of the present disclosure relate to the field of image processing technology, for example, to an image processing method, apparatus, device and storage medium.
  • Image processing applications have developed rapidly and entered users' daily lives. Users can record their lives through videos, photos, etc., and process images through the special-effects technology provided in image processing apps, so that images can be expressed in richer forms, such as beautification, stylization and expression editing.
  • Embodiments of the present disclosure provide an image processing method, apparatus, device and storage medium, which can generate images with a mirror effect, enrich the display content of images, and make images more engaging.
  • an embodiment of the present disclosure provides an image processing method, including:
  • Identify the target object in the original image to obtain the target object image and the dividing line of the target object, wherein the target object image is divided into a first side image and a second side image by the dividing line;
  • replace the pixel value of the first side image with a set pixel value;
  • mirror the second side image based on the dividing line to obtain a first side mirror image, wherein the first side mirror image and the second side image are mirror images of each other.
  • embodiments of the present disclosure also provide an image processing device, including:
  • the target object recognition module is configured to identify the target object in the original image and obtain the target object image and the dividing line of the target object, wherein the target object image is divided into a first side image and a second side image by the dividing line;
  • a pixel value replacement module configured to replace the pixel value of the first side image with a set pixel value
  • the mirror processing module is configured to perform mirror processing on the second side image based on the dividing line to obtain a first side mirror image; wherein the first side mirror image and the second side image are mirror images of each other.
  • embodiments of the present disclosure also provide an electronic device, where the electronic device includes:
  • one or more processors;
  • a storage device for storing one or more programs
  • when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the image processing method as described in the embodiments of the present disclosure.
  • embodiments of the present disclosure also provide a storage medium containing computer-executable instructions, which, when executed by a computer processor, are used to perform the image processing method as described in the embodiments of the present disclosure.
  • Figure 1 is a schematic flowchart of an image processing method provided by an embodiment of the present disclosure
  • Figure 2 is a schematic diagram of determining a dividing line of a target object provided by an embodiment of the present disclosure
  • Figure 3a is a schematic diagram of a mirror target object provided by an embodiment of the present disclosure.
  • Figure 3b is a schematic diagram of another mirror target object provided by an embodiment of the present disclosure.
  • Figure 4 is a schematic structural diagram of an image processing device provided by an embodiment of the present disclosure.
  • FIG. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • the term “include” and its variations are open-ended, ie, “including but not limited to.”
  • the term “based on” means “based at least in part on.”
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one additional embodiment”; the term “some embodiments” means “at least some embodiments”. Relevant definitions of other terms will be given in the description below.
  • a prompt message is sent to the user to clearly remind the user that the operation requested will require the acquisition and use of the user's personal information. Therefore, users can autonomously choose whether to provide personal information to software or hardware such as electronic devices, applications, servers or storage media that perform the operations of the technical solution of the present disclosure based on the prompt information.
  • the method of sending prompt information to the user may be, for example, a pop-up window, and the prompt information may be presented in the form of text in the pop-up window.
  • the pop-up window can also contain a selection control for the user to choose "agree” or "disagree” to provide personal information to the electronic device.
  • the data involved in this technical solution shall comply with the requirements of corresponding laws, regulations and related regulations.
  • Figure 1 is a schematic flowchart of an image processing method provided by an embodiment of the present disclosure.
  • the embodiment of the present disclosure is suitable for mirroring a target object in an image.
  • the method can be executed by an image processing device, which can be implemented in the form of software and/or hardware and, optionally, in electronic equipment.
  • the electronic equipment can be a mobile terminal, a personal computer (Personal Computer, PC) or a server, etc.
  • the method includes:
  • the target object image is divided by a dividing line, and the image on either side of the dividing line is regarded as the first side image, and the image on the other side of the dividing line is regarded as the second side image.
  • the target object can be any object such as human figures, animals, buildings, etc.
  • the target object image can be understood as an image in which the target object is extracted from the original image.
  • the dividing line of the target object can be the central axis of the target object, which is used as the symmetry line for subsequent mirroring processing.
  • the target object in the original image is identified and the target object image is obtained by: identifying the target object in the original image to obtain a mask image of the target object, and fusing the mask image of the target object with the original image to obtain the target object image.
  • the pixel value of each pixel in the mask image of the target object represents the confidence that the pixel belongs to the target object; it can be any value between 0 and 1, with 1 being white and 0 being black. The confidence can be stored in a set color channel of the mask image, for example the red channel (R), green channel (G) or blue channel (B).
  • the method of fusing the mask image of the target object and the original image may be: multiplying the pixel value of the mask image of the target object by the color value (RGBA four-channel value) of the original image to obtain the target object image.
  • the target object image is obtained based on the mask image of the target object, which can improve the accuracy of extracting the target object from the original image.
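The fusion described above (multiplying the mask's confidence values by the original image's RGBA values) can be sketched as follows. This is a minimal illustrative NumPy sketch, not the implementation from the disclosure; the function name and toy arrays are assumptions:

```python
import numpy as np

def extract_target(mask, image_rgba):
    """Fuse a single-channel mask with an RGBA image by per-pixel
    multiplication: background pixels (mask = 0) go to zero,
    target pixels (mask = 1) keep their color values.

    mask       : (H, W) float array, confidence in [0, 1]
    image_rgba : (H, W, 4) float array, RGBA color values
    """
    # Broadcast the mask over the four color channels.
    return mask[..., None] * image_rgba

# Tiny 2x2 illustration: top row is target (mask = 1), bottom is background.
mask = np.array([[1.0, 1.0],
                 [0.0, 0.0]])
image = np.ones((2, 2, 4))           # all-white, fully opaque image
target = extract_target(mask, image)
```

With a soft-edged mask (values between 0 and 1), the same multiplication also yields smooth anti-aliased boundaries around the extracted object.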
  • the method of identifying the target object in the original image and obtaining the dividing line of the target object can be: determining two set key points of the target object in the original image, and determining the line connecting the two set key points as the dividing line of the target object.
  • the key points can be two points located on the central axis of the target object.
  • the method of determining the two set key points of the target object in the original image may be: input the original image into the central axis determination module and output the coordinate information of the two set key points.
  • the method of determining the two set key points of the target object in the original image may be: determining the eyebrow-center key point and the chin key point on the face of the portrait in the original image; the method of determining the line connecting the two set key points as the dividing line of the target object may be: determining the line connecting the eyebrow-center key point and the chin key point as the dividing line of the portrait.
  • 68 facial key points are detected on the face of the portrait, and the eyebrow-center key point and the chin key point are extracted from the 68 facial key points.
  • the dividing line of the portrait is determined through the key points of the eyebrows and the chin key point, which can improve the accuracy of determining the dividing line of the portrait.
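A dividing line through two key points can be written in the general form a*x + b*y + c = 0, which is convenient for the side tests and mirroring used later in this description. A minimal sketch (the function name and sample points are illustrative assumptions):

```python
def dividing_line(p1, p2):
    """Return coefficients (a, b, c) of the line a*x + b*y + c = 0
    through two key points, e.g. the eyebrow-center and chin points."""
    (x1, y1), (x2, y2) = p1, p2
    a = y2 - y1
    b = x1 - x2
    c = x2 * y1 - x1 * y2
    return a, b, c

# Vertical dividing line through x = 3 (eyebrow center at (3, 0), chin at (3, 5)).
a, b, c = dividing_line((3, 0), (3, 5))
```

Points on the line evaluate to zero under a*x + b*y + c; the sign of the result distinguishes the two sides, which is exactly the classification used when replacing first-side pixel values below.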
  • the method of identifying the target object in the original image and obtaining the dividing line of the target object may be: determining the detection frame of the target object in the original image; determining the dividing line of the target object according to the central axis of the detection frame.
  • the method of determining the detection frame of the target object in the original image may be: input the original image into the target detection model and output the detection frame of the target object.
  • the method of determining the central axis of the detection frame as the dividing line of the target object can be: obtain the horizontal central axis or vertical central axis of the detection frame, obtain the posture information of the target object in the original image, and select the horizontal or vertical central axis as the dividing line according to the posture information of the target object.
  • FIG. 2 is a schematic diagram of determining the dividing line of the target object in this embodiment.
  • the dividing line of the target object is determined according to the vertical central axis of the detection frame.
  • the dividing line of the target object is determined based on the central axis of the detection frame, which can improve the efficiency of determining the dividing line.
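Taking the vertical central axis of a detection frame as the dividing line can be sketched as follows (an illustrative assumption of the frame format as (x_min, y_min, x_max, y_max); the disclosure does not fix a representation):

```python
def vertical_central_axis(box):
    """Given a detection frame (x_min, y_min, x_max, y_max), return the
    vertical central axis as line coefficients (a, b, c) for
    a*x + b*y + c = 0, i.e. the line x = (x_min + x_max) / 2."""
    x_min, _, x_max, _ = box
    cx = (x_min + x_max) / 2.0
    return 1.0, 0.0, -cx

a, b, c = vertical_central_axis((10, 0, 30, 50))   # axis at x = 20
```

The horizontal central axis is obtained symmetrically as y = (y_min + y_max) / 2, giving (0, 1, -cy).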
  • the first side image can be an image on either side of the dividing line.
  • if the dividing line is a vertical dividing line, the first side image can be the image on the left side or the right side of the dividing line; if the dividing line is a horizontal dividing line, the first side image can be the image above or below the dividing line.
  • the set pixel value can be a pixel value set arbitrarily by the user or a pixel value selected from the background area of the original image.
  • the process of replacing the pixel values of the first side image with the set pixel value may be: first extracting the pixels belonging to the first side image in the target object image, and then replacing the pixel values of those pixels with the set pixel value.
  • the method of replacing the pixel value of the first side image with the set pixel value can be: traverse the pixels of the target object image, and substitute the coordinate information of each traversed pixel into the expression of the dividing line to obtain a result value for that pixel; if the result value of a traversed pixel's coordinate information is greater than the set value, replace that pixel's value with the set pixel value.
  • the expression of the dividing line can be understood as the functional expression of the dividing line.
  • the setting value can be 0.
  • substituting the coordinate information of a traversed pixel into the expression of the dividing line can be understood as evaluating the functional expression of the dividing line at that pixel's coordinates. If the result value is greater than 0, the traversed pixel belongs to the first side image; if the result value is equal to 0, the pixel is located on the dividing line; if the result value is less than 0, the pixel belongs to the second side image.
  • alternatively, the method of replacing the pixel value of the first side image with the set pixel value can be: traverse the pixels of the target object image, and substitute the coordinate information of each traversed pixel into the expression of the dividing line to obtain a result value; if the result value of a traversed pixel's coordinate information is less than the set value, replace that pixel's value with the set pixel value.
  • the expression of the dividing line can be understood as the functional expression of the dividing line.
  • the setting value can be 0.
  • in this alternative, if the result value is less than 0, the traversed pixel belongs to the first side image; if the result value is equal to 0, the pixel is located on the dividing line; if the result value is greater than 0, the pixel belongs to the second side image.
  • the coordinate information of the traversed pixel is input into the expression of the dividing line to obtain the result value to determine whether the pixel belongs to the first side image, which can improve the accuracy of replacing the pixel value.
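The per-pixel traversal described above can be vectorized, substituting every pixel's coordinates into a*x + b*y + c at once. A minimal sketch under the assumption of NumPy arrays and the "greater than 0 means first side" convention; names are illustrative:

```python
import numpy as np

def replace_first_side(image, line, set_value):
    """Replace pixels on the first side of the dividing line with a set
    pixel value. The result value a*x + b*y + c is computed for every
    pixel; a positive result marks the first-side image."""
    a, b, c = line
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]           # pixel coordinate grids
    result = a * xs + b * ys + c          # result value per pixel
    out = image.copy()
    out[result > 0] = set_value           # first side: result > set value 0
    return out

img = np.zeros((4, 4, 3))
# Vertical dividing line x = 1.5, i.e. 1*x + 0*y - 1.5 = 0.
out = replace_first_side(img, (1.0, 0.0, -1.5), set_value=0.5)
```

The "less than the set value" variant in the description is the same computation with the comparison flipped to `result < 0`.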
  • the method of replacing the pixel value of the first side image with the set pixel value may also be: selecting a pixel value from the background area of the original image as the set pixel value, and replacing the pixel value of the first side image with the set pixel value.
  • replacing the pixel values of the first side image with a pixel value selected from the background area of the original image allows the subsequently generated first side mirror image to transition smoothly into the background area, improving the display effect of the mirrored target object.
  • the following step is also included: blurring the first side image after replacing the pixel values.
  • the blur processing method can be Gaussian blur.
  • the method of blurring the first side image after replacing the pixel values may be: performing Gaussian blur multiple times on the first side image after replacing the pixel values to obtain a blurred first side image.
  • the first side image after replacing the pixel values is blurred, so that the first side image can be hidden and prevented from affecting the display effect of the first side mirror image.
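The multi-pass blur can be sketched without any imaging library: repeated box-blur passes approximate a Gaussian blur (a standard substitution, used here as an illustrative stand-in for the multi-pass Gaussian blur described above; all names are assumptions):

```python
import numpy as np

def box_blur(img):
    """One 3x3 box-blur pass on a 2D array (edge-replicated borders)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0

def hide_first_side(img, passes=3):
    """Apply several blur passes; repeated box blurs converge toward a
    Gaussian blur, softening the replaced first-side region."""
    out = img
    for _ in range(passes):
        out = box_blur(out)
    return out

sharp = np.zeros((5, 5))
sharp[2, 2] = 1.0                      # a single bright pixel
soft = hide_first_side(sharp, passes=2)
```

Each pass spreads the energy further while preserving the total brightness, which is why several passes "hide" sharp structure effectively.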
  • the first side mirror image and the second side image are mirror images of each other, that is, the second side image and the first side mirror image are symmetrical with respect to the dividing line.
  • the method of mirroring the second side image based on the dividing line may be: determine, for each pixel in the second side image, the pixel that is symmetrical to it with respect to the dividing line; use the pixel value of the second side image pixel as the pixel value of that symmetrical pixel; and render the first side image based on the determined pixel values to obtain the first side mirror image.
  • the method of mirroring the second side image based on the dividing line to obtain the first side mirror image may be: for each pixel of the second side image, determine the pixel of the first side image that is symmetrical to it with respect to the dividing line; determine the pixel value of the second side image pixel as the target pixel value of that first side image pixel; and render the first side image based on the target pixel values to obtain the first side mirror image.
  • the method of determining the pixel of the first side image that is symmetrical to a pixel of the second side image with respect to the dividing line may be: obtain the coordinate information of the second side image pixel and the functional expression of the dividing line, and determine the symmetrical first side image pixel based on the principle of symmetry.
  • Figures 3a-3b are schematic diagrams of the mirror target object generated in this embodiment. As shown in Figures 3a-3b, the target object is a portrait: the right side of the portrait is the generated first side mirror image, and the left side is the portrait of the original image.
  • mirror processing is performed on the second side image based on the dividing line, which can improve the accuracy of obtaining the mirror image.
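The reflection across the dividing line can be sketched with the standard point-reflection formula for a line a*x + b*y + c = 0. This is an illustrative nearest-neighbor sketch of the mirroring step, not the disclosure's implementation; all names are assumptions:

```python
import numpy as np

def reflect_point(x, y, line):
    """Reflect (x, y) across the line a*x + b*y + c = 0."""
    a, b, c = line
    d = (a * x + b * y + c) / (a * a + b * b)
    return x - 2 * a * d, y - 2 * b * d

def mirror_second_side(img, line):
    """For each first-side pixel (line expression > 0), copy the value
    of its symmetrical second-side pixel, producing the first side
    mirror image in place of the first side image."""
    a, b, c = line
    h, w = img.shape[:2]
    out = img.copy()
    for y in range(h):
        for x in range(w):
            if a * x + b * y + c > 0:            # first-side pixel
                sx, sy = reflect_point(x, y, line)
                sx, sy = int(round(sx)), int(round(sy))
                if 0 <= sx < w and 0 <= sy < h:
                    out[y, x] = img[sy, sx]      # copy mirrored value
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
# Vertical dividing line x = 1.5; the right half becomes a mirror of the left.
mirrored = mirror_second_side(img, (1.0, 0.0, -1.5))
```

In a production renderer the same mapping would typically run per-fragment on the GPU, with the reflected coordinate sampled with interpolation rather than rounded.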
  • optionally, before mirroring the second side image based on the dividing line, the method further includes: obtaining the screen ratio, and transforming the dividing line based on the screen ratio; accordingly, mirroring the second side image based on the dividing line includes: mirroring the second side image based on the transformed dividing line.
  • the screen ratio can be understood as the ratio between the height and width of the current terminal device screen.
  • the screen ratio can be: 16:9 or 4:3, etc.
  • the method of transforming the dividing line based on the screen ratio may be: transforming the slope of the dividing line based on the screen ratio. Assuming that the slope of the dividing line is k and the screen ratio is 16:9, the transformed slope is k*16/9.
  • the method of transforming the dividing line based on the screen ratio can also be: transforming the coordinates of two points on the dividing line based on the screen ratio, and generating a new dividing line based on the two points after the transformed coordinates.
  • transforming the dividing line based on the screen ratio can prevent the mirror target object from being stretched when displayed on the current screen.
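The slope transformation described above (k becomes k*16/9 for a 16:9 screen) can be sketched as a one-line helper; the function name is an illustrative assumption:

```python
def transform_slope(k, screen_h, screen_w):
    """Scale the dividing line's slope by the screen ratio (height:width)
    so that the mirrored result is not stretched on the current screen.
    For a 16:9 screen, slope k becomes k * 16/9."""
    return k * screen_h / screen_w

k2 = transform_slope(1.0, 16, 9)   # 16:9 screen
```

The equivalent point-based variant scales the coordinates of two points on the dividing line by the same ratio and regenerates the line through the transformed points.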
  • the technical solution of the embodiments of the present disclosure identifies the target object in the original image to obtain the target object image and the dividing line of the target object, wherein the target object image is divided into a first side image and a second side image by the dividing line; replaces the pixel value of the first side image with the set pixel value; and mirrors the second side image based on the dividing line to obtain the first side mirror image, wherein the first side mirror image and the second side image are mirror images of each other.
  • the image processing method provided by the embodiments of the present disclosure mirrors the target object on one side of the dividing line based on the dividing line of the target object to obtain a mirror image of the target object, which can generate an image with a mirroring special effect and enrich the display content of the image.
  • Figure 4 is a schematic structural diagram of an image processing device provided by an embodiment of the present disclosure.
  • the device includes: a target object recognition module 410, configured to identify the target object in the original image and obtain the target object image and the dividing line of the target object, wherein the target object image is divided into a first side image and a second side image by the dividing line;
  • the pixel value replacement module 420 is configured to replace the pixel value of the first side image with the set pixel value;
  • the mirror processing module 430 is configured to mirror the second side image based on the dividing line to obtain a first side mirror image, wherein the first side mirror image and the second side image are mirror images of each other.
  • the target object recognition module 410 is configured to identify the target object in the original image and obtain the target object image in the following manner: identify the target object in the original image to obtain a mask image of the target object; fuse the mask image of the target object with the original image to obtain the target object image.
  • the target object recognition module 410 is configured to identify the target object in the original image and obtain the dividing line of the target object in the following manner: determine two set key points of the target object in the original image; determine the line connecting the two set key points as the dividing line of the target object.
  • the target object recognition module 410 is configured to determine the two set key points of the target object in the original image in the following manner: determine the eyebrow-center key point and the chin key point on the portrait face; the target object recognition module 410 is configured to determine the line connecting the two set key points as the dividing line of the target object in the following manner: determine the line connecting the eyebrow-center key point and the chin key point as the dividing line of the portrait.
  • the target object identification module 410 is configured to identify the target object in the original image and obtain the dividing line of the target object in the following manner: determine the detection frame of the target object in the original image; determine the target according to the central axis of the detection frame. The dividing line of the object.
  • the pixel value replacement module 420 is configured to: traverse the pixels of the target object image, substitute the coordinate information of each traversed pixel into the expression of the dividing line, and obtain a result value for each traversed pixel's coordinate information; if the result value of a traversed pixel's coordinate information is greater than the set value, replace that pixel's value with the set pixel value; or, if the result value of a traversed pixel's coordinate information is less than the set value, replace that pixel's value with the set pixel value.
  • the pixel value replacement module 420 is configured to: select a pixel value from the background area of the original image as the set pixel value; and replace the pixel value of the first side image with the set pixel value.
  • the device further includes: a blur processing module configured to: blur the first side image after replacing the pixel values.
  • the mirror processing module 430 is configured to: for the pixel points of the second side image, determine the pixel points of the first side image in which the pixel points of the second side image are symmetrical with respect to the dividing line; The pixel value of the point is determined as the target pixel value of the pixel point of the first side image; the first side image is rendered based on the target pixel value to obtain the first side mirror image.
  • the device also includes: a transformation module, configured to: obtain the screen ratio; transform the dividing line based on the screen ratio; optionally, the mirror processing module 430 is configured to: mirror the second side image based on the transformed dividing line.
  • the image processing device provided by the embodiments of the present disclosure can execute the image processing method provided by any embodiment of the present disclosure, and has functional modules and effects corresponding to the execution method.
  • the multiple units and modules included in the above device are divided only according to functional logic but are not limited to this division, as long as the corresponding functions can be achieved; in addition, the names of the multiple functional units are only for ease of distinguishing them from each other and are not used to limit the protection scope of the embodiments of the present disclosure.
  • FIG. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • Terminal devices in embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDA), tablet computers (Portable Android Device, PAD), portable multimedia players (PMP) and vehicle-mounted terminals (such as vehicle-mounted navigation terminals), as well as fixed terminals such as digital televisions (TV) and desktop computers.
  • the electronic device shown in FIG. 5 is only an example and should not impose any limitations on the functions and scope of use of the embodiments of the present disclosure.
  • the electronic device 500 may include a processing device (such as a central processing unit, a graphics processor, etc.) 501, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage device 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data required for the operation of the electronic device 500.
  • Processing device 501, ROM 502 and RAM 503 are connected to each other via bus 504.
  • An input/output (I/O) interface 505 is also connected to bus 504.
  • input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a liquid crystal display (LCD), speaker, vibrator, etc.; storage devices 508 including, for example, a magnetic tape, hard disk, etc.; and a communication device 509 can be connected to the I/O interface 505.
  • Communication device 509 may allow electronic device 500 to communicate wirelessly or wiredly with other devices to exchange data.
  • Although FIG. 5 illustrates the electronic device 500 with various means, it should be understood that implementing or providing all of the illustrated means is not required; more or fewer means may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product including a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from the network via communication device 509, or from storage device 508, or from ROM 502.
  • when the computer program is executed by the processing device 501, the above-mentioned functions defined in the method of the embodiments of the present disclosure are performed.
  • Embodiments of the present disclosure provide a computer storage medium on which a computer program is stored.
  • the program is executed by a processor, the image processing method provided by the above embodiments is implemented.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • the computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard drive, RAM, ROM, erasable programmable read-only memory (EPROM) or flash memory, optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries computer-readable program code. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code contained on a computer-readable medium can be transmitted using any appropriate medium, including but not limited to: wires, optical cables, radio frequency (Radio Frequency, RF), etc., or any suitable combination of the above.
  • the client and server can communicate using any currently known or future developed network protocol, such as the HyperText Transfer Protocol (HTTP), and can be interconnected with digital data communications in any form or medium (e.g., a communications network).
  • Examples of communication networks include local area networks (LANs), wide area networks (WANs), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; it may also exist independently without being assembled into the electronic device.
  • the computer-readable medium carries one or more programs; when the one or more programs are executed by the electronic device, the electronic device: identifies the target object in the original image and obtains the target object image and the dividing line of the target object, wherein the target object image is divided by the dividing line, the image on either side of the dividing line is taken as the first side image, and the image on the other side of the dividing line is taken as the second side image; replaces the pixel value of the first side image with a set pixel value; and performs mirror processing on the second side image based on the dividing line to obtain a first side mirror image, wherein the first side mirror image and the second side image are mirror images of each other.
  • computer program code for performing the operations of the present disclosure may be written in one or more programming languages, including but not limited to object-oriented programming languages, such as Java, Smalltalk, and C++, and conventional procedural programming languages, such as "C" or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a LAN or WAN, or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
  • each box in the flowchart or block diagram may represent a module, program segment, or portion of code, which contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown one after another may actually execute substantially in parallel, or they may sometimes execute in the reverse order, depending on the functionality involved.
  • each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by special purpose hardware-based systems that perform the specified functions or operations, or by a combination of special purpose hardware and computer instructions.
  • the units/modules described in the embodiments of the present disclosure may be implemented in software or hardware.
  • the name of the unit/module does not constitute a limitation on the unit itself under certain circumstances.
  • the target object recognition module can also be described as "a module that identifies the target object in the original image and obtains the target object image and the dividing line of the target object".
  • exemplary types of hardware logic components include: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), etc.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing. More specific examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, RAM, ROM, EPROM or flash memory, optical fiber, a portable CD-ROM, an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • an image processing method, including: identifying a target object in an original image, and obtaining an image of the target object and a dividing line of the target object, wherein the target object image is divided by the dividing line, the image on either side of the dividing line is taken as the first side image, and the image on the other side of the dividing line is taken as the second side image; replacing the pixel value of the first side image with a set pixel value; and performing mirror processing on the second side image based on the dividing line to obtain a first side mirror image, wherein the first side mirror image and the second side image are mirror images of each other.
  • identifying the target object in the original image and obtaining the target object image includes: identifying the target object in the original image to obtain a mask image of the target object; and fusing the mask image with the original image to obtain the target object image.
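The mask-fusion step described above can be sketched in a few lines; the following minimal NumPy illustration (array shapes, a binary single-channel mask, and multiplicative fusion are assumptions for illustration, not the exact implementation of this disclosure) keeps target pixels and zeroes out the background:

```python
import numpy as np

def fuse_mask(original, mask):
    """Fuse a single-channel binary target mask with the original image:
    pixels where the mask is 1 keep their value, background pixels become 0,
    yielding the target object image."""
    return original * mask[..., None]  # broadcast the mask over the channels

# toy 2x2 RGB image; the mask marks the left column as the target object
image = np.arange(12, dtype=np.float32).reshape(2, 2, 3)
mask = np.array([[1, 0], [1, 0]], dtype=np.float32)
target = fuse_mask(image, mask)
```

In practice the mask would come from a segmentation model and may be soft (values in [0, 1]), in which case the same multiplication blends the target smoothly into a transparent background.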
  • identifying the target object in the original image and obtaining the dividing line of the target object includes: determining two set key points of the target object in the original image; and determining the line connecting the two set key points as the dividing line of the target object.
  • determining two set key points of the target object in the original image includes: determining the key point between the eyebrows and the chin key point on the face of the portrait in the original image; determining the line connecting the two set key points as the dividing line of the target object includes: determining the line connecting the eyebrow key point and the chin key point as the dividing line of the portrait.
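The line through the two set key points can be written in implicit form a·x + b·y + c = 0, which is also the "expression of the dividing line" used later for the per-pixel sign test. A hedged sketch (the pixel coordinate convention and the example key-point positions are assumptions):

```python
def line_through(p, q):
    """Return coefficients (a, b, c) of the line a*x + b*y + c = 0
    passing through the two key points p and q, e.g. the eyebrow key
    point and the chin key point of a detected face."""
    (x1, y1), (x2, y2) = p, q
    a = y2 - y1
    b = x1 - x2
    c = x2 * y1 - x1 * y2
    return a, b, c

# vertical portrait dividing line through hypothetical eyebrow (100, 40)
# and chin (100, 200) key points
a, b, c = line_through((100, 40), (100, 200))
```

Any point (x, y) on the line makes the expression evaluate to exactly 0, which is what makes the same coefficients reusable for both the side test and the mirror reflection.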
  • identifying the target object in the original image and obtaining the dividing line of the target object includes: determining the detection frame of the target object in the original image; and determining the dividing line of the target object according to the central axis of the detection frame.
  • replacing the pixel value of the first side image with the set pixel value includes: traversing the pixel points of the target object image, and inputting the coordinate information of each traversed pixel point into the expression of the dividing line to obtain a result value; if the result value is greater than a set value, replacing the pixel value of the traversed pixel point with the set pixel value; or, if the result value is less than the set value, replacing the pixel value of the traversed pixel point with the set pixel value.
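The traversal described above amounts to evaluating the dividing-line expression at every pixel and testing its sign. A vectorized NumPy sketch (a grayscale image, a set value of 0, and a set pixel value of 0 are assumptions for illustration):

```python
import numpy as np

def replace_first_side(image, line, set_pixel, positive_side=True):
    """Evaluate the dividing-line expression a*x + b*y + c at every pixel;
    pixels whose result value is greater than 0 (or less than 0, depending
    on which side is the first side) are replaced with the set pixel value."""
    a, b, c = line
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]          # coordinate grids for every pixel
    result = a * xs + b * ys + c         # result value of the line expression
    first_side = result > 0 if positive_side else result < 0
    out = image.copy()
    out[first_side] = set_pixel
    return out

# vertical dividing line x = 1.5 on a 2x4 grayscale image;
# the right half (x > 1.5) is the first side and is replaced by 0
img = np.full((2, 4), 9, dtype=np.uint8)
out = replace_first_side(img, (1.0, 0.0, -1.5), 0)
```

Using the sign of the same expression for both branches ("greater than" or "less than" the set value) is what lets either side of the line serve as the first side image.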
  • replacing the pixel value of the first side image with a set pixel value includes: selecting a pixel value from the background area of the original image as the set pixel value; and replacing the pixel value of the first side image with the set pixel value.
  • the method further includes: blurring the first side image after replacing the pixel values.
  • performing mirror processing on the second side image based on the dividing line to obtain the first side mirror image includes: for each pixel point of the second side image, determining the symmetrical pixel point in the first side image; determining the pixel value of the pixel point of the second side image as the target pixel value of the symmetrical pixel point of the first side image; and rendering the first side image based on the target pixel values to obtain the first side mirror image.
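The symmetrical pixel point of a second-side pixel is its reflection across the dividing line. A small sketch of the reflection formula in continuous coordinates (rounding to the pixel grid and resampling are omitted; the coordinate convention is an assumption):

```python
def reflect_across_line(point, line):
    """Reflect a point (x, y) across the dividing line a*x + b*y + c = 0.
    The reflected position is the symmetrical pixel used when copying the
    second side image onto the first side to build the mirror image."""
    a, b, c = line
    x, y = point
    # signed distance of the point from the line, scaled by |n|^2
    d = (a * x + b * y + c) / (a * a + b * b)
    return (x - 2 * a * d, y - 2 * b * d)

# mirror across the vertical line x = 2 (expression: 1*x + 0*y - 2 = 0)
mirrored = reflect_across_line((5.0, 1.0), (1.0, 0.0, -2.0))
```

Points on the line map to themselves, so the two sides of the rendered result join seamlessly along the dividing line.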
  • before performing mirror processing on the second side image based on the dividing line, the method further includes: obtaining a screen ratio; and transforming the dividing line based on the screen ratio; performing mirror processing on the second side image includes: performing mirror processing on the second side image based on the transformed dividing line.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

Disclosed in the embodiments of the present disclosure are an image processing method and apparatus, a device, and a storage medium. The method comprises: identifying a target object in an original image to obtain an image of the target object and a dividing line of the target object, wherein the image of the target object is divided by the dividing line, the image on either side of the dividing line serves as a first side image, and the image on the other side of the dividing line serves as a second side image; replacing a pixel value of the first side image with a set pixel value; and performing mirror processing on the second side image on the basis of the dividing line to obtain a first side mirror image, the first side mirror image and the second side image being mirror images of each other.
PCT/CN2023/113234 2022-08-17 2023-08-16 Appareil et procédé de traitement d'image, dispositif et support de stockage WO2024037556A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210988925.6 2022-08-17
CN202210988925.6A CN115358919A (zh) 2022-08-17 2022-08-17 图像处理方法、装置、设备及存储介质

Publications (1)

Publication Number Publication Date
WO2024037556A1 true WO2024037556A1 (fr) 2024-02-22

Family

ID=84001852

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/113234 WO2024037556A1 (fr) 2022-08-17 2023-08-16 Appareil et procédé de traitement d'image, dispositif et support de stockage

Country Status (2)

Country Link
CN (1) CN115358919A (fr)
WO (1) WO2024037556A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115358919A (zh) * 2022-08-17 2022-11-18 北京字跳网络技术有限公司 图像处理方法、装置、设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105959564A (zh) * 2016-06-15 2016-09-21 维沃移动通信有限公司 一种拍照方法及移动终端
CN111145189A (zh) * 2019-12-26 2020-05-12 成都市喜爱科技有限公司 图像处理方法、装置、电子设备和计算机可读存储介质
CN111754528A (zh) * 2020-06-24 2020-10-09 Oppo广东移动通信有限公司 人像分割方法、装置、电子设备和计算机可读存储介质
EP3982288A1 (fr) * 2020-10-09 2022-04-13 Fresenius Medical Care Deutschland GmbH Procédé d'identification d'un objet, programme informatique permettant de mettre en œuvre le procédé et système
CN115358919A (zh) * 2022-08-17 2022-11-18 北京字跳网络技术有限公司 图像处理方法、装置、设备及存储介质


Also Published As

Publication number Publication date
CN115358919A (zh) 2022-11-18

Similar Documents

Publication Publication Date Title
WO2021139408A1 (fr) Procédé et appareil pour afficher un effet spécial, et support d'enregistrement et dispositif électronique
WO2022166872A1 (fr) Procédé et appareil d'affichage à effet spécial, ainsi que dispositif et support
CN110070496B (zh) 图像特效的生成方法、装置和硬件装置
US20230005194A1 (en) Image processing method and apparatus, readable medium and electronic device
EP4258165A1 (fr) Procédé et appareil d'affichage de code bidimensionnel, dispositif, et support
US20240119082A1 (en) Method, apparatus, device, readable storage medium and product for media content processing
WO2024037556A1 (fr) Appareil et procédé de traitement d'image, dispositif et support de stockage
WO2023232056A1 (fr) Procédé et appareil de traitement d'image, support de stockage et dispositif électronique
US11893770B2 (en) Method for converting a picture into a video, device, and storage medium
WO2024104248A1 (fr) Procédé et appareil de rendu pour panorama virtuel, dispositif, et support de stockage
WO2024016930A1 (fr) Procédé et appareil de traitement d'effets spéciaux, dispositif électronique et support de stockage
WO2023193642A1 (fr) Procédé et appareil de traitement vidéo, dispositif, et support de stockage
US20220139016A1 (en) Sticker generating method and apparatus, and medium and electronic device
WO2024041637A1 (fr) Procédé et appareil de génération d'image d'effet spécial, dispositif et support de stockage
WO2024051639A1 (fr) Procédé, appareil et dispositif de traitement d'image, support de stockage et produit
WO2023231918A1 (fr) Procédé et appareil de traitement d'image, dispositif électronique et support de stockage
WO2024032752A1 (fr) Procédé et appareil pour générer une image d'effet spécial de transition, dispositif, et support de stockage
WO2024061064A1 (fr) Procédé et appareil de traitement d'effets d'affichage, dispositif électronique et support de stockage
WO2024051540A1 (fr) Procédé et appareil de traitement d'effets spéciaux, dispositif électronique et support de stockage
WO2024051541A1 (fr) Procédé et appareil de génération d'images à effet spécial, dispositif électronique et support d'enregistrement
WO2024016923A1 (fr) Procédé et appareil de génération de graphe à effets spéciaux, dispositif et support de stockage
WO2023231926A1 (fr) Procédé et appareil de traitement d'image, dispositif, et support de stockage
WO2023193639A1 (fr) Procédé et appareil de rendu d'image, support lisible et dispositif électronique
WO2023169287A1 (fr) Procédé et appareil de génération d'effet spécial de maquillage de beauté, dispositif, support d'enregistrement et produit de programme
WO2023098576A1 (fr) Procédé et appareil de traitement d'image, dispositif et support

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23854456

Country of ref document: EP

Kind code of ref document: A1