CN111192276B - Image processing method, device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111192276B
Authority
CN
China
Prior art keywords
target
detection line
detection
detected
point
Prior art date
Legal status
Active
Application number
CN201911306168.4A
Other languages
Chinese (zh)
Other versions
CN111192276A (en)
Inventor
李怡枝
Current Assignee
Ping An Property and Casualty Insurance Company of China Ltd
Original Assignee
Ping An Property and Casualty Insurance Company of China Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Property and Casualty Insurance Company of China Ltd
Priority to CN201911306168.4A
Publication of CN111192276A
Application granted
Publication of CN111192276B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/12: Edge-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/13: Edge detection

Abstract

An image processing method, an image processing device, an electronic device and a storage medium based on image recognition. The method comprises the steps of: constructing an image processing interface and importing an original image; acquiring outer contour identification points in the original image; constructing an outer contour according to the outer contour identification points; constructing a plurality of detection lines in the original image according to the outer contour; identifying inflection point detection lines and normal detection lines among the detection lines; converting the inflection point detection lines into normal detection lines; identifying background pixels and target pixels on the normal detection lines; extracting a target image from the original image according to the target pixels; and synthesizing the target image with a preset background. In this image processing method, the outer contour identification points in the image are intelligently identified by AI at the image processing interface, and fine matting is performed automatically on the original image, thereby reducing the difficulty of the matting operation.

Description

Image processing method, device, electronic equipment and storage medium
Technical Field
The invention relates to an image processing method, an image processing device, an electronic device and a storage medium.
Background
Most professional image processing applications on the market, such as Photoshop, can perform very fine matting, beautification and synthesis operations on images, but they demand strong professional skills, are not suitable for the general public, and depend on the user's level of image-editing technique. Other image processing applications, such as image-beautification apps, can perform simple cropping and beautification operations to synthesize a new image, but cannot perform fine matting, background synthesis and similar operations. Most such image processing software relies on manual visual recognition and manual processing of pictures; the human time consumed in picture-processing software is considerable, and the degree of intelligence is far from sufficient.
Disclosure of Invention
In view of this, it is necessary to provide an image processing method that improves the degree of intelligence in image processing.
An image processing method, comprising:
when a construction instruction is detected, constructing an image processing interface and importing an original image;
when an acquisition instruction is detected, acquiring an outline identification point in the original image;
when a contour construction instruction is detected, constructing an outer contour according to the outer contour identification points;
when a detection line construction instruction is detected, constructing a plurality of detection lines in the original image according to the outer contour;
when a first identification instruction is detected, identifying an inflection point detection line and a normal detection line in the detection lines;
when a conversion instruction is detected, converting the inflection point detection line into a normal detection line;
when a second identification instruction is detected, identifying a background pixel and a target pixel on the normal detection line;
when an extraction instruction is detected, extracting a target image in the original image according to the target pixel;
and when the synthesis instruction is detected, synthesizing the target image with a preset background.
Preferably, the step of constructing an outer contour according to the outer contour identification points includes:
establishing a pixel coordinate system according to the original image, taking the point at the upper left corner of the original image as the origin;
assigning a first function to each outer contour identification point to obtain a first parameter;
identifying a maximum value and a minimum value in a plurality of first parameters as a contour start point and a contour end point of the target image respectively;
sorting the pixel coordinates of the outer contour identification points clockwise, taking the contour start point and the contour end point as endpoints;
and connecting lines between the two adjacent outline identification points after sequencing to form the outer outline.
Preferably, the original image is composed of a plurality of pixel points; the first function is (point.y * imgW) + (point.x + 1); wherein point.y represents the coordinate of a pixel point along the Y axis in the pixel coordinate system, point.x represents the coordinate of the pixel point along the X axis in the pixel coordinate system, and imgW represents the width of the original image.
Preferably, the step of identifying an inflection point detection line and a normal detection line among the detection lines includes:
acquiring one detection line as a target detection line;
calculating the intersection point coordinates of the target detection line and the outer contour as detection intersection points;
acquiring the number of detection intersection points corresponding to the target detection line as a target parameter, and setting a detection line adjacent to the target detection line as a comparison detection line;
identifying the number of intersection points of the comparison detection line and the outer contour as a comparison parameter;
judging whether the target parameter is consistent with the comparison parameter;
when the target parameter is inconsistent with the comparison parameter, marking the target detection line as an inflection point detection line;
and when the target parameter is consistent with the comparison parameter, marking the target detection line as a normal detection line.
Preferably, the step of converting the inflection point detection line into a normal detection line includes:
extracting the detection intersection point corresponding to the inflection point detection line;
taking a detection intersection point whose difference from at least two comparison intersection points along the X axis is within a preset difference range as an inflection point;
and adding, to the inflection point detection line, a detection intersection point whose coordinates are consistent with those of the inflection point.
Preferably, before the step of identifying the background pixel and the target pixel on the normal detection line, the method further includes:
sorting all the detection intersection points by X coordinate;
sequentially connecting the detection intersection points to form a plurality of line segments;
alternately setting the plurality of line segments as background line segments and target line segments according to the arrangement order;
and setting pixels on the background line segments as the background pixels and pixels on the target line segments as the target pixels.
Preferably, the step of extracting the target image from the original image according to the target pixel includes:
converting the original image into texture data according to a specified interface program;
adjusting the transparency of the texture data corresponding to the background pixels to a preset value;
and inputting the adjusted texture data into a target program to form the target image.
In order to achieve the above object, the present invention also proposes an image processing apparatus. The image processing apparatus includes:
the interface construction module is used for constructing an image processing interface and importing an original image when a construction instruction is detected;
the acquisition module is used for acquiring outline identification points in the original image when an acquisition instruction is detected;
the outline construction module is used for constructing an outline according to the outline identification points when an outline construction instruction is detected;
the detection line building module is used for building a plurality of detection lines in the original image according to the outer contour when a detection line construction instruction is detected;
the first identification module is used for identifying an inflection point detection line and a normal detection line in the detection lines when a first identification instruction is detected;
the conversion module is used for converting the inflection point detection line into the normal detection line when a conversion instruction is detected;
the second identification module is used for identifying background pixels and target pixels on the normal detection line when a second identification instruction is detected;
the extraction module is used for extracting a target image in the original image according to the target pixel when an extraction instruction is detected;
and the synthesis module is used for synthesizing the target image with a preset background when a synthesis instruction is detected.
In order to achieve the above object, the present invention also proposes an electronic device comprising a processor and a memory, the processor executing the following steps when executing a computer program stored in the memory:
when a construction instruction is detected, constructing an image processing interface and importing an original image;
when an acquisition instruction is detected, acquiring an outline identification point in the original image;
when a contour construction instruction is detected, constructing an outer contour according to the outer contour identification points;
when a detection line construction instruction is detected, constructing a plurality of detection lines in the original image according to the outer contour;
when a first identification instruction is detected, identifying an inflection point detection line and a normal detection line in the detection lines;
when a conversion instruction is detected, converting the inflection point detection line into a normal detection line;
when a second identification instruction is detected, identifying a background pixel and a target pixel on the normal detection line;
when an extraction instruction is detected, extracting a target image in the original image according to the target pixel;
and when the synthesis instruction is detected, synthesizing the target image with a preset background.
In addition, in order to achieve the above object, the present invention also proposes a storage medium, which is a computer-readable storage medium, storing at least one instruction, which when executed by a processor, implements the steps of:
when a construction instruction is detected, constructing an image processing interface and importing an original image;
when an acquisition instruction is detected, acquiring an outline identification point in the original image;
when a contour construction instruction is detected, constructing an outer contour according to the outer contour identification points;
when a detection line construction instruction is detected, constructing a plurality of detection lines in the original image according to the outer contour;
when a first identification instruction is detected, identifying an inflection point detection line and a normal detection line in the detection lines;
when a conversion instruction is detected, converting the inflection point detection line into a normal detection line;
when a second identification instruction is detected, identifying a background pixel and a target pixel on the normal detection line;
when an extraction instruction is detected, extracting a target image in the original image according to the target pixel;
and when the synthesis instruction is detected, synthesizing the target image with a preset background.
According to the image processing method, the image processing device, the electronic equipment and the storage medium, the outline identification points in the image are intelligently identified through the AI in the image processing interface, fine matting is automatically carried out on the original image, and the difficulty of matting operation is reduced.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of an image processing method according to the present invention.
Fig. 2 is a schematic diagram of a refinement flow of step S12 in fig. 1.
Fig. 3 is a schematic diagram of a refinement flow of step S13 in fig. 1.
Fig. 4 is a schematic diagram of a refinement flow of step S14 in fig. 1.
Fig. 5 is a schematic diagram of a refinement flow of step S15 in fig. 1.
Fig. 6 is a schematic diagram of a refinement flow of step S16 in fig. 1.
Fig. 7 is a schematic diagram of a refinement flow of step S17 in fig. 1.
Fig. 8 is a functional block diagram of the image processing apparatus of the present invention.
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the invention.
Description of the main reference signs
Image processing apparatus 1
Interface construction Module 10
Acquisition module 20
Contour construction Module 30
Detection line building module 40
First identification module 50
Conversion module 60
Second identification module 70
Extraction module 80
Synthesis module 90
Memory 102
Communication bus 104
Processor 106
The invention will be further described in the following detailed description in conjunction with the above-described figures.
Detailed Description
In order that those skilled in the art will better understand the present invention, a technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in which it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
The terms "first", "second", "third" and the like in the description, in the claims of the invention and in the above-described figures are used for distinguishing between different objects, not for describing a particular sequential order. Furthermore, the term "include" and any variations thereof are intended to cover a non-exclusive inclusion. For example, a process, method, system, article or apparatus that comprises a list of steps or modules is not limited to those steps or modules, but may include other steps or modules not expressly listed or inherent to such a process, method, system, article or apparatus.
A specific embodiment of the image processing method of the present invention will be described below with reference to the accompanying drawings.
In at least one embodiment of the present invention, the image processing method is applied to an image processing system formed by at least one electronic device and a server. The image processing system provides a visual interface. The visual interface provides a human-machine interaction interface for a user, and the user can connect to the image processing system through an electronic device such as a mobile phone or a computer. Data is transmitted between the electronic device and the server according to a preset protocol. Preferably, the preset protocol includes, but is not limited to, any one of the following: the HTTP protocol (Hypertext Transfer Protocol), the HTTPS protocol (Hypertext Transfer Protocol over Secure Socket Layer, an HTTP protocol with security as its goal), and the like. In at least one embodiment of the present invention, the server may be a single server, or may be a server group formed jointly by several functional servers. The electronic device may be any terminal with a network connection function; for example, the electronic device may be a mobile device such as a personal computer, a tablet computer, a smartphone, a personal digital assistant (PDA), a game console, an interactive internet protocol television (IPTV), a smart wearable device or a navigation apparatus, or a stationary device such as a desktop computer or a digital TV. The electronic device has a memory (as shown in fig. 9). The memory may be used to store image data. The image processing method is used for matting the target image out of the original image at the image processing interface and synthesizing the target image with a specified background.
Please refer to fig. 1, which is a schematic diagram illustrating an image processing method according to the present invention.
S10, when a construction instruction is detected, constructing an image processing interface and importing an original image.
In at least one embodiment of the present invention, the image processing interface is an H5 page. In other embodiments, the image processing interface may be other types of web interfaces.
S11, when an acquisition instruction is detected, acquiring outer contour identification points of a target image in the original image.
In at least one embodiment of the present invention, the original image is composed of a plurality of pixel points. The outer contour identification points are identified using an artificial intelligence recognition technique. The artificial intelligence recognition technique establishes a target image model using an artificial neural network, trains the target image model with a sample set, determines whether a target image exists when the original image is input into the model, and acquires the outer contour identification points of the target image when it does. The outer contour identification points are inflection points and can be set according to the user's requirements.
S12, when a contour construction instruction is detected, constructing the outer contour of the target image according to the outer contour identification points.
Referring to fig. 2 together, in at least one embodiment of the present invention, the step of constructing the outer contour of the target image according to the outer contour identification points may further include:
s121, establishing a pixel coordinate system according to the original image, and taking a point at the upper left corner of the original image as an origin;
s122, assigning a first function to each outer contour identification point to obtain a first parameter;
s123, identifying the maximum value and the minimum value in the first parameters as a contour starting point and a contour ending point of the target image respectively;
s124, sorting the pixel coordinates of the outer contour identification points clockwise, taking the contour start point and the contour end point as endpoints;
and S125, connecting lines between the two adjacent outline identification points after sequencing to form the outer outline.
In at least one embodiment of the present invention, the first function is (point.y * imgW) + (point.x + 1). Wherein point.y represents the coordinate of a pixel point along the Y axis in the pixel coordinate system, point.x represents the coordinate of the pixel point along the X axis in the pixel coordinate system, and imgW represents the width of the original image.
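As a sketch, the first function can be read as assigning each pixel a 1-based row-major index over the image, so that the minimum and maximum values among the identification points pick out two extreme contour points. The helper name `firstParameter` below is illustrative, not part of the original disclosure:

```javascript
// First function from the description: (point.y * imgW) + (point.x + 1).
// Maps a pixel coordinate to a 1-based row-major index within the image.
function firstParameter(point, imgW) {
  return point.y * imgW + (point.x + 1);
}

const imgW = 100; // image width in pixels (example value)
console.log(firstParameter({ x: 0, y: 0 }, imgW));  // 1   (top-left origin)
console.log(firstParameter({ x: 99, y: 0 }, imgW)); // 100 (end of first row)
console.log(firstParameter({ x: 0, y: 1 }, imgW));  // 101 (start of second row)
```

With this indexing, the point with the smallest first parameter is the topmost-leftmost identification point (the contour start point) and the largest is the bottommost-rightmost one (the contour end point).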
S13, when the detection line is detected to be established, a plurality of detection lines are established in the original image according to the outline.
Referring to fig. 3 together, in at least one embodiment of the present invention, the step of constructing a plurality of detection lines in the original image according to the outer contour may further include:
s131, dividing pixel points with the same Y coordinates on the outer contour into a detection group;
s132, constructing a plurality of detection lines according to the Y coordinates corresponding to each detection group.
In at least one embodiment of the present invention, the detection line is a straight line parallel to the X-axis.
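Steps S131 and S132 can be sketched as follows: contour pixel points are assumed to be plain {x, y} objects, and each distinct Y coordinate among them yields one horizontal detection line. The helper name `buildDetectionLines` is an assumption for illustration:

```javascript
// Group contour pixel points by Y coordinate; each distinct Y corresponds to one
// detection line (a horizontal scanline parallel to the X axis at that Y).
function buildDetectionLines(contourPoints) {
  const ys = new Set(contourPoints.map(p => p.y));
  return [...ys].sort((a, b) => a - b); // each entry is the Y of one detection line
}

// A small contour spanning three rows of the image:
const contour = [{ x: 1, y: 0 }, { x: 5, y: 0 }, { x: 0, y: 1 }, { x: 6, y: 1 }, { x: 2, y: 2 }];
console.log(buildDetectionLines(contour)); // [0, 1, 2]
```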
And S14, identifying an inflection point detection line and a normal detection line in the detection lines when the first identification instruction is detected.
Referring to fig. 4, in at least one embodiment of the present invention, the step of identifying an inflection point detect line and a normal detect line of the detect lines may further include:
s141, acquiring one detection line as a target detection line;
s142, calculating the intersection point coordinates of the target detection line and the outer contour as detection intersection points;
s143, acquiring the number of the detection intersection points corresponding to the target detection line as a target parameter;
S144, setting a detection line adjacent to the target detection line as a comparison detection line;
s145, identifying the number of intersection points of the comparison detection line and the outer contour as a comparison parameter;
s146, judging whether the target parameter is consistent with the comparison parameter;
s147, when the target parameter is inconsistent with the comparison parameter, marking the target detection line as an inflection point detection line;
and S148, when the target parameter is consistent with the comparison parameter, marking the target detection line as a normal detection line.
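One possible reading of S141 to S148, comparing each detection line's intersection count against that of the preceding adjacent line, is sketched below. The function name and the choice of the preceding line as the comparison line are assumptions made for illustration:

```javascript
// counts[i] = number of intersections of detection line i with the outer contour.
// A line whose count differs from the adjacent comparison line's count is marked
// as an inflection point detection line; otherwise it is a normal detection line.
function classifyDetectionLines(counts) {
  return counts.map((c, i) => {
    if (i === 0) return 'normal'; // the first line has no preceding comparison line
    return c !== counts[i - 1] ? 'inflection' : 'normal';
  });
}

// A contour that widens from two crossings to four in its middle rows:
console.log(classifyDetectionLines([2, 2, 4, 4, 2]));
// ['normal', 'normal', 'inflection', 'normal', 'inflection']
```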
S15, converting the inflection point detection line into the normal detection line when a conversion instruction is detected.
Referring to fig. 5 together, in at least one embodiment of the present invention, the step of converting the inflection point detect line to the normal detect line may further include:
s151, extracting the detection intersection point corresponding to the inflection point detection line;
s152, taking a detection intersection point whose difference from at least two comparison intersection points along the X axis is within a preset difference range as an inflection point;
and S153, adding a detection intersection point consistent with the inflection point coordinates into the inflection point detection line.
In at least one embodiment of the present invention, the preset difference range is 1 pixel. In other embodiments, the preset difference range may be 2 pixels. The comparison intersection point is the pixel point on the comparison detection line that is closest to the detection intersection point.
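The conversion of S151 to S153 can be sketched as follows: an intersection on an inflection point detection line whose X coordinate is within the preset difference range of at least two intersections on the comparison line is an inflection point, and a duplicate intersection with the same coordinates is appended so the line's intersection count matches its neighbors. The helper name `convertInflectionLine` is illustrative:

```javascript
// Detect inflection points on an inflection-point detection line and duplicate them,
// so that the line's intersection count becomes consistent with its neighbors.
function convertInflectionLine(lineIntersections, comparisonIntersections, tolerance = 1) {
  const result = [...lineIntersections];
  for (const p of lineIntersections) {
    // intersections on the comparison line whose X is within the tolerance of p
    const close = comparisonIntersections.filter(q => Math.abs(q.x - p.x) <= tolerance);
    if (close.length >= 2) result.push({ ...p }); // add a coincident intersection point
  }
  return result;
}

// A triangle apex: the inflection line crosses the contour once (x = 5), while the
// comparison line just below it crosses twice (x = 4 and x = 6).
const converted = convertInflectionLine([{ x: 5, y: 0 }], [{ x: 4, y: 1 }, { x: 6, y: 1 }]);
console.log(converted.length); // 2
```

After conversion the line carries an even number of intersections, so the alternating segment construction in the next step works uniformly across all detection lines.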
S16, when a second identification instruction is detected, identifying the background pixel and the target pixel on the normal detection line.
Referring to fig. 6, in at least one embodiment of the present invention, the step of identifying the background pixel and the target pixel on the normal detection line may further include:
s161, sorting all the detection intersection points according to X coordinates;
s162, connecting the detection intersection points sequentially to form a plurality of line segments;
s163, alternately setting a plurality of line segments into a background line segment and a target line segment according to the arrangement sequence;
s164, setting the pixels on the background line segment as the background pixels and the pixels on the target line segment as the target pixels.
In at least one embodiment of the invention, the line segment is a line segment parallel to the X axis.
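Steps S161 to S164 amount to the classical even-odd scanline fill rule: after sorting the intersections by X, segments alternate between background and target, with the span before the first intersection being background. The sketch below classifies every pixel on one detection line; the helper and parameter names are assumptions:

```javascript
// Classify each pixel on a scanline as background or target using the even-odd rule:
// a pixel is inside the contour (target) when an odd number of intersections lie
// at or to its left.
function classifyScanlinePixels(intersectionsX, width) {
  const xs = [...intersectionsX].sort((a, b) => a - b);
  const labels = [];
  for (let x = 0; x < width; x++) {
    const crossings = xs.filter(ix => ix <= x).length; // intersections at or left of x
    labels.push(crossings % 2 === 1 ? 'target' : 'background');
  }
  return labels;
}

// Contour crossed at x = 2 and x = 5 on an 8-pixel-wide scanline:
console.log(classifyScanlinePixels([2, 5], 8));
// ['background','background','target','target','target','background','background','background']
```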
S17, when an extraction instruction is detected, extracting the target image in the original image according to the target pixel.
In at least one embodiment of the invention, each of the pixels is composed of at least three sub-pixel units. The three sub-pixel units may be a red sub-pixel, a green sub-pixel and a blue sub-pixel. The texture data is a two-dimensional array comprising 8-bit unsigned integer values. Each array entry comprises a first parameter R, a second parameter G, a third parameter B and a fourth parameter A. The first parameter R represents the color value of the red sub-pixel. The second parameter G represents the color value of the green sub-pixel. The third parameter B represents the color value of the blue sub-pixel. The fourth parameter A represents transparency.
Referring to fig. 7, in at least one embodiment of the present invention, the step of acquiring the target image in the original image according to the target pixel may further include:
s171, converting the original image into texture data according to a specified interface program;
s172, adjusting the transparency of the texture data corresponding to the background pixels to a preset value;
s173, inputting the adjusted texture data into a target program to form the target image.
In at least one embodiment of the present invention, the target program is a Canvas written in the JavaScript language. The specified interface program is CanvasRenderingContext2D. The preset value is 0.
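A minimal sketch of S171 to S173, assuming the texture data has already been read out of a canvas (in a browser this would be `ctx.getImageData(0, 0, w, h).data`, a Uint8ClampedArray of R, G, B, A values). The helper name `matteBackground` and the pixel-index callback are illustrative:

```javascript
// Set the alpha channel of every background pixel to 0 in a flat RGBA texture array.
// isBackgroundPixel(pixelIndex) is assumed to come from the scanline classification.
function matteBackground(rgba, isBackgroundPixel) {
  const out = Uint8ClampedArray.from(rgba);
  for (let i = 0; i < out.length; i += 4) {
    if (isBackgroundPixel(i / 4)) out[i + 3] = 0; // alpha of background pixels -> 0
  }
  // In a browser the result would be written back with
  // ctx.putImageData(new ImageData(out, width), 0, 0).
  return out;
}

// Two pixels: index 0 is background, index 1 is target.
const data = Uint8ClampedArray.of(10, 20, 30, 255, 40, 50, 60, 255);
const matted = matteBackground(data, idx => idx === 0);
console.log(matted[3], matted[7]); // 0 255
```

Making the background fully transparent leaves only the target image visible, so drawing the adjusted texture over a preset background performs the synthesis of S18.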
S18, when a synthesis instruction is detected, synthesizing the target image with a preset background.
In at least one embodiment of the present invention, all of the instructions described above may be data request instructions received by the electronic device. The electronic device may include a keyboard, a touch screen and the like, although the user input manner in the example embodiments of the present disclosure is not limited thereto. The instructions may be generated by a specific operation performed by the user on the visual interface. Specifically, the user's operations include, but are not limited to: a sliding operation and a clicking operation (e.g., a single-click operation, a double-click operation, etc.). Specifically, a preset key may be a physical key on the electronic device, or may be a virtual key on the electronic device (for example, a virtual icon on a display of the electronic device), which is not limited herein.
According to the image processing method, the outline identification points in the image are intelligently identified through the AI in the image processing interface, fine matting is automatically carried out on the original image, and difficulty of matting operation is reduced. Furthermore, the H5 page is used as an image processing interface, so that online image processing can be realized, and an image processing application program does not need to be downloaded, so that the image processing method can be widely applied to different environments.
Referring to fig. 8, the present invention provides an image processing apparatus 1, which is applied to one or more devices. In at least one embodiment of the present invention, the image processing apparatus 1 is applied to an image processing system formed by at least one electronic device and a server. Data is transmitted between the electronic device and the server according to a preset protocol. The image processing apparatus 1 is used for matting the target image out of the original image at the image processing interface and synthesizing the target image with a specified background.
In one embodiment of the present invention, the image processing apparatus 1 includes:
the interface construction module 10 is used for constructing an image processing interface and importing an original image when a construction instruction is detected.
In at least one embodiment of the present invention, the image processing interface is an H5 page. In other embodiments, the image processing interface may be other types of web interfaces.
The acquiring module 20 is configured to acquire the outline identification point in the original image when an acquiring instruction is detected.
In at least one embodiment of the present invention, the original image is composed of a plurality of pixel points. The outer contour identification points are identified using an artificial intelligence recognition technique. The artificial intelligence recognition technique establishes a target image model using an artificial neural network, trains the target image model with a sample set, determines whether a target image exists when the original image is input into the model, and acquires the outer contour identification points of the target image when it does. The outer contour identification points are inflection points and can be set according to the user's requirements.
The contour construction module 30 is configured to construct an outer contour of the target image according to the outer contour identification point when a contour construction instruction is detected.
The contour construction module 30 further establishes a pixel coordinate system from the original image with the point of the upper left corner of the original image as the origin. The contour construction module 30 further assigns each of the outer contour identification points a first function to obtain a first parameter. The contour construction module 30 further identifies the maximum and minimum values of the plurality of first parameters as the contour start point and the contour end point of the target image, respectively. The contour construction module 30 further orders the pixel coordinates of the contour identification points clockwise with the contour start point and the contour end point as starting points. The contour construction module 30 further links the two adjacent contour identification points after the sequencing to form the outer contour.
In at least one embodiment of the present invention, the first function is (point.y * imgW) + (point.x + 1), where point.y is the coordinate of the pixel point along the Y axis in the pixel coordinate system, point.x is the coordinate of the pixel point along the X axis in the pixel coordinate system, and imgW is the width of the original image.
The detection line establishment module 40 is configured to establish a plurality of detection lines in the original image according to the outer contour when a detection line establishment instruction is detected.
The detection line establishment module 40 further divides the pixel points having the same Y coordinate on the outer contour into one detection group, and establishes a plurality of detection lines according to the Y coordinate corresponding to each detection group.
In at least one embodiment of the present invention, the detection line is a straight line parallel to the X-axis.
The first identifying module 50 is configured to identify an inflection point detection line and a normal detection line among the detection lines when a first identifying instruction is detected.
The first identifying module 50 further obtains one detection line as a target detection line, calculates the coordinates of the intersection points of the target detection line and the outer contour as detection intersection points, obtains the number of detection intersection points corresponding to the target detection line as a target parameter, sets a detection line adjacent to the target detection line as a comparison detection line, and identifies the number of intersection points of the comparison detection line and the outer contour as a comparison parameter.
The first identifying module 50 further determines whether the target parameter is consistent with the comparison parameter, identifies the target detection line as an inflection point detection line when the target parameter is inconsistent with the comparison parameter, and identifies the target detection line as a normal detection line when the target parameter is consistent with the comparison parameter.
The conversion module 60 is configured to convert the inflection point detection line into a normal detection line when a conversion instruction is detected.
The conversion module 60 further extracts the detection intersection points corresponding to the inflection point detection line, takes as an inflection point any detection intersection point whose difference on the X axis from at least two comparison intersection points falls within a predetermined difference range, and adds to the inflection point detection line one detection intersection point whose coordinates coincide with the inflection point. In at least one embodiment of the present invention, the predetermined difference is 1 pixel; in other embodiments it may be 2 pixels. A comparison intersection point is the pixel point on the comparison detection line that is closest to the detection intersection point.
The second identifying module 70 is configured to identify a background pixel and a target pixel on the normal detection line when a second identifying instruction is detected.
The second identifying module 70 further sorts all the detection intersection points by X coordinate, connects them sequentially to form a plurality of line segments, alternately designates the line segments as background line segments and target line segments according to their order, and sets the pixels on the background line segments as background pixels and the pixels on the target line segments as target pixels. In at least one embodiment of the invention, each line segment is parallel to the X axis.
And an extraction module 80, configured to extract the target image in the original image according to the target pixel when an extraction instruction is detected.
In at least one embodiment of the invention, each of the pixels is composed of at least three sub-pixel units, which may be a red sub-pixel, a green sub-pixel, and a blue sub-pixel. The texture data is a two-dimensional array of 8-bit unsigned integer values. Each entry comprises a first parameter R, a second parameter G, a third parameter B, and a fourth parameter A. The first parameter R represents the color value of the red sub-pixel, the second parameter G represents the color value of the green sub-pixel, the third parameter B represents the color value of the blue sub-pixel, and the fourth parameter A represents the transparency.
The extraction module 80 further converts the original image into texture data according to a specified interface program, adjusts the transparency of the texture data corresponding to the background pixels to a predetermined value, and inputs the adjusted texture data into a target program to form the target image. In at least one embodiment of the present invention, the target program is a Canvas written in the JAVASCRIPT language, the specified interface program is CanvasRenderingContext2D, and the predetermined value is 0.
A synthesizing module 90, configured to synthesize the target image with a predetermined background when a synthesizing instruction is detected.
With the image processing device described above, the outer contour identification points in the image are intelligently identified by AI on the image processing interface, and fine matting is performed automatically on the original image, which reduces the difficulty of the matting operation; since no application program needs to be downloaded, the device can be widely applied in different environments.
Fig. 9 is a schematic diagram of an electronic device according to an embodiment of the invention. The electronic device includes a processor 106, a memory 102, and a communication bus 104.
The memory 102 is used for storing program codes. The memory 102 may be a circuit with a storage function without a physical form in an integrated circuit, or the memory 102 may also be a memory with a physical form, such as a memory bank, a TF Card (Trans-flash Card), a smart media Card (smart media Card), a secure digital Card (secure digital Card), a flash memory Card (flash Card), or other storage devices. The memory 102 may be in data communication with the processor 106 via the communication bus 104. The memory 102 may include an operating system, a network communication module, and an image processing program. An operating system is a program that manages and controls the hardware and software resources of an electronic device, supporting the execution of image processing programs and other software and/or programs. The network communication module is used to implement communication between the components within the memory 102 and with other hardware and software in the image processing device.
The processor 106 may include one or more microprocessors or digital signal processors. The processor 106 may invoke program code stored in the memory 102 to perform the related functions. For example, the various modules depicted in fig. 8 are program code stored in the memory 102 and executed by the processor 106 to implement an image processing method. The processor 106, also called the central processing unit (CPU, Central Processing Unit), is a very-large-scale integrated circuit that serves as the operation core (Core) and control unit (Control Unit).
The processor 106 is configured to execute a plurality of computer instructions stored in the memory 102 to implement an image processing method, and the processor 106 is configured to execute the plurality of instructions to implement the steps of:
S10, when a construction instruction is detected, constructing an image processing interface and importing an original image.
In at least one embodiment of the present invention, the image processing interface is an H5 page. In other embodiments, the image processing interface may be other types of web interfaces.
S11, when an acquisition instruction is detected, acquiring an outline identification point in the original image.
In at least one embodiment of the present invention, the original image is composed of a plurality of pixel points. The outer contour identification points are identified using an artificial intelligence recognition technique: a target image model is built with an artificial neural network and trained on a sample set; when the original image is input into the model, the model determines whether the original image contains a target image and, if so, obtains the outer contour identification points of the target image. Each outer contour identification point is an inflection point and can be set according to the user's requirements.
Referring to fig. 2, S12, when a contour construction command is detected, an outer contour is constructed according to the outer contour identification points.
In at least one embodiment of the present invention, the step of constructing the outer contour according to the outer contour identification points may further include:
S121, establishing a pixel coordinate system according to the original image, taking the point at the upper left corner of the original image as the origin;
S122, assigning a first function to each outer contour identification point to obtain a first parameter;
S123, identifying the maximum value and the minimum value among the plurality of first parameters as the contour start point and the contour end point of the target image, respectively;
S124, sorting the pixel coordinates of the outer contour identification points clockwise, taking the contour start point and the contour end point as starting points;
S125, connecting each pair of adjacent outer contour identification points after sorting to form the outer contour.
In at least one embodiment of the present invention, the first function is (point.y * imgW) + (point.x + 1), where point.y is the coordinate of the pixel point along the Y axis in the pixel coordinate system, point.x is the coordinate of the pixel point along the X axis in the pixel coordinate system, and imgW is the width of the original image.
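The first-function assignment and the start/end identification can be sketched in JavaScript, the language the embodiment itself uses. This is a non-authoritative sketch: only the formula (point.y * imgW) + (point.x + 1) comes from the text, while the helper names `firstParameter` and `findStartAndEnd` and the `{x, y}` point representation are assumptions.

```javascript
// Flatten a pixel coordinate into a single scan-order index (the formula
// given in the embodiment).
function firstParameter(point, imgW) {
  return point.y * imgW + (point.x + 1);
}

// Per the text, the maximum first parameter marks the contour start point
// and the minimum marks the contour end point.
function findStartAndEnd(points, imgW) {
  let start = points[0];
  let end = points[0];
  for (const p of points) {
    if (firstParameter(p, imgW) > firstParameter(start, imgW)) start = p;
    if (firstParameter(p, imgW) < firstParameter(end, imgW)) end = p;
  }
  return { start, end };
}
```

For a 10-pixel-wide image, the point {x: 3, y: 2} gets first parameter 24, so among {0,0}, {3,2}, and {1,1} it would be chosen as the start point.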
S13, when a detection line establishment instruction is detected, establishing a plurality of detection lines in the original image according to the outer contour.
Referring to fig. 3 together, in at least one embodiment of the present invention, the step of constructing a plurality of detection lines in the original image according to the outer contour may further include:
S131, dividing the pixel points having the same Y coordinate on the outer contour into one detection group;
S132, establishing a plurality of detection lines according to the Y coordinate corresponding to each detection group.
In at least one embodiment of the present invention, the detection line is a straight line parallel to the X-axis.
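The grouping in S131-S132 can be sketched as follows. This is an assumption-laden sketch: `buildDetectionLines` is a hypothetical name, and `contourPixels` is assumed to be an array of `{x, y}` points on the rasterized outer contour.

```javascript
// Group outer-contour pixels by Y coordinate; each detection group yields
// one horizontal detection line (y = const, parallel to the X axis).
function buildDetectionLines(contourPixels) {
  const groups = new Map();
  for (const p of contourPixels) {
    if (!groups.has(p.y)) groups.set(p.y, []);
    groups.get(p.y).push(p);
  }
  // One detection line per distinct Y, ordered top to bottom.
  return [...groups.keys()]
    .sort((a, b) => a - b)
    .map(y => ({ y, points: groups.get(y) }));
}
```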
And S14, identifying an inflection point detection line and a normal detection line in the detection lines when the first identification instruction is detected.
Referring to fig. 4, in at least one embodiment of the present invention, the step of identifying an inflection point detect line and a normal detect line of the detect lines may further include:
S141, acquiring one detection line as a target detection line;
S142, calculating the coordinates of the intersection points of the target detection line and the outer contour as detection intersection points;
S143, acquiring the number of detection intersection points corresponding to the target detection line as a target parameter;
S144, setting a detection line adjacent to the target detection line as a comparison detection line;
S145, identifying the number of intersection points of the comparison detection line and the outer contour as a comparison parameter;
S146, judging whether the target parameter is consistent with the comparison parameter;
S147, when the target parameter is inconsistent with the comparison parameter, identifying the target detection line as an inflection point detection line;
S148, when the target parameter is consistent with the comparison parameter, identifying the target detection line as a normal detection line.
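The comparison in S141-S148 amounts to checking whether the intersection count changes between adjacent detection lines. A hedged sketch, assuming the counts are precomputed and taking the previous line as the comparison line (the text only requires an adjacent line):

```javascript
// intersectionCounts[i] is the number of crossings between detection line i
// and the outer contour. A count that differs from the adjacent line's
// count marks an inflection-point detection line.
function classifyLines(intersectionCounts) {
  return intersectionCounts.map((count, i) => {
    const comparison = i > 0 ? intersectionCounts[i - 1] : count;
    return count === comparison ? 'normal' : 'inflection';
  });
}
```

For example, counts [2, 2, 4, 4, 2] classify lines 2 and 4 (zero-based) as inflection-point detection lines, because the crossing count changes there.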
S15, converting the inflection point detection line into the normal detection line when a conversion instruction is detected.
Referring to fig. 5 together, in at least one embodiment of the present invention, the step of converting the inflection point detection line into a normal detection line may further include:
S151, extracting the detection intersection points corresponding to the inflection point detection line;
S152, taking as an inflection point any detection intersection point whose difference on the X axis from at least two comparison intersection points falls within a predetermined difference range;
S153, adding to the inflection point detection line one detection intersection point whose coordinates coincide with the inflection point.
In at least one embodiment of the present invention, the predetermined difference is 1 pixel; in other embodiments it may be 2 pixels. A comparison intersection point is the pixel point on the comparison detection line that is closest to the detection intersection point.
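The conversion in S151-S153 can be sketched as follows. The function name and the choice of passing intersection X coordinates directly are assumptions; `tol` stands for the predetermined difference (1 pixel in the embodiment). Duplicating the inflection point restores an even crossing count, so the later segment pairing works again.

```javascript
// lineXs: X coordinates of the detection intersections on one inflection
// point detection line; comparisonXs: those of the adjacent comparison line.
function convertInflectionLine(lineXs, comparisonXs, tol = 1) {
  const out = [...lineXs];
  for (const x of lineXs) {
    // An intersection close (within tol on X) to at least two comparison
    // intersections is a true inflection point.
    const close = comparisonXs.filter(cx => Math.abs(cx - x) <= tol);
    if (close.length >= 2) out.push(x); // add a coincident intersection
  }
  return out.sort((a, b) => a - b);
}
```

With `lineXs = [5]` and `comparisonXs = [4, 6]`, the single crossing at x = 5 is near two comparison crossings, so it is duplicated and the line becomes a normal detection line with crossings [5, 5].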
S16, when a second identification instruction is detected, identifying the background pixel and the target pixel on the normal detection line.
Referring to fig. 6, in at least one embodiment of the present invention, the step of identifying the background pixel and the target pixel on the normal detection line may further include:
S161, sorting all the detection intersection points by X coordinate;
S162, connecting the detection intersection points sequentially to form a plurality of line segments;
S163, alternately setting the plurality of line segments as background line segments and target line segments according to their order;
S164, setting the pixels on the background line segments as the background pixels and the pixels on the target line segments as the target pixels.
In at least one embodiment of the invention, the line segment is a line segment parallel to the X axis.
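S161-S164 implement an even-odd style classification along each normal detection line. A sketch, with the assumption (not stated explicitly in the text) that the span between the first pair of crossings lies inside the outer contour and is therefore a target segment:

```javascript
// xs: X coordinates of the detection intersections on one normal detection
// line. Consecutive crossings delimit segments that alternate between
// target (inside the contour) and background (outside).
function classifySegments(xs) {
  const sorted = [...xs].sort((a, b) => a - b);
  const segments = [];
  for (let i = 0; i + 1 < sorted.length; i++) {
    segments.push({
      from: sorted[i],
      to: sorted[i + 1],
      kind: i % 2 === 0 ? 'target' : 'background',
    });
  }
  return segments;
}
```

For crossings at x = 2, 5, 7, 10 this yields target [2, 5], background [5, 7], target [7, 10], mirroring how a scanline passes in and out of a closed contour.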
S17, when an extraction instruction is detected, extracting the target image in the original image according to the target pixel.
In at least one embodiment of the invention, each of the pixels is composed of at least three sub-pixel units, which may be a red sub-pixel, a green sub-pixel, and a blue sub-pixel. The texture data is a two-dimensional array of 8-bit unsigned integer values. Each entry comprises a first parameter R, a second parameter G, a third parameter B, and a fourth parameter A. The first parameter R represents the color value of the red sub-pixel, the second parameter G represents the color value of the green sub-pixel, the third parameter B represents the color value of the blue sub-pixel, and the fourth parameter A represents the transparency.
Referring to fig. 7, in at least one embodiment of the present invention, the step of extracting the target image in the original image according to the target pixel may further include:
S171, converting the original image into texture data according to a specified interface program;
S172, adjusting the transparency of the texture data corresponding to the background pixels to a predetermined value;
S173, inputting the adjusted texture data into a target program to form the target image.
In at least one embodiment of the present invention, the target program is a Canvas written in the JAVASCRIPT language, the specified interface program is CanvasRenderingContext2D, and the predetermined value is 0.
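The transparency adjustment of S171-S173 can be sketched independently of the browser. In a real page the texture data would come from CanvasRenderingContext2D.getImageData; here a plain RGBA byte array stands in so the logic runs anywhere, and `isBackground` is an assumed predicate mapping a pixel index to the background/target classification produced earlier.

```javascript
// rgba: flat RGBA texture data, 4 bytes per pixel. Set the alpha (4th)
// channel of every background pixel to the predetermined value 0, making
// the background fully transparent so only the target image remains.
function clearBackground(rgba, isBackground) {
  for (let px = 0; px < rgba.length / 4; px++) {
    if (isBackground(px)) rgba[px * 4 + 3] = 0;
  }
  return rgba;
}

// Browser-side usage (not runnable in Node) would look like:
//   const img = ctx.getImageData(0, 0, w, h);
//   clearBackground(img.data, isBackground);
//   ctx.putImageData(img, 0, 0);
```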
S18, when a synthesis instruction is detected, synthesizing the target image with a preset background.
With the image processing method described above, the outer contour identification points in the image are intelligently identified by AI in the image processing interface, and fine matting is performed automatically on the original image, reducing the difficulty of the matting operation. Furthermore, because an H5 page serves as the image processing interface, image processing can be performed online without downloading an image processing application, so the method can be widely applied in different environments.
The invention also provides a storage medium. The storage medium is a computer-readable storage medium on which computer instructions are stored. The computer instructions may be stored in the memory 102 and, when executed by the one or more processors 106, implement the image processing method described in the method embodiments above, for example S10-S18 shown in fig. 1, which are not described in detail here.
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present invention is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present invention. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present invention.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into modules is only a logical function division, and in actual implementation there may be other manners of division; for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections via interfaces, devices, or modules, and may be electrical or take other forms.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in the embodiments of the present invention may be integrated in one processor, or each module may exist alone physically, or two or more modules may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules.
The integrated modules, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied essentially or in part or all of the technical solution or in part in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention.
It should also be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.

Claims (9)

1. An image processing method, characterized in that the image processing method comprises:
When a construction instruction is detected, constructing an image processing interface and importing an original image;
when an acquisition instruction is detected, acquiring an outline identification point in the original image;
when a contour construction instruction is detected, constructing an outer contour according to the outer contour identification points;
when the detection line is detected to be established, a plurality of detection lines are established in the original image according to the outer contour;
when a first identification instruction is detected, identifying an inflection point detection line and a normal detection line in the detection lines;
converting the inflection point detect line into the normal detect line when a conversion instruction is detected;
when a second identification instruction is detected, identifying a background pixel and a target pixel on the normal detection line;
when an extraction instruction is detected, extracting a target image in the original image according to the target pixel, which comprises the following steps: converting the original image into texture data according to a specified interface program; adjusting the transparency of the texture data corresponding to the background pixels to a predetermined value; inputting the adjusted texture data into a target program to form the target image;
and when the synthesis instruction is detected, synthesizing the target image with a preset background.
2. The image processing method according to claim 1, wherein the step of constructing an outer contour from the outer contour identification points includes:
establishing a pixel coordinate system according to the original image, and taking the point of the upper left corner of the original image as an origin;
assigning a first function to each outer contour identification point to obtain a first parameter;
identifying a maximum value and a minimum value in a plurality of first parameters as a contour start point and a contour end point of the target image respectively;
sorting the pixel coordinates of the outer contour identification points clockwise, taking the contour start point and the contour end point as starting points;
and connecting lines between the two adjacent outline identification points after sequencing to form the outer outline.
3. The image processing method according to claim 2, wherein the original image is composed of a plurality of pixel points; the first function is (point.y * imgW) + (point.x + 1); wherein point.y represents the coordinate of the pixel point along the Y axis in the pixel coordinate system, point.x represents the coordinate of the pixel point along the X axis in the pixel coordinate system, and imgW represents the width of the original image.
4. The image processing method according to any one of claims 1 to 3, wherein the step of identifying an inflection point detection line and a normal detection line among the detection lines includes:
acquiring one detection line as a target detection line;
calculating the intersection point coordinates of the target detection line and the outer contour as detection intersection points;
acquiring the number of detection intersection points corresponding to the target detection line as a target parameter;
setting a detection line adjacent to the target detection line as a comparison detection line;
identifying the number of intersection points of the comparison detection line and the outer contour as comparison parameters;
judging whether the target parameter is consistent with the control parameter;
when the target parameter is inconsistent with the control parameter, marking the target detection line as an inflection point detection line;
and when the target parameter is consistent with the comparison parameter, identifying the target detection line as a normal detection line.
5. The image processing method of claim 4, wherein the step of converting the inflection point detect line to the normal detect line comprises:
extracting the detection intersection point corresponding to the inflection point detection line;
taking as an inflection point any detection intersection point whose difference on the X axis from at least two comparison intersection points falls within a predetermined difference range;
And adding a detection intersection point with the same coordinates as the inflection point into the inflection point detection line.
6. The image processing method according to claim 5, wherein the step of identifying the background pixel and the target pixel on the normal detection line comprises:
sequencing all the detection intersection points according to an X coordinate;
sequentially connecting the detection intersection points to form a plurality of line segments;
alternately setting a plurality of line segments into a background line segment and a target line segment according to the arrangement sequence;
setting pixels on the background line segment as the background pixels and setting pixels on the target line segment as the target pixels.
7. An image processing apparatus, characterized in that the image processing apparatus comprises:
the interface construction module is used for constructing an image processing interface and importing an original image when a construction instruction is detected;
the acquisition module is used for acquiring outline identification points in the original image when an acquisition instruction is detected;
the outline construction module is used for constructing an outline according to the outline identification points when an outline construction instruction is detected;
the detection line building module is used for building a plurality of detection lines in the original image according to the outer contour when the detection line is detected to be built;
The first identification module is used for identifying an inflection point detection line and a normal detection line in the detection lines when a first identification instruction is detected;
the conversion module is used for converting the inflection point detection line into the normal detection line when a conversion instruction is detected;
the second identification module is used for identifying background pixels and target pixels on the normal detection line when a second identification instruction is detected;
the extraction module is used for extracting a target image in the original image according to the target pixel when an extraction instruction is detected, which comprises: converting the original image into texture data according to a specified interface program; adjusting the transparency of the texture data corresponding to the background pixels to a predetermined value; inputting the adjusted texture data into a target program to form the target image;
and the synthesis module is used for synthesizing the target image with a preset background when a synthesis instruction is detected.
8. An electronic device comprising a processor and a memory, wherein the processor is configured to implement the image processing method according to any one of claims 1 to 6 when executing a computer program stored in the memory.
9. A storage medium, characterized in that the storage medium is a computer-readable storage medium, storing at least one instruction, which when executed by a processor, implements the image processing method according to any one of claims 1 to 6.
CN201911306168.4A 2019-12-18 2019-12-18 Image processing method, device, electronic equipment and storage medium Active CN111192276B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911306168.4A CN111192276B (en) 2019-12-18 2019-12-18 Image processing method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911306168.4A CN111192276B (en) 2019-12-18 2019-12-18 Image processing method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111192276A CN111192276A (en) 2020-05-22
CN111192276B true CN111192276B (en) 2024-04-09

Family

ID=70707331

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911306168.4A Active CN111192276B (en) 2019-12-18 2019-12-18 Image processing method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111192276B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105006002A (en) * 2015-08-31 2015-10-28 北京华拓金融服务外包有限公司 Automatic picture matting method and apparatus
CN108960011A (en) * 2017-05-23 2018-12-07 湖南生物机电职业技术学院 The citrusfruit image-recognizing method of partial occlusion
CN109271654A (en) * 2018-07-19 2019-01-25 平安科技(深圳)有限公司 The cutting method and device of model silhouette, storage medium, terminal
CN110097570A (en) * 2019-04-30 2019-08-06 腾讯科技(深圳)有限公司 A kind of image processing method and device

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
JP2010097438A (en) * 2008-10-16 2010-04-30 Keyence Corp Outline information extraction method using image processing, creation method for pattern model in image processing, positioning method for pattern model in image processing, image processor, image processing program and computer-readable recording medium
JP5742399B2 (en) * 2011-04-06 2015-07-01 富士ゼロックス株式会社 Image processing apparatus and program

Non-Patent Citations (2)

Title
Automatic background replacement algorithm for ID photos; Wang Mingqiu et al.; Fujian Computer (No. 06); pp. 12-13 *
Research on automatic matting technology based on a single image; Sun Guoxing et al.; Information Technology and Informatization; pp. 84-90 *

Also Published As

Publication number Publication date
CN111192276A (en) 2020-05-22

Similar Documents

Publication Publication Date Title
US9741137B2 (en) Image-based color palette generation
US9245350B1 (en) Image-based color palette generation
CN110555795A (en) High resolution style migration
CN106682632B (en) Method and device for processing face image
Huang et al. RGB-D salient object detection by a CNN with multiple layers fusion
CN111047509A (en) Image special effect processing method and device and terminal
CN112328345A (en) Method and device for determining theme color, electronic equipment and readable storage medium
WO2020034981A1 (en) Method for generating encoded information and method for recognizing encoded information
CN109255355A (en) Image processing method, device, terminal, electronic equipment and computer-readable medium
JP5042346B2 (en) Information display apparatus, method and program
CN107818323A (en) Method and apparatus for handling image
CN109241930B (en) Method and apparatus for processing eyebrow image
CN111192276B (en) Image processing method, device, electronic equipment and storage medium
EP3410389A1 (en) Image processing method and device
WO2020124442A1 (en) Pushing method and related product
CN112634444B (en) Human body posture migration method and device based on three-dimensional information, storage medium and terminal
CN110738227A (en) Model training method and device, recognition method, storage medium and electronic equipment
CN114972466A (en) Image processing method, image processing device, electronic equipment and readable storage medium
CN114663570A (en) Map generation method and device, electronic device and readable storage medium
CN113780047A (en) Virtual makeup trying method and device, electronic equipment and storage medium
CN112348069A (en) Data enhancement method and device, computer readable storage medium and terminal equipment
CN114820908B (en) Virtual image generation method and device, electronic equipment and storage medium
CN113538537B (en) Image registration and model training method, device, equipment, server and medium
CN115937338B (en) Image processing method, device, equipment and medium
CN113744362B (en) Method and device for generating graphics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant