WO2021136224A1 - Image segmentation method and device - Google Patents

Image segmentation method and device

Info

Publication number
WO2021136224A1
WO2021136224A1 · PCT/CN2020/140570 · CN2020140570W
Authority
WO
WIPO (PCT)
Prior art keywords
image
anchor point
point
line
dividing
Prior art date
Application number
PCT/CN2020/140570
Other languages
French (fr)
Chinese (zh)
Inventor
赵光耀
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2021136224A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation

Definitions

  • This application relates to the field of artificial intelligence, and more specifically, to an image segmentation method and an image segmentation device in the computer vision field.
  • Computer vision is an inseparable part of various intelligent/autonomous systems in application fields such as manufacturing, inspection, document analysis, medical diagnosis, and military. It studies how to use cameras/video cameras and computers to obtain the data and information of a photographed subject that we need. Figuratively speaking, it means installing eyes (a camera/camcorder) and a brain (an algorithm) on a computer so that it can identify, track, and measure targets in place of the human eye, enabling the computer to perceive the environment. Precise segmentation tasks have a wide range of applications in the field of computer vision, for example, image blurring, background replacement, e-commerce advertisement production and live broadcast, and movie (animation) production, where precise segmentation can refer to obtaining the segmentation line between the target object and the background in an acquired image or video.
  • At present, the polygon method or the curve fitting method is usually used for accurate image segmentation. The image segmentation line obtained by the polygon method differs greatly from the actual image segmentation line, so its accuracy is low; the curve fitting method requires the manual selection of many demarcation points, which consumes a lot of manpower.
  • the present application provides an image segmentation method and an image segmentation device, which can obtain a segmentation result that matches the natural boundary of the image while saving manpower, thereby improving the accuracy of the image segmentation result.
  • In a first aspect, an image segmentation method applied to a terminal device with a display screen is provided, including: detecting a first operation of a user manually marking anchor points in a first image, wherein the anchor points include a starting anchor point and a target anchor point; detecting a second operation in which the user instructs automatic segmentation of the first image; and, in response to the second operation, displaying a second image on the display screen, wherein the second image is an image obtained after the first image is subjected to the second operation, the second image includes a dividing line between the starting anchor point and the target anchor point, and the dividing line is obtained by moving a dividing point in the first image with the starting anchor point as the starting position and the target anchor point as the target position.
  • The above dividing line may be obtained by automatically moving the dividing point in the first image with the starting anchor point as the starting position and the target anchor point as the target position. Automatic movement means that the user only needs to manually mark the starting anchor point and the target anchor point in the first image, or manually mark a small number of marked points including the starting anchor point and the target anchor point, and the dividing line between the starting anchor point and the target anchor point in the first image can then be obtained automatically.
  • the dividing point may refer to a point on the dividing line between the target area and the background area in the image, where the target area may refer to the area including the target object.
  • The starting anchor point and the target anchor point may be positions in the image to be processed whose positions do not change. A small number of anchor points including the starting anchor point and the target anchor point are manually marked on the first image, and the dividing point moves in the pixel gradient map of the first image with the starting anchor point as the starting position and the target anchor point as the target position, thereby obtaining the dividing line.
  • The above second operation used by the user to instruct automatic segmentation may include the user clicking a button for automatically segmenting the image in an image processing tool, may include the user instructing automatic segmentation by voice, or may include other actions by which the user instructs automatic segmentation.
  • the dividing line coincides with the dividing line in the ridge direction of the pixel gradient map of the first image.
  • the ridge direction in the pixel gradient map of the first image may refer to a curve formed by the local maximum value of the gradient in the pixel gradient map.
  • The obtained dividing line of the first image may coincide with the line along the ridge direction in the pixel gradient map of the first image; that is, the dividing line of the first image may be obtained on the basis of the pixel gradient map of the first image. This avoids the labor cost of manually marking a large number of anchor points in the first image; by manually marking only a small number of anchor points, the accuracy of the image segmentation result can be improved while saving manpower.
  • The displaying of the second image on the display screen in response to the second operation includes: searching, according to the first image, for the mask image and the secant image corresponding to the first image, wherein the mask image is used to represent different objects in the first image and the secant image is used to represent the boundaries of different objects in the first image; and displaying, on the display screen, an image obtained by superimposing the first image, the mask image, and the secant image.
  • the mask image and/or the secant image corresponding to the first image may be searched under the preset file path according to the first image.
  • Each image file can be saved as a bitmap file (bitmap, BMP), a losslessly compressed bitmap graphics format (portable network graphics, PNG), or another image file format. The correspondence between image files, mask files, and secant files can be described by icon or file-name naming rules, or by packaging the three images in one file or one folder.
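  • As a minimal sketch of this file-correspondence lookup, the Python snippet below searches a preset directory for a mask file and a secant file that share an image file's base name; the `_mask`/`_secant` suffix convention and the `annotations` search path are assumptions for illustration only, not part of the application.

```python
from pathlib import Path

def find_layer_files(image_path, search_dir="annotations"):
    """Look up mask/secant files matching an image by an assumed naming rule."""
    stem = Path(image_path).stem
    mask = secant = None
    for ext in (".png", ".bmp"):
        m = Path(search_dir) / f"{stem}_mask{ext}"     # hypothetical naming rule
        s = Path(search_dir) / f"{stem}_secant{ext}"   # hypothetical naming rule
        if mask is None and m.exists():
            mask = m
        if secant is None and s.exists():
            secant = s
    return mask, secant  # either may be None, which triggers automatic generation

# Example: mask_file, secant_file = find_layer_files("photo_001.png")
```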
  • The method further includes: detecting a third operation in which the user instructs to process the second image through the mask image or the secant image.
  • The second image may refer to the superimposed image of the first image, the mask image, and the secant image displayed on the display screen, and the third operation may be used to instruct automatic segmentation processing of the superimposed image through the mask image or the secant image.
  • the user can choose to perform automatic segmentation processing on the first image according to the mask image or the secant image.
  • the user can manually adjust the position of the anchor point according to the mask image or the secant image.
  • The second image is obtained according to a pixel gradient map of the first image and an anchor point pulling model, and the anchor point pulling model is used to indicate the moving direction of the dividing point.
  • the second image may be obtained according to the pixel gradient map of the first image and the anchor point pulling model.
  • The anchor point pulling model may indicate the direction of each single-step movement as the dividing point automatically moves from the starting anchor point to the target anchor point, thereby reducing manual participation.
  • The moving direction of the dividing point is determined according to the angle between a first straight line and a second straight line, wherein the first straight line refers to the straight line in which each of eight azimuths lies, and the second straight line refers to the line between the current position of the dividing point and the target anchor point; or
  • the moving direction of the dividing point is determined according to the angle between the first straight line and the second straight line and a distance parameter, wherein the first straight line refers to the straight line in which each of the eight azimuths lies, the second straight line refers to the line between the current position of the dividing point and the target anchor point, and the distance parameter refers to the distance between the current position of the dividing point and the target anchor point; or
  • the moving direction of the dividing point is determined according to the ridge direction of the pixel gradient map.
  • When the moving direction of the dividing point at the previous moment is the same as its moving direction at the current moment, the dividing point can be made to move in a preset direction; that is, the moving direction of the dividing point is determined according to the absolute values of the gradient in different moving directions, so that the error of the dividing point when it moves along the ridge direction of the gradient map can be compensated.
  • the anchor point is obtained by optimizing an initial anchor point, where the initial anchor point is an anchor point manually marked by the user in the first image .
  • Because the anchor points are manually marked, their positions may deviate to some extent; to make the anchor point positions more accurate and thereby improve the accuracy of the segmentation line of the image to be processed, the positions of the anchor points manually marked by the user can be optimized.
  • In a second aspect, an image segmentation method is provided, including: acquiring a first image and position information of anchor points in the first image, wherein the anchor points include a starting anchor point and a target anchor point; and obtaining a second image according to the first image and the anchor points, where the second image is an image obtained after the first image is subjected to image segmentation processing, the second image includes a dividing line between the starting anchor point and the target anchor point, and the dividing line is obtained by moving a dividing point in the pixel gradient map of the first image with the starting anchor point as the starting position and the target anchor point as the target position.
  • the dividing point may refer to a point on the dividing line between the target area and the background area in the image, where the target area may refer to the area including the target object.
  • the first image, the pixel gradient map of the first image, and the position information of the anchor points in the first image may be obtained, and the anchor points may include a start anchor point and a target anchor point.
  • The above dividing line may be obtained by automatically moving the dividing point in the first image with the starting anchor point as the starting position and the target anchor point as the target position; automatic movement means that, given the starting anchor point and the target anchor point in the first image, or a small number of marked points including the starting anchor point and the target anchor point, the dividing line between the starting anchor point and the target anchor point in the first image can be obtained automatically.
  • The starting anchor point and the target anchor point may be positions in the image to be processed whose positions do not change. A small number of anchor points including the starting anchor point and the target anchor point are manually marked on the first image, and the dividing point moves in the pixel gradient map of the first image with the starting anchor point as the starting position and the target anchor point as the target position, thereby obtaining the dividing line.
  • the dividing line coincides with the dividing line in the ridge direction in the pixel gradient map.
  • the ridge direction in the pixel gradient map may refer to a curve formed by the local maximum value of the gradient in the pixel gradient map.
  • The obtained dividing line of the first image may coincide with the line along the ridge direction in the pixel gradient map of the first image; that is, the dividing line of the first image may be obtained on the basis of the pixel gradient map of the first image. This avoids the labor cost of manually marking a large number of anchor points in the first image; by manually marking only a small number of anchor points, the accuracy of the image segmentation result can be improved while saving manpower.
  • The obtaining of a second image according to the first image and the anchor points includes: obtaining the second image according to the pixel gradient map, the anchor points, and an anchor point pulling model, wherein the anchor point pulling model is used to indicate the moving direction of the dividing point.
  • the secant image may be obtained according to the pixel gradient map of the first image and the anchor point pulling model.
  • The anchor point pulling model may indicate the direction of each single-step movement as the dividing point automatically moves from the starting anchor point to the target anchor point, thereby reducing manual participation.
  • The moving direction of the dividing point is determined according to the angle between a first straight line and a second straight line, wherein the first straight line refers to the straight line in which each of the eight azimuths lies, and the second straight line refers to the line between the current position of the dividing point and the target anchor point; or
  • the moving direction of the dividing point is determined according to the angle between the first straight line and the second straight line and a distance parameter, wherein the first straight line refers to the straight line in which each of the eight azimuths lies, the second straight line refers to the line between the current position of the dividing point and the target anchor point, and the distance parameter refers to the distance between the current position of the dividing point and the target anchor point; or
  • the moving direction of the dividing point is determined according to the ridge direction of the pixel gradient map.
  • The method further includes: if the moving direction of the dividing point at the previous moment is the same as its moving direction at the current moment, determining the moving direction of the dividing point according to the absolute values of the gradient in different moving directions.
  • When the moving direction of the dividing point at the previous moment is the same as its moving direction at the current moment, the dividing point can be made to move in a preset direction; that is, the moving direction of the dividing point is determined according to the absolute values of the gradient in different moving directions, so that the error of the dividing point when it moves along the ridge direction of the gradient map can be compensated.
  • the anchor point is obtained by optimizing an initial anchor point, where the initial anchor point is an anchor point manually marked by the user in the first image .
  • Because the anchor points are manually marked, their positions may deviate to some extent; to make the anchor point positions more accurate and thereby improve the accuracy of the segmentation line of the image to be processed, the positions of the anchor points manually marked by the user can be optimized.
  • In a third aspect, an image segmentation device is provided, the device being a terminal device with a display screen and including: a detection unit configured to detect a first operation of a user manually marking anchor points in a first image, wherein the anchor points include a starting anchor point and a target anchor point, and to detect a second operation in which the user instructs automatic segmentation of the first image; and
  • a processing unit configured to display a second image on the display screen in response to the second operation, the second image being an image obtained from the first image after the second operation, wherein the second image includes a dividing line between the starting anchor point and the target anchor point, and the dividing line is obtained by moving a dividing point in the first image with the starting anchor point as the starting position and the target anchor point as the target position.
  • The above dividing line may be obtained by automatically moving the dividing point in the first image with the starting anchor point as the starting position and the target anchor point as the target position; automatic movement means that the user only needs to manually mark the starting anchor point and the target anchor point in the first image, or a small number of marked points including them, and the dividing line between the starting anchor point and the target anchor point in the first image is then obtained automatically.
  • the dividing line coincides with the dividing line in the ridge direction of the pixel gradient map of the first image.
  • The processing unit is specifically configured to: search, according to the first image, for a mask image and a secant image corresponding to the first image, wherein the mask image is used to represent different objects in the first image and the secant image is used to represent the boundaries of different objects in the first image; and display on the display screen an image obtained by superimposing the first image, the mask image, and the secant image.
  • The detection unit is further configured to detect a third operation in which the user instructs to process the second image through the mask image or the secant image.
  • The second image may refer to the superimposed image of the first image, the mask image, and the secant image displayed on the display screen, and the third operation may be used to instruct automatic segmentation processing of the superimposed image through the mask image or the secant image.
  • The second image is obtained according to a pixel gradient map of the first image and an anchor point pulling model, and the anchor point pulling model is used to indicate the moving direction of the dividing point.
  • The moving direction of the dividing point is determined according to the angle between a first straight line and a second straight line, wherein the first straight line refers to the straight line in which each of the eight azimuths lies, and the second straight line refers to the line between the current position of the dividing point and the target anchor point; or
  • the moving direction of the dividing point is determined according to the angle between the first straight line and the second straight line and a distance parameter, wherein the first straight line refers to the straight line in which each of the eight azimuths lies, the second straight line refers to the line between the current position of the dividing point and the target anchor point, and the distance parameter refers to the distance between the current position of the dividing point and the target anchor point; or
  • the moving direction of the dividing point is determined according to the ridge direction of the pixel gradient map.
  • When the moving direction of the dividing point at the previous moment is the same as its moving direction at the current moment, the moving direction of the dividing point is determined according to the absolute values of the gradient in different directions.
  • the anchor point is obtained by optimizing an initial anchor point, wherein the initial anchor point is an anchor point manually marked by the user in the first image .
  • In a fourth aspect, an image segmentation device is provided, including: an acquiring unit configured to acquire a first image and position information of anchor points in the first image, wherein the anchor points include a starting anchor point and a target anchor point; and a processing unit configured to obtain a second image according to the first image and the anchor points, where the second image is an image obtained after the first image is subjected to image segmentation processing, the second image includes a dividing line between the starting anchor point and the target anchor point, and the dividing line is obtained by moving a dividing point in the pixel gradient map of the first image with the starting anchor point as the starting position and the target anchor point as the target position.
  • the first image, the pixel gradient map of the first image, and the position information of the anchor points in the first image may be obtained, and the anchor points may include a start anchor point and a target anchor point.
  • The above dividing line may be obtained by automatically moving the dividing point in the first image with the starting anchor point as the starting position and the target anchor point as the target position; automatic movement means that, given the starting anchor point and the target anchor point in the first image, or a small number of marked points including them, the dividing line between the starting anchor point and the target anchor point in the first image can be obtained automatically.
  • the dividing line coincides with the dividing line in the ridge direction of the pixel gradient map.
  • The processing unit is specifically configured to: obtain the second image according to the pixel gradient map, the anchor points, and the anchor point pulling model, wherein the anchor point pulling model is used to indicate the moving direction of the dividing point.
  • The moving direction of the dividing point is determined according to the angle between a first straight line and a second straight line, wherein the first straight line refers to the straight line in which each of the eight azimuths lies, and the second straight line refers to the line between the current position of the dividing point and the target anchor point; or
  • the moving direction of the dividing point is determined according to the angle between the first straight line and the second straight line and a distance parameter, wherein the first straight line refers to the straight line in which each of the eight azimuths lies, the second straight line refers to the line between the current position of the dividing point and the target anchor point, and the distance parameter refers to the distance between the current position of the dividing point and the target anchor point; or
  • the moving direction of the dividing point is determined according to the ridge direction of the pixel gradient map.
  • The processing unit is further configured to: if the moving direction of the dividing point at the previous moment is the same as its moving direction at the current moment, determine the moving direction of the dividing point according to the absolute values of the gradient in different moving directions.
  • the anchor point is obtained by optimizing an initial anchor point, where the initial anchor point is an anchor point manually marked by the user in the first image .
  • In a fifth aspect, an image segmentation device with a display screen is provided, including: a memory for storing a program; and a processor for executing the program stored in the memory. When the program stored in the memory is executed, the processor is configured to perform: detecting a first operation of a user manually marking anchor points in a first image, wherein the anchor points include a starting anchor point and a target anchor point; detecting a second operation in which the user instructs automatic segmentation of the first image; and, in response to the second operation, displaying a second image on the display screen, where the second image is an image obtained from the first image after the second operation, the second image includes a dividing line between the starting anchor point and the target anchor point, and the dividing line is obtained by moving a dividing point in the first image with the starting anchor point as the starting position and the target anchor point as the target position.
  • the processor included in the foregoing image segmentation apparatus is further configured to execute the first aspect and the image segmentation method in any one of the implementation manners of the first aspect.
  • In a sixth aspect, an image segmentation device is provided, including: a memory for storing a program; and a processor for executing the program stored in the memory. When the program stored in the memory is executed, the processor is configured to perform: acquiring a first image and position information of anchor points in the first image, wherein the anchor points include a starting anchor point and a target anchor point; and obtaining a second image according to the pixel gradient map and the anchor points, wherein the second image is an image obtained after the first image is subjected to image segmentation processing, the second image includes a dividing line between the starting anchor point and the target anchor point, and the dividing line is obtained by moving a dividing point in the pixel gradient map of the first image with the starting anchor point as the starting position and the target anchor point as the target position.
  • The processor included in the foregoing image segmentation apparatus is further configured to execute the image segmentation method in the second aspect or any one of the implementation manners of the second aspect.
  • In a seventh aspect, a computer-readable medium is provided; the computer-readable medium stores program code for execution by a device, and the program code includes instructions for executing the image segmentation method in the first aspect or any one of the implementation manners of the first aspect.
  • In an eighth aspect, a computer-readable medium is provided; the computer-readable medium stores program code for execution by a device, and the program code includes instructions for executing the image segmentation method in the second aspect or any one of the implementation manners of the second aspect.
  • In a ninth aspect, a computer program product containing instructions is provided; when the computer program product runs on a computer, the computer executes the image segmentation method in the first aspect or any one of the implementation manners of the first aspect.
  • In a tenth aspect, a computer program product containing instructions is provided; when the computer program product runs on a computer, the computer executes the image segmentation method in the second aspect or any one of the implementation manners of the second aspect.
  • In an eleventh aspect, a chip is provided, including a processor and a data interface; the processor reads instructions stored in a memory through the data interface and executes the image segmentation method in the first aspect or any one of the implementation manners of the first aspect.
  • the chip may further include a memory in which instructions are stored, and the processor is configured to execute instructions stored on the memory.
  • the processor is configured to execute the above-mentioned first aspect and the image segmentation method in any one of the implementation manners of the first aspect.
  • In a twelfth aspect, a chip is provided, including a processor and a data interface; the processor reads instructions stored in a memory through the data interface and executes the image segmentation method in the second aspect or any one of the implementation manners of the second aspect.
  • the chip may further include a memory in which instructions are stored, and the processor is configured to execute instructions stored on the memory.
  • the processor is configured to execute the above-mentioned second aspect and the image segmentation method in any one of the implementation manners of the second aspect.
  • FIG. 1 is a schematic diagram of a system architecture provided by an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of an image segmentation method provided by an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of an image segmentation method provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of an N-map adjustment mask image provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of an image to be processed and a pixel gradient map provided by an embodiment of the present application
  • FIG. 6 is a schematic flowchart of a method for optimizing anchor point positions provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a four-way convolution kernel provided by an embodiment of the present application.
  • FIG. 8 is a schematic flowchart of an image segmentation method provided by an embodiment of the present application.
  • FIG. 9 is a schematic flowchart of an image segmentation method provided by an embodiment of the present application.
  • FIG. 10 is a schematic flowchart of a ridge running algorithm based on anchor point traction gradient provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of a convolution kernel for calculating gradients of different orientations according to an embodiment of the present application.
  • FIG. 12 is a schematic diagram of performing lateral drift according to an embodiment of the present application.
  • FIG. 13 is a schematic diagram of an anchor point traction model provided by an embodiment of the present application.
  • FIG. 14 is a schematic block diagram of an image segmentation device provided by an embodiment of the present application.
  • FIG. 15 is a schematic block diagram of an image segmentation device provided by an embodiment of the present application.
  • FIG. 16 is a schematic diagram of the hardware structure of an image segmentation device provided by an embodiment of the present application.
  • At present, the polygon method or the curve fitting method is usually used for accurate image segmentation. The image segmentation line obtained by the polygon method differs greatly from the actual image segmentation line, so its accuracy is low; the curve fitting method requires the manual selection of many demarcation points, which consumes a lot of manpower.
  • the embodiments of the present application provide an image segmentation method and device.
  • In the image segmentation method, the user manually marks the starting anchor point and the target anchor point on the image to be processed, and the dividing point automatically moves according to the pixel gradient map of the image to be processed, with the starting anchor point as the starting position and the target anchor point as the target position, to obtain the segmentation line of the image to be processed; that is, a dividing point can automatically run out a segmentation line along the gradient ridge direction on the pixel gradient map, so that the segmentation line coincides with the natural boundary of the image, improving the accuracy of the image segmentation result while saving manpower.
  • Fig. 1 is a schematic diagram of a system architecture provided by an embodiment of the present application.
  • The system 100 may include an application server 110, a data server 120, and multiple clients (for example, a client 131, a client 132, and a client 133); the clients may be connected to the application server 110 and the data server 120 through a communication network. The data server 120 can be used to store a large number of image files and annotation files; the application server 110 can be used to provide image annotation or image editing services; the clients can be used to provide a human-computer interaction interface.
  • The client can be a mobile or fixed terminal; for example, the client can be a mobile phone with image processing functions, a tablet personal computer (TPC), a media player, a smart TV, a laptop computer (LC), a personal digital assistant (PDA), a personal computer (PC), a camera, a video camera, a smart watch, a wearable device (WD), a self-driving vehicle, or the like; this embodiment of the application does not limit this.
  • Each client can interact with the application server 110 or the data server 120 through a communication network of any communication mechanism/communication standard.
  • the communication network can be a wide area network, a local area network, a point-to-point connection, etc., or any combination thereof.
  • the system 100 shown in FIG. 1 is a multi-user system, and the image segmentation method provided in this application is also applicable to a single-user system; for a single-user system, that is, a system including one client, the image segmentation method provided in the embodiment of the application is It can be deployed on a client; for a multi-user system, that is, the system including multiple clients as shown in FIG. 1, the image segmentation method provided in this embodiment of the present application can be deployed on a server, such as an application server or a data server.
  • the image segmentation method provided in the embodiments of the present application can be applied to an image segmentation labeling tool or a retouching tool, and the image segmentation method can realize the background replacement of the image or the background blur of the image.
  • the user turns on the video call function in the smart terminal.
  • the image can be segmented in real time, and only the target object area is reserved to realize the replacement of the background area of the video call.
  • the user turns on the shooting function on the smart terminal.
  • the image can be segmented in real time, so that the foreground area of the photographed target object is clear and the background area is blurred, realizing the image effect of the large aperture of the SLR camera.
  • The background blurring of an image and the background replacement of an image described above are only two specific scenarios to which the image processing method of the embodiment of the present application is applied, and the image processing method of the embodiment of the present application is not limited to these two scenarios.
  • the image processing method of the embodiment of the present application can be applied to any scene that requires image segmentation.
  • Fig. 2 is a schematic flowchart of an image segmentation method provided by an embodiment of the present application. The method may be executed by the server or the client shown in FIG. 1, and the method shown in FIG. 2 includes step 210 to step 230, and these steps are respectively described in detail below.
  • Step 210 A first operation of manually marking an anchor point in the first image by the user is detected.
  • the first image may refer to an image to be processed with an image segmentation requirement
  • the above-mentioned anchor points may include a starting anchor point and a target anchor point.
  • The user can open the image processing tool through an operation and import the first image into the image processing tool, and the first image that needs to be segmented can then be displayed on the interface of the image processing tool.
  • Step 220 A second operation instructed by the user to automatically segment the first image is detected.
  • the second operation used by the user to instruct to automatically segment the first image may include the user clicking the button for automatically segmenting the image in the image processing tool, or may include the behavior of the user instructing the automatic segmentation through voice, or may also include the user Other actions that indicate automatic segmentation; the above are examples and do not limit this application in any way.
  • Step 230 In response to the second operation, display a second image on the display screen;
  • the second image is an image obtained after the first image is subjected to the second operation, and the second image includes a dividing line between the starting anchor point and the target anchor point, and the The dividing line is obtained by moving the dividing point in the first image with the starting anchor point as the starting position and the target anchor point as the target position.
  • The above dividing line may be obtained by automatically moving the dividing point in the first image with the starting anchor point as the starting position and the target anchor point as the target position; automatic movement means that the user only needs to manually mark the starting anchor point and the target anchor point in the first image, or a small number of marked points including them, and the dividing line between the starting anchor point and the target anchor point in the first image is then obtained automatically.
  • The starting anchor point and the target anchor point may be positions in the image to be processed whose positions do not change, and the dividing line between the starting anchor point and the target anchor point in the first image can be obtained by moving the dividing point from the starting anchor point to the target anchor point.
  • The aforementioned dividing point, with the starting anchor point as the starting position and the target anchor point as the target position, runs out a dividing line along the ridge of the pixel gradient map of the first image; that is, the dividing line coincides with the line along the ridge direction of the pixel gradient map of the first image. The pixel gradient map of the first image may refer to an image composed of the changes in brightness of pixels between different rows or different columns of the first image.
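  • A minimal sketch of how such a pixel gradient map could be computed is shown below, assuming a grayscale image stored as a NumPy array; the application does not prescribe a particular gradient operator, so simple row/column differences are used here purely for illustration.

```python
import numpy as np

def pixel_gradient_map(gray):
    """Gradient magnitude from brightness changes along rows and columns."""
    gray = gray.astype(np.float32)
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:] = gray[:, 1:] - gray[:, :-1]   # change between adjacent columns
    gy[1:, :] = gray[1:, :] - gray[:-1, :]   # change between adjacent rows
    return np.hypot(gx, gy)                  # large values form the gradient "ridges"
```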
  • The terminal device can search, according to the first image, for the mask image and the secant image corresponding to the first image, where the mask image can be used to represent the object areas in the first image and the secant image can be used to represent the boundaries of different objects in the first image; the image to be processed, the mask image, and the secant image are then superimposed and displayed on the interface of the image processing tool.
  • In one possible case, both the mask image and the secant image corresponding to the first image are found.
  • In another possible case, the secant image and the mask image corresponding to the first image are not found after searching; it is then necessary to generate the secant image and the mask image corresponding to the first image.
  • For example, according to the first image and the mask image, a preset boundary threshold can be used with the N-map-based method to automatically adjust the mask range, and the secant image is then automatically generated according to the mask boundary, so that the secant image and the mask image are automatically aligned.
  • The N-map method can also be called boundary drift based on an N-map. Suppose there are N image segmentation areas in the mask layer file, and the value of each pixel is the area number K to which the pixel belongs, K ∈ [0, N-1]; the pixels of unknown classification on both sides of the secant have their mask layer value changed to N; then a pre-trained deep neural network is used to process the corresponding image layer file, and the pixels of unknown classification and the background pixels are allocated to the N segmented regions.
  • In another possible case, the mask image corresponding to the first image needs to be generated.
  • In yet another possible case, the mask image corresponding to the first image is found but the secant image corresponding to the first image is not found; the secant image corresponding to the first image then needs to be generated. For the specific process of generating the secant image, refer to the schematic flowcharts shown in Figure 9 and Figure 10 below.
  • the user may manually select to perform segmentation processing on the first image through the above-mentioned mask image or secant image.
  • The secant image needs to be updated synchronously after the user adjusts the mask image, that is, the updated mask image is synchronized with the secant image; similarly, the mask image needs to be updated synchronously after the user adjusts the secant image, that is, the updated secant image is synchronized with the mask image.
  • the image segmentation method further includes: detecting a third operation instructed by the user to process the second image through the mask image or the secant image.
  • the above-mentioned second image may refer to the superimposed image of the first image, the mask image, and the secant image displayed on the display screen, and the third operation may be used to instruct to pass the mask image or The secant image performs automatic segmentation processing on the superimposed image.
  • The second image may be obtained according to the pixel gradient map of the first image and the anchor point pulling model, and the anchor point pulling model may be used to indicate the moving direction of the dividing point; that is, the preset anchor point pulling model can make the dividing point follow the ridge direction of the pixel gradient map to obtain the segmentation line of the first image, so that the segmentation line of the first image coincides with the natural boundary of the image.
  • the anchor point traction model may provide a moving direction for the dividing point in the dividing line.
  • The moving direction of the dividing point is determined according to the angle between a first straight line and a second straight line, where the first straight line refers to the straight line in which each of the eight azimuths lies, and the second straight line refers to the line between the current position of the dividing point and the target anchor point; or
  • the moving direction of the dividing point is determined according to the angle between the first straight line and the second straight line and a distance parameter, where the first straight line refers to the straight line in which each of the eight azimuths lies, the second straight line refers to the line between the current position of the dividing point and the target anchor point, and the distance parameter refers to the distance between the current position of the dividing point and the target anchor point; or
  • the moving direction of the dividing point is determined according to the ridge direction of the pixel gradient map.
  • For example, the first straight line is the straight line in which each of the eight azimuths lies; as shown in (a) in Figure 11, these can respectively be the straight lines oa, ob, oc, od, oe, of, og, and oh.
  • The gravitational model of the target anchor point (for example, anchor point B) acting on the running point (for example, the dividing point) can be regarded as composed of a strong gravity model and a weak gravity model. Suppose the distance between the running point and the target anchor point is dis, the distance between anchor point A and anchor point B is d0, and the angle between each of the eight azimuths and the line AB is θ_i; the strong gravity weight w_i and the weak gravity offset b_i of each azimuth can then be calculated according to the gravity model shown in Figure 13.
  • In both the strong gravity model and the weak gravity model, the smaller the angle between an azimuth and the line AB, the greater the gravitational force; for example, the gravitational force of an azimuth is largest when its angle with the line AB is 0° and smallest when its angle with the line AB is 180°.
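  • The application does not give the exact formulas of the strong and weak gravity models; the sketch below only illustrates the qualitative behaviour described above (gravity largest at 0° to the line AB, smallest at 180°, strong gravity growing as the running point approaches anchor point B), using an assumed cosine-shaped weighting that is not part of the application.

```python
import numpy as np

# Eight azimuths as unit steps on the pixel grid (E, NE, N, NW, W, SW, S, SE).
DIRECTIONS = [(1, 0), (1, -1), (0, -1), (-1, -1),
              (-1, 0), (-1, 1), (0, 1), (1, 1)]

def gravity_terms(current, target, d0):
    """Assumed strong-gravity weights w_i and weak-gravity offsets b_i per azimuth."""
    to_target = np.array(target, dtype=float) - np.array(current, dtype=float)
    dis = np.linalg.norm(to_target) + 1e-9
    weights, offsets = [], []
    for dx, dy in DIRECTIONS:
        d = np.array([dx, dy], dtype=float)
        cos_theta = float(d @ to_target) / (np.linalg.norm(d) * dis)  # cos of angle theta_i
        # Assumed forms: strong gravity grows as the running point nears B (dis/d0 -> 0),
        # weak gravity is a small angle-only bias; both peak when the angle is 0 degrees.
        w = (1.0 + cos_theta) * (1.0 - min(dis / (d0 + 1e-9), 1.0))
        b = 0.1 * (1.0 + cos_theta)
        weights.append(w)
        offsets.append(b)
    return weights, offsets
```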
  • When the moving direction of the dividing point at the previous moment is the same as its moving direction at the current moment, the dividing point can be made to move in a preset direction; that is, the moving direction of the dividing point is determined according to the absolute values of the gradient in different moving directions.
  • Because the position of an anchor point is generated by the user clicking on the image with a mouse, the anchor point position may deviate and may not be located on the ridge of the gradient map corresponding to the image. To enable the anchor point to accurately obtain the segmentation line of the image on the gradient map, the anchor point position can be optimized. For example, the anchor point position can be optimized with the four-way gradient convolution kernels shown in FIG. 7, that is, the anchor point position is adjusted according to the absolute values of the gradient of the anchor point in different directions, and the optimized position information of the anchor point is obtained.
  • FIG. 3 shows a schematic flowchart of an image segmentation method provided by an embodiment of the present application.
  • The method can be executed by an image processing tool (for example, an image segmentation labeling tool or a retouching tool) in the server or the client shown in FIG. 1.
  • the method shown in FIG. 3 includes steps 301 to 311, and these steps are respectively described in detail below.
  • The image processing tool may include image layers, mask layers, and secant layers.
  • each pixel value in the image layer can represent the gray level of the image, and the pixel coordinates (x, y) can be a positive integer, and the image layer is displayed on the screen as a normal image, such as a color image.
  • Each pixel value in the mask layer can represent the serial number of a mask; for example, when the mask layer is displayed on the screen, areas with the same mask serial number can be colored with the same color: the mask serial number of the background is 0 and can be colored blue when displayed, the mask serial number of a TV is 1 and colored red when displayed, and other mask serial numbers can be deduced by analogy.
  • the user can assign a different display color to each mask serial number by operating a menu or button or palette on a graphical user interface (GUI).
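  • To illustrate this display behaviour, the sketch below colors a mask layer by looking up a per-serial-number palette; the specific colors (blue for background 0, red for serial 1) follow the example in the text, while the palette mechanism itself is an assumption for illustration.

```python
import numpy as np

def colorize_mask(mask, palette=None):
    """Map each mask serial number to an RGB color for on-screen display."""
    if palette is None:
        palette = {0: (0, 0, 255),    # background -> blue (example from the text)
                   1: (255, 0, 0)}    # serial number 1 (e.g. TV) -> red
    rgb = np.zeros((*mask.shape, 3), dtype=np.uint8)
    for serial, color in palette.items():
        rgb[mask == serial] = color   # color every pixel of this mask region
    return rgb
```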
  • The secant layer may be composed of dividing lines, each consisting of anchor points and dividing points, where the anchor points may be the end points of a dividing line; the value of an anchor point may represent the serial number of the anchor point (for example, 129 to 255), the value of a dividing point may represent the serial number of the dividing line (for example, 1 to 127), and a point with a value of 0 is neither an anchor point nor a dividing point.
  • the anchor point and the segmentation point may be between pixels and not coincide with the pixel position, and their two-dimensional coordinates (u, v) and the coordinates (x, y) of nearby pixels are offset by a distance of 0.5 pixels.
  • the split layer can be displayed as a dotted line on the screen, and the anchor point can be displayed as a small circle on the screen; the user can move any anchor point by clicking or dragging, and the split line also moves with the anchor point.
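  • A small sketch of how the secant-layer values described above could be interpreted is given below, assuming the value ranges from the text (0 = empty, 1-127 = dividing-line serial number, 129-255 = anchor-point serial number) and the half-pixel coordinate offset; the helper names are hypothetical.

```python
def classify_secant_value(value):
    """Interpret one secant-layer cell according to the value ranges given in the text."""
    if value == 0:
        return "none"
    if 1 <= value <= 127:
        return ("dividing_point", value)   # serial number of the dividing line
    if 129 <= value <= 255:
        return ("anchor_point", value)     # serial number of the anchor point
    return "unknown"

def secant_to_pixel_coords(u, v):
    """Secant points sit between pixels: offset by 0.5 pixel from pixel coords (x, y)."""
    return u + 0.5, v + 0.5   # offset direction assumed for illustration
```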
  • the user can operate the menu or button on the GUI of the annotation tool to enable each layer to be turned off (not displayed) or turned on (displayed) independently.
  • Step 301 Read the image layer file.
  • the user imports an image layer file through an operation.
  • Step 302 Search for the mask layer file and the secant layer file corresponding to the image layer file.
  • the above-mentioned image layer file may refer to an image file with specific image segmentation requirements, and the file corresponding to the image layer file can be searched through a preset file path.
  • Each layer file can be saved as a bitmap file (bitmap, BMP), a losslessly compressed bitmap graphics format (portable network graphics, PNG), or another image file format. The correspondence between different layer files can be described by icon or file-name naming rules, or by packaging the image layer file, the corresponding mask layer file, and the corresponding secant layer file in one file or one folder, as shown in Table 1.
  • If both the mask layer file and the secant layer file corresponding to the image layer file are found, step 306 may be executed.
  • If neither the mask layer file nor the secant layer file is found, step 303 is executed to automatically generate the mask layer file and the secant layer file corresponding to the image layer file.
  • For example, Mask-RCNN, DeepLabV3, or other algorithms can be used to perform automatic image segmentation on the image layer file, thereby generating the mask layer file; further, according to the image layer file and the mask layer file, a preset boundary threshold is used with the N-map-based method to automatically adjust the mask range, and the secant layer file is then automatically generated according to the mask boundary, so that the secant layer and the mask layer are automatically aligned.
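  • As an illustrative sketch of generating a mask layer with an off-the-shelf Mask-RCNN model, the snippet below uses torchvision's implementation; the application only names the algorithm, so the library choice, the score threshold, and the output encoding (0 = background, k = 1..N per detected region) are assumptions.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

def generate_mask_layer(image_tensor, score_thresh=0.5):
    """image_tensor: float tensor [3, H, W] in [0, 1]. Returns an integer mask layer
    where 0 is background and k = 1..N numbers the detected regions."""
    model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()
    with torch.no_grad():
        out = model([image_tensor])[0]           # dict with 'masks', 'scores', 'labels'
    mask_layer = torch.zeros(image_tensor.shape[1:], dtype=torch.int64)
    k = 1
    for m, s in zip(out["masks"], out["scores"]):
        if s >= score_thresh:
            mask_layer[m[0] > 0.5] = k           # binarize the soft instance mask
            k += 1
    return mask_layer
```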
  • The above method of adjusting the mask layer based on an N-map can also be called boundary drift based on an N-map. Suppose there are N image segmentation areas in the mask layer file, and the value of each pixel is the area number K to which the pixel belongs, K ∈ [0, N-1]; the pixels of unknown classification on both sides of the secant have their mask layer value changed to N; then a pre-trained deep neural network is used to process the corresponding image layer file, and the pixels of unknown classification and the background pixels are allocated to the N segmented regions.
  • For example, at the midpoint of the secant, the mask layer value can be changed to N for T consecutive pixels on both sides of the line perpendicular to the secant; the farther a position is from the midpoint of the secant, the fewer pixels have their mask layer value changed to N; at an anchor point, the number of pixels whose mask layer value is changed to N is 0. If the secant is a closed curve that does not include an anchor point, the mask layer value on both sides of the secant can be changed to N for T pixels at any position of the secant; T can be a preset threshold that can be manually adjusted in the user interface.
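  • A minimal sketch of this N-map boundary-drift marking is shown below: pixels within a band around the secant are set to the unknown label N, with the band T pixels wide near the midpoint and tapering to 0 at the anchors. The linear taper and the use of a square neighbourhood (rather than a strictly perpendicular band) are simplifying assumptions.

```python
import numpy as np

def mark_unknown_band(mask, secant_points, N, T):
    """secant_points: ordered (row, col) points from one anchor to the other.
    Sets mask values near the secant to N (the 'unknown' region to be re-classified)."""
    n = len(secant_points)
    for i, (r, c) in enumerate(secant_points):
        # Band half-width: T at the midpoint of the secant, 0 at the anchors (assumed linear taper).
        frac = 1.0 - abs(i - (n - 1) / 2.0) / ((n - 1) / 2.0 + 1e-9)
        half = int(round(T * frac))
        if half <= 0:
            continue
        r0, r1 = max(r - half, 0), min(r + half + 1, mask.shape[0])
        c0, c1 = max(c - half, 0), min(c + half + 1, mask.shape[1])
        mask[r0:r1, c0:c1] = N
    return mask
```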
  • If the mask layer file is found but the secant layer file is not found, step 304 is executed to automatically generate the secant layer file.
  • For example, the N-map-based method can be used to automatically adjust the mask range according to the preset boundary threshold, and the secant layer file is then automatically generated according to the mask boundary so that the secant layer and the mask layer are automatically aligned.
  • If the secant layer file is found but the mask layer file is not found, step 305 is executed to automatically generate the mask layer file.
  • For example, the user can observe the edges of the image and manually adjust the anchor points of the secant layer, and the anchor-point-pulled gradient ridge run method can be used to automatically adjust the secant position of the secant layer; the mask layer file is then automatically generated so that the mask layer and the secant layer are automatically aligned.
  • Step 306 The three layer files can be superimposed and displayed to allow the user to edit the image segmentation.
  • Step 307 The user can select the image for image segmentation editing; for example, if the user selects the mask layer file to adjust the image segmentation, step 308 and step 309 are executed; if the user selects the secant layer file to adjust the image segmentation, then Step 310 and step 311 are performed.
  • Step 308 Adjust the mask layer file.
  • the mask layer file can be adjusted by the N-map method shown in FIG. 4 above.
  • Step 309 Update the secant layer file according to the updated mask layer file, that is, keep the updated mask layer file and the secant layer file synchronously aligned.
  • Step 310 Adjust the secant layer file.
  • the user can manually adjust the anchor point of the secant layer file by observing the edge of the image; or use the anchor point pulling gradient ridge run algorithm shown in Figure 9 and Figure 10 to automatically adjust the secant position of the secant layer file.
  • Step 311 Update the mask layer file according to the updated secant layer file, that is, keep the updated secant layer file and the mask layer file synchronously aligned.
  • For example, as shown in FIG. 5, (a) in FIG. 5 is the original image, and (b) in FIG. 5 is the pixel gradient map corresponding to the original image shown in (a) in FIG. 5.
  • The user can observe the original image (for example, a color image), click 6 anchor points (for example, 1-6) in the original image with the mouse, and map the positions of the 6 anchor points onto the pixel gradient map.
  • Through anchor point pulling, a dividing line can be drawn between every two adjacent anchor points along the ridge direction of the pixel gradient map.
  • The 6 dividing lines can form a closed curve; the inside of the curve can be the foreground, and the outside of the curve can be the background.
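  • As an illustration of how a closed curve of dividing lines separates foreground from background, the sketch below fills the polygon formed by the dividing points to obtain a foreground mask; matplotlib's Path is used here only as an assumed rasterization helper, not as part of the application.

```python
import numpy as np
from matplotlib.path import Path

def closed_curve_to_foreground(curve_points, height, width):
    """curve_points: ordered (x, y) points of the closed dividing curve.
    Returns a boolean mask that is True inside the curve (foreground)."""
    ys, xs = np.mgrid[0:height, 0:width]
    pixel_centers = np.column_stack([xs.ravel(), ys.ravel()])
    inside = Path(curve_points).contains_points(pixel_centers)
    return inside.reshape(height, width)
```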
  • Because the anchor points are manually marked, their positions may deviate to some extent; to make the anchor point positions more accurate and thereby improve the accuracy of the segmentation line of the image to be processed, the positions of the anchor points manually marked by the user can be optimized.
  • FIG. 6 is a schematic flowchart of a method for optimizing anchor point positions provided by an embodiment of the present application.
  • the method shown in FIG. 6 includes steps 401 to 406, and these steps are respectively described in detail below.
  • Step 401 The image layer and the secant layer are displayed in a graphical user interface (GUI).
  • Step 402 The user edits the anchor point position.
  • the user can generate an anchor point by clicking on the image, or the user can drag an existing anchor point to generate an updated anchor point.
  • Step 403 Obtain the original coordinates of the anchor point.
  • the original coordinates of anchor point A are (x1, y1).
  • Step 404 Search the ridge according to the four-way gradient to obtain the optimized anchor point position.
  • the four-direction convolution kernels provided in this embodiment of the application are shown in FIG. 7, where (a) in FIG. 7 is the V-direction gradient convolution kernel; (b) in FIG. 7 is the H-direction gradient convolution kernel; (c) in FIG. 7 is the L-direction gradient convolution kernel; and (d) in FIG. 7 is the R-direction gradient convolution kernel.
  • the V-direction gradient ridge search position shown in (e) in FIG. 7 and the R-direction gradient ridge search position shown in (f) in FIG. 7 are used as examples to illustrate the method of optimizing anchor points.
  • the gradient ridge is searched along the direction whose gradient has the largest absolute value among the four-direction gradients; for example, the same gradient is calculated at the positions A-1, A-2, A-3, A-4 and A+1, A+2, A+3, A+4.
  • if a position with a gradient sign opposite to that at anchor point A is encountered, the search stops immediately; then, among the positions with the same gradient sign as at anchor point A, the position with the largest absolute gradient value is taken as the optimized anchor point position A.
  • Step 405 Adjust the anchor point data in the secant line.
  • Step 406 Display the optimized anchor point (x2, y2) in the graphical user interface.
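The sketch below is one hedged reading of steps 401 to 406: starting from the clicked anchor, search a few pixels along the direction (V, H, L or R) whose gradient has the largest magnitude, stop when the gradient sign flips, and snap the anchor to the position with the largest absolute gradient. The offsets chosen for V, H, L and R and the simple difference operator standing in for the convolution kernels of FIG. 7 are assumptions of this example.

```python
import numpy as np

# Mapping of the four directions (V, H, L, R) to pixel offsets is an
# assumption of this sketch; the real kernels are those of FIG. 7.
DIRECTIONS = {"V": (1, 0), "H": (0, 1), "L": (1, -1), "R": (1, 1)}

def directional_gradient(img: np.ndarray, dy: int, dx: int) -> np.ndarray:
    """Signed difference along (dy, dx); border wrap-around is ignored."""
    f = img.astype(np.float64)
    return np.roll(f, (-dy, -dx), axis=(0, 1)) - f

def optimize_anchor(img: np.ndarray, y: int, x: int, radius: int = 4):
    """Steps 403-405: snap the clicked anchor onto the nearest gradient ridge."""
    grads = {k: directional_gradient(img, *d) for k, d in DIRECTIONS.items()}
    # Search along the direction whose gradient has the largest magnitude at A.
    key = max(grads, key=lambda k: abs(grads[k][y, x]))
    dy, dx = DIRECTIONS[key]
    g = grads[key]
    sign = np.sign(g[y, x])
    best = (abs(g[y, x]), (y, x))
    for s in (+1, -1):                       # positions A+1..A+4 and A-1..A-4
        for step in range(1, radius + 1):
            ny, nx = y + s * step * dy, x + s * step * dx
            if not (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]):
                break
            if np.sign(g[ny, nx]) != sign:   # opposite sign: stop this side
                break
            best = max(best, (abs(g[ny, nx]), (ny, nx)))
    return best[1]                           # optimized anchor position (y, x)
```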
  • FIG. 8 is a schematic flowchart of an image segmentation method provided by an embodiment of the present application. The method can be executed by the server or the client shown in FIG. 1. The method shown in FIG. 8 includes step 510 and step 520, and these steps are respectively described in detail below.
  • Step 510 Acquire the first image and the position information of the anchor points in the first image, where the anchor points may include a starting anchor point and a target anchor point.
  • the first image may refer to an image that has an image segmentation requirement.
  • a pixel gradient map of the first image may also be obtained.
  • Step 520 Obtain a second image according to the first image and the anchor point.
  • the second image is an image obtained after image segmentation of the first image
  • the second image may include a dividing line between the starting anchor point and the target anchor point
  • the dividing line may be obtained by moving a dividing point in the pixel gradient map of the first image with the starting anchor point as the starting position and the target anchor point as the target position.
  • the starting anchor point and the target anchor point may be positions in the first image that do not change, and the dividing line between the starting anchor point and the target anchor point in the first image can be obtained by moving the dividing point from the starting anchor point to the target anchor point.
  • the aforementioned dividing point may take the starting anchor point as the starting position and the target anchor point as the target position, and a dividing line is run along the ridge of the pixel gradient map of the first image; that is, the dividing line coincides with the dividing line in the ridge direction of the pixel gradient map of the first image.
  • the ridge direction in the pixel gradient map of the first image may refer to a curve formed by the local maximum value of the gradient in the pixel gradient map.
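As a hedged illustration of this definition, ridge pixels can be approximated as local maxima of the gradient magnitude along either image axis; the 3-pixel neighbourhood used below is an assumption, not something fixed by the original text.

```python
import numpy as np

def ridge_mask(grad: np.ndarray) -> np.ndarray:
    """Mark pixels whose gradient magnitude is a local maximum along the
    horizontal or the vertical axis; a simple proxy for the 'ridge'."""
    g = grad.astype(np.float64)
    p = np.pad(g, 1, mode="edge")
    c = p[1:-1, 1:-1]
    horiz = (c >= p[1:-1, :-2]) & (c >= p[1:-1, 2:])   # maximum across columns
    vert = (c >= p[:-2, 1:-1]) & (c >= p[2:, 1:-1])    # maximum across rows
    return (horiz | vert) & (c > 0)
```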
  • obtaining the second image according to the first image and the anchor point may include: obtaining the second image according to the pixel gradient map, the anchor point, and the anchor point pulling model of the first image.
  • the moving direction of the dividing point is determined according to the angle between a first line and a second line when the dividing point is currently located in an area of the pixel gradient map without a ridge, where the first line refers to the straight line on which each of the eight azimuths lies, and the second line refers to the line connecting the current position of the dividing point and the target anchor point; or,
  • the moving direction of the dividing point is determined according to the angle between the first line and the second line and a distance parameter when the dividing point is currently located in an area with a ridge and a bifurcation, where the first line refers to the straight line on which each of the eight azimuths lies, the second line refers to the line connecting the current position of the dividing point and the target anchor point, and the distance parameter refers to the distance between the current position of the dividing point and the target anchor point; or,
  • the moving direction of the dividing point is determined according to the ridge direction of the pixel gradient map when the dividing point is currently located on a ridge with no bifurcation; a minimal dispatch over these three cases is sketched below.
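Purely as an illustration of how the three cases above could be organized in code, the sketch below dispatches on the local ridge structure; `has_ridge`, `has_fork`, and the three `dir_*` callables are hypothetical placeholders for the computations described elsewhere in this document (the angle rule, the angle-plus-distance rule, and ridge following), not functions defined by the original text.

```python
def next_move_direction(point, target, grad_map,
                        has_ridge, has_fork,
                        dir_by_angle, dir_by_angle_and_distance, dir_by_ridge):
    """Choose the moving direction of the dividing point for one step.

    All five callables are hypothetical hooks: has_ridge/has_fork inspect the
    pixel gradient map around `point`, and the three dir_* functions implement
    the angle rule, the angle-plus-distance rule, and ridge following.
    """
    if not has_ridge(grad_map, point):
        # No ridge nearby: steer by the angle between each of the eight
        # azimuth lines and the line from the current point to the target.
        return dir_by_angle(point, target)
    if has_fork(grad_map, point):
        # Ridge with a bifurcation: also weigh in the remaining distance.
        return dir_by_angle_and_distance(point, target)
    # Clear single ridge: simply follow the ridge direction.
    return dir_by_ridge(grad_map, point)
```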
  • the strong gravity weight w_i and the weak gravity offset b_i of each azimuth can be calculated according to the gravity model.
  • in both the strong gravity model and the weak gravity model, the smaller the angle between an azimuth and the line connecting anchor points A and B, the greater the gravitational force; for example, the gravitational force of an azimuth is the largest when its angle with the line AB is 0°, and the smallest when the angle is 180°.
  • the dividing point can be made to move in a preset direction, that is, the moving direction of the dividing point is determined according to the absolute values of the gradients in different moving directions.
  • since the position of the anchor point is generated by the user clicking on the image with the mouse, the anchor point may deviate and may not be located on the ridge of the gradient map corresponding to the image; in order to accurately obtain the segmentation line of the image according to the gradient map, the anchor point position can be optimized.
  • the position of the anchor point can be optimized by the four-direction gradient convolution kernels as shown in FIG. 6; for example, the anchor point position can be optimized according to the absolute values of the gradients of the anchor point in different directions, and the optimized position information of the anchor point is obtained.
  • FIG. 9 shows a schematic flowchart of an image segmentation method provided by an embodiment of the present application.
  • the method may be executed by the server or client shown in FIG. 1.
  • the method shown in FIG. 9 includes steps 601 to 609, and these steps are respectively described in detail below.
  • Step 601 Obtain the anchor point position A.
  • the anchor point position A may be an anchor point selected by the user in the image, or a new anchor point generated by the user by dragging an existing anchor point.
  • it may be an anchor point generated by the user clicking on the image with the mouse, or a new anchor point generated by the user dragging an existing anchor point with the mouse.
  • the position information of the anchor point A on the picture can be obtained through the annotation tool.
  • Step 602 Optimize the anchor point position A.
  • since the anchor point position A is generated by the user clicking on the image with the mouse, the anchor point position A may have a deviation and may not be located on the ridge of the gradient map corresponding to the image; therefore, in order to accurately obtain the dividing line of the image according to the gradient map, the anchor point position A can be optimized.
  • the anchor point position A may be optimized through the four-way gradient convolution kernel as shown in FIG. 6.
  • the four-direction gradients of anchor point A (for example, V, H, L, and R) can be obtained, and the gradient ridge is searched along the direction whose gradient has the largest absolute value. If a position with a gradient sign opposite to that at anchor point A is encountered, the search stops immediately; then, among the positions with the same gradient sign as at anchor point A, the position with the largest absolute gradient value is taken as the optimized anchor point position A.
  • Step 603 Obtain the anchor point position B.
  • the anchor point position A may refer to the starting point of the image segmentation
  • the anchor point position B may refer to the target point of the image segmentation.
  • the image segmentation line can be obtained by the connection between the anchor point position A and the anchor point position B.
  • the anchor point position B can be the anchor point selected by the user in the image, or the user can drag an existing anchor point to generate a new anchor point.
  • Step 604 Optimize the anchor point position B.
  • step 601 and step 603 can be performed at the same time, or step 601 can be performed first and then step 603; similarly, step 602 and step 604 can be performed at the same time, or step 602 can be performed first and then step 604. This application does not impose any limitation on this.
  • Step 605 Generate multiple candidate segmentation lines according to the optimized anchor point position A and the optimized anchor point position B.
  • the process of ridge running based on the anchor point traction gradient is executed.
  • the specific process refer to the flowchart shown in FIG. 10.
  • a two-way run is performed between the optimized anchor point position A and the optimized anchor point position B: one run takes the optimized position A as the starting point and the optimized position B as the target point, and the other run takes the optimized position B as the starting point and the optimized position A as the target point; two dividing lines L1 and L2 are obtained, with N1 and N2 breakpoints respectively, where a breakpoint may refer to a running point whose gradient value is lower than a preset threshold.
  • the dividing line L3 is obtained, where the number of breakpoints is N3.
  • the dividing line L4 is obtained, where the number of breakpoints is N4.
  • Step 606 The dividing line with the fewest breakpoints is preferentially selected.
  • a division line with the least number of break points is selected from a plurality of candidate division lines.
  • Step 607 It is judged whether the selection of the dividing line is successful; if it succeeds, step 608 is executed and the process ends; if it fails, step 609 is executed and the user edits the dividing line manually.
  • the number of breakpoints of each dividing line is counted, and the dividing line with the fewest breakpoints is preferred; if the dividing line with the fewest breakpoints differs little in breakpoint count from the other dividing lines, multiple dividing lines can be presented to the user for manual selection, as illustrated in the sketch below.
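The following is a minimal sketch, not taken from the original disclosure, of how steps 605 to 607 could be realized: each candidate dividing line is assumed to be a list of (y, x) running points, a breakpoint is a running point whose gradient value falls below a threshold, and the `tie_band` parameter that triggers manual selection is an assumption of this example.

```python
import numpy as np

def count_breakpoints(line, grad_map, thr):
    """A breakpoint is a running point whose gradient is below `thr`."""
    return sum(1 for (y, x) in line if grad_map[y, x] < thr)

def pick_dividing_line(candidates, grad_map, thr, tie_band=2):
    """Return (best_line, needs_manual_choice) from candidate lines L1..L4."""
    counts = [count_breakpoints(line, grad_map, thr) for line in candidates]
    order = np.argsort(counts)
    best = candidates[order[0]]
    # If another candidate is almost as good, fall back to manual selection
    # (step 609); `tie_band` is an assumed closeness threshold.
    needs_manual = len(counts) > 1 and counts[order[1]] - counts[order[0]] <= tie_band
    return best, needs_manual
```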
  • Fig. 10 is a schematic flowchart of an anchor-point traction gradient ridge run provided by an embodiment of the present application. This method can be executed by the server or the client shown in FIG. 1. The method shown in FIG. 10 includes steps 701 to 710, and these steps are respectively described in detail below.
  • Step 701 Start from anchor point position A.
  • the anchor point position A may be an anchor point selected by the user in the image, or the user may drag an existing anchor point to generate a new anchor point.
  • a preset step length can be set before starting from anchor point position A; the step length refers to the distance moved in each step when moving from anchor point position A to the target anchor point position B.
  • anchor point position A may be the initial anchor point manually marked by the user in the image, or may also refer to the anchor point position in the process of moving from the initial anchor point to the target anchor point, which is not limited in this application .
  • Step 702 Update the starting point coordinates.
  • updating the starting point coordinates may refer to optimizing the anchor point position A, thereby improving the accuracy of the dividing line; the specific process of optimizing the anchor point position A can be seen in FIG. 7 and will not be repeated here.
  • the dividing line of the image may be obtained by moving the dividing point in the pixel gradient map corresponding to the image with the starting anchor point as the starting position and the target anchor point as the target position, where
  • the running point can refer to the point on the dividing line obtained by moving the above-mentioned dividing point according to the preset step length through the algorithm.
  • the running point can start from the position of the starting anchor point and move to the position of the target anchor point according to the preset step length and the selected running direction, so as to obtain the dividing line of the image between the starting anchor point and the target anchor point.
  • Step 703 Choose a running direction.
  • running direction may refer to the moving direction of each step of the running point according to the preset step length.
  • the running direction of the running point can be selected according to the preset step length in the following ways: Method 1: The running direction of the running point is determined based on the calculation of the gradient value of different directions.
  • Figure 11 (a) shows the schematic diagram of the 8-direction azimuths; Figure 11 (b) shows the oa azimuth gradient convolution kernel; Figure 11 (c) shows the ob azimuth gradient convolution kernel; Figure 11 (d) shows the oab azimuth gradient convolution kernel; Figure 11 (e) shows the ocd azimuth gradient convolution kernel. If the running direction of the running point is below the fob direction, the oab gradient convolution kernel can be used; if the running direction of the running point is above the fob direction, the ocb gradient convolution kernel can be used; if the running direction of the running point is fo, the larger of the absolute values of the oab gradient and the ocb gradient can be used.
  • the running point runs along the ridge of the gradient map; for example, if the previous step ran in from the ao direction, the next step is not allowed to run out in the oa direction. Therefore, to determine the next running direction, the gradients of the remaining 7 directions different from the incoming direction need to be calculated. For example, if the previous step ran in from the ao direction, the ob, oc, od, oe, of, og and oh gradients are calculated, and the next step runs out in the direction with the largest absolute value among these 7 gradients, as illustrated in the sketch below.
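As a hedged illustration of Method 1, the sketch below evaluates the gradient magnitude of the seven azimuths other than the reverse of the incoming direction and steps toward the strongest one; the letter-to-offset mapping of the eight azimuths and the use of a plain gradient-magnitude map instead of the oa/ob/... convolution kernels of FIG. 11 are assumptions of this example.

```python
import numpy as np

# Eight azimuths (a..h) as unit steps on the pixel grid; this letter-to-offset
# mapping is an assumption of the sketch.
AZIMUTHS = {"a": (-1, 0), "b": (-1, 1), "c": (0, 1), "d": (1, 1),
            "e": (1, 0), "f": (1, -1), "g": (0, -1), "h": (-1, -1)}

def choose_run_direction(grad_map, point, came_from):
    """Method 1: pick the azimuth with the largest gradient magnitude,
    excluding the reverse of the step just travelled (`came_from`)."""
    y, x = point
    h, w = grad_map.shape
    banned = (-came_from[0], -came_from[1])      # do not run straight back out
    best_dir, best_val = None, -np.inf
    for dy, dx in AZIMUTHS.values():
        if (dy, dx) == banned:
            continue
        ny, nx = y + dy, x + dx
        if not (0 <= ny < h and 0 <= nx < w):
            continue
        val = abs(grad_map[ny, nx])              # stand-in for the azimuth kernels
        if val > best_val:
            best_dir, best_val = (dy, dx), val
    return best_dir
```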
  • Method 2 Determine the running direction of the running point based on the anchor point traction model.
  • the distance dis between the running point and the anchor point position B can be calculated; for example, the distance between anchor point position A and anchor point position B is d0; then, the traction weight w_i and the offset b_i can be calculated to obtain the anchor point traction model.
  • the above anchor point traction model can be regarded as being composed of a strong gravity model and a weak gravity model; for example, suppose the distance between the running point and the target anchor point is dis, the distance between anchor point A and anchor point B is d0, and the angle between each of the eight azimuths and the line connecting anchor points A and B is θ_i; then the strong gravity weight w_i and the weak gravity offset b_i of each azimuth can be calculated according to the gravity model shown in FIG. 13.
  • in both the strong gravity model and the weak gravity model, the smaller the angle between an azimuth and the line connecting anchor points A and B, the greater the gravitational force; for example, the gravitational force of an azimuth is the largest when its angle with the line AB is 0°, and the smallest when the angle is 180°.
  • the weighted calculation of the gradient can be performed by an equation of the form D′_i = w_i × D_i + b_i, where D′_i represents the weighted gradient of the i-th azimuth among the seven candidate azimuths, D_i represents the unweighted gradient of the i-th azimuth, w_i represents the strong gravity weight of the i-th azimuth, and b_i represents the weak gravity offset of the i-th azimuth.
  • the weak gravity model can provide clear direction guidance for the running point; when the running point is at a bifurcation of the ridge, the strong gravity model can provide clear direction guidance for the running point; when the running point is located on an obvious ridge with no bifurcation, the path of the running point can be mainly determined by the gradient map.
  • the strong gravity weight w_i and the weak gravity offset b_i can be calculated for each azimuth; among them, Figure 13 (a) shows the different anchor point directions, Figure 13 (b) shows the schematic diagram of the strong gravity model, and Figure 13 (c) shows the schematic diagram of the weak gravity model.
  • the schematic image shown in Figure 13(b) includes straight line 1, straight line 2, and straight line 3.
  • the movement direction of each step of the running point can be calculated at the preset step length according to the above-mentioned method 1 and method 2, so as to obtain the running direction of the next step of the running point.
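The sketch below is one possible reading of Method 2 (the anchor point traction model). The exact formulas for the strong gravity weight w_i and the weak gravity offset b_i are not given in this text, so a simple cosine falloff consistent with "largest traction at 0°, smallest at 180°" is assumed; all names and constants are illustrative only.

```python
import numpy as np

def traction_weights(point, target, d0, azimuth_angles):
    """Anchor-traction model (Method 2), as one hedged reading of FIG. 13.

    point, target : (y, x) current running point and target anchor B
    d0            : distance between anchor A and anchor B
    azimuth_angles: angles (radians) between each azimuth and the line AB

    The cosine falloff and the way the weak offset grows as the running
    point approaches B are assumptions of this sketch.
    """
    dis = np.hypot(target[0] - point[0], target[1] - point[1])
    closeness = 1.0 - min(dis / d0, 1.0)          # 0 far from B, 1 at B
    w = 1.0 + 0.5 * np.cos(azimuth_angles)        # strong gravity weight w_i
    b = closeness * np.cos(azimuth_angles)        # weak gravity offset b_i
    return w, b

def weighted_gradients(D, w, b):
    """Apply D'_i = w_i * D_i + b_i elementwise to the candidate azimuth gradients."""
    return w * np.asarray(D, dtype=np.float64) + b
```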
  • Step 704 Determine whether the selected running direction is the same as the previous running direction of the running point; if they are the same, perform step 705; if they are not the same, perform step 709 to determine whether anchor point position B has been reached.
  • if anchor point position B has been reached, step 710 is executed to end the running process, that is, the moving process of the dividing point is ended; if the selected running direction is different from the previous running direction and the running point has not reached anchor point position B, return to step 702.
  • in order to compensate for the error of the running point in the process of moving along the pixel gradient map, step 705 may be performed.
  • Step 705 Determine the drift direction.
  • the three positions a, a1, and a2 are used as alternate lateral drift directions.
  • the position where the absolute value of the gradient is the largest is the drift direction.
  • Step 706 Determine whether it is a continuous same-directional drift; if it is a continuous same-directional drift, execute step 708 to cancel the drift; if it is not a continuous same-directional drift, execute step 707 to execute the drift.
  • the aforementioned continuous drift in the same direction means that when the running point executes the aforementioned multiple preset step lengths, the moving direction of each step length moves in the aforementioned drift direction.
  • drift can be cancelled, and the running direction of the running point can be re-determined through the above-mentioned method one or the above-mentioned method two.
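A minimal sketch of the drift handling in steps 705 to 708, assuming the three lateral candidates a, a1 and a2 are the straight-ahead cell and its two side neighbours perpendicular to the running direction, and that the previous drift direction is remembered so that a continuous same-direction drift can be cancelled; these assumptions are illustrative only.

```python
import numpy as np

def lateral_candidates(point, run_dir):
    """Straight-ahead cell plus its two side neighbours (a, a1, a2)."""
    (y, x), (dy, dx) = point, run_dir
    ahead = (y + dy, x + dx)
    side = (-dx, dy)                       # unit vector perpendicular to run_dir
    return [ahead,
            (ahead[0] + side[0], ahead[1] + side[1]),
            (ahead[0] - side[0], ahead[1] - side[1])]

def drift_step(grad_map, point, run_dir, prev_drift):
    """Steps 705-708: move to the candidate with the largest |gradient|,
    but cancel the drift if it repeats the previous drift direction."""
    h, w = grad_map.shape
    cands = [(y, x) for (y, x) in lateral_candidates(point, run_dir)
             if 0 <= y < h and 0 <= x < w]
    best = max(cands, key=lambda p: abs(grad_map[p]))
    drift = (best[0] - point[0] - run_dir[0], best[1] - point[1] - run_dir[1])
    if drift != (0, 0) and drift == prev_drift:
        best = (point[0] + run_dir[0], point[1] + run_dir[1])   # cancel drift
        drift = (0, 0)
    return best, drift
```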
  • FIG. 14 is a schematic block diagram of an image segmentation device provided by an embodiment of the present application. It should be understood that the image segmentation apparatus 800 can execute the image segmentation methods shown in FIG. 6 and FIG. 8 to FIG. 13.
  • the image segmentation device 800 includes a detection unit 810 and a processing unit 820.
  • the detection unit 810 is configured to detect a first operation in which the user manually marks anchor points in the first image, where the anchor points include a starting anchor point and a target anchor point, and to detect a second operation in which the user instructs automatic segmentation of the first image;
  • the processing unit 820 is configured to display a second image on the display screen in response to the second operation, the second image being an image obtained after the second operation is performed on the first image, and the second image including a dividing line between the starting anchor point and the target anchor point, where the dividing line is obtained by moving a dividing point in the first image with the starting anchor point as the starting position and the target anchor point as the target position.
  • the dividing line coincides with the dividing line in the ridge direction in the pixel gradient map of the first image.
  • processing unit 820 is specifically configured to:
  • the detection unit 810 is further configured to:
  • a third operation instructed by the user to perform segmentation processing on the second image through the mask image or the secant image is detected.
  • the second image is obtained according to a pixel gradient map of the first image and an anchor point pulling model, and the anchor point pulling model is used to indicate the moving direction of the boundary point.
  • the moving direction of the dividing point is determined according to the angle between a first line and a second line, where the first line refers to the straight line on which each of the eight azimuths lies, and the second line refers to the line connecting the current position of the dividing point and the target anchor point; or,
  • the moving direction of the dividing point is determined according to the angle between the first line and the second line and a distance parameter, where the first line refers to the straight line on which each of the eight azimuths lies, the second line refers to the line connecting the current position of the dividing point and the target anchor point, and the distance parameter refers to the distance between the current position of the dividing point and the target anchor point; or,
  • the moving direction of the dividing point is determined according to the ridge direction of the pixel gradient map.
  • if the moving direction of the dividing point at the previous moment is the same as the moving direction of the dividing point at the current moment, the moving direction of the dividing point is determined according to the absolute values of the gradients in different azimuths.
  • the anchor point is obtained by optimizing an initial anchor point, where the initial anchor point is an anchor point manually marked by the user in the first image.
  • FIG. 15 is a schematic block diagram of an image segmentation device provided by an embodiment of the present application. It should be understood that the image segmentation device 900 can perform the image segmentation methods shown in FIGS. 2 to 13.
  • the image segmentation device 900 includes: an acquisition unit 910 and a processing unit 920.
  • the acquiring unit 910 is configured to acquire the first image and the position information of the anchor points in the first image, wherein the anchor points include a starting anchor point and a target anchor point;
  • the processing unit 920 is configured to obtain information according to the first image Image and the anchor point to obtain a second image, where the second image is an image obtained after image segmentation of the first image, and the second image includes the starting anchor point and the target anchor A dividing line between points, the dividing line is obtained by moving a dividing point in the pixel gradient map of the first image with the starting anchor point as the starting position and the target anchor point as the target position .
  • the dividing line coincides with the dividing line in the ridge direction in the pixel gradient map.
  • the processing unit 920 is specifically configured to:
  • the second image is obtained according to the pixel gradient map, the anchor point, and the anchor point pulling model, wherein the anchor point pulling model is used to indicate the moving direction of the boundary point.
  • the moving direction of the dividing point is determined according to the angle between a first line and a second line, where the first line refers to the straight line on which each of the eight azimuths lies, and the second line refers to the line connecting the current position of the dividing point and the target anchor point; or,
  • the moving direction of the dividing point is determined according to the angle between the first line and the second line and a distance parameter, where the first line refers to the straight line on which each of the eight azimuths lies, the second line refers to the line connecting the current position of the dividing point and the target anchor point, and the distance parameter refers to the distance between the current position of the dividing point and the target anchor point; or,
  • the moving direction of the dividing point is determined according to the ridge direction of the pixel gradient map.
  • processing unit is further configured to:
  • if the moving direction of the dividing point at the previous moment is the same as the moving direction at the current moment, the moving direction of the dividing point is determined according to the absolute values of the gradients in different moving directions.
  • the anchor point is obtained by optimizing an initial anchor point, where the initial anchor point is an anchor point manually marked by the user in the first image.
  • image segmentation device 800 and image segmentation device 900 are embodied in the form of functional units.
  • unit herein can be implemented in the form of software and/or hardware, which is not specifically limited.
  • a "unit” may be a software program, a hardware circuit, or a combination of the two that realizes the above-mentioned functions.
  • the hardware circuit may include an application specific integrated circuit (ASIC), an electronic circuit, a processor that executes one or more software or firmware programs (such as a shared processor, a dedicated processor, or a group of processors, etc.) and memory, merged logic circuits, and/or other suitable components that support the described functions.
  • the units of the examples described in the embodiments of the present application can be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether these functions are executed by hardware or software depends on the specific application and design constraint conditions of the technical solution. Professionals and technicians can use different methods for each specific application to implement the described functions, but such implementation should not be considered beyond the scope of this application.
  • FIG. 16 is a schematic diagram of the hardware structure of an image segmentation device provided by an embodiment of the present application.
  • the image segmentation apparatus 1000 (the image segmentation apparatus 1000 may specifically be a computer device) includes a memory 1001, a processor 1002, a communication interface 1003, and a bus 1004. Among them, the memory 1001, the processor 1002, and the communication interface 1003 implement communication connections between each other through the bus 1004.
  • the memory 1001 may be a read only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM).
  • the memory 1001 may store a program.
  • the processor 1002 is configured to execute each step of the image segmentation method of the embodiment of the present application, for example, execute each of the steps shown in FIGS. 2 to 13 step.
  • the image segmentation apparatus shown in the embodiment of the present application may be a server, for example, it may be a server in the cloud, or may also be a chip configured in a server in the cloud.
  • the processor 1002 may adopt a general central processing unit (CPU), a microprocessor, an application specific integrated circuit (ASIC), or one or more integrated circuits for executing related programs to realize the The image segmentation method of the application method embodiment.
  • the processor 1002 may also be an integrated circuit chip with signal processing capability.
  • each step of the image segmentation method of the present application can be completed by an integrated logic circuit of hardware in the processor 1002 or instructions in the form of software.
  • the aforementioned processor 1002 may also be a general-purpose processor, a digital signal processing (digital signal processing, DSP), an application specific integrated circuit (ASIC), a ready-made programmable gate array (field programmable gate array, FPGA) or other programmable logic devices, Discrete gates or transistor logic devices, discrete hardware components.
  • the methods, steps, and logical block diagrams disclosed in the embodiments of the present application can be implemented or executed.
  • the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a mature storage medium in the field, such as random access memory, flash memory, read-only memory, programmable read-only memory, or electrically erasable programmable memory, registers.
  • the storage medium is located in the memory 1001, and the processor 1002 reads the information in the memory 1001, and combines its hardware to complete the functions required by the units included in the image segmentation device shown in FIG. 14 or FIG. 15 in the implementation of this application, or execute The image segmentation method shown in FIG. 2 to FIG. 13 of the method embodiment of the present application.
  • the communication interface 1003 uses a transceiver device, such as but not limited to a transceiver, to implement communication between the image segmentation apparatus 1000 and other devices or a communication network.
  • the bus 1004 may include a path for transferring information between various components of the image segmentation device 1000 (for example, the memory 1001, the processor 1002, and the communication interface 1003).
  • although the image segmentation device 1000 only shows a memory, a processor, and a communication interface, in a specific implementation process, those skilled in the art should understand that the image segmentation device 1000 may also include other devices necessary for normal operation. At the same time, according to specific needs, those skilled in the art should understand that the above-mentioned image segmentation apparatus 1000 may also include hardware devices that implement other additional functions.
  • image segmentation device 1000 may also only include the necessary components for implementing the embodiments of the present application, and not necessarily include all the components shown in FIG. 16.
  • the size of the sequence numbers of the above-mentioned processes does not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
  • the disclosed system, device, and method can be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional modules in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • if the function is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the existing technology, or a part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions used to make a computer device (which may be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage media include: a USB flash drive, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and other media that can store program codes.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

An image segmentation method and device (800, 900, 1000). The image segmentation method is used for a terminal device having a display screen, and comprises: detecting a first operation that a user manually marks anchor points in a first image, wherein the anchor points comprise a starting anchor point and a target anchor point (210); detecting a second operation that the user instructs automatic segmentation of the first image (220); and in response to the second operation, displaying a second image on the display screen, wherein the second image is an image obtained by performing the second operation on the first image and comprises a segmentation line between the starting anchor point and the target anchor point, the segmentation line being obtained by moving a demarcation point in the first image with the starting anchor point as a starting position and the target anchor point as a target position (230). The described technical solution can obtain a segmentation result matching the natural boundary of the image while saving manpower, thereby improving the accuracy of the image segmentation result.

Description

Image segmentation method and image segmentation device

This application claims priority to Chinese patent application No. 201911411574.7, filed with the Chinese Patent Office on December 31, 2019 and entitled "Image segmentation method and image segmentation device", the entire content of which is incorporated herein by reference.

Technical field

This application relates to the field of artificial intelligence, and more specifically, to an image segmentation method and an image segmentation device in the computer vision field.

Background

Computer vision is an inseparable part of various intelligent/autonomous systems in application fields such as manufacturing, inspection, document analysis, medical diagnosis, and the military. It is the study of how to use cameras/video cameras and computers to obtain the data and information we need about a photographed subject. Figuratively speaking, it means giving the computer eyes (a camera/video camera) and a brain (algorithms) so that it can identify, track, and measure targets in place of the human eye, allowing the computer to perceive its environment. Precise segmentation tasks have a wide range of applications in computer vision, for example image background blurring, background replacement, e-commerce advertisement production, live broadcasting, and movie (animation) production, where precise segmentation may refer to the dividing line between a target object and the background in an acquired image or video.

At present, the polygon method or the curve fitting method is usually used for precise image segmentation. The image dividing line obtained by the polygon method differs considerably from the actual dividing line of the image, so its accuracy is low; the curve fitting method requires manually selecting multiple demarcation points, which consumes a lot of manpower.

Therefore, how to improve the accuracy of the precise segmentation result of an image while saving manpower has become an urgent problem to be solved.

Summary of the invention

The present application provides an image segmentation method and an image segmentation device, which can obtain a segmentation result that matches the natural boundary of the image while saving manpower, thereby improving the accuracy of the image segmentation result.
In a first aspect, an image segmentation method applied to a terminal device having a display screen is provided, including: detecting a first operation in which a user manually marks anchor points in a first image, where the anchor points include a starting anchor point and a target anchor point; detecting a second operation in which the user instructs automatic segmentation of the first image; and, in response to the second operation, displaying a second image on the display screen, where the second image is an image obtained after the second operation is performed on the first image, the second image includes a dividing line between the starting anchor point and the target anchor point, and the dividing line is obtained by moving a dividing point in the first image with the starting anchor point as the starting position and the target anchor point as the target position.

The above dividing line may be obtained by automatically moving the dividing point in the first image with the starting anchor point as the starting position and the target anchor point as the target position; here, automatic moving means that the user only needs to manually mark the starting anchor point and the target anchor point in the first image, or manually mark a small number of marked points including the starting anchor point and the target anchor point, and the dividing line between the starting anchor point and the target anchor point in the first image can then be obtained automatically.

The dividing point may refer to a point on the dividing line between the target area and the background area in the image, where the target area may refer to the area that includes the target object.

The above first image may refer to an image for which there is an image segmentation requirement; the second image may refer to an image obtained after automatic segmentation processing is performed on the first image.

It should be understood that the starting anchor point and the target anchor point may be positions in the image to be processed that do not change; by moving the dividing point from the starting anchor point to the target anchor point, the dividing line of the first image between the starting anchor point and the target anchor point can be obtained, and this dividing line coincides with the natural boundary between different objects in the first image.

In the embodiments of the present application, a small number of anchor points including the starting anchor point and the target anchor point are manually marked on the first image, and the dividing line of the first image is obtained by automatically moving the dividing point in the pixel gradient map of the first image with the starting anchor point as the starting position and the target anchor point as the target position; a segmentation result that matches the natural boundary of the image can thus be obtained while saving manpower, thereby improving the accuracy of the image segmentation result.

In a possible implementation manner, the above second operation used by the user to instruct automatic segmentation may include the user clicking a button for automatically segmenting the image in an image processing tool, may include the user instructing automatic segmentation by voice, or may include other behavior of the user that instructs automatic segmentation.

With reference to the first aspect, in some implementations of the first aspect, the dividing line coincides with the dividing line in the ridge direction of the pixel gradient map of the first image.

It should be noted that the ridge direction in the pixel gradient map of the first image may refer to a curve formed by the local maxima of the gradient in the pixel gradient map.

In the embodiments of the present application, the obtained dividing line of the first image may coincide with the dividing line in the ridge direction of the pixel gradient map of the first image, that is, the dividing line of the first image may be obtained from the pixel gradient map of the first image, which avoids the heavy labor cost of manually marking a large number of anchor points in the first image; by manually marking only a small number of anchor points, the accuracy of the image segmentation result can be improved while saving manpower.
With reference to the first aspect, in some implementations of the first aspect, displaying the second image on the display screen in response to the second operation includes: searching, according to the first image, for the mask image and the secant image corresponding to the first image, where the mask image is used to represent different objects in the first image and the secant image is used to represent the boundaries between different objects in the first image; and displaying, on the display screen, the image obtained by superimposing the first image, the mask image, and the secant image.

In a possible implementation manner, the mask image and/or the secant image corresponding to the first image may be searched for under a preset file path according to the first image.

Exemplarily, each image file can be saved as a bitmap (BMP) file, a portable network graphics (PNG) file, or another image file format; the correspondence between the image file, the mask file, and the secant file can be described by icon or file-name naming rules, or expressed by packaging the three images in one file or one folder.

With reference to the first aspect, in some implementations of the first aspect, the method further includes: detecting a third operation in which the user instructs to process the second image through the mask image or the secant image.

In a possible implementation manner, the second image may refer to the superimposed image of the first image, the mask image, and the secant image displayed on the display screen, and the third operation may be used to instruct automatic segmentation processing of the superimposed image through the mask image or the secant image.

In the embodiments of the present application, the user can choose to perform automatic segmentation processing on the first image according to the mask image or the secant image; for example, the user can manually adjust the positions of the anchor points according to the mask image or the secant image.

With reference to the first aspect, in some implementations of the first aspect, the second image is obtained according to the pixel gradient map of the first image and an anchor point pulling model, and the anchor point pulling model is used to indicate the moving direction of the dividing point.

In the embodiments of the present application, the second image may be obtained according to the pixel gradient map of the first image and the anchor point pulling model; the anchor point pulling model can indicate the moving direction of each step when the dividing point automatically moves from the starting anchor point to the target anchor point, thereby reducing manual participation.
With reference to the first aspect, in some implementations of the first aspect, when the dividing point is currently located in an area of the pixel gradient map without a ridge, the moving direction of the dividing point is determined according to the angle between a first line and a second line, where the first line refers to the straight line on which each of the eight azimuths lies, and the second line refers to the line connecting the current position of the dividing point and the target anchor point; or

when the dividing point is currently located in an area of the pixel gradient map with a ridge and a bifurcation, the moving direction of the dividing point is determined according to the angle between the first line and the second line and a distance parameter, where the first line refers to the straight line on which each of the eight azimuths lies, the second line refers to the line connecting the current position of the dividing point and the target anchor point, and the distance parameter refers to the distance between the current position of the dividing point and the target anchor point; or

when the dividing point is currently located in an area of the pixel gradient map with a ridge and no bifurcation, the moving direction of the dividing point is determined according to the ridge direction of the pixel gradient map.

With reference to the first aspect, in some implementations of the first aspect, when the moving direction of the dividing point at the previous moment is the same as the moving direction of the dividing point at the current moment, the moving direction of the dividing point is determined according to the absolute values of the gradients in different azimuths.

In the embodiments of the present application, when the moving direction of the dividing point at the previous moment is the same as the moving direction at the current moment, the dividing point can be made to move in a preset direction, that is, the moving direction of the dividing point can be determined according to the absolute values of the gradients in different moving azimuths, which can compensate for the error of the dividing point when it moves along the ridge direction of the gradient map.

With reference to the first aspect, in some implementations of the first aspect, the anchor point is obtained by optimizing an initial anchor point, where the initial anchor point is an anchor point manually marked by the user in the first image.

In the embodiments of the present application, since the anchor points are manually marked by the user in the image to be processed, the positions of the anchor points may have a certain deviation; in order to make the positions of the anchor points more accurate and thereby improve the accuracy of the dividing line of the image to be processed, the positions of the anchor points manually marked by the user can be optimized.
In a second aspect, an image segmentation method is provided, including: acquiring a first image and position information of anchor points in the first image, where the anchor points include a starting anchor point and a target anchor point; and obtaining a second image according to the first image and the anchor points, where the second image is an image obtained after image segmentation processing is performed on the first image, the second image includes a dividing line between the starting anchor point and the target anchor point, and the dividing line is obtained by moving a dividing point in the pixel gradient map of the first image with the starting anchor point as the starting position and the target anchor point as the target position.

The dividing point may refer to a point on the dividing line between the target area and the background area in the image, where the target area may refer to the area that includes the target object.

In a possible implementation manner, the first image, the pixel gradient map of the first image, and the position information of the anchor points in the first image may be acquired, and the anchor points may include the starting anchor point and the target anchor point.

The above dividing line may be obtained by automatically moving the dividing point in the first image with the starting anchor point as the starting position and the target anchor point as the target position; here, automatic moving means that, once the starting anchor point and the target anchor point in the first image, or a small number of marked points including the starting anchor point and the target anchor point, are acquired, the dividing line between the starting anchor point and the target anchor point in the first image can be obtained automatically.

The above first image may refer to an image for which there is an image segmentation requirement; the second image may refer to an image obtained after automatic segmentation processing is performed on the first image.

It should be understood that the starting anchor point and the target anchor point may be positions in the image to be processed that do not change; by moving the dividing point from the starting anchor point to the target anchor point, the dividing line of the first image between the starting anchor point and the target anchor point can be obtained, and this dividing line coincides with the natural boundary between different objects in the first image.

In the embodiments of the present application, a small number of anchor points including the starting anchor point and the target anchor point are manually marked on the first image, and the dividing line of the first image is obtained by automatically moving the dividing point in the pixel gradient map of the first image with the starting anchor point as the starting position and the target anchor point as the target position; a segmentation result that matches the natural boundary of the image can thus be obtained while saving manpower, thereby improving the accuracy of the image segmentation result.

With reference to the second aspect, in some implementations of the second aspect, the dividing line coincides with the dividing line in the ridge direction of the pixel gradient map.

It should be noted that the ridge direction in the pixel gradient map may refer to a curve formed by the local maxima of the gradient in the pixel gradient map.

In the embodiments of the present application, the obtained dividing line of the first image may coincide with the dividing line in the ridge direction of the pixel gradient map of the first image, that is, the dividing line of the first image may be obtained from the pixel gradient map of the first image, which avoids the heavy labor cost of manually marking a large number of anchor points in the first image; by manually marking only a small number of anchor points, the accuracy of the image segmentation result can be improved while saving manpower.
With reference to the second aspect, in some implementations of the second aspect, obtaining the second image according to the first image and the anchor points includes: obtaining the second image according to the pixel gradient map, the anchor points, and an anchor point pulling model, where the anchor point pulling model is used to indicate the moving direction of the dividing point.
In the embodiments of this application, the secant image may be obtained from the pixel gradient map of the first image and the anchor point pulling model, and the anchor point pulling model may indicate the moving direction of each step when the dividing point automatically moves from the start anchor point to the target anchor point, thereby reducing manual involvement.
With reference to the second aspect, in some implementations of the second aspect, if the dividing point is currently located in a region of the pixel gradient map without a ridge, the moving direction of the dividing point is determined according to the angle between a first line and a second line, where the first line refers to the straight line along each of eight directions, and the second line refers to the line connecting the current position of the dividing point and the target anchor point; or,
if the dividing point is currently located in a region of the pixel gradient map with a ridge and with a bifurcation, the moving direction of the dividing point is determined according to the angle between the first line and the second line as well as a distance parameter, where the first line refers to the straight line along each of the eight directions, the second line refers to the line connecting the current position of the dividing point and the target anchor point, and the distance parameter refers to the distance between the current position of the dividing point and the target anchor point; or,
if the dividing point is currently located in a region of the pixel gradient map with a ridge and without a bifurcation, the moving direction of the dividing point is determined according to the ridge direction of the pixel gradient map.
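The three cases above describe a selection rule rather than a concrete implementation. The following Python sketch, which is not part of the disclosure, shows one hedged way such a rule could be organized; `has_ridge`, `has_fork`, and `ridge_direction` are hypothetical helpers that inspect the pixel gradient map around the current dividing point, and the simple angle/distance scores merely stand in for the anchor point pulling model described later.

```python
import numpy as np

# The eight candidate directions (dx, dy): E, NE, N, NW, W, SW, S, SE.
DIRS = [(1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1)]

def angle_to_target(direction, point, target):
    """Angle between a candidate direction and the line from the point to the target anchor."""
    v = np.array(target, dtype=float) - np.array(point, dtype=float)
    d = np.array(direction, dtype=float)
    cos = np.dot(d, v) / (np.linalg.norm(d) * np.linalg.norm(v) + 1e-9)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def next_direction(point, target, grad, has_ridge, has_fork, ridge_direction):
    """Choose the next move of the dividing point according to the three cases above.

    has_ridge, has_fork and ridge_direction are assumed helpers operating on the
    pixel gradient map `grad`; they are placeholders, not part of the disclosure.
    """
    if not has_ridge(grad, point):
        # No ridge: pick the direction closest to the line towards the target anchor.
        return min(DIRS, key=lambda d: angle_to_target(d, point, target))
    if has_fork(grad, point):
        # Ridge with a bifurcation: combine the angle with a distance-dependent term.
        dist = np.linalg.norm(np.array(target, dtype=float) - np.array(point, dtype=float))
        def score(d):
            step = np.array(point, dtype=float) + np.array(d, dtype=float)
            return angle_to_target(d, point, target) + np.linalg.norm(np.array(target, dtype=float) - step) / (dist + 1e-9)
        return min(DIRS, key=score)
    # Ridge without bifurcation: follow the ridge direction of the gradient map.
    return ridge_direction(grad, point)
```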
With reference to the second aspect, in some implementations of the second aspect, the method further includes: if the moving direction of the dividing point at the previous moment is the same as the moving direction at the current moment, determining the moving direction of the dividing point according to the absolute values of the gradients in different moving directions.
In the embodiments of this application, when the moving direction of the dividing point at the previous moment is the same as the moving direction at the current moment, the dividing point can be made to move in a preset direction; that is, the moving direction of the dividing point can be determined according to the absolute values of the gradients in different moving directions, so that the error accumulated while the dividing point moves along the ridge direction of the gradient map can be compensated.
With reference to the second aspect, in some implementations of the second aspect, the anchor points are obtained by optimizing initial anchor points, where the initial anchor points are anchor points manually marked by the user in the first image.
In the embodiments of this application, since the anchor points in the image to be processed are manually marked by the user, their positions may deviate to some extent; in order to make the anchor point positions more accurate and thereby improve the accuracy of the dividing line of the image to be processed, the positions of the anchor points manually marked by the user can be optimized.
In a third aspect, an image segmentation apparatus is provided. The image segmentation apparatus is a terminal device having a display screen, and includes: a detection unit, configured to detect a first operation in which a user manually marks anchor points in a first image, where the anchor points include a start anchor point and a target anchor point, and to detect a second operation in which the user instructs automatic segmentation of the first image;
and a processing unit, configured to display a second image on the display screen in response to the second operation, where the second image is an image obtained after the first image undergoes the second operation, and the second image includes a dividing line between the start anchor point and the target anchor point, the dividing line being obtained by moving a dividing point in the first image with the start anchor point as the starting position and the target anchor point as the target position.
The above-mentioned dividing line may be obtained by automatically moving the dividing point in the first image with the start anchor point as the starting position and the target anchor point as the target position. Automatic movement means that the user only needs to manually mark the start anchor point and the target anchor point in the first image, or a small number of marked points including the start anchor point and the target anchor point, and the dividing line between the start anchor point and the target anchor point in the first image can then be obtained automatically.
With reference to the third aspect, in some implementations of the third aspect, the dividing line coincides with the dividing line along the ridge direction in the pixel gradient map of the first image.
With reference to the third aspect, in some implementations of the third aspect, the processing unit is specifically configured to: search, according to the first image, for a mask image and a secant image corresponding to the first image, where the mask image is used to represent different objects in the first image and the secant image is used to represent the boundaries between different objects in the first image; and display, on the display screen, an image obtained by superimposing the first image, the mask image, and the secant image.
With reference to the third aspect, in some implementations of the third aspect, the detection unit is further configured to detect a third operation in which the user instructs the second image to be processed through the mask image or the secant image.
In a possible implementation manner, the second image may refer to the image obtained by superimposing the first image, the mask image, and the secant image displayed on the display screen, and the third operation may be used to instruct automatic segmentation processing to be performed on the superimposed image through the mask image or the secant image.
With reference to the third aspect, in some implementations of the third aspect, the second image is obtained according to the pixel gradient map of the first image and an anchor point pulling model, where the anchor point pulling model is used to indicate the moving direction of the dividing point.
With reference to the third aspect, in some implementations of the third aspect, when the dividing point is currently located in a region of the pixel gradient map without a ridge, the moving direction of the dividing point is determined according to the angle between a first line and a second line, where the first line refers to the straight line along each of eight directions, and the second line refers to the line connecting the current position of the dividing point and the target anchor point; or
when the dividing point is currently located in a region of the pixel gradient map with a ridge and with a bifurcation, the moving direction of the dividing point is determined according to the angle between the first line and the second line as well as a distance parameter, where the first line refers to the straight line along each of the eight directions, the second line refers to the line connecting the current position of the dividing point and the target anchor point, and the distance parameter refers to the distance between the current position of the dividing point and the target anchor point; or
when the dividing point is currently located in a region of the pixel gradient map with a ridge and without a bifurcation, the moving direction of the dividing point is determined according to the ridge direction of the pixel gradient map.
With reference to the third aspect, in some implementations of the third aspect, when the moving direction of the dividing point at the previous moment is the same as the moving direction of the dividing point at the current moment, the moving direction of the dividing point is determined according to the absolute values of the gradients in different directions.
With reference to the third aspect, in some implementations of the third aspect, the anchor points are obtained by optimizing initial anchor points, where the initial anchor points are anchor points manually marked by the user in the first image.
In a fourth aspect, an image segmentation apparatus is provided, including: an acquiring unit, configured to acquire a first image and position information of anchor points in the first image, where the anchor points include a start anchor point and a target anchor point; and a processing unit, configured to obtain a second image according to the first image and the anchor points, where the second image is an image obtained after the first image undergoes image segmentation processing, the second image includes a dividing line between the start anchor point and the target anchor point, and the dividing line is obtained by moving a dividing point in the pixel gradient map of the first image with the start anchor point as the starting position and the target anchor point as the target position.
In a possible implementation manner, the first image, the pixel gradient map of the first image, and the position information of the anchor points in the first image may be acquired, where the anchor points may include a start anchor point and a target anchor point.
The above-mentioned dividing line may be obtained by automatically moving the dividing point in the first image with the start anchor point as the starting position and the target anchor point as the target position. Automatic movement means that, given only the start anchor point and the target anchor point in the first image, or a small number of marked points including the start anchor point and the target anchor point, the dividing line between the start anchor point and the target anchor point in the first image can be obtained automatically.
With reference to the fourth aspect, in some implementations of the fourth aspect, the dividing line coincides with the dividing line along the ridge direction in the pixel gradient map.
With reference to the fourth aspect, in some implementations of the fourth aspect, the processing unit is specifically configured to obtain the second image according to the pixel gradient map, the anchor points, and an anchor point pulling model, where the anchor point pulling model is used to indicate the moving direction of the dividing point.
With reference to the fourth aspect, in some implementations of the fourth aspect, if the dividing point is currently located in a region of the pixel gradient map without a ridge, the moving direction of the dividing point is determined according to the angle between a first line and a second line, where the first line refers to the straight line along each of eight directions, and the second line refers to the line connecting the current position of the dividing point and the target anchor point; or,
if the dividing point is currently located in a region of the pixel gradient map with a ridge and with a bifurcation, the moving direction of the dividing point is determined according to the angle between the first line and the second line as well as a distance parameter, where the first line refers to the straight line along each of the eight directions, the second line refers to the line connecting the current position of the dividing point and the target anchor point, and the distance parameter refers to the distance between the current position of the dividing point and the target anchor point; or,
if the dividing point is currently located in a region of the pixel gradient map with a ridge and without a bifurcation, the moving direction of the dividing point is determined according to the ridge direction of the pixel gradient map.
With reference to the fourth aspect, in some implementations of the fourth aspect, the processing unit is further configured to: if the moving direction of the dividing point at the previous moment is the same as the moving direction at the current moment, determine the moving direction of the dividing point according to the absolute values of the gradients in different moving directions.
With reference to the fourth aspect, in some implementations of the fourth aspect, the anchor points are obtained by optimizing initial anchor points, where the initial anchor points are anchor points manually marked by the user in the first image.
In a fifth aspect, an image segmentation apparatus is provided. The image segmentation apparatus has a display screen and includes: a memory, configured to store a program; and a processor, configured to execute the program stored in the memory. When the program stored in the memory is executed, the processor is configured to: detect a first operation in which a user manually marks anchor points in a first image, where the anchor points include a start anchor point and a target anchor point; detect a second operation in which the user instructs automatic segmentation of the first image; and, in response to the second operation, display a second image on the display screen, where the second image is an image obtained after the first image undergoes the second operation, the second image includes a dividing line between the start anchor point and the target anchor point, and the dividing line is obtained by moving a dividing point in the first image with the start anchor point as the starting position and the target anchor point as the target position.
In a possible implementation manner, the processor included in the above image segmentation apparatus is further configured to execute the image segmentation method in the first aspect or any one of the implementations of the first aspect.
It should be understood that the extensions, limitations, explanations, and descriptions of the relevant content in the first aspect above also apply to the same content in the fifth aspect.
In a sixth aspect, an image segmentation apparatus is provided, including: a memory, configured to store a program; and a processor, configured to execute the program stored in the memory. When the program stored in the memory is executed, the processor is configured to: acquire a first image and position information of anchor points in the first image, where the anchor points include a start anchor point and a target anchor point; and obtain a second image according to the pixel gradient map and the anchor points, where the second image is an image obtained after the first image undergoes image segmentation processing, the second image includes a dividing line between the start anchor point and the target anchor point, and the dividing line is obtained by moving a dividing point in the pixel gradient map of the first image with the start anchor point as the starting position and the target anchor point as the target position.
In a possible implementation manner, the processor included in the above image segmentation apparatus is further configured to execute the image segmentation method in the first aspect or any one of the implementations of the first aspect.
It should be understood that the extensions, limitations, explanations, and descriptions of the relevant content in the first aspect above also apply to the same content in the sixth aspect.
In a seventh aspect, a computer-readable medium is provided. The computer-readable medium stores program code to be executed by a device, and the program code includes instructions for executing the image segmentation method in the first aspect or any one of the implementations of the first aspect.
In an eighth aspect, a computer-readable medium is provided. The computer-readable medium stores program code to be executed by a device, and the program code includes instructions for executing the image segmentation method in the second aspect or any one of the implementations of the second aspect.
In a ninth aspect, a computer program product containing instructions is provided. When the computer program product runs on a computer, the computer is caused to execute the image segmentation method in the first aspect or any one of the implementations of the first aspect.
In a tenth aspect, a computer program product containing instructions is provided. When the computer program product runs on a computer, the computer is caused to execute the image segmentation method in the second aspect or any one of the implementations of the second aspect.
In an eleventh aspect, a chip is provided. The chip includes a processor and a data interface. The processor reads, through the data interface, instructions stored in a memory to execute the image segmentation method in the first aspect or any one of the implementations of the first aspect.
Optionally, as an implementation manner, the chip may further include a memory in which instructions are stored, and the processor is configured to execute the instructions stored in the memory. When the instructions are executed, the processor is configured to execute the image segmentation method in the first aspect or any one of the implementations of the first aspect.
In a twelfth aspect, a chip is provided. The chip includes a processor and a data interface. The processor reads, through the data interface, instructions stored in a memory to execute the image segmentation method in the second aspect or any one of the implementations of the second aspect.
Optionally, as an implementation manner, the chip may further include a memory in which instructions are stored, and the processor is configured to execute the instructions stored in the memory. When the instructions are executed, the processor is configured to execute the image segmentation method in the second aspect or any one of the implementations of the second aspect.
Description of the drawings
FIG. 1 is a schematic diagram of a system architecture provided by an embodiment of this application;
FIG. 2 is a schematic flowchart of an image segmentation method provided by an embodiment of this application;
FIG. 3 is a schematic flowchart of an image segmentation method provided by an embodiment of this application;
FIG. 4 is a schematic diagram of adjusting a mask image based on N-map provided by an embodiment of this application;
FIG. 5 is a schematic diagram of an image to be processed and its pixel gradient map provided by an embodiment of this application;
FIG. 6 is a schematic flowchart of a method for optimizing anchor point positions provided by an embodiment of this application;
FIG. 7 is a schematic diagram of four-direction convolution kernels provided by an embodiment of this application;
FIG. 8 is a schematic flowchart of an image segmentation method provided by an embodiment of this application;
FIG. 9 is a schematic flowchart of an image segmentation method provided by an embodiment of this application;
FIG. 10 is a schematic flowchart of an anchor-point-pulled gradient ridge running algorithm provided by an embodiment of this application;
FIG. 11 is a schematic diagram of convolution kernels for computing gradients in different directions provided by an embodiment of this application;
FIG. 12 is a schematic diagram of performing lateral drift provided by an embodiment of this application;
FIG. 13 is a schematic diagram of an anchor point pulling model provided by an embodiment of this application;
FIG. 14 is a schematic block diagram of an image segmentation apparatus provided by an embodiment of this application;
FIG. 15 is a schematic block diagram of an image segmentation apparatus provided by an embodiment of this application;
FIG. 16 is a schematic diagram of the hardware structure of an image segmentation apparatus provided by an embodiment of this application.
Detailed description of embodiments
The technical solutions in the embodiments of this application will be described below with reference to the accompanying drawings in the embodiments of this application. Obviously, the described embodiments are only some of the embodiments of this application, rather than all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort shall fall within the protection scope of this application.
At present, the polygon method or the curve fitting method is usually used for precise image segmentation. The dividing line obtained by the polygon method differs considerably from the actual dividing line of the image, so its accuracy is low; the curve fitting method requires the user to manually select a large number of dividing points, which consumes a great deal of manpower.
In view of this, the embodiments of this application provide an image segmentation method and apparatus. The user manually marks a start anchor point and a target anchor point on the image to be processed, and a dividing point automatically moves, with the start anchor point as the starting position and the target anchor point as the target position, according to the pixel gradient map of the image to be processed, to obtain the dividing line of the image to be processed. In other words, on the pixel gradient map the dividing point automatically traces a dividing line along the gradient ridge direction, so that the dividing line coincides with the natural boundary of the image, thereby improving the accuracy of the image segmentation result while saving manpower.
FIG. 1 is a schematic diagram of a system architecture provided by an embodiment of this application.
As shown in FIG. 1, the system 100 may include an application server 110, a data server 120, and multiple clients (for example, a client 131, a client 132, and a client 133). The clients may be connected to the application server 110 and the data server 120 through a communication network. The data server 120 may be used to store massive collections of image files and annotation files; the application server 110 may be used to provide image annotation or image retouching services; the clients may be used to provide a human-computer interaction interface.
For example, a client may be a mobile or fixed terminal; for example, a client may be a mobile phone with an image processing function, a tablet personal computer (TPC), a media player, a smart TV, a laptop computer (LC), a personal digital assistant (PDA), a personal computer (PC), a camera, a video camera, a smart watch, a wearable device (WD), a self-driving vehicle, or the like, which is not limited in the embodiments of this application.
Each client may interact with the data server 120 or the application server 110 through a communication network of any communication mechanism/communication standard, where the communication network may be a wide area network, a local area network, a point-to-point connection, or any combination thereof.
It should be understood that the system 100 shown in FIG. 1 is a multi-user system, and the image segmentation method provided in this application is also applicable to a single-user system. For a single-user system, that is, a system including one client, the image segmentation method provided in the embodiments of this application may be deployed on the client; for a multi-user system, that is, a system including multiple clients as shown in FIG. 1, the image segmentation method provided in the embodiments of this application may be deployed on a server, for example, the application server or the data server.
Exemplarily, the image segmentation method provided in the embodiments of this application can be applied to an image segmentation annotation tool or an image retouching tool; through the image segmentation method, background replacement or background blurring of an image can be realized.
For example, when a user starts a video call function on a smart terminal, image segmentation can be performed in real time during shooting so that only the target object region is retained, realizing replacement of the background region of the video call.
For example, when a user starts a shooting function on a smart terminal, image segmentation can be performed in real time during shooting so that the foreground region of the photographed target object remains sharp while the background region is blurred, realizing the large-aperture image effect of a single-lens reflex camera.
It should be understood that the background blurring and background replacement of an image described above are only two specific scenarios to which the image processing method of the embodiments of this application is applied; the image processing method of the embodiments of this application is not limited to these two scenarios and can be applied to any scenario that requires image segmentation.
The image segmentation method of the embodiments of this application is described in detail below with reference to FIG. 2 to FIG. 13.
FIG. 2 is a schematic flowchart of an image segmentation method provided by an embodiment of this application. The method may be executed by the server or the client shown in FIG. 1. The method shown in FIG. 2 includes step 210 to step 230, which are described in detail below.
Step 210: Detect a first operation in which a user manually marks anchor points in a first image.
The first image may refer to an image to be processed for which image segmentation is required, and the anchor points may include a start anchor point and a target anchor point.
Exemplarily, before step 210, the user may open an image processing tool through an operation and import the first image into it, and the first image that needs to be segmented may be displayed on the interface of the image processing tool.
Step 220: Detect a second operation in which the user instructs automatic segmentation of the first image.
Exemplarily, the second operation by which the user instructs automatic segmentation of the first image may include the user clicking a button for automatic image segmentation in the image processing tool, may include the user instructing automatic segmentation by voice, or may include other behaviors by which the user instructs automatic segmentation; the above is only an example and does not constitute any limitation on this application.
Step 230: In response to the second operation, display a second image on the display screen.
The second image is an image obtained after the first image undergoes the second operation, and the second image includes a dividing line between the start anchor point and the target anchor point, where the dividing line is obtained by moving a dividing point in the first image with the start anchor point as the starting position and the target anchor point as the target position.
For example, the above-mentioned dividing line may be obtained by automatically moving the dividing point in the first image with the start anchor point as the starting position and the target anchor point as the target position. Automatic movement means that the user only needs to manually mark the start anchor point and the target anchor point in the first image, or a small number of marked points including the start anchor point and the target anchor point, and the dividing line between the start anchor point and the target anchor point in the first image can then be obtained automatically.
It should be understood that the start anchor point and the target anchor point may be positions in the image to be processed whose locations do not change; by moving the dividing point from the start anchor point to the target anchor point, the dividing line between the start anchor point and the target anchor point in the first image can be obtained, and this dividing line coincides with the natural boundary between different objects in the first image.
Exemplarily, the above-mentioned dividing point may trace a dividing line along the ridge of the pixel gradient map of the first image, with the start anchor point as the starting position and the target anchor point as the target position; that is, the dividing line coincides with the dividing line along the ridge direction in the pixel gradient map of the first image. The pixel gradient map of the first image may refer to an image composed of the amounts of change in pixel brightness across different rows or different columns of the first image.
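As a rough illustration of what such a pixel gradient map could be, the following Python sketch (an assumption, since the disclosure does not prescribe a particular gradient operator) combines the brightness changes along rows and columns into a per-pixel gradient magnitude:

```python
import numpy as np

def pixel_gradient_map(image):
    """Toy pixel gradient map: magnitude of brightness changes along rows and columns.

    `image` is a 2-D array of grayscale brightness values; any gradient operator
    (Sobel, Scharr, ...) could be substituted here.
    """
    img = image.astype(np.float32)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:] = img[:, 1:] - img[:, :-1]   # change along each row
    gy[1:, :] = img[1:, :] - img[:-1, :]   # change along each column
    return np.hypot(gx, gy)                # gradient magnitude per pixel
```

The ridges of such a map, that is, the curves of local gradient maxima, are where the dividing point is expected to run.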
In an example, after the second operation in which the user instructs automatic segmentation, the terminal device may search, according to the first image, for a mask image and a secant image corresponding to the first image, where the mask image may be used to represent the object regions in the first image and the secant image may be used to represent the boundaries between different objects in the first image; the image to be processed, the mask image, and the secant image are then superimposed and displayed on the interface of the image processing tool.
For example, the mask image corresponding to the first image and the secant image corresponding to the first image are found through the search.
In an example, if the secant image and the mask image corresponding to the first image are not found through the search, the secant image and the mask image corresponding to the first image need to be generated.
For example, automatic image segmentation is performed on the first image using Mask-RCNN, DeepLabV3, or another algorithm to generate the mask image corresponding to the first image; further, according to the first image and the mask image, the mask range can be automatically adjusted using an N-map based method with a preset boundary threshold, and the secant image is then automatically generated according to the mask boundary so that the secant image and the mask image are automatically aligned.
The N-map method may also be referred to as N-map based boundary drift. Suppose there are N image segmentation regions in the mask layer file, and the value of each pixel is the number K of the region to which the pixel belongs, K ∈ [0, N-1]; pixels of unknown classification are set on both sides of the secant, and their mask layer values are changed to N; then, a pre-trained deep neural network is used to process the corresponding image layer file and assign the pixels of unknown classification and the background pixels to the N segmentation regions.
In an example, if the secant image corresponding to the first image is found through the search but the mask image corresponding to the first image is not found, the mask image corresponding to the first image needs to be generated.
In an example, if the mask image corresponding to the first image is found through the search but the secant image corresponding to the first image is not found, the secant image corresponding to the first image needs to be generated; for the specific procedure of generating the secant image, refer to the schematic flowcharts shown in FIG. 9 and FIG. 10 below.
Further, the user can manually choose to perform segmentation processing on the first image through the above-mentioned mask image or secant image.
It should be noted that after the user adjusts the mask image, the secant image needs to be updated synchronously, that is, the updated mask image is kept aligned with the secant image; similarly, after the user adjusts the secant image, the mask image needs to be updated synchronously, that is, the updated secant image is kept aligned with the mask image.
Optionally, in the embodiments of this application, the image segmentation method further includes: detecting a third operation in which the user instructs the second image to be processed through the mask image or the secant image.
In a possible implementation manner, the above-mentioned second image may refer to the image obtained by superimposing the first image, the mask image, and the secant image displayed on the display screen, and the third operation may be used to instruct automatic segmentation processing to be performed on the superimposed image through the mask image or the secant image.
Optionally, in the embodiments of this application, the second image may be obtained according to the pixel gradient map of the first image and an anchor point pulling model, and the anchor point pulling model may be used to indicate the moving direction of the dividing point; that is, through the preset anchor point pulling model, the dividing line of the first image can be obtained along the ridge direction of the pixel gradient map, so that the dividing line of the first image coincides with the natural boundary of the image.
Exemplarily, the anchor point pulling model may provide the moving direction for the dividing point on the dividing line. For example, when the dividing point is currently located in a region of the pixel gradient map without a ridge, the moving direction of the dividing point is determined according to the angle between a first line and a second line, where the first line refers to the straight line along each of eight directions, and the second line refers to the line connecting the current position of the dividing point and the target anchor point;
or, when the dividing point is currently located in a region of the pixel gradient map with a ridge and with a bifurcation, the moving direction of the dividing point is determined according to the angle between the first line and the second line as well as a distance parameter, where the first line refers to the straight line along each of the eight directions, the second line refers to the line connecting the current position of the dividing point and the target anchor point, and the distance parameter refers to the distance between the current position of the dividing point and the target anchor point;
or, when the dividing point is currently located in a region of the pixel gradient map with a ridge and without a bifurcation, the moving direction of the dividing point is determined according to the ridge direction of the pixel gradient map.
It should be noted that the first lines, that is, the straight lines along each of the eight directions, may refer to the straight lines oa, ob, oc, od, oe, of, og, and oh shown in (a) of FIG. 11.
Exemplarily, the attraction model of the target anchor point (for example, anchor point B) on the running point (for example, the dividing point) can be regarded as consisting of a strong attraction model and a weak attraction model. Suppose the distance between the running point and the target anchor point is dis, the distance between anchor point A and anchor point B is d0, and the angle between each of the 8 directions and the line AB is θi; the strong attraction weight wi and the weak attraction offset bi of each direction can then be calculated according to the attraction model shown in FIG. 13. For both the strong attraction model and the weak attraction model, the smaller the angle between a direction and the line AB, the larger the attraction; for example, among the 8 directions, the direction whose angle with the line AB is 0° has the largest attraction, and the direction whose angle is 180° has the smallest attraction. The difference between the strong attraction model and the weak attraction model is that the weak attraction model is independent of the distance dis; in the strong attraction model, the attraction decays most slowly with increasing angle when dis = d0, and decays faster the more dis deviates from d0.
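The concrete formulas for wi and bi belong to the model of FIG. 13 and are not reproduced here. The Python sketch below is only an illustration, under assumed functional forms, of how a strong attraction weight that depends on both the angle and the distance dis, and a weak attraction offset that depends on the angle alone, might be computed for the eight directions:

```python
import numpy as np

def direction_scores(point, anchor_a, anchor_b, thetas):
    """Illustrative strong/weak attraction scores for the eight candidate directions.

    `thetas` holds the angle (radians) between each direction and the line AB.
    The cosine shaping and the decay term are assumptions standing in for FIG. 13.
    """
    point = np.asarray(point, dtype=float)
    a = np.asarray(anchor_a, dtype=float)
    b = np.asarray(anchor_b, dtype=float)
    dis = np.linalg.norm(b - point)      # running point to target anchor B
    d0 = np.linalg.norm(b - a)           # anchor A to anchor B

    thetas = np.asarray(thetas, dtype=float)
    # Weak attraction: depends on the angle only, largest at 0 deg, smallest at 180 deg.
    b_i = 0.5 * (1.0 + np.cos(thetas))
    # Strong attraction: same angular preference, but its decay with the angle is
    # slowest when dis == d0 and becomes sharper as dis deviates from d0.
    sharpness = 1.0 + abs(dis - d0) / (d0 + 1e-9)
    w_i = (0.5 * (1.0 + np.cos(thetas))) ** sharpness
    return w_i, b_i
```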
Further, in order to compensate for the error produced when the dividing point moves along the ridge direction of the gradient map, when the moving direction of the dividing point at the previous moment is the same as the moving direction at the current moment, the dividing point can be made to move in a preset direction; that is, the moving direction of the dividing point is determined according to the absolute values of the gradients in different moving directions.
In the embodiments of this application, since the position of an anchor point is generated by the user clicking in the image with a mouse, the anchor point position may deviate and may not lie on the gradient map corresponding to the image; in order for the anchor points to yield the dividing line of the image accurately according to the gradient map, the anchor point positions can be optimized.
Exemplarily, in the embodiments of this application, the positions of the anchor points can be optimized through the four-direction gradient convolution kernels shown in FIG. 6; for example, the position of an anchor point can be optimized according to the absolute values of its gradients in different directions, to obtain the optimized position information of the anchor point.
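As a hedged illustration of such an anchor position optimization (the actual four-direction kernels and search rule are defined by the figures, so the kernels and window size below are assumptions), an anchor clicked by the user could be snapped to the strongest nearby directional-gradient response:

```python
import numpy as np
from scipy.ndimage import convolve

# Assumed four directional difference kernels (horizontal, vertical, two diagonals);
# the actual kernels of the disclosure may differ.
KERNELS = [
    np.array([[0, 0, 0], [-1, 0, 1], [0, 0, 0]], dtype=float),   # horizontal
    np.array([[0, -1, 0], [0, 0, 0], [0, 1, 0]], dtype=float),   # vertical
    np.array([[-1, 0, 0], [0, 0, 0], [0, 0, 1]], dtype=float),   # diagonal "\"
    np.array([[0, 0, -1], [0, 0, 0], [1, 0, 0]], dtype=float),   # diagonal "/"
]

def refine_anchor(image, anchor, radius=3):
    """Snap a hand-clicked anchor (row, column) to the strongest edge response nearby.

    The response at each pixel is the largest absolute value among the four
    directional gradients; the anchor moves to the best pixel within `radius`.
    """
    img = image.astype(float)
    responses = [np.abs(convolve(img, k, mode="nearest")) for k in KERNELS]
    response = np.max(np.stack(responses), axis=0)

    y, x = anchor
    y0, y1 = max(0, y - radius), min(img.shape[0], y + radius + 1)
    x0, x1 = max(0, x - radius), min(img.shape[1], x + radius + 1)
    window = response[y0:y1, x0:x1]
    dy, dx = np.unravel_index(np.argmax(window), window.shape)
    return (y0 + dy, x0 + dx)
```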
FIG. 3 shows a schematic flowchart of an image segmentation method provided by an embodiment of this application. The method may be executed by an image processing tool (for example, an image segmentation annotation tool or an image retouching tool) in the server or the client shown in FIG. 1. The method shown in FIG. 3 includes step 301 to step 311, which are described in detail below.
First, the layers included in the image processing tool are described; the image processing tool may include an image layer, a mask layer, and a secant layer (cutline layer).
Each pixel value in the image layer may represent the gray level of the image, and the pixel coordinates (x, y) may be positive integers; the image layer is displayed on the screen as a normal image, for example, a color image.
Exemplarily, each pixel value in the mask layer may represent the serial number of a mask; for example, when the mask layer is displayed on the screen, regions with the same mask serial number may be rendered in the same color. For instance, the mask serial number of the background is 0 and may be rendered in blue when displayed; the mask serial number of the TV is 1 and is rendered in red when displayed; and so on for the other mask serial numbers. By operating a menu, a button, or a palette on the graphical user interface (GUI), the user can assign a different display color to each mask serial number.
Exemplarily, the secant layer may consist of dividing lines composed of anchor points and dividing points, where an anchor point may be an end point of a dividing line. The value of an anchor point may represent the serial number of the anchor point (for example, 129 to 255), the value of a dividing point may represent the serial number of the dividing line it belongs to (for example, 1 to 127), and a point with the value 0 is neither an anchor point nor a dividing point.
For example, anchor points and dividing points may lie between pixels rather than coinciding with pixel positions; their two-dimensional coordinates (u, v) are offset from the coordinates (x, y) of nearby pixels by a distance of 0.5 pixels.
For example, the dividing lines may be displayed on the screen as dashed lines, and the anchor points may be displayed on the screen as small circles; the user can move any anchor point by clicking or dragging, and the dividing lines move together with the anchor points.
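As a hedged illustration of the value encoding described above (the value ranges come from the description, while the helper names and the 8-bit container are assumptions), a secant layer could be interpreted as follows:

```python
import numpy as np

ANCHOR_MIN = 129      # anchor serial numbers occupy 129..255
LINE_MAX = 127        # dividing-line serial numbers occupy 1..127

def classify_secant_value(value):
    """Interpret one pixel value of an 8-bit secant layer."""
    if value >= ANCHOR_MIN:
        return ("anchor", value)
    if 1 <= value <= LINE_MAX:
        return ("dividing_point", value)
    return ("empty", None)

def points_of_line(secant_layer, line_serial):
    """Collect the subpixel coordinates (u, v) = (x + 0.5, y + 0.5) of one dividing line."""
    ys, xs = np.nonzero(secant_layer == line_serial)
    return [(x + 0.5, y + 0.5) for x, y in zip(xs, ys)]
```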
It should be noted that, by operating the menus or buttons on the GUI of the annotation tool, the user can independently turn each layer off (hidden) or on (displayed).
Step 301: Read an image layer file.
For example, the user may import the image layer file through an operation.
Step 302: Search for the mask layer file and the secant layer file corresponding to the image layer file.
The above-mentioned image layer file may refer to an image file for which image segmentation is required, and the files corresponding to the image layer file can be searched for through a preset file path.
Exemplarily, each layer file may be saved as a bitmap (BMP) file, a losslessly compressed portable network graphics (PNG) file, or another image file format; the correspondence between different layer files may be described by icons or by file name naming rules, or the image layer file, its corresponding mask layer file, and its corresponding secant layer file may be packaged in one file or one folder, as shown in Table 1.
Table 1

Image layer file    Mask layer file    Secant layer file
P0001_img.jpg       P0001_msk.png      P0001_ctl.png
P0002_img.jpg       P0002_msk.png      P0002_ctl.png
P0003_img.jpg       P0003_msk.png      P0003_ctl.png
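Under the illustrative naming rule of Table 1, the companion layer files can be derived directly from the image layer file name; the helper below is only a sketch of that convention, not a required interface:

```python
from pathlib import Path

def companion_files(image_path):
    """Derive the mask and secant layer file names from an image layer file name.

    Follows the illustrative naming rule of Table 1 (suffixes `_msk` / `_ctl`,
    PNG containers); a real tool could equally use icons or a packaged folder.
    """
    p = Path(image_path)
    stem = p.stem.replace("_img", "")          # e.g. "P0001_img" -> "P0001"
    mask = p.with_name(stem + "_msk.png")
    secant = p.with_name(stem + "_ctl.png")
    return mask, secant

# companion_files("P0001_img.jpg") -> (Path("P0001_msk.png"), Path("P0001_ctl.png"))
```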
Exemplarily, the correspondence between the serial numbers of dividing lines and the serial numbers of anchor points may be as shown in Table 2.
Table 2

Dividing line    Anchor point A    Anchor point B
1                129               130
2                130               132
3                130               134
In an example, if it is determined through the search that the mask layer file and the secant layer file corresponding to the image layer file are saved, step 306 may be executed.
In an example, if it is determined through the search that there is no secant layer file or mask layer file corresponding to the image layer file, step 303 is executed to automatically generate the mask layer file and the secant layer file corresponding to the image layer file.
For example, automatic image segmentation is performed on the image layer file using Mask-RCNN, DeepLabV3, or another algorithm to generate the mask layer file; further, according to the image layer file and the mask layer file, the mask range is automatically adjusted using the N-map based method with a preset boundary threshold, and the secant layer file is then automatically generated according to the mask boundary so that the secant layer and the mask layer are automatically aligned.
The above method of adjusting the mask layer based on the N-map may also be referred to as N-map based boundary drift. Suppose there are N image segmentation regions in the mask layer file, and the value of each pixel is the number K of the region to which the pixel belongs, K ∈ [0, N-1]; pixels of unknown classification are set on both sides of the secant, and their mask layer values are changed to N; then, a pre-trained deep neural network is used to process the corresponding image layer file and assign the pixels of unknown classification and the background pixels to the N segmentation regions.
For example, as shown in FIG. 4, for the T consecutive pixels on both sides of the secant, perpendicular to the secant at its midpoint, the mask layer values can be changed to N; the farther from the midpoint of the secant, the fewer the pixels whose mask layer values are changed to N; at the anchor points, the number of pixels whose mask layer values are changed to N is 0. If the secant is a closed curve that does not include anchor points, the number of pixels on both sides of the secant whose mask layer values are changed to N can be T at any position of the secant. Here T may be a preset threshold that can be adjusted manually in the user interface.
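A minimal sketch of this band marking is given below, assuming the secant is available as an ordered list of points between its two anchors; the linear taper and the square neighborhood are simplifications of the perpendicular-width behaviour shown in FIG. 4:

```python
import numpy as np

def mark_unknown_band(mask, secant_points, n_regions, t_max):
    """Mark an 'unknown' band (value N = n_regions) around an open secant.

    `secant_points` is an ordered list of (x, y) points from one anchor to the
    other. The half-width of the band tapers linearly from `t_max` pixels at the
    midpoint of the secant down to 0 at the two anchors; a real implementation
    would measure the width strictly perpendicular to the secant.
    """
    h, w = mask.shape
    n_pts = len(secant_points)
    for i, (x, y) in enumerate(secant_points):
        # Taper factor: 0 at the anchors, 1 at the midpoint of the secant.
        taper = 1.0 - abs(2.0 * i / (n_pts - 1) - 1.0) if n_pts > 1 else 1.0
        radius = int(round(t_max * taper))
        if radius == 0:
            continue
        x0, x1 = max(0, int(x) - radius), min(w, int(x) + radius + 1)
        y0, y1 = max(0, int(y) - radius), min(h, int(y) + radius + 1)
        mask[y0:y1, x0:x1] = n_regions   # pixels here become the 'unknown' class N
    return mask
```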
In an example, if it is determined through the search that the mask layer file corresponding to the image layer file is saved but there is no secant layer file corresponding to the image layer file, step 304 is executed to automatically generate the secant layer file.
For example, according to the image layer file and its corresponding mask layer file, the mask range can be automatically adjusted using the N-map based method with a preset boundary threshold, and the secant layer file is then automatically generated according to the mask boundary so that the secant layer and the mask layer are automatically aligned.
In an example, if it is determined through the search that the secant layer file corresponding to the image layer file is saved but there is no mask layer file corresponding to the image layer file, step 305 is executed to automatically generate the mask layer file.
For example, according to the image layer file and its corresponding secant layer file, the user observes the image edges and manually adjusts the anchor points of the secant layer, the secant positions of the secant layer are automatically adjusted using the anchor-point-pulled gradient ridge running method, and the mask layer file is then automatically generated so that the mask layer and the secant layer are automatically aligned.
步骤306、可以将三个图层文件叠加显示允许用户编辑图像分割。Step 306: The three layer files can be superimposed and displayed to allow the user to edit the image segmentation.
步骤307、用户可以选择进行图像分割编辑的图像;例如,若用户选择掩膜层文件进行图像分割的调整,则执行步骤308与步骤309;若用户选择割线层文件进行图像分割的调整,则执行步骤310与步骤311。Step 307: The user can select the image for image segmentation editing; for example, if the user selects the mask layer file to adjust the image segmentation, step 308 and step 309 are executed; if the user selects the secant layer file to adjust the image segmentation, then Step 310 and step 311 are performed.
步骤308、调整掩膜层文件。Step 308: Adjust the mask layer file.
例如,可以通过上述图4所示的N-map方法进行掩膜层文件的调整。For example, the mask layer file can be adjusted by the N-map method shown in FIG. 4 above.
步骤309、对更新后的掩膜层文件进行割线层文件更新,即使得更新后的掩膜层文件与割线层文件同步对齐。Step 309: Perform a secant layer file update on the updated mask layer file, that is, make the updated mask layer file and the secant layer file synchronously aligned.
步骤310、调整割线层文件。Step 310: Adjust the secant layer file.
例如,可以通过用户观察图像边缘,手动调整割线层文件的锚点;或者使用参见后面图9与图10所示的锚点牵引梯度岭跑算法自动调整割线层文件的割线位置。For example, the user can manually adjust the anchor point of the secant layer file by observing the edge of the image; or use the anchor point pulling gradient ridge run algorithm shown in Figure 9 and Figure 10 to automatically adjust the secant position of the secant layer file.
步骤311、对更新后的割线层文件进行掩膜层文件更新,即使得更新后的割线层文件与掩膜层文件同步对齐。Step 311: Perform a mask layer file update on the updated secant layer file, that is, the updated secant layer file and the mask layer file are synchronized and aligned.
Exemplarily, as shown in FIG. 5, (a) in FIG. 5 is the original image and (b) in FIG. 5 is the pixel gradient map corresponding to the original image in (a); the user may observe the original image (for example, a color image) and click 6 anchor points (for example, 1-6) in it with the mouse, and the positions of the 6 anchor points may be mapped onto the pixel gradient map; on the pixel gradient map, a dividing line can be run out between every two adjacent anchor points along the ridge direction of the pixel gradient map under the pull of the anchor points; the 6 dividing lines can form one closed curve, the inside of which can be the foreground and the outside of which can be the background.
应理解,在标注工具界面上通常可以只显示原始图像,不显示像素梯度图。It should be understood that generally, only the original image may be displayed on the interface of the annotation tool, and the pixel gradient map may not be displayed.
Further, in the embodiments of the present application, because the anchor points in the image to be processed are marked manually by the user, their positions may deviate to some extent; to make the anchor point positions more accurate and thereby improve the accuracy of the dividing line of the image to be processed, the manually marked anchor points may be position-optimized.
例如,图6是本申请实施例提供的优化锚点位置方法的示意性流程图。图6所示的方法包括步骤401至步骤406,下面分别对这些步骤进行详细的描述。For example, FIG. 6 is a schematic flowchart of a method for optimizing anchor point positions provided by an embodiment of the present application. The method shown in FIG. 6 includes steps 401 to 406, and these steps are respectively described in detail below.
步骤401、图形用户界面(graphic user interface,GUI)中显示图像层与割线层。Step 401: The image layer and the secant layer are displayed in a graphical user interface (GUI).
步骤402、用户编辑锚点位置。Step 402: The user edits the anchor point position.
例如,用户可以通过点击图像生成锚点,或者用户可以拖动已有锚点生成更新锚点。For example, the user can generate an anchor point by clicking on the image, or the user can drag an existing anchor point to generate an updated anchor point.
步骤403、获取锚点的原始坐标。Step 403: Obtain the original coordinates of the anchor point.
例如,锚点A的原始坐标为(x1,y1)。For example, the original coordinates of anchor point A are (x1, y1).
步骤404、根据四向梯度搜索岭脊得到优化的锚点位置。Step 404: Search the ridge according to the four-way gradient to obtain the optimized anchor point position.
Exemplarily, FIG. 7 shows the four-direction convolution kernels provided in this embodiment of the application, where (a) in FIG. 7 is the V-direction gradient convolution kernel, (b) is the H-direction gradient convolution kernel, (c) is the L-direction gradient convolution kernel, and (d) is the R-direction gradient convolution kernel.
For example, taking the V-direction gradient ridge search positions shown in (e) of FIG. 7 and the R-direction gradient ridge search positions shown in (f) of FIG. 7, the anchor-point optimization method is as follows: the gradient ridge is searched along the direction in which the absolute value of the four-direction gradient is the largest; for example, the same-direction gradient is computed at positions A-1, A-2, A-3, A-4 and A+1, A+2, A+3, A+4; if a position is reached where the sign of that gradient is opposite to the sign at anchor point A, the search stops immediately; then, among the positions whose gradient sign is the same as at anchor point A, the position with the largest absolute gradient value is taken as the optimized anchor point position A.
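A minimal sketch of this one-dimensional ridge search follows, assuming the directional gradient has already been sampled along the chosen search direction; the function name and the radius of 4 positions per side are taken from the example above, everything else is illustrative.

```python
import numpy as np

def refine_anchor_1d(grad_along_dir: np.ndarray, anchor_idx: int, radius: int = 4) -> int:
    """grad_along_dir: 1-D array of directional gradient values on the line through
    the anchor; anchor_idx: index of the anchor on that line. Returns the index of
    the optimized anchor position."""
    g0 = grad_along_dir[anchor_idx]
    best_idx, best_val = anchor_idx, abs(g0)
    for sign in (-1, +1):                       # search both sides of the anchor
        for step in range(1, radius + 1):
            idx = anchor_idx + sign * step
            if idx < 0 or idx >= len(grad_along_dir):
                break
            g = grad_along_dir[idx]
            if g * g0 < 0:                      # opposite sign: stop searching this side
                break
            if abs(g) > best_val:               # same sign: keep the strongest ridge response
                best_idx, best_val = idx, abs(g)
    return best_idx
```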
步骤405、调整割线中的锚点数据。Step 405: Adjust the anchor point data in the secant line.
步骤406、在图形用户界面中显示优化后的锚点(x2,y2)。Step 406: Display the optimized anchor point (x2, y2) in the graphical user interface.
图8是本申请实施例提供的图像分割方法的示意性流程图。该方法可以由图1所示服务器或者客户端执行,图8所示的方法包括步骤510与步骤520,下面分别对这些步骤进行详细的描述。FIG. 8 is a schematic flowchart of an image segmentation method provided by an embodiment of the present application. The method can be executed by the server or the client shown in FIG. 1. The method shown in FIG. 8 includes step 510 and step 520, and these steps are respectively described in detail below.
步骤510、获取第一图像以及第一图像中锚点的位置信息,其中,锚点可以包括起始锚点与目标锚点。Step 510: Acquire the first image and the position information of the anchor points in the first image, where the anchor points may include a starting anchor point and a target anchor point.
其中,第一图像可以是指具有图像分割需求的图像。Wherein, the first image may refer to an image that has an image segmentation requirement.
示例性地,在步骤510中还可以获取第一图像的像素梯度图。Exemplarily, in step 510, a pixel gradient map of the first image may also be obtained.
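The text does not fix a specific operator for the pixel gradient map, so the following is only a minimal sketch assuming a Sobel-style gradient-magnitude image; the function name is illustrative.

```python
import numpy as np
from scipy import ndimage

def pixel_gradient_map(gray: np.ndarray) -> np.ndarray:
    """gray: HxW grayscale image; returns an HxW gradient-magnitude map."""
    gx = ndimage.sobel(gray.astype(float), axis=1)  # horizontal derivative
    gy = ndimage.sobel(gray.astype(float), axis=0)  # vertical derivative
    return np.hypot(gx, gy)                         # gradient magnitude per pixel
```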
步骤520、根据第一图像与所述锚点,得到第二图像。Step 520: Obtain a second image according to the first image and the anchor point.
The second image is the image obtained after the first image is subjected to image segmentation processing; the second image may include a dividing line between the starting anchor point and the target anchor point, and the dividing line may be obtained by moving a dividing point in the pixel gradient map of the first image with the starting anchor point as the starting position and the target anchor point as the target position.
It should be understood that the starting anchor point and the target anchor point may be positions in the first image that do not change; by moving the dividing point from the starting anchor point to the target anchor point, a dividing line between the two anchor points is obtained in the first image, and this dividing line coincides with the natural boundary between different objects in the first image.
Exemplarily, the dividing point may run out a dividing line along the ridge of the pixel gradient map of the first image, with the starting anchor point as the starting position and the target anchor point as the target position; that is, the dividing line coincides with the dividing line in the ridge direction of the pixel gradient map of the first image.
It should be noted that the ridge direction in the pixel gradient map of the first image may refer to the curve formed by the local maxima of the gradient in the pixel gradient map.
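As a rough illustration of what the ridge (the curve of local gradient maxima) looks like in code, the sketch below marks pixels whose gradient magnitude is a local maximum along the row or column direction; this is only one plausible reading of the definition above, not the required one, and the wrap-around at image borders caused by np.roll is ignored for simplicity.

```python
import numpy as np

def ridge_mask(grad_mag: np.ndarray) -> np.ndarray:
    """Mark pixels that are local maxima of gradient magnitude along rows or columns."""
    g = grad_mag
    left  = np.roll(g,  1, axis=1); right = np.roll(g, -1, axis=1)
    up    = np.roll(g,  1, axis=0); down  = np.roll(g, -1, axis=0)
    horiz = (g >= left) & (g >= right)        # local maximum across the horizontal direction
    vert  = (g >= up) & (g >= down)           # local maximum across the vertical direction
    return (horiz | vert) & (g > g.mean())    # drop flat background responses
```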
Optionally, in the embodiments of the present application, obtaining the second image according to the first image and the anchor points may include: obtaining the second image according to the pixel gradient map of the first image, the anchor points, and an anchor-point traction model, where the anchor-point traction model may be used to indicate the moving direction of the dividing point.
Exemplarily, if the dividing point is currently located in a region of the pixel gradient map with no ridge, the moving direction of the dividing point is determined according to the angle between a first line and a second line, where the first line is the line along each of eight directions and the second line is the line connecting the current position of the dividing point and the target anchor point; or,
if the dividing point is currently located in a region of the pixel gradient map with a ridge and a bifurcation, the moving direction of the dividing point is determined according to the angle between the first line and the second line and a distance parameter, where the first line is the line along each of the eight directions, the second line is the line connecting the current position of the dividing point and the target anchor point, and the distance parameter is the distance between the current position of the dividing point and the target anchor point; or,
if the dividing point is currently located in a region of the pixel gradient map with a ridge and no bifurcation, the moving direction of the dividing point is determined according to the ridge direction of the pixel gradient map.
For example, as shown in FIG. 13, the strong-gravity weight w_i and the weak-gravity offset b_i of each direction can be calculated according to the gravity models. For both the strong-gravity model and the weak-gravity model, the smaller the angle between one of the 8 directions and the line connecting anchor points A and B, the greater the gravity; for example, among the 8 directions, the direction whose angle with the line AB is 0° has the greatest gravity, and the direction whose angle with the line AB is 180° has the smallest gravity. The difference between the two models is that the weak-gravity model is independent of the distance dis; for the strong-gravity model, when dis = d0 the gravity decays most slowly as the angle increases, and the farther dis deviates from d0, the faster it decays.
Further, to compensate for the error of the dividing point as it moves along the ridge direction of the gradient map, when the moving direction of the dividing point at the previous moment is the same as its moving direction at the current moment, the dividing point can be made to move in a preset direction; that is, the moving direction of the dividing point is determined according to the absolute values of the gradients in the different moving directions.
In the embodiments of the present application, because the position of an anchor point is produced by the user clicking on the image with the mouse, the anchor point position may deviate and may not lie on the gradient map corresponding to the image; to enable the dividing line of the image to be obtained accurately from the gradient map, the anchor point position may be optimized.
Exemplarily, in the embodiments of the present application, the position of the anchor point can be optimized by the four-direction gradient convolution kernels as shown in FIG. 6; for example, the position of the anchor point can be optimized according to the absolute values of the gradients of the anchor point in different directions, thereby obtaining the optimized position information of the anchor point.
图9示出了本申请实施例提供的图像分割方法的示意性流程图,该方法可以由图1所示服务器或者客户端执行。图9所示的方法包括步骤601至步骤609,下面分别对这些步骤进行详细的描述。FIG. 9 shows a schematic flowchart of an image segmentation method provided by an embodiment of the present application. The method may be executed by the server or client shown in FIG. 1. The method shown in FIG. 9 includes steps 601 to 609, and these steps are respectively described in detail below.
步骤601、获取锚点位置A。Step 601: Obtain the anchor point position A.
其中,锚点位置A可以用户在图像中选择的锚点,或者用户拖动已有锚点生成的新锚点。Wherein, the anchor point position A may be an anchor point selected by the user in the image, or a new anchor point generated by the user by dragging an existing anchor point.
例如,可以是用户通过鼠标点击图像从而产生锚点,或者用户通过鼠标拖动已有锚点生成的新锚点。For example, it may be an anchor point generated by the user clicking on the image with the mouse, or a new anchor point generated by the user dragging an existing anchor point with the mouse.
进一步地,通过标注工具可以获得锚点A在图片上的位置信息。Further, the position information of the anchor point A on the picture can be obtained through the annotation tool.
步骤602、优化锚点位置A。Step 602: Optimize the anchor point position A.
It should be understood that, because the anchor point position A is produced by the user clicking on the image with the mouse, position A may deviate and may not lie on the gradient map corresponding to the image; to enable the dividing line of the image to be obtained accurately from the gradient map, the anchor point position A may be optimized.
Exemplarily, in the embodiments of the present application, the anchor point position A may be optimized by the four-direction gradient convolution kernels as shown in FIG. 6.
For example, by convolving the image with each of the convolution kernels shown in FIG. 7, the four-direction gradients of anchor point A (for example, V-direction, H-direction, L-direction, and R-direction) can be obtained; the gradient ridge is searched along the direction in which the absolute value of the four-direction gradient is the largest; if a position is reached where the sign of the same-direction gradient is opposite to that at anchor point A, the search stops immediately; then, among the positions whose gradient sign is the same as at anchor point A, the position with the largest absolute gradient value is taken as the optimized anchor point position A.
步骤603、获取锚点位置B。Step 603: Obtain the anchor point position B.
其中,锚点位置A可以是指图像分割的起始点,锚点位置B可以是指图像分割的目标点,通过锚点位置A与锚点位置B之间的连接可以得到图像的分割线。Wherein, the anchor point position A may refer to the starting point of the image segmentation, and the anchor point position B may refer to the target point of the image segmentation. The image segmentation line can be obtained by the connection between the anchor point position A and the anchor point position B.
同理,锚点位置B可以是用户在图像中选择的锚点,或者用户可以拖动已有的锚点生成的新锚点。Similarly, the anchor point position B can be the anchor point selected by the user in the image, or the user can drag an existing anchor point to generate a new anchor point.
例如,可以是用户通过鼠标点击图像从而产生锚点B,或者用户通过鼠标拖动已有锚点生成的新锚点。For example, it may be that the user clicks the image with the mouse to generate the anchor point B, or the user drags the existing anchor point with the mouse to generate a new anchor point.
步骤604、优化锚点位置B。Step 604: Optimize the anchor point position B.
需要说明的是,优化锚点位置B的具体流程与上述步骤602类似,此处不再赘述。It should be noted that the specific process of optimizing the anchor point position B is similar to the foregoing step 602, and will not be repeated here.
It should be understood that step 601 and step 603 may be performed at the same time, or step 601 may be performed first and then step 603; similarly, step 602 and step 604 may be performed at the same time, or step 602 may be performed first and then step 604; this application does not impose any limitation on this.
步骤605、根据优化锚点位置A与优化锚点位置B生成多个候选分割线。Step 605: Generate multiple candidate segmentation lines according to the optimized anchor point position A and the optimized anchor point position B.
根据优化后的锚点位置A与锚点位置B执行基于锚点牵引梯度岭跑的流程,具体流程参见图10所示的流程图。According to the optimized anchor point position A and anchor point position B, the process of ridge running based on the anchor point traction gradient is executed. For the specific process, refer to the flowchart shown in FIG. 10.
In one example, a bidirectional run is performed between the optimized anchor point position A and the optimized anchor point position B: one run takes position A as the starting point and the optimized anchor point position B as the target point, and the other run takes position B as the starting point and the optimized anchor point position A as the target point; two dividing lines L1 and L2 are obtained, with N1 and N2 breakpoints respectively, where a breakpoint may mean a running point whose gradient value is lower than a preset threshold.
在一个示例中,从优化锚点位置B跑到优化锚点位置A,得到分割线L3,其中,断点数量为N3。In an example, from the optimized anchor point position B to the optimized anchor point position A, the dividing line L3 is obtained, where the number of breakpoints is N3.
在一个示例中,从优化锚点位置A跑到优化锚点位置B,得到分割线L4,其中,断点数量为N4。In an example, from the optimized anchor point position A to the optimized anchor point position B, the dividing line L4 is obtained, where the number of breakpoints is N4.
步骤606、优选断点最少的分割线。Step 606: Preferably, the dividing line with the fewest breakpoints is selected.
例如,根据上述得到的分割线L1~L4以及断点数量N1~N4,从多个候选分割线中选取断点数量最少的分割线。For example, based on the division lines L1 to L4 and the number of break points N1 to N4 obtained above, a division line with the least number of break points is selected from a plurality of candidate division lines.
Step 607: Determine whether the dividing line is selected successfully; if so, step 608 is executed to end the process; if not, step 609 is executed and the user selects a dividing line manually.
Exemplarily, the number of breakpoints of each dividing line is counted, and the dividing line with the fewest breakpoints is preferred; if the dividing line with the fewest breakpoints differs little in breakpoint count from the other dividing lines, multiple dividing lines may be presented for the user to select manually.
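A minimal sketch of this selection logic from steps 606-607 is given below; the margin used to decide "differs little" is an assumed parameter, not a value given in the text.

```python
def select_dividing_line(candidates, margin: int = 2):
    """candidates: list of (line, breakpoint_count) pairs such as [(L1, N1), ..., (L4, N4)].
    Returns (line, True) on automatic success, or (close_candidates, False) when several
    lines have nearly the same breakpoint count and the user must choose manually."""
    best_line, best_n = min(candidates, key=lambda c: c[1])
    close = [line for line, n in candidates if n - best_n <= margin]
    if len(close) == 1:
        return best_line, True          # unambiguous: automatic selection succeeds
    return close, False                 # ambiguous: present these lines to the user
```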
图10是本申请实施例提供的基于锚点牵引梯度岭跑的示意性流程图。该方法可以由图1所示服务器或者客户端执行。图10所示的方法包括步骤701至步骤710,下面分别对这些步骤进行详细的描述。Fig. 10 is a schematic flowchart of an anchor-point traction gradient ridge run provided by an embodiment of the present application. This method can be executed by the server or the client shown in FIG. 1. The method shown in FIG. 10 includes steps 701 to 710, and these steps are respectively described in detail below.
步骤701、从锚点位置A出发。Step 701: Start from anchor point position A.
例如,锚点位置A可以用户在图像中选择的锚点,或者用户可以拖动已有锚点生成的新锚点。For example, the anchor point position A may be an anchor point selected by the user in the image, or the user may drag an existing anchor point to generate a new anchor point.
Exemplarily, a preset step length may be set before starting from anchor point position A, i.e., the distance moved at each step while moving from anchor point position A toward the target anchor point B.
应理解,上述锚点位置A可以是用户在图像中手动标记的起始锚点,或者也可以是指从起始锚点向目标锚点移动过程中的锚点位置,本申请对此不作限定。It should be understood that the aforementioned anchor point position A may be the initial anchor point manually marked by the user in the image, or may also refer to the anchor point position in the process of moving from the initial anchor point to the target anchor point, which is not limited in this application .
步骤702、更新起跑点坐标。Step 702: Update the starting point coordinates.
其中,更新起跑点坐标可以是指优化锚点位置A,从而提高分割线的准确性;优化锚点位置A的具体流程可以参见图7所示,此处不再赘述。Wherein, updating the starting point coordinates may refer to optimizing the anchor point position A, thereby improving the accuracy of the dividing line; the specific process of optimizing the anchor point position A can be seen in FIG. 7 and will not be repeated here.
It should be understood that, in the embodiments of the present application, the dividing line of an image may be obtained by moving a dividing point in the pixel gradient map corresponding to the image, with the starting anchor point as the starting position and the target anchor point as the target position; a running point may refer to a point on the dividing line obtained as the dividing point is moved algorithmically by the preset step length; the running point can move step by step, at the preset step length and in the selected run-out direction, from the position of the starting anchor point to the position of the target anchor point, thereby obtaining the dividing line of the image between the starting anchor point and the target anchor point.
步骤703、选择跑出方向。Step 703: Choose a running direction.
应理解,上述跑出方向可以是指跑点按照预设的步长每移动一步的移动方向。It should be understood that the above-mentioned running direction may refer to the moving direction of each step of the running point according to the preset step length.
In the embodiments of the present application, the run-out direction of the running point may be selected, at the preset step length, in the following ways.
Method 1: determine the run-out direction of the running point based on the gradient values computed in different directions.
For example, (a) in FIG. 11 is a schematic diagram of the 8 directions; (b) in FIG. 11 is the oa-direction gradient convolution kernel; (c) is the ob-direction gradient convolution kernel; (d) is the oab-direction gradient convolution kernel; (e) is the ocd-direction gradient convolution kernel; if the run-in direction of the running point is below the fob direction, the oab gradient convolution kernel may be used; if the run-in direction is above the fob direction, the ocb gradient convolution kernel may be used; if the run-in direction is fo, whichever of the oab gradient and the ocb gradient has the larger absolute value may be used.
In the embodiments of the present application, running backwards is not allowed while the point runs along the ridge of the gradient map; for example, if the previous step ran in along the ao direction, the next step is not allowed to run out along the oa direction; therefore, to determine the next run-out direction, the gradients of the remaining 7 directions other than the run-in direction are computed; for example, if the previous step ran in along the ao direction, the gradients of the 7 directions ob, oc, od, oe, of, og, and oh are computed, and the direction with the largest absolute value among the 7 gradients is selected for the next step.
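A minimal sketch of Method 1 follows: the reverse of the run-in direction is excluded and, among the remaining 7 directions, the one with the largest absolute directional gradient is taken as the next run-out direction. The directional gradients are assumed to be precomputed (for example, with kernels like those in FIG. 11); the names are illustrative.

```python
def next_run_out(direction_grads: dict, run_in: str) -> str:
    """direction_grads: {direction name: directional gradient at the current point},
    e.g. {"a": 3.1, "b": -0.4, ...} for the 8 directions around o;
    run_in: name of the direction the point arrived from (e.g. "a" if it ran in along ao),
    which is excluded so the point cannot run back out along oa."""
    candidates = {d: g for d, g in direction_grads.items() if d != run_in}
    # pick the direction whose gradient has the largest absolute value
    return max(candidates, key=lambda d: abs(candidates[d]))
```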
方式二:基于锚点牵引模型,确定跑点的跑出方向。Method 2: Determine the running direction of the running point based on the anchor point traction model.
Exemplarily, first, the distance dis between the running point and anchor point position B can be calculated, where, for example, the distance between anchor point position A and anchor point position B is d0; then, the traction weights w_i and b_i of the anchor point can be calculated to obtain the anchor-point traction model.
The above anchor-point traction model can be regarded as consisting of a strong-gravity model and a weak-gravity model; for example, suppose the distance between the running point and the target anchor point is dis, the distance between anchor point A and anchor point B is d0, and the angle between each of the 8 directions and the line connecting anchor points A and B is θ_i; the strong-gravity weight w_i and the weak-gravity offset b_i of each direction can then be calculated according to the gravity models shown in FIG. 13. For both the strong-gravity model and the weak-gravity model, the smaller the angle between one of the 8 directions and the line AB, the greater the gravity; for example, among the 8 directions, the direction whose angle with the line AB is 0° has the greatest gravity, and the direction whose angle with the line AB is 180° has the smallest gravity. The difference between the two models is that the weak-gravity model is independent of the distance dis; for the strong-gravity model, when dis = d0 the gravity decays most slowly as the angle increases, and the farther dis deviates from d0, the faster it decays.
In the embodiments of the present application, the weighted gradient can be computed by the following equation:
D'_i = w_i * (D_i + b_i);
where D'_i denotes the weighted 7-direction gradient of the i-th direction; D_i denotes the unweighted 7-direction gradient of the i-th direction; w_i denotes the strong-gravity weight of the i-th direction; and b_i denotes the weak-gravity offset of the i-th direction.
Exemplarily, when the running point is located in a region without an obvious ridge, the weak-gravity model can provide a clear direction for the running point; when the running point is located at a bifurcation of the ridge, the strong-gravity model can provide a clear direction; when the running point is located on an obvious ridge with no bifurcation, the path of the running point can be determined mainly by the gradient map.
For example, as shown in FIG. 13, the strong-gravity weight w_i and the weak-gravity offset b_i of each direction can be calculated, where (a) in FIG. 13 shows the different anchor-point directions, (b) shows a schematic diagram of the strong-gravity model, and (c) shows a schematic diagram of the weak-gravity model. For example, the schematic diagram in (b) of FIG. 13 includes line 1, line 2, and line 3, where line 1 may represent the strong-gravity model when dis = 0 or dis = 2d0; line 2 may represent the strong-gravity model when dis = 0.4d0 or dis = 1.6d0; and line 3 may represent the strong-gravity model when dis = d0. From the three lines shown in (b) of FIG. 13, it can be seen that the larger the slope of a line, the stronger the traction on the running point; the strong-gravity model depends on the angles between the different directions and the line connecting anchor points A and B, and on the distance dis between the running point and the target anchor point, whereas the weak-gravity model depends only on the angles between the different directions and the line AB.
In the embodiments of the present application, the moving direction of the running point at each step of the preset step length can be calculated by combining the above Method 1 and Method 2, thereby obtaining the run-out direction of the running point at the next step.
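A minimal sketch of combining Method 1 and Method 2 with the weighting D'_i = w_i * (D_i + b_i) is given below. The exact strong-gravity and weak-gravity curves are not specified in the text, so the cosine-based forms and the use of absolute gradient values are purely illustrative assumptions.

```python
import numpy as np

def weighted_direction(D, thetas, dis, d0):
    """D: 7 candidate directional gradients; thetas: angle (radians) between each
    candidate direction and the line from the running point to anchor B;
    dis: current distance to anchor B; d0: distance between anchors A and B.
    Returns the index of the chosen run-out direction."""
    # strong gravity (assumed form): largest at theta = 0, and it decays faster
    # with the angle the more dis deviates from d0
    sharpness = 1.0 + abs(dis - d0) / max(d0, 1e-6)
    w = np.exp(-sharpness * (1.0 - np.cos(thetas)))      # strong-gravity weights w_i
    # weak gravity (assumed form): depends only on the angle, independent of dis
    b = 0.5 * (1.0 + np.cos(thetas))                     # weak-gravity offsets b_i
    # weighted gradients, using absolute gradient values as in Method 1
    D_weighted = w * (np.abs(np.asarray(D, float)) + b)
    return int(np.argmax(D_weighted))
```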
Step 704: Determine whether the run-out direction of the running point is the same as its run-in direction; if they are the same, perform step 705; if they are different, perform step 709 to determine whether anchor point B has been reached.
If the selected run-out direction differs from the run-in direction and the running point has reached anchor point position B, step 710 is executed to end the running process, i.e., the movement of the dividing point ends; if the selected run-out direction differs from the run-in direction and the running point has not reached anchor point position B, the process returns to step 702.
进一步地,在本申请的实施例中,为了补偿跑点在沿着像素梯度图进行移动过程中的误差,可以执行步骤705。Further, in the embodiment of the present application, in order to compensate for the error of the running point in the process of moving along the pixel gradient map, step 705 may be performed.
步骤705、确定漂移方向。Step 705: Determine the drift direction.
For example, if the running point runs in along the eo direction and runs out along the oa direction, the three positions a, a1, and a2 are candidate lateral drift directions. For example, by computing the o1a1-direction gradient as shown in (a) of FIG. 12 and the o2a2-direction gradient as shown in (b) of FIG. 12, positions whose gradient sign is opposite to that of the oa-direction gradient are excluded, and among the remaining positions the one with the largest absolute gradient value is selected as the drift direction.
步骤706、判断是否为连续同向漂移;若为连续同向漂移,则执行步骤708取消漂移;若不是连续同向漂移,则执行步骤707执行漂移。Step 706: Determine whether it is a continuous same-directional drift; if it is a continuous same-directional drift, execute step 708 to cancel the drift; if it is not a continuous same-directional drift, execute step 707 to execute the drift.
Exemplarily, the above continuous same-direction drift means that, over multiple consecutive preset steps, the running point moves in the above drift direction at every step; in this case, to ensure that the running point can still reach the target anchor point position (i.e., anchor point position B), the drift can be cancelled and the run-out direction of the running point re-determined by the above Method 1 or Method 2.
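A minimal sketch of this lateral-drift check from steps 705-708 follows; the candidate names, and the simplification of treating two consecutive identical drifts as "continuous same-direction drift", are assumptions for illustration only.

```python
def choose_drift(candidate_grads: dict, oa_grad: float, last_drift):
    """candidate_grads: {candidate position name: directional gradient}, e.g. the
    o1a1 and o2a2 gradients of FIG. 12; oa_grad: gradient along the run-out direction;
    last_drift: the drift chosen at the previous step (or None).
    Returns the chosen drift position, or None when the drift is cancelled."""
    # exclude candidates whose gradient sign is opposite to the oa-direction gradient
    same_sign = {p: g for p, g in candidate_grads.items() if g * oa_grad > 0}
    if not same_sign:
        return None                      # no admissible drift position
    drift = max(same_sign, key=lambda p: abs(same_sign[p]))
    if drift == last_drift:
        return None                      # consecutive same-direction drift: cancel it
    return drift
```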
应理解,上述举例说明是为了帮助本领域技术人员理解本申请实施例,而非要将本申请实施例限于所例示的具体数值或具体场景。本领域技术人员根据所给出的上述举例说明,显然可以进行各种等价的修改或变化,这样的修改或变化也落入本申请实施例的范围内。It should be understood that the above examples are intended to help those skilled in the art understand the embodiments of the present application, and are not intended to limit the embodiments of the present application to the specific numerical values or specific scenarios illustrated. Those skilled in the art can obviously make various equivalent modifications or changes based on the above examples given, and such modifications or changes also fall within the scope of the embodiments of the present application.
上文结合图1至图13,详细描述了本申请实施例的图像分割方法,下面将结合图14至图16,详细描述本申请的装置实施例。应理解,本申请实施例中的图像分割装置可以执行前述本申请实施例的各种图像分割方法,即以下各种产品的具体工作过程,可以参考前述方法实施例中的对应过程。The image segmentation method of the embodiment of the present application is described in detail above with reference to Figs. 1 to 13, and the device embodiment of the present application will be described in detail below with reference to Figs. 14 to 16. It should be understood that the image segmentation device in the embodiment of the present application can execute the various image segmentation methods of the foregoing embodiment of the present application, that is, the specific working process of the following various products, and the corresponding process in the foregoing method embodiment may be referred to.
图14是本申请实施例提供的图像分割装置的示意性框图。应理解,图像分割装置800可以执行图6以及图8至图13所示的图像分割方法。该图像分割装置800包括:检测单元810和处理单元820。FIG. 14 is a schematic block diagram of an image segmentation device provided by an embodiment of the present application. It should be understood that the image segmentation apparatus 800 can execute the image segmentation methods shown in FIG. 6 and FIG. 8 to FIG. 13. The image segmentation device 800 includes a detection unit 810 and a processing unit 820.
The detection unit 810 is configured to detect a first operation in which a user manually marks anchor points in a first image, where the anchor points include a starting anchor point and a target anchor point, and to detect a second operation in which the user instructs automatic segmentation of the first image; the processing unit 820 is configured to, in response to the second operation, display a second image on the display screen, the second image being an image obtained after the first image is subjected to the second operation, and the second image including a dividing line between the starting anchor point and the target anchor point, where the dividing line is obtained by moving a dividing point in the first image with the starting anchor point as the starting position and the target anchor point as the target position.
可选地,作为一个实施例,所述分割线与所述第一图像的像素梯度图中岭脊方向的分割线重合。Optionally, as an embodiment, the dividing line coincides with the dividing line in the ridge direction in the pixel gradient map of the first image.
可选地,作为一个实施例,所述处理单元820具体用于:Optionally, as an embodiment, the processing unit 820 is specifically configured to:
searching, according to the first image, for a mask image and a secant image corresponding to the first image, where the mask image is used to represent different objects in the first image and the secant image is used to represent the boundaries between different objects in the first image; and displaying, on the display screen, an image obtained by superimposing the first image, the mask image, and the secant image.
可选地,作为一个实施例,所述检测单元810还用于:Optionally, as an embodiment, the detection unit 810 is further configured to:
检测到所述用户指示通过所述掩膜图像或者所述割线图像对所述第二图像进行分割处理的第三操作。A third operation instructed by the user to perform segmentation processing on the second image through the mask image or the secant image is detected.
可选地,作为一个实施例,所述第二图像是根据所述第一图像的像素梯度图与锚点牵引模型得到的,所述锚点牵引模型用于指示所述分界点的移动方向。Optionally, as an embodiment, the second image is obtained according to a pixel gradient map of the first image and an anchor point pulling model, and the anchor point pulling model is used to indicate the moving direction of the boundary point.
Optionally, as an embodiment, when the dividing point is currently located in a region of the pixel gradient map with no ridge, the moving direction of the dividing point is determined according to the angle between a first line and a second line, where the first line is the line along each of eight directions and the second line is the line connecting the current position of the dividing point and the target anchor point; or
when the dividing point is currently located in a region of the pixel gradient map with a ridge and a bifurcation, the moving direction of the dividing point is determined according to the angle between the first line and the second line and a distance parameter, where the first line is the line along each of the eight directions, the second line is the line connecting the current position of the dividing point and the target anchor point, and the distance parameter is the distance between the current position of the dividing point and the target anchor point; or
when the dividing point is currently located in a region of the pixel gradient map with a ridge and no bifurcation, the moving direction of the dividing point is determined according to the ridge direction of the pixel gradient map.
Optionally, as an embodiment, when the moving direction of the dividing point at the previous moment is the same as its moving direction at the current moment, the moving direction of the dividing point is determined according to the absolute values of the gradients in different directions.
可选地,作为一个实施例,所述锚点是通过优化初始锚点得到的,其中,所述初始锚点是用户在所述第一图像中手动标记的锚点。Optionally, as an embodiment, the anchor point is obtained by optimizing an initial anchor point, where the initial anchor point is an anchor point manually marked by the user in the first image.
图15是本申请实施例提供的图像分割装置的示意性框图。应理解,图像分割装置900可以执行图2至图13所示的图像分割方法。该图像分割装置900包括:获取单元910和处理单元920。FIG. 15 is a schematic block diagram of an image segmentation device provided by an embodiment of the present application. It should be understood that the image segmentation device 900 can perform the image segmentation methods shown in FIGS. 2 to 13. The image segmentation device 900 includes: an acquisition unit 910 and a processing unit 920.
The acquisition unit 910 is configured to acquire a first image and position information of anchor points in the first image, where the anchor points include a starting anchor point and a target anchor point; the processing unit 920 is configured to obtain a second image according to the first image and the anchor points, where the second image is an image obtained after the first image is subjected to image segmentation, the second image includes a dividing line between the starting anchor point and the target anchor point, and the dividing line is obtained by moving a dividing point in the pixel gradient map of the first image with the starting anchor point as the starting position and the target anchor point as the target position.
可选地,作为一个实施例,所述分割线与所述像素梯度图中岭脊方向的分割线重合。Optionally, as an embodiment, the dividing line coincides with the dividing line in the ridge direction in the pixel gradient map.
可选地,作为一个实施例,所述处理单元920具体用于:Optionally, as an embodiment, the processing unit 920 is specifically configured to:
根据所述像素梯度图、所述锚点以及锚点牵引模型,得到所述第二图像,其中,所述锚点牵引模型用于指示所述分界点的移动方向。The second image is obtained according to the pixel gradient map, the anchor point, and the anchor point pulling model, wherein the anchor point pulling model is used to indicate the moving direction of the boundary point.
Optionally, as an embodiment, if the dividing point is currently located in a region of the pixel gradient map with no ridge, the moving direction of the dividing point is determined according to the angle between a first line and a second line, where the first line is the line along each of eight directions and the second line is the line connecting the current position of the dividing point and the target anchor point; or,
if the dividing point is currently located in a region of the pixel gradient map with a ridge and a bifurcation, the moving direction of the dividing point is determined according to the angle between the first line and the second line and a distance parameter, where the first line is the line along each of the eight directions, the second line is the line connecting the current position of the dividing point and the target anchor point, and the distance parameter is the distance between the current position of the dividing point and the target anchor point; or,
if the dividing point is currently located in a region of the pixel gradient map with a ridge and no bifurcation, the moving direction of the dividing point is determined according to the ridge direction of the pixel gradient map.
可选地,作为一个实施例,所述处理单元还用于:Optionally, as an embodiment, the processing unit is further configured to:
若所述分界点上一时刻的移动方向与当前时刻的移动方向相同,则根据不同移动方位的梯度绝对值确定所述分界点的移动方向。If the movement direction of the boundary point at the previous moment is the same as the movement direction of the current moment, the movement direction of the boundary point is determined according to the absolute value of the gradient of different moving directions.
可选地,作为一个实施例,所述锚点是通过优化初始锚点得到的,其中,所述初始锚点是用户在所述第一图像中手动标记的锚点。Optionally, as an embodiment, the anchor point is obtained by optimizing an initial anchor point, where the initial anchor point is an anchor point manually marked by the user in the first image.
需要说明的是,上述图像分割装置800以及图像分割装置900以功能单元的形式体现。这里的术语“单元”可以通过软件和/或硬件形式实现,对此不作具体限定。It should be noted that the above-mentioned image segmentation device 800 and image segmentation device 900 are embodied in the form of functional units. The term "unit" herein can be implemented in the form of software and/or hardware, which is not specifically limited.
例如,“单元”可以是实现上述功能的软件程序、硬件电路或二者结合。所述硬件电路可能包括应用特有集成电路(application specific integrated circuit,ASIC)、电子电路、用于执行一个或多个软件或固件程序的处理器(例如共享处理器、专有处理器或组处理器等)和存储器、合并逻辑电路和/或其它支持所描述的功能的合适组件。For example, a "unit" may be a software program, a hardware circuit, or a combination of the two that realizes the above-mentioned functions. The hardware circuit may include an application specific integrated circuit (ASIC), an electronic circuit, and a processor for executing one or more software or firmware programs (such as a shared processor, a dedicated processor, or a group processor). Etc.) and memory, merged logic circuits and/or other suitable components that support the described functions.
因此,在本申请的实施例中描述的各示例的单元,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。Therefore, the units of the examples described in the embodiments of the present application can be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether these functions are executed by hardware or software depends on the specific application and design constraint conditions of the technical solution. Professionals and technicians can use different methods for each specific application to implement the described functions, but such implementation should not be considered beyond the scope of this application.
图16是本申请实施例提供的图像分割装置的硬件结构示意图。FIG. 16 is a schematic diagram of the hardware structure of an image segmentation device provided by an embodiment of the present application.
如图16所示,图像分割装置1000(该图像分割装置1000具体可以是一种计算机设备)包括存储器1001、处理器1002、通信接口1003以及总线1004。其中,存储器1001、处理器1002、通信接口1003通过总线1004实现彼此之间的通信连接。As shown in FIG. 16, the image segmentation apparatus 1000 (the image segmentation apparatus 1000 may specifically be a computer device) includes a memory 1001, a processor 1002, a communication interface 1003, and a bus 1004. Among them, the memory 1001, the processor 1002, and the communication interface 1003 implement communication connections between each other through the bus 1004.
存储器1001可以是只读存储器(read only memory,ROM),静态存储设备,动态存储设备或者随机存取存储器(random access memory,RAM)。存储器1001可以存储程序,当存储器1001中存储的程序被处理器1002执行时,处理器1002用于执行本申请实施例的图像分割方法的各个步骤,例如,执行图2至图13所示的各个步骤。The memory 1001 may be a read only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM). The memory 1001 may store a program. When the program stored in the memory 1001 is executed by the processor 1002, the processor 1002 is configured to execute each step of the image segmentation method of the embodiment of the present application, for example, execute each of the steps shown in FIGS. 2 to 13 step.
应理解,本申请实施例所示的图像分割装置可以是服务器,例如,可以是云端的服务器,或者,也可以是配置于云端的服务器中的芯片。It should be understood that the image segmentation apparatus shown in the embodiment of the present application may be a server, for example, it may be a server in the cloud, or may also be a chip configured in a server in the cloud.
The processor 1002 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, and is configured to execute related programs to implement the image segmentation method of the method embodiments of this application.
处理器1002还可以是一种集成电路芯片,具有信号的处理能力。在实现过程中,本 申请的图像分割方法的各个步骤可以通过处理器1002中的硬件的集成逻辑电路或者软件形式的指令完成。The processor 1002 may also be an integrated circuit chip with signal processing capability. In the implementation process, each step of the image segmentation method of the present application can be completed by an integrated logic circuit of hardware in the processor 1002 or instructions in the form of software.
上述处理器1002还可以是通用处理器、数字信号处理器(digital signal processing,DSP)、专用集成电路(ASIC)、现成可编程门阵列(field programmable gate array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器1001,处理器1002读取存储器1001中的信息,结合其硬件完成本申请实施中图14或图15所示的图像分割装置中包括的单元所需执行的功能,或者,执行本申请方法实施例的图2至图13所示的图像分割方法。The aforementioned processor 1002 may also be a general-purpose processor, a digital signal processing (digital signal processing, DSP), an application specific integrated circuit (ASIC), a ready-made programmable gate array (field programmable gate array, FPGA) or other programmable logic devices, Discrete gates or transistor logic devices, discrete hardware components. The methods, steps, and logical block diagrams disclosed in the embodiments of the present application can be implemented or executed. The general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like. The steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor. The software module can be located in a mature storage medium in the field, such as random access memory, flash memory, read-only memory, programmable read-only memory, or electrically erasable programmable memory, registers. The storage medium is located in the memory 1001, and the processor 1002 reads the information in the memory 1001, and combines its hardware to complete the functions required by the units included in the image segmentation device shown in FIG. 14 or FIG. 15 in the implementation of this application, or execute The image segmentation method shown in FIG. 2 to FIG. 13 of the method embodiment of the present application.
通信接口1003使用例如但不限于收发器一类的收发装置,来实现图像分割装置1200与其他设备或通信网络之间的通信。The communication interface 1003 uses a transceiver device such as but not limited to a transceiver to implement communication between the image segmentation device 1200 and other devices or a communication network.
总线1004可包括在图像分割装置1000各个部件(例如,存储器1001、处理器1002、通信接口1003)之间传送信息的通路。The bus 1004 may include a path for transferring information between various components of the image segmentation device 1000 (for example, the memory 1001, the processor 1002, and the communication interface 1003).
应注意,尽管上述图像分割装置1000仅仅示出了存储器、处理器、通信接口,但是在具体实现过程中,本领域的技术人员应当理解,图像分割装置1000还可以包括实现正常运行所必须的其他器件。同时,根据具体需要本领域的技术人员应当理解,上述图像分割装置1000还可包括实现其他附加功能的硬件器件。It should be noted that although the above-mentioned image segmentation device 1000 only shows a memory, a processor, and a communication interface, in the specific implementation process, those skilled in the art should understand that the image segmentation device 1000 may also include other necessary for normal operation. Device. At the same time, according to specific needs, those skilled in the art should understand that the above-mentioned image segmentation apparatus 1000 may also include hardware devices that implement other additional functions.
此外,本领域的技术人员应当理解,上述图像分割装置1000也可仅仅包括实现本申请实施例所必须的器件,而不必包括图16中所示的全部器件。In addition, those skilled in the art should understand that the above-mentioned image segmentation device 1000 may also only include the necessary components for implementing the embodiments of the present application, and not necessarily include all the components shown in FIG. 16.
应理解,上述举例说明是为了帮助本领域技术人员理解本申请实施例,而非要将本申请实施例限于所例示的具体数值或具体场景。本领域技术人员根据所给出的上述举例说明,显然可以进行各种等价的修改或变化,这样的修改或变化也落入本申请实施例的范围内。It should be understood that the above examples are intended to help those skilled in the art understand the embodiments of the present application, and are not intended to limit the embodiments of the present application to the specific numerical values or specific scenarios illustrated. Those skilled in the art can obviously make various equivalent modifications or changes based on the above examples given, and such modifications or changes also fall within the scope of the embodiments of the present application.
It should be understood that the term "and/or" in this document merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate three cases: A exists alone, both A and B exist, and B exists alone. In addition, the character "/" in this document generally indicates that the associated objects before and after it are in an "or" relationship.
应理解,在本申请的各种实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。It should be understood that in the various embodiments of the present application, the size of the sequence number of the above-mentioned processes does not mean the order of execution, and the execution order of each process should be determined by its function and internal logic, and should not correspond to the embodiments of the present application. The implementation process constitutes any limitation.
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。A person of ordinary skill in the art may realize that the units and algorithm steps of the examples described in combination with the embodiments disclosed herein can be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether these functions are executed by hardware or software depends on the specific application and design constraint conditions of the technical solution. Professionals and technicians can use different methods for each specific application to implement the described functions, but such implementation should not be considered beyond the scope of this application.
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装 置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。Those skilled in the art can clearly understand that, for the convenience and conciseness of description, the specific working process of the system, device and unit described above can refer to the corresponding process in the foregoing method embodiment, which will not be repeated here.
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。In the several embodiments provided in this application, it should be understood that the disclosed system, device, and method can be implemented in other ways. For example, the device embodiments described above are merely illustrative. For example, the division of the units is only a logical function division, and there may be other divisions in actual implementation, for example, multiple units or components may be combined It can be integrated into another system, or some features can be ignored or not implemented. In addition, the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
另外,在本申请各个实施例中的各功能模块可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。In addition, the functional modules in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。If the function is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer readable storage medium. Based on this understanding, the technical solution of the present application essentially or the part that contributes to the existing technology or the part of the technical solution can be embodied in the form of a software product, and the computer software product is stored in a storage medium, including Several instructions are used to make a computer device (which may be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the methods described in the various embodiments of the present application. The aforementioned storage media include: U disk, mobile hard disk, read-only memory (read-only memory, ROM), random access memory (random access memory, RAM), magnetic disks or optical disks and other media that can store program codes. .
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。The above are only specific implementations of this application, but the protection scope of this application is not limited to this. Any person skilled in the art can easily think of changes or substitutions within the technical scope disclosed in this application. Should be covered within the scope of protection of this application. Therefore, the protection scope of this application should be subject to the protection scope of the claims.

Claims (29)

  1. An image segmentation method, characterized in that it is applied to a terminal device having a display screen and comprises:
    detecting a first operation of a user manually marking anchor points in a first image, wherein the anchor points include a starting anchor point and a target anchor point;
    detecting a second operation of the user instructing automatic segmentation of the first image;
    in response to the second operation, displaying a second image on the display screen,
    wherein the second image is an image obtained from the first image after the second operation, the second image includes a dividing line between the starting anchor point and the target anchor point, and the dividing line is obtained by moving a boundary point in the first image with the starting anchor point as its starting position and the target anchor point as its target position.
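The following Python sketch is only an illustration of the dividing-line construction recited in claim 1, not the claimed implementation: a boundary point starts at the starting anchor point, is stepped across an 8-connected pixel grid until it reaches the target anchor point, and the visited pixels form the dividing line. The `choose_direction` callable stands in for the anchor point pulling model of the later claims and is an assumption rather than something taken from the source.

```python
# The eight directions (dy, dx) along which the boundary point may move.
DIRECTIONS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]

def trace_dividing_line(grad, start, target, choose_direction, max_steps=100000):
    """Move a boundary point from `start` to `target` and collect the dividing line.

    grad: 2-D pixel gradient map of the first image (assumed to be precomputed).
    start, target: (row, col) positions of the starting and target anchor points.
    choose_direction: stand-in for the anchor point pulling model; given the
        gradient map, the current position and the target, it returns one of
        DIRECTIONS.
    """
    point = tuple(start)
    line = [point]
    for _ in range(max_steps):
        if point == tuple(target):
            break                        # target anchor point reached
        dy, dx = choose_direction(grad, point, tuple(target))
        point = (point[0] + dy, point[1] + dx)
        line.append(point)               # visited pixels make up the dividing line
    return line
```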
  2. The image segmentation method according to claim 1, characterized in that the dividing line coincides with the dividing line along the ridge direction of the pixel gradient map of the first image.
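Claim 2 ties the dividing line to the ridge direction of the pixel gradient map. As a hedged sketch of what such a map and a ridge test might look like (the patent does not fix a particular operator; the Sobel filter and the crude local-crest test below are assumptions):

```python
import numpy as np
from scipy import ndimage

def pixel_gradient_map(image):
    """Gradient magnitude of a single-channel image (Sobel operators assumed)."""
    img = image.astype(float)
    gy = ndimage.sobel(img, axis=0)
    gx = ndimage.sobel(img, axis=1)
    return np.hypot(gx, gy)

def is_on_ridge(grad, y, x):
    """Crude ridge test: the pixel dominates at least one opposing pair of
    neighbours, i.e. it lies on a local crest of the gradient map."""
    h, w = grad.shape
    if not (1 <= y < h - 1 and 1 <= x < w - 1):
        return False
    c = grad[y, x]
    return (c >= grad[y - 1, x] and c >= grad[y + 1, x]) or \
           (c >= grad[y, x - 1] and c >= grad[y, x + 1])
```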
  3. The image segmentation method according to claim 1 or 2, characterized in that the displaying a second image on the display screen in response to the second operation comprises:
    searching, according to the first image, for a mask image and a cut-line image corresponding to the first image, wherein the mask image is used to represent different objects in the first image, and the cut-line image is used to represent boundaries between different objects in the first image; and
    displaying, on the display screen, an image obtained by superimposing the first image, the mask image, and the cut-line image.
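A minimal sketch of the superimposed display described in claim 3, assuming the mask image is an integer label map and the cut-line image is a boolean boundary map (both encodings are assumptions, as the claim does not fix them):

```python
import numpy as np

def superimpose(first_image, mask_image, cutline_image, alpha=0.4):
    """Blend a colour-coded mask over the first image and paint the cut line on top.

    first_image:   H x W x 3 uint8 image.
    mask_image:    H x W integer label map (different values = different objects).
    cutline_image: H x W boolean map marking the boundaries between objects.
    """
    rng = np.random.default_rng(0)
    palette = rng.integers(0, 256, size=(int(mask_image.max()) + 1, 3))
    colours = palette[mask_image]                  # colour-code each object
    blended = (1 - alpha) * first_image + alpha * colours
    blended[cutline_image] = (255, 0, 0)           # draw the cut line in red
    return blended.astype(np.uint8)
```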
  4. The image segmentation method according to claim 3, characterized by further comprising:
    detecting a third operation of the user instructing that the second image be processed by using the mask image or the cut-line image.
  5. The image segmentation method according to any one of claims 1 to 4, characterized in that the second image is obtained according to the pixel gradient map of the first image and an anchor point pulling model, and the anchor point pulling model is used to indicate the moving direction of the boundary point.
  6. The image segmentation method according to claim 5, characterized in that, when the boundary point is currently located in a region of the pixel gradient map without a ridge, the moving direction of the boundary point is determined according to the angle between a first line and a second line, wherein the first line refers to the line along which each of eight directions lies, and the second line refers to the line between the current position of the boundary point and the target anchor point; or
    when the boundary point is currently located in a region of the pixel gradient map with a ridge and with a bifurcation, the moving direction of the boundary point is determined according to the angle between the first line and the second line and a distance parameter, wherein the first line refers to the line along which each of the eight directions lies, the second line refers to the line between the current position of the boundary point and the target anchor point, and the distance parameter refers to the distance between the current position of the boundary point and the target anchor point; or
    when the boundary point is currently located in a region of the pixel gradient map with a ridge and without a bifurcation, the moving direction of the boundary point is determined according to the ridge direction of the pixel gradient map.
  7. The image segmentation method according to claim 6, characterized in that, when the moving direction of the boundary point at the previous moment is the same as the moving direction of the boundary point at the current moment, the moving direction of the boundary point is determined according to the absolute values of the gradients in different directions.
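Claims 6 and 7 describe how the anchor point pulling model chooses among the eight candidate directions. The sketch below mirrors the three cases and the tie-break in simplified form; the helpers `has_fork` and `ridge_direction`, the way the angle and distance terms are weighted, and the omission of image-border checks are all assumptions rather than the claimed formulae (`is_on_ridge` is the crude test sketched after claim 2).

```python
import math

DIRECTIONS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]

def angle_to_target(direction, point, target):
    """Angle between a candidate direction and the line from `point` to the target anchor."""
    ty, tx = target[0] - point[0], target[1] - point[1]
    dy, dx = direction
    dot = dy * ty + dx * tx
    norm = math.hypot(dy, dx) * (math.hypot(ty, tx) or 1.0)
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def choose_direction(grad, point, target, prev_dir=None, weight=0.01):
    """Simplified stand-in for the anchor point pulling model of claims 5 to 7."""
    y, x = point
    if not is_on_ridge(grad, y, x):
        # No ridge: move in the direction best aligned with the target anchor point.
        best = min(DIRECTIONS, key=lambda d: angle_to_target(d, point, target))
    elif has_fork(grad, y, x):
        # Ridge with a bifurcation: trade alignment off against the remaining distance.
        def score(d):
            ny, nx = y + d[0], x + d[1]
            dist = math.hypot(target[0] - ny, target[1] - nx)
            return angle_to_target(d, point, target) + weight * dist
        best = min(DIRECTIONS, key=score)
    else:
        # Ridge without a bifurcation: keep following the ridge direction.
        best = ridge_direction(grad, y, x)
    if best == prev_dir:
        # Tie-break of claim 7, simplified: prefer the neighbour with the largest
        # absolute gradient value.
        best = max(DIRECTIONS, key=lambda d: abs(grad[y + d[0], x + d[1]]))
    return best
```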
  8. The image segmentation method according to any one of claims 1 to 7, characterized in that the anchor points are obtained by optimizing initial anchor points, wherein the initial anchor points are anchor points manually marked by the user in the first image.
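Claim 8 states only that the anchor points used for segmentation are obtained by optimizing the user's manually marked initial anchor points, without fixing the optimization. One plausible reading, offered purely as an assumption, is to snap each marked point to the strongest-gradient pixel in a small neighbourhood so that it sits on a likely object boundary:

```python
import numpy as np

def optimize_anchor(grad, initial_anchor, radius=5):
    """Snap a manually marked anchor point to the highest-gradient pixel nearby (assumed heuristic)."""
    y, x = initial_anchor
    h, w = grad.shape
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
    window = grad[y0:y1, x0:x1]
    dy, dx = np.unravel_index(np.argmax(window), window.shape)
    return (y0 + dy, x0 + dx)
```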
  9. An image segmentation method, characterized by comprising:
    acquiring a first image and position information of anchor points in the first image, wherein the anchor points include a starting anchor point and a target anchor point;
    obtaining a second image according to the first image and the anchor points, wherein the second image is an image obtained after the first image undergoes image segmentation processing, the second image includes a dividing line between the starting anchor point and the target anchor point, and the dividing line is obtained by moving a boundary point in the pixel gradient map of the first image with the starting anchor point as its starting position and the target anchor point as its target position.
  10. The image segmentation method according to claim 9, characterized in that the dividing line coincides with the dividing line along the ridge direction of the pixel gradient map.
  11. The image segmentation method according to claim 9 or 10, characterized in that the obtaining a second image according to the first image and the anchor points comprises:
    obtaining the second image according to the pixel gradient map, the anchor points, and an anchor point pulling model, wherein the anchor point pulling model is used to indicate the moving direction of the boundary point.
  12. The image segmentation method according to claim 11, characterized in that, if the boundary point is currently located in a region of the pixel gradient map without a ridge, the moving direction of the boundary point is determined according to the angle between a first line and a second line, wherein the first line refers to the line along which each of eight directions lies, and the second line refers to the line between the current position of the boundary point and the target anchor point; or
    if the boundary point is currently located in a region of the pixel gradient map with a ridge and with a bifurcation, the moving direction of the boundary point is determined according to the angle between the first line and the second line and a distance parameter, wherein the first line refers to the line along which each of the eight directions lies, the second line refers to the line between the current position of the boundary point and the target anchor point, and the distance parameter refers to the distance between the current position of the boundary point and the target anchor point; or
    if the boundary point is currently located in a region of the pixel gradient map with a ridge and without a bifurcation, the moving direction of the boundary point is determined according to the ridge direction of the pixel gradient map.
  13. The image segmentation method according to claim 12, characterized by further comprising:
    if the moving direction of the boundary point at the previous moment is the same as its moving direction at the current moment, determining the moving direction of the boundary point according to the absolute values of the gradients in different moving directions.
  14. The image segmentation method according to any one of claims 9 to 13, characterized in that the anchor points are obtained by optimizing initial anchor points, wherein the initial anchor points are anchor points manually marked by a user in the first image.
  15. An image segmentation device, characterized in that the image segmentation device has a display screen and comprises:
    a detection unit, configured to detect a first operation of a user manually marking anchor points in a first image, wherein the anchor points include a starting anchor point and a target anchor point, and to detect a second operation of the user instructing automatic segmentation of the first image;
    a processing unit, configured to display a second image on the display screen in response to the second operation,
    wherein the second image is an image obtained from the first image after the second operation, the second image includes a dividing line between the starting anchor point and the target anchor point, and the dividing line is obtained by moving a boundary point in the first image with the starting anchor point as its starting position and the target anchor point as its target position.
  16. The image segmentation device according to claim 15, characterized in that the dividing line coincides with the dividing line along the ridge direction of the pixel gradient map of the first image.
  17. The image segmentation device according to claim 15 or 16, characterized in that the processing unit is specifically configured to:
    search, according to the first image, for a mask image and a cut-line image corresponding to the first image, wherein the mask image is used to represent different objects in the first image, and the cut-line image is used to represent boundaries between different objects in the first image; and
    display, on the display screen, an image obtained by superimposing the first image, the mask image, and the cut-line image.
  18. The image segmentation device according to claim 17, characterized in that the detection unit is further configured to:
    detect a third operation of the user instructing that the second image be processed by using the mask image or the cut-line image.
  19. The image segmentation device according to any one of claims 15 to 18, characterized in that the second image is obtained according to the pixel gradient map of the first image and an anchor point pulling model, and the anchor point pulling model is used to indicate the moving direction of the boundary point.
  20. The image segmentation device according to claim 19, characterized in that, when the boundary point is currently located in a region of the pixel gradient map without a ridge, the moving direction of the boundary point is determined according to the angle between a first line and a second line, wherein the first line refers to the line along which each of eight directions lies, and the second line refers to the line between the current position of the boundary point and the target anchor point; or
    when the boundary point is currently located in a region of the pixel gradient map with a ridge and with a bifurcation, the moving direction of the boundary point is determined according to the angle between the first line and the second line and a distance parameter, wherein the first line refers to the line along which each of the eight directions lies, the second line refers to the line between the current position of the boundary point and the target anchor point, and the distance parameter refers to the distance between the current position of the boundary point and the target anchor point; or
    when the boundary point is currently located in a region of the pixel gradient map with a ridge and without a bifurcation, the moving direction of the boundary point is determined according to the ridge direction of the pixel gradient map.
  21. The image segmentation device according to claim 20, characterized in that, when the moving direction of the boundary point at the previous moment is the same as the moving direction of the boundary point at the current moment, the moving direction of the boundary point is determined according to the absolute values of the gradients in different directions.
  22. The image segmentation device according to any one of claims 15 to 21, characterized in that the anchor points are obtained by optimizing initial anchor points, wherein the initial anchor points are anchor points manually marked by the user in the first image.
  23. An image segmentation device, characterized by comprising:
    an acquiring unit, configured to acquire a first image and position information of anchor points in the first image, wherein the anchor points include a starting anchor point and a target anchor point; and
    a processing unit, configured to obtain a second image according to the first image and the anchor points, wherein the second image is an image obtained after the first image undergoes image segmentation processing, the second image includes a dividing line between the starting anchor point and the target anchor point, and the dividing line is obtained by moving a boundary point in the pixel gradient map of the first image with the starting anchor point as its starting position and the target anchor point as its target position.
  24. The image segmentation device according to claim 23, characterized in that the dividing line coincides with the dividing line along the ridge direction of the pixel gradient map.
  25. The image segmentation device according to claim 23 or 24, characterized in that the processing unit is specifically configured to:
    obtain the second image according to the pixel gradient map, the anchor points, and an anchor point pulling model, wherein the anchor point pulling model is used to indicate the moving direction of the boundary point.
  26. The image segmentation device according to claim 25, characterized in that, if the boundary point is currently located in a region of the pixel gradient map without a ridge, the moving direction of the boundary point is determined according to the angle between a first line and a second line, wherein the first line refers to the line between the starting anchor point and the target anchor point, and the second line refers to the line between the current position of the boundary point and the target anchor point; or
    if the boundary point is currently located in a region of the pixel gradient map with a ridge and with a bifurcation, the moving direction of the boundary point is determined according to the angle between the first line and the second line and a distance parameter, wherein the first line refers to the line along which each of eight directions lies, the second line refers to the line between the current position of the boundary point and the target anchor point, and the distance parameter refers to the distance between the current position of the boundary point and the target anchor point; or
    if the boundary point is currently located in a region of the pixel gradient map with a ridge and without a bifurcation, the moving direction of the boundary point is determined according to the ridge direction of the pixel gradient map.
  27. The image segmentation device according to claim 26, characterized in that the processing unit is further configured to:
    if the moving direction of the boundary point at the previous moment is the same as its moving direction at the current moment, determine the moving direction of the boundary point according to the absolute values of the gradients in different moving directions.
  28. The image segmentation device according to any one of claims 23 to 27, characterized in that the anchor points are obtained by optimizing initial anchor points, wherein the initial anchor points are anchor points manually marked by the user in the first image.
  29. A computer-readable storage medium, characterized in that the computer-readable medium stores program code to be executed by a device, and the program code includes instructions for executing the image segmentation method according to any one of claims 1 to 8 or 9 to 14.
PCT/CN2020/140570 2019-12-31 2020-12-29 Image segmentation method and device WO2021136224A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911411574.7 2019-12-31
CN201911411574.7A CN113129307A (en) 2019-12-31 2019-12-31 Image segmentation method and image segmentation device

Publications (1)

Publication Number Publication Date
WO2021136224A1 true WO2021136224A1 (en) 2021-07-08

Family

ID=76686519

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/140570 WO2021136224A1 (en) 2019-12-31 2020-12-29 Image segmentation method and device

Country Status (2)

Country Link
CN (1) CN113129307A (en)
WO (1) WO2021136224A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040008886A1 (en) * 2002-07-02 2004-01-15 Yuri Boykov Using graph cuts for editing photographs
CN101329763A (en) * 2007-06-22 2008-12-24 西门子公司 Method for segmenting structures in image data records and image processing unit
CN101859224A (en) * 2010-04-30 2010-10-13 陈铸 Method and system for scratching target object from digital picture

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BARRETT, WILLIAM A. ET AL.: "Interactive Live-Wire Boundary Extraction.", MEDICAL IMAGE ANALYSIS., vol. 1, no. 4, 31 October 1997 (1997-10-31), XP002246610, DOI: 10.1016/S1361-8415(97)85005-0 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114140488A (en) * 2021-11-30 2022-03-04 北京达佳互联信息技术有限公司 Video target segmentation method and device and training method of video target segmentation model
CN114511566A (en) * 2022-04-19 2022-05-17 武汉大学 Method and related device for detecting basement membrane positioning line in medical image
CN114511566B (en) * 2022-04-19 2022-07-19 武汉大学 Method and related device for detecting basement membrane positioning line in medical image
CN117655563A (en) * 2024-01-31 2024-03-08 成都沃特塞恩电子技术有限公司 Laser cutting path planning method and device, electronic equipment and storage medium
CN117655563B (en) * 2024-01-31 2024-05-28 成都沃特塞恩电子技术有限公司 Laser cutting path planning method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN113129307A (en) 2021-07-16

Similar Documents

Publication Publication Date Title
WO2021136224A1 (en) Image segmentation method and device
US11238644B2 (en) Image processing method and apparatus, storage medium, and computer device
US10255681B2 (en) Image matting using deep learning
US11042990B2 (en) Automatic object replacement in an image
US9547908B1 (en) Feature mask determination for images
CN113810587B (en) Image processing method and device
US10275892B2 (en) Multi-view scene segmentation and propagation
US10872637B2 (en) Video inpainting via user-provided reference frame
WO2020164092A1 (en) Image processing method and apparatus, moveable platform, unmanned aerial vehicle and storage medium
CN109887003B (en) Method and equipment for carrying out three-dimensional tracking initialization
WO2021136528A1 (en) Instance segmentation method and apparatus
CN110084797B (en) Plane detection method, plane detection device, electronic equipment and storage medium
CN114063858B (en) Image processing method, image processing device, electronic equipment and storage medium
CN110648363A (en) Camera posture determining method and device, storage medium and electronic equipment
CN110544268B (en) Multi-target tracking method based on structured light and SiamMask network
US20210406548A1 (en) Method, apparatus, device and storage medium for processing image
CN111382647B (en) Picture processing method, device, equipment and storage medium
CN115410173B (en) Multi-mode fused high-precision map element identification method, device, equipment and medium
CN105513083A (en) PTAM camera tracking method and device
CN113408662A (en) Image recognition method and device, and training method and device of image recognition model
WO2023082588A1 (en) Semantic annotation method and apparatus, electronic device, storage medium, and computer program product
CN109215047A (en) Moving target detection method and device based on deep sea video
CN110390724A (en) A kind of SLAM method with example segmentation
CN116030340A (en) Robot, positioning information determining method, device and storage medium
CN112258539B (en) Water system data processing method, device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20909142; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20909142; Country of ref document: EP; Kind code of ref document: A1)