CN110215693B - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN110215693B
Authority
CN
China
Prior art keywords
pixel
region
pixels
area
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910394946.3A
Other languages
Chinese (zh)
Other versions
CN110215693A (en)
Inventor
刘祎玮
徐舒畅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Lexin Shengwen Technology Co ltd
Original Assignee
Beijing Lexin Shengwen Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Lexin Shengwen Technology Co ltd
Priority: CN201910394946.3A
Publication of CN110215693A
Application granted
Publication of CN110215693B
Legal status: Active


Classifications

    • A: HUMAN NECESSITIES
      • A63: SPORTS; GAMES; AMUSEMENTS
        • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
          • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
            • A63F13/20: Input arrangements for video game devices
              • A63F13/23: Input arrangements for video game devices for interfacing with the game device, e.g. specific interfaces between game controller and console
            • A63F13/80: Special adaptations for executing a specific game genre or game mode
    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T7/00: Image analysis
            • G06T7/10: Segmentation; Edge detection
              • G06T7/13: Edge detection
              • G06T7/187: Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an image processing method and apparatus. The method comprises: acquiring an image; determining the pixels in the image that have not been visited; selecting one pixel from the unvisited pixels as a center point; visiting pixels one by one outward from the center point and performing region growing to obtain a newly generated region corresponding to the center point; obtaining the boundary corresponding to the newly generated region from the newly generated region; and traversing each pixel in the image until every region in the image and the boundary corresponding to each region are obtained. The method automatically obtains a closed boundary for each region in the image, so that the boundaries of targets in an image can be extracted automatically, without a designer manually tracing them on a drawing board, while the closure of every boundary is guaranteed.

Description

Image processing method and device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus.
Background
The color-filling game is a casual game suitable for both children and adults. Following the game's prompts, the user spends some time coloring a black-and-white outline image, finally completing a richly colored picture. Such games cultivate the user's patience, attentiveness, and confidence, calm the mind, and are well suited to passing the time; they are therefore also referred to as "healing" games.
The gameplay is simple: with a well-designed user interface and interaction, a first-time player can complete the task without help from anyone else. The key competitive factors of a color-filling game are therefore the visual appeal of its pictures, their sense of layering, and how finely their regions are subdivided. The fillable regions in many digital coloring games are extremely fine and must be designed with great care. To design a rich and detailed color-filling map, a visual designer may have to outline every boundary on an electronic drawing board stroke by stroke. In this way very rich and detailed color-filling maps can be created, but the efficiency is very low; moreover, individual creativity is limited, so the output cannot be guaranteed.
For the problems of low efficiency and low output caused by manual boundary delineation in the related art, no effective solution has yet been proposed.
Disclosure of Invention
The present application aims to provide an image processing method that solves at least one of the problems in the related art.
In order to achieve the above object, according to one aspect of the present application, there is provided a method of image processing.
The image processing method according to the present application includes:
acquiring an image;
determining the pixels in the image that have not been visited;
selecting one pixel from the unvisited pixels as a center point;
visiting pixels one by one outward from the center point, and performing region growing to obtain a newly generated region corresponding to the center point;
obtaining the boundary corresponding to the newly generated region from the newly generated region; and
traversing each pixel in the image until every region in the image and the boundary corresponding to each region are obtained.
Further, in the foregoing image processing method, visiting pixels one by one outward from the center point and performing region growing to obtain the newly generated region corresponding to the center point includes:
obtaining each neighborhood pixel in the eight-connected neighborhood of the center point;
determining the similarity between each neighborhood pixel in the eight-connected neighborhood and the center point;
when the similarity is higher than a preset value, marking the corresponding neighborhood pixel as a pixel of the newly generated region and placing it into a queue Q as a queue element; and
taking one queue element out of the queue Q, examining the unvisited neighborhood pixels of that element, and, if a neighborhood pixel whose similarity to the taken-out element is higher than the preset value exists, marking it as a pixel of the newly generated region and placing it into the queue Q as a queue element; repeating this cycle until the queue Q is empty, at which point all pixels of the new region have been determined and the newly generated region is obtained.
Further, in the foregoing image processing method, determining the similarity between each neighborhood pixel in the eight-connected neighborhood and the center point includes:
determining a first Euclidean distance D between each neighborhood pixel Xi in the eight-connected neighborhood and the center point S, where D is calculated by the following formula:
D = (S - Xi) * (S - Xi).
Further, in the foregoing image processing method, after the corresponding boundary is obtained from the newly generated region, the method further includes:
judging whether the boundary contains a plurality of boundary pixels arranged in a V shape; and
if so, adjusting one or more of those boundary pixels so that they are arranged in a straight line.
Further, in the foregoing image processing method, after the boundary corresponding to the newly generated region is obtained, the method further includes:
determining whether an internal hole completely surrounded by the newly generated region exists within that region; and
when such an internal hole exists, generating a closed boundary at the edge of the internal hole.
Further, in the foregoing image processing method, after each region and the boundary corresponding to each region are obtained, the method further includes:
determining the pixel average value of each existing region and the number corresponding to each existing region, thereby obtaining a first correspondence;
computing the region pixel average value over all pixels in the newly generated region;
calculating a second Euclidean distance between that region pixel average value and the pixel average value of each existing region; and
determining the number corresponding to the newly generated region according to the second Euclidean distances and the first correspondence.
Further, in the foregoing image processing method, after the number corresponding to the newly generated region is determined, the method further includes:
determining any leftover region that cannot be matched to a corresponding number;
determining all regions adjacent to the leftover region;
determining, among all those adjacent regions, the one whose brightness is closest to that of the leftover region; and
merging the leftover region into that closest adjacent region.
In order to achieve the above object, according to another aspect of the present application, there is provided an apparatus for image processing.
The apparatus for image processing according to the present application includes:
an image acquisition unit for acquiring an image;
an unvisited-pixel determination unit for determining the pixels in the image that have not been visited;
a center point selection unit for selecting one pixel from the unvisited pixels as a center point;
a newly generated region determination unit for visiting pixels one by one outward from the center point and performing region growing to obtain the newly generated region corresponding to the center point;
a boundary determination unit for obtaining the boundary corresponding to the newly generated region from the newly generated region; and
a traversal unit for traversing each pixel in the image until every region in the image and the boundary corresponding to each region are obtained.
Further, in the apparatus for image processing as described above, the newly generated region determination unit includes:
a neighborhood pixel module for obtaining each neighborhood pixel in the eight-connected neighborhood of the center point;
a similarity determination module for determining the similarity between each neighborhood pixel in the eight-connected neighborhood and the center point;
a pixel processing module for marking the corresponding neighborhood pixel as a pixel of the newly generated region and placing it into a queue Q as a queue element when the similarity is higher than a preset value; and
a traversal module for taking one queue element out of the queue Q, examining the unvisited neighborhood pixels of that element, marking any neighborhood pixel whose similarity to the taken-out element is higher than the preset value as a pixel of the newly generated region and placing it into the queue Q as a queue element, and repeating this cycle until the queue Q is empty, at which point all pixels of the new region have been determined and the newly generated region is obtained.
Further, the apparatus for image processing as described above further includes a labeling unit, which includes:
an existing-region information module for determining the pixel average value of each existing region and the number corresponding to each existing region, thereby obtaining a first correspondence;
an averaging module for computing the region pixel average value over all pixels in the newly generated region;
a distance calculation module for calculating a second Euclidean distance between that region pixel average value and the pixel average value of each existing region; and
a number determination module for determining the number corresponding to the newly generated region according to the second Euclidean distances and the first correspondence.
In the embodiments of the present application, an image processing method and apparatus are adopted. The method includes: acquiring an image; determining the pixels in the image that have not been visited; selecting one unvisited pixel as a center point; visiting pixels one by one outward from the center point and performing region growing to obtain a newly generated region corresponding to the center point; obtaining the boundary corresponding to the newly generated region from the newly generated region; and traversing each pixel in the image until every region in the image and the boundary corresponding to each region are obtained. The method automatically obtains a closed boundary for each region in the image, so that the boundaries of targets in an image can be extracted automatically, without a designer manually tracing them on a drawing board, while the closure of every boundary is guaranteed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, serve to provide a further understanding of the application and to enable other features, objects, and advantages of the application to be more apparent. The drawings and their description illustrate the embodiments of the invention and do not limit it. In the drawings:
FIG. 1 is a schematic flow diagram of a method of image processing according to one embodiment of the present application;
FIG. 2a is an original image before processing, and FIG. 2b is the same image after processing by an embodiment of the present application;
FIG. 3 is a schematic view of an eight-connected region according to one embodiment of the present application;
FIG. 4 is a schematic diagram of smoothing a region boundary according to an embodiment of the present application;
FIG. 5 is a flow chart illustrating a marking operation for a region according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a determination strategy for pixel allocation of survivor areas according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a complete flow for processing an image and obtaining a boundary image according to an embodiment of the present application; and
fig. 8 is a schematic structural diagram of functional modules of an image processing apparatus according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings above are used to distinguish between similar elements and not necessarily to describe a particular sequential or chronological order. It should be understood that data so labelled may be interchanged under appropriate circumstances, so that the embodiments of the application described herein can be implemented in orders other than those illustrated or described here. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In this application, the terms "upper", "lower", "left", "right", "front", "rear", "top", "bottom", "inner", "outer", "middle", "vertical", "horizontal", "lateral", "longitudinal", and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings. These terms are used primarily to better describe the present application and its embodiments, and are not used to limit the indicated devices, elements or components to a particular orientation or to be constructed and operated in a particular orientation.
Moreover, some of the above terms may be used to indicate other meanings besides the orientation or positional relationship, for example, the term "on" may also be used to indicate some kind of attachment or connection relationship in some cases. The specific meaning of these terms in this application will be understood by those of ordinary skill in the art as appropriate.
Furthermore, the terms "mounted," "disposed," "provided," "connected," and "sleeved" are to be construed broadly. For example, it may be a fixed connection, a removable connection, or a unitary construction; can be a mechanical connection, or an electrical connection; may be directly connected, or indirectly connected through intervening media, or may be in internal communication between two devices, elements or components. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art as appropriate.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
According to one embodiment of the present application, a method of image processing is provided. As shown in fig. 1, the method includes steps S1 to S6 as follows:
S1, acquiring an image;
Specifically, this step may consist of importing the image into software or a system implementing the method of the present application;
S2, determining the pixels in the image that have not been visited;
That is, before any processing has been done, no pixel in the image has been visited; once boundary acquisition starts, some pixels have been processed or identified, and identified pixels are marked to avoid being identified again. Preferably, the pixels in the image can be divided into visited pixels and unvisited pixels. If all pixels in the image have been visited, the process goes directly to step S6;
S3, selecting one pixel from the unvisited pixels as a center point;
Specifically, the center point may be selected randomly or by a specific algorithm, which is not limited here, as long as the selected pixel has not yet been visited;
S4, visiting pixels one by one outward from the center point, and performing region growing to obtain a newly generated region corresponding to the center point;
That is, starting from the center point and expanding outward in all directions, each pixel is visited one by one; whether neighboring pixels belong to the same region can be judged by, for example, the Euclidean distance between adjacent pixels, and the newly generated region is thereby obtained;
S5, obtaining the boundary corresponding to the newly generated region from the newly generated region;
That is, after the newly generated region is obtained, its boundary pixels are identified and marked, yielding the boundary corresponding to the newly generated region;
S6, traversing each pixel in the image until every region in the image and the boundary corresponding to each region are obtained;
That is, after every pixel has been visited according to steps S1 to S5, all regions in the image and the boundaries corresponding to those regions are obtained.
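As one illustration of steps S1 to S6, the following minimal Python sketch (not part of the patent; all names are hypothetical, and pixel similarity is reduced to a squared-difference threshold on grayscale values) grows regions from unvisited center points until every pixel has been visited:

```python
from collections import deque

def segment(image, threshold):
    """A minimal sketch of steps S1-S6: repeatedly pick an unvisited pixel
    as a center point, grow its region over 8-connected similar neighbors,
    and record each region, until every pixel has been visited.
    Returns a dict mapping region number -> set of (row, col) pixels."""
    h, w = len(image), len(image[0])
    visited = set()
    regions = {}
    for r in range(h):
        for c in range(w):
            if (r, c) in visited:          # S2: skip visited pixels
                continue
            queue = deque([(r, c)])        # S3: unvisited pixel as center
            region = {(r, c)}
            visited.add((r, c))
            while queue:                   # S4: grow outward pixel by pixel
                pr, pc = queue.popleft()
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        nr, nc = pr + dr, pc + dc
                        if (0 <= nr < h and 0 <= nc < w
                                and (nr, nc) not in visited
                                and (image[pr][pc] - image[nr][nc]) ** 2 < threshold):
                            visited.add((nr, nc))
                            region.add((nr, nc))
                            queue.append((nr, nc))
            regions[len(regions) + 1] = region  # S5/S6: one region per pass
    return regions
```

On a tiny two-tone test image this yields one region per tone; a real implementation would also extract each region's boundary (step S5) and apply the minimum-size and numbering rules described in the embodiments that follow.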
In some embodiments, as in the foregoing image processing method, visiting pixels one by one outward from the center point and performing region growing to obtain the newly generated region corresponding to the center point includes:
obtaining each neighborhood pixel in the eight-connected neighborhood of the center point;
generally, the center point has eight such neighborhood pixels;
determining the similarity between each neighborhood pixel in the eight-connected neighborhood and the center point;
specifically, the similarity can be obtained by comparing information such as the Euclidean distance and the brightness;
when the similarity is higher than a preset value, marking the corresponding neighborhood pixel as a pixel of the newly generated region and placing it into a queue Q as a queue element; and
taking one queue element out of the queue Q, examining the unvisited neighborhood pixels of that element, and, if a neighborhood pixel whose similarity to the taken-out element is higher than the preset value exists, marking it as a pixel of the newly generated region and placing it into the queue Q as a queue element; repeating this cycle until the queue Q is empty, at which point all pixels of the new region have been determined and the newly generated region is obtained.
Specifically, as shown in fig. 3, with S as the center point, the eight positions of its eight-connected neighborhood, i.e. the pixels above, below, to the left and right of S and on its diagonals (marked X in the figure), are visited in turn. If the distance D between S and a neighborhood pixel Xi is smaller than a specified threshold T (i.e. the similarity is higher than the preset value), Xi is placed into a designated queue Q.
Preferably, the distance D is based on the Euclidean distance between the two pixels: D = (S - Xi) * (S - Xi). Because D is computed as a square, the result is guaranteed to be non-negative.
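A one-line sketch of this squared distance, extended to multi-channel pixels by summing over channels (an assumption; the text only states the single-value form):

```python
def squared_distance(s, x):
    """Squared Euclidean distance D = (S - Xi) * (S - Xi), summed over
    channels. Squaring each difference keeps D non-negative, as noted
    above. `s` and `x` are equal-length tuples of channel values."""
    return sum((sc - xc) ** 2 for sc, xc in zip(s, x))
```

Comparing D against the threshold T then decides whether Xi joins the region.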
One queue element is then taken out of the queue Q as a new center point S_N (once an element is taken out, it is deleted from the queue Q), and the above steps are repeated until the queue Q is empty, i.e. until no unvisited neighbor of any queued element lies within the threshold T of that element. At that point the newly generated region centered on S has been fully identified, the region growing ends, and the newly generated region is denoted M. In addition, a minimum pixel count per region can be preset: if the number of pixels in the newly generated region is below this threshold, the region is discarded, another unvisited pixel in the image is chosen as a new center point, and region growing is performed again to obtain a newly generated region.
In some embodiments, as in the foregoing image processing method, after the boundary corresponding to the newly generated region is obtained, the method further includes:
judging whether the boundary contains a plurality of boundary pixels arranged in a V shape; and
if so, adjusting one or more of those boundary pixels so that they are arranged in a straight line.
Specifically, as shown in fig. 4, wherever edge pixels form a V-shaped arrangement, the arrangement is changed to a straight line. Fig. 4 shows the two cases: changing a V-shaped arrangement into a horizontal line (fig. 4a) and into a vertical line (fig. 4b). This operation smooths the region edges.
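The straightening rule can be read, for example, as snapping the middle pixel of a V-shaped triple onto the line through its two neighbors. The Python sketch below encodes that reading; since fig. 4 is not reproduced here, the exact geometry is an assumption:

```python
def smooth_v_shapes(boundary):
    """Straighten V-shaped triples of boundary pixels (one interpretation
    of the smoothing in fig. 4; the exact rule is not given in the text).

    boundary -- ordered list of (row, col) boundary pixels
    Returns a new list where any pixel whose two neighbors share a row
    (or column) while it sits one step off that line is snapped onto it."""
    out = list(boundary)
    for i in range(1, len(out) - 1):
        (r0, c0), (r1, c1), (r2, c2) = out[i - 1], out[i], out[i + 1]
        if r0 == r2 and abs(r1 - r0) == 1:    # V opens vertically -> horizontal line
            out[i] = (r0, c1)
        elif c0 == c2 and abs(c1 - c0) == 1:  # V opens horizontally -> vertical line
            out[i] = (r1, c0)
    return out
```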
In some embodiments, the image processing method as described above, after the boundary corresponding to the newly generated region is obtained, further includes:
determining whether an internal hole completely surrounded by the newly generated region exists within that region; here, an internal hole is a set of pixels inside the newly generated region that do not belong to the same region as it;
when such an internal hole exists, generating a closed boundary at the edge of the internal hole.
In this way, the convexity of the region can be effectively ensured by this step.
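One way to detect such internal holes (a sketch under the assumption that a region is stored as a set of pixel coordinates; the function name is hypothetical) is to flood-fill the complement of the region from the image border; whatever the fill cannot reach is completely enclosed by the region:

```python
from collections import deque

def find_internal_holes(region, h, w):
    """Find pixels completely enclosed by `region` (a set of (r, c)
    pixels) inside an h x w grid. Flood-fills the complement of the
    region from the grid border with 4-connectivity; any complement
    pixel the fill cannot reach belongs to an internal hole."""
    outside = set()
    queue = deque((r, c) for r in range(h) for c in range(w)
                  if (r in (0, h - 1) or c in (0, w - 1)) and (r, c) not in region)
    outside.update(queue)
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w
                    and (nr, nc) not in region and (nr, nc) not in outside):
                outside.add((nr, nc))
                queue.append((nr, nc))
    all_pixels = {(r, c) for r in range(h) for c in range(w)}
    return all_pixels - region - outside   # enclosed hole pixels
```

The edge pixels of the returned hole set would then receive a closed boundary, as the embodiment describes.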
In some embodiments, after each region and the boundary corresponding to each region are obtained, the image processing method further includes:
determining the pixel average value of each existing region and the number corresponding to each existing region, thereby obtaining a first correspondence;
computing the region pixel average value over all pixels in the newly generated region;
calculating a second Euclidean distance between that region pixel average value and the pixel average value of each existing region; and
determining the number corresponding to the newly generated region according to the second Euclidean distances and the first correspondence.
Specifically, this step identifies regions of the picture that have the same color or a high similarity, and such highly similar regions can be mapped to a single number.
For example, suppose there is a region M. As shown in fig. 5, in step 201 the average value of all pixels in region M is computed; if the image has multiple channels, the average of each channel is computed separately. Once the statistics are complete, the process proceeds to step 202, where the attribution of region M is determined.
Preferably, the attribution of M is determined by comparing the distance between the pixel average of region M and the pixel average of each existing region, preferably using the Euclidean distance. If the distance is smaller than a specified threshold, M is classified into that existing region and given the same number; otherwise, M is treated as a new region and added to the existing-region attribute list.
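A compact sketch of this numbering step (hypothetical helper names; single-channel pixel values for brevity, whereas the text averages each channel separately):

```python
def assign_region_number(region_pixels, image, existing, threshold):
    """Give a newly grown region a number by comparing its mean pixel
    value against the means of existing regions (steps 201-202 above).
    `existing` maps number -> stored mean value.
    Returns (number, updated existing dict)."""
    mean = sum(image[r][c] for r, c in region_pixels) / len(region_pixels)
    # second Euclidean distance between the new mean and each stored mean
    best = min(existing.items(), key=lambda kv: (kv[1] - mean) ** 2, default=None)
    if best is not None and (best[1] - mean) ** 2 < threshold:
        return best[0], existing           # same number as the closest region
    number = max(existing, default=0) + 1  # otherwise open a new number
    existing[number] = mean
    return number, existing
```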
In some embodiments, the image processing method as described above, after the number corresponding to the newly generated region is determined, further includes:
determining any leftover region that cannot be matched to a corresponding number;
determining all regions adjacent to the leftover region;
determining, among all those adjacent regions, the one whose brightness is closest to that of the leftover region; and
assigning the leftover region to that closest adjacent region.
These steps ensure that every target in the image yields a closed edge.
Specifically, where regions adjoin one another, image blur and similar effects can leave leftover pixels whose Euclidean distance to every adjacent region is large. Since the goal of the present application is a closed boundary for every region, the presence of such leftover regions would prevent that goal from being achieved.
Through the above steps, every pixel of a leftover region can be assigned to an adjacent region, so that closed boundaries can still be obtained. The assignment strategy is illustrated in fig. 6, where "?" denotes a pixel U that has not been assigned to any region, and the number on each neighboring pixel denotes the number of the region that pixel belongs to.
Preferably, the region to which an unassigned pixel U belongs is determined by voting. As shown on the left of fig. 6, if all the labelled neighbors of U belong to the same region, U is classified into that region as well. If the neighbors of U belong to different regions, the distance between U and each neighboring pixel is calculated, and U "votes" for the region of the pixel at the shortest distance. Preferably, the Euclidean distance is used.
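The voting rule might be sketched as follows (hypothetical signature; single-channel pixel values for brevity):

```python
def vote_pixel_region(u_value, neighbors):
    """Assign an unlabelled pixel U to a neighboring region by "voting":
    if all labelled neighbors agree, U joins their region; otherwise U
    joins the region of the neighbor at the smallest squared distance
    in pixel value, as described above.
    neighbors -- list of (pixel_value, region_number) for labelled
                 neighbors of U."""
    labels = {num for _, num in neighbors}
    if len(labels) == 1:               # all neighbors agree -> same region
        return labels.pop()
    # otherwise pick the region of the closest-valued neighbor
    return min(neighbors, key=lambda vn: (vn[0] - u_value) ** 2)[1]
```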
Specifically, by combining the methods of the above embodiments, the image shown in fig. 2a is processed through the processing flow shown in fig. 7 to obtain the boundary image shown in fig. 2b.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
According to an embodiment of the present invention, there is also provided an apparatus for image processing that implements the image processing method described above. As shown in fig. 8, the apparatus includes:
an image acquisition unit 1 for acquiring an image;
an unvisited pixel determination unit 2 for determining unvisited pixels in the image;
a center point selection unit 3, configured to select one pixel from the unvisited pixels as a center point;
a newly generated region determination unit 4, configured to visit pixels one by one outward from the center point and perform region growing to obtain the newly generated region corresponding to the center point;
a boundary determining unit 5, configured to obtain a boundary corresponding to the new generation region according to the new generation region;
and the traversing unit 6 is used for traversing each pixel in the image until each area in the image and the boundary corresponding to each area are obtained.
Specifically, the specific process of implementing the functions of each module in the apparatus according to the embodiment of the present invention may refer to the related description in the method embodiment, and is not described herein again.
In some embodiments, the new generation region determination unit includes:
the neighborhood pixel module is used for obtaining each neighborhood pixel on the eight-connected neighborhood corresponding to the central point;
the similarity determining module is used for determining the similarity between each neighborhood pixel on the eight-connected neighborhood and the central point;
the pixel processing module is used for marking the corresponding neighborhood pixel as a new region pixel in the new generation region and placing it into a queue Q as a queue element when the similarity is higher than a preset value;
and the traversal module is used for taking a queue element out of the queue Q, obtaining the neighborhood pixels of the taken-out queue element from among the unvisited pixels, marking a neighborhood pixel as a pixel in the new generation region and placing it into the queue Q as a queue element if its similarity to the taken-out queue element is higher than the preset value, and repeating this cycle until the queue Q is empty, whereupon all pixels in the new region are determined and the new generation region is obtained.
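The queue-based growth that the modules above describe is essentially a breadth-first flood fill. A minimal Python sketch, assuming the image is a NumPy array and using squared Euclidean distance between pixel values as the similarity measure (function and parameter names such as `grow_region` and `threshold` are illustrative; the patent leaves the preset value unspecified):

```python
from collections import deque
import numpy as np

# Offsets of the eight-connected neighborhood.
N8 = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]

def grow_region(image, visited, seed, threshold=10.0):
    """Queue-based region growing from `seed` (row, col).

    A neighbor joins the region when the squared Euclidean distance
    between its value and the current queue element's value is below
    `threshold` (the patent's "similarity higher than a preset value").
    """
    h, w = image.shape[:2]
    visited[seed] = True
    region = {seed}
    q = deque([seed])
    while q:                              # loop until the queue Q is empty
        cy, cx = q.popleft()              # take a queue element out of Q
        for dy, dx in N8:                 # its eight-connected neighborhood
            ny, nx = cy + dy, cx + dx
            if 0 <= ny < h and 0 <= nx < w and not visited[ny, nx]:
                d = float(np.sum((image[cy, cx] - image[ny, nx]) ** 2))
                if d < threshold:         # similar enough: absorb the pixel
                    visited[ny, nx] = True
                    region.add((ny, nx))
                    q.append((ny, nx))    # and enqueue it as a new element
    return region
```

A lower `threshold` yields smaller, more uniform regions; raising it merges weakly contrasting areas into one region.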
For the specific process by which each module of the apparatus in this embodiment implements its functions, reference may be made to the related description in the method embodiment; details are not repeated here.
In some embodiments, the apparatus for image processing as described above, further comprises: a label unit; the labeling unit includes:
the existing region information module is used for determining the pixel average value of each existing region and the number corresponding to each existing region and obtaining a first corresponding relation;
the average module is used for counting the regional pixel average value of all pixels in the new generation region;
the distance calculation module is used for calculating a second Euclidean distance between the area pixel average value and the pixel average value of each existing area;
and the number determining module is used for determining the number corresponding to the new generation region according to each second Euclidean distance and the first corresponding relation.
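The labeling scheme above can be sketched as a nearest-mean lookup over the existing regions. The following Python fragment is an illustrative reading of it: `existing_regions` plays the role of the "first corresponding relation" (number → mean pixel value), and the `max_distance` cutoff, which leaves a region unmatched as in claim 6, is an assumption, as are all names:

```python
import numpy as np

def assign_number(new_region_pixels, existing_regions, max_distance=30.0):
    """Give a new region the number of the existing region whose mean
    pixel value is closest (the "second Euclidean distance").

    `new_region_pixels` is a list of pixel values from the new region;
    `existing_regions` maps number -> mean pixel value. Returns None
    when no existing mean lies within `max_distance`, i.e. the region
    remains unmatched.
    """
    # Region pixel average of all pixels in the new generation region.
    mean = np.mean(np.asarray(new_region_pixels, dtype=float), axis=0)
    best, best_d = None, max_distance
    for number, region_mean in existing_regions.items():
        d = float(np.linalg.norm(mean - np.asarray(region_mean, dtype=float)))
        if d < best_d:                    # closest existing region so far
            best, best_d = number, d
    return best
```

An unmatched region (return value None) would then be handled by the merging step of claim 6: merged into the adjacent region of closest brightness.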
For the specific process by which each module of the apparatus in this embodiment implements its functions, reference may be made to the related description in the method embodiment; details are not repeated here.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of multiple computing devices. They may alternatively be implemented in program code executable by a computing device, so that they may be stored in a storage device and executed by the computing device, fabricated separately as individual integrated-circuit modules, or fabricated as a single integrated-circuit module from multiple such modules or steps. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description covers only preferred embodiments of the present application and is not intended to limit it; those skilled in the art may make various modifications and changes. Any modification, equivalent replacement, improvement, and the like made within the spirit and principle of the present application shall fall within its protection scope.

Claims (8)

1. A method of image processing, comprising:
acquiring an image;
determining pixels in the image that are not accessed;
selecting a pixel from the non-accessed pixels as a central point;
accessing the pixels one by one from the central point outward to the periphery, and performing region growing to obtain a new generation region corresponding to the central point, which includes:
obtaining each neighborhood pixel on the eight-connected neighborhood corresponding to the central point;
determining the similarity between each neighborhood pixel on the eight connected neighborhood and the central point;
when the similarity is higher than a preset value, marking the corresponding neighborhood pixels as new region pixels in the new generation region and putting the new region pixels into a queue Q as queue elements;
taking a queue element out of the queue Q and obtaining, from among the unvisited pixels, the neighborhood pixels of the taken-out queue element; if a neighborhood pixel whose similarity to the taken-out queue element is higher than the preset value exists, marking that neighborhood pixel as a pixel in the new generation region and placing it into the queue Q as a queue element; and repeating this cycle until the queue Q is empty, whereupon all pixels in the new region are determined and the new generation region is obtained;
obtaining a boundary corresponding to the newly generated area according to the newly generated area;
and traversing each pixel in the image until each area in the image and the boundary corresponding to each area are obtained.
2. The method of image processing according to claim 1, wherein determining a similarity of each neighborhood pixel on the eight connected neighborhood to the center point comprises:
determining a first Euclidean distance D between each neighborhood pixel Xi on the eight-connected neighborhood and the central point S, the distance D being calculated by the following formula:
D = (S - Xi) * (S - Xi).
3. The method according to claim 1, further comprising, after obtaining the corresponding boundary from the newly generated region:
judging whether a plurality of boundary pixels exist in the boundary and are arranged in a V shape;
if a plurality of boundary pixels are arranged in a V shape, one or more boundary pixels are adjusted to be arranged in a straight line.
4. The method of image processing according to claim 1, further comprising, after obtaining the boundary corresponding to the newly generated region according to the newly generated region:
determining whether an internal hole completely surrounded by the new generation region exists within the new generation region;
when the inner hole exists, a closed boundary is generated at the edge of the inner hole.
5. The method of claim 1, further comprising, after obtaining each region and the boundary corresponding to each region:
determining the pixel average value of each existing area and the number corresponding to each existing area, and obtaining a first corresponding relation;
counting a region pixel average value of all pixels in the new generation region;
calculating a second Euclidean distance between the pixel average value of the area and the pixel average value of each existing area;
and determining the number corresponding to the new generation region according to the second Euclidean distance and the first corresponding relation.
6. The method of image processing according to claim 1, further comprising, after determining the number corresponding to the newly generated region:
determining a remaining region that cannot be matched to a corresponding number;
determining all regions adjacent to the remaining region;
determining, among all the adjacent regions, the adjacent region whose brightness is closest to that of the remaining region;
and merging the remaining region into that closest adjacent region.
7. An apparatus for image processing, comprising:
an image acquisition unit for acquiring an image;
an unvisited pixel determination unit for determining unvisited pixels in the image;
a central point selecting unit for selecting a pixel from the unvisited pixels as a central point;
the new generation region determining unit is used for accessing all pixels one by one from the central point to the periphery and carrying out region growth to obtain a new generation region corresponding to the central point; the new generation region determination unit includes:
the neighborhood pixel module is used for obtaining each neighborhood pixel on the eight-connected neighborhood corresponding to the central point;
a similarity determining module, configured to determine a similarity between each neighborhood pixel on the eight connected neighborhood and the central point;
the pixel processing module is used for marking the corresponding neighborhood pixel as a new region pixel in the new generation region and placing it into a queue Q as a queue element when the similarity is higher than a preset value;
a traversal module, configured to take a queue element out of the queue Q, obtain the neighborhood pixels of the taken-out queue element from among the unvisited pixels, mark a neighborhood pixel as a pixel in the new generation region and place it into the queue Q as a queue element if its similarity to the taken-out queue element is higher than the preset value, and repeat this cycle until the queue Q is empty, whereupon all pixels in the new region are determined and the new generation region is obtained;
the boundary determining unit is used for obtaining a boundary corresponding to the new generation region according to the new generation region;
and the traversing unit is used for traversing each pixel in the image until each area in the image and the boundary corresponding to each area are obtained.
8. The apparatus for image processing according to claim 7, further comprising: a label unit; the labeling unit includes:
the existing region information module is used for determining the pixel average value of each existing region and the number corresponding to each existing region and obtaining a first corresponding relation;
the average module is used for counting the regional pixel average value of all pixels in the new generation region;
the distance calculation module is used for calculating a second Euclidean distance between the area pixel average value and the pixel average value of each existing area;
and the number determining module is used for determining the number corresponding to the new generation region according to each second Euclidean distance and the first corresponding relation.
CN201910394946.3A 2019-05-13 2019-05-13 Image processing method and device Active CN110215693B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910394946.3A CN110215693B (en) 2019-05-13 2019-05-13 Image processing method and device

Publications (2)

Publication Number Publication Date
CN110215693A CN110215693A (en) 2019-09-10
CN110215693B true CN110215693B (en) 2020-03-24

Family

ID=67820888

Country Status (1)

Country Link
CN (1) CN110215693B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112783992B (en) * 2019-11-08 2023-10-20 腾讯科技(深圳)有限公司 Map functional area determining method and device based on interest points
CN112396698B (en) * 2020-11-20 2023-03-28 上海莉莉丝网络科技有限公司 Method, system and computer readable storage medium for demarcating map area boundary in game map

Citations (2)

Publication number Priority date Publication date Assignee Title
CN107730515A (en) * 2017-10-12 2018-02-23 北京大学深圳研究生院 Panoramic image saliency detection method based on region growing and an eye movement model
CN108053377A (en) * 2017-12-11 2018-05-18 北京小米移动软件有限公司 Image processing method and equipment

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US7162095B2 (en) * 2003-06-13 2007-01-09 National Chung Cheng University Method of automatically determining the region of interest from an image


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant