CN107358228B - Image processing method, device, equipment and storage medium - Google Patents

Image processing method, device, equipment and storage medium

Info

Publication number
CN107358228B
CN107358228B
Authority
CN
China
Prior art keywords
preset
pixel
closed
area
pixel points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710555993.2A
Other languages
Chinese (zh)
Other versions
CN107358228A (en)
Inventor
林楷鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Guangzhou Shirui Electronics Co Ltd
Original Assignee
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Guangzhou Shirui Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Shiyuan Electronics Thecnology Co Ltd, Guangzhou Shirui Electronics Co Ltd filed Critical Guangzhou Shiyuan Electronics Thecnology Co Ltd
Priority to CN201710555993.2A priority Critical patent/CN107358228B/en
Publication of CN107358228A publication Critical patent/CN107358228A/en
Application granted granted Critical
Publication of CN107358228B publication Critical patent/CN107358228B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition

Abstract

The invention discloses an image processing method, an image processing device, image processing equipment and a storage medium. The method comprises the following steps: determining whether pixel points with pixel values within a preset pixel value range exist in an image to be identified; if yes, confirming at least one closed area formed by the pixel points; and generating and displaying a preset color covering layer at a covering position corresponding to the closed area. The method provided by the invention can automatically and accurately cover the content of a designated area, which makes it convenient for a user to check their mastery of the covered content.

Description

Image processing method, device, equipment and storage medium
Technical Field
The present invention relates to the field of image technologies, and in particular, to a method, an apparatus, a device, and a storage medium for image processing.
Background
In the learning process, people often need to review some important knowledge points; for example, when studying history, knowledge points such as the names, dates and places of important historical events need to be studied intensively. To test whether these knowledge points have been mastered, people often use auxiliary tools such as rulers or sheets of paper to block the knowledge points in a textbook and then check their mastery of the blocked content. However, such auxiliary tools often do not match the position and size of the area that needs to be blocked, so the knowledge points cannot be covered accurately, which affects the check of how well the knowledge points have been mastered.
Disclosure of Invention
The invention provides an image processing method, device, equipment and storage medium, which are used for automatically and accurately covering the content of a designated area.
In a first aspect, an embodiment of the present invention provides an image processing method, where the method includes:
determining whether a pixel point with a pixel value within a preset pixel value range exists in an image to be identified;
if yes, confirming at least one closed area formed by the pixel points;
and generating and displaying a preset color covering layer at a covering position corresponding to the closed area.
In a second aspect, an embodiment of the present invention further provides an apparatus for image processing, where the apparatus includes:
the pixel point confirmation module is used for determining whether pixel points with pixel values within a preset pixel value range exist in the image to be recognized or not;
the closed region confirmation module is used for confirming at least one closed region formed by pixel points if the pixel points with the pixel values within the range of the preset pixel values exist in the image to be recognized;
and the covering layer generating module is used for generating and displaying a preset color covering layer at the covering position corresponding to the closed area.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of image processing provided by any embodiment of the present invention.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method for image processing provided by any embodiment of the present invention.
The method determines whether pixel points with pixel values within a preset pixel value range exist in an image to be identified; if yes, confirms at least one closed area formed by the pixel points; and generates and displays a preset color covering layer at the covering position corresponding to the closed area. This solves the problem in the prior art that the auxiliary tool used for blocking cannot match the position and size of the area that needs to be blocked and therefore cannot accurately cover the content to be blocked, realizes automatic and accurate covering of the content in the designated area, and makes it convenient for the user to check their mastery of the covered content.
Drawings
FIG. 1 is a flow chart of a method of image processing according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of an image to be recognized according to a first embodiment of the present invention;
FIG. 3 is a schematic diagram of generating a mask layer with a predetermined color according to a first embodiment of the present invention;
FIG. 4 is a flowchart of a method of image processing according to a second embodiment of the present invention;
FIG. 5 is a diagram illustrating an image to be recognized according to a second embodiment of the present invention;
FIG. 6 is a schematic diagram of generating a mask layer with a predetermined color according to a second embodiment of the present invention;
FIG. 7 is a schematic diagram of generating a mask layer with a predetermined color according to a second embodiment of the present invention;
FIG. 8 is a schematic structural diagram of an image processing apparatus according to a third embodiment of the present invention;
fig. 9 is a schematic structural diagram of an electronic device in a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention, where the embodiment is applicable to a case where content in an image needs to be covered, and the method can be executed by an image processing apparatus. The method provided by the embodiment specifically comprises the following steps:
and step 110, acquiring an image to be identified.
Preferably, in order to distinguish the content to be covered from the content not to be covered, the background color of the content to be covered can be set to a color whose pixel value falls within the preset pixel value range, so that the content to be covered in the image to be recognized can be identified through the pixel value of each pixel point in the image.
There are many ways to obtain the image to be recognized, and the present invention is not limited in this respect. For example, a marking pen or similar tool can be used to mark the background of the content to be covered in a paper document with a color within the preset pixel value range, and an image of the paper document can then be captured by photographing or another means to obtain the image to be recognized. As another example, the background of the content to be covered in an electronic document can be highlighted with a color within the preset pixel value range, and an image of the electronic document can be obtained by screen capture, photographing or another means to obtain the image to be recognized. As a further example, an image processing tool can be used to set the background of the content to be covered in an original image to a color within the preset pixel value range, and the processed image can be saved as the image to be recognized.
Step 120, determining whether a pixel point with a pixel value within a preset pixel value range exists in the image to be recognized, if so, executing step 130, and if not, executing step 150.
After the image to be recognized is obtained, the pixel value of each pixel point in the image can be read, and it can be determined whether each pixel value falls within the preset pixel value range. If pixel points with pixel values within the preset pixel value range exist in the image to be recognized, the image is processed further; if no such pixel points exist, processing of the image to be recognized ends.
The parameters used to represent colors differ between color spaces. For example, in an RGB color space each color may be represented by the three parameters R (Red), G (Green) and B (Blue), while in an HSV color space each color may be represented by the three parameters H (Hue), S (Saturation) and V (Value, i.e. brightness). Accordingly, the pixel value described in this embodiment may be composed of the three parameters R, G and B, or of the three parameters H, S and V, which is not limited by the present invention.
For example, if the pixel value consists of H, S and V parameters, the R, G and B parameter values of each pixel point of the image to be recognized may be obtained first, and then the H, S and V parameter values of each pixel point may be obtained according to the conversion relationship between the RGB color space and the HSV color space, so as to obtain the pixel value of each pixel point.
Preferably, before the image to be recognized is acquired, a preset pixel value range may be set, for example, if the pixel value is composed of R, G and B, the preset ranges of R, G and B may be set, respectively, and if the pixel value is composed of H, S and V, the preset ranges of H, S and V may be set, respectively. And after the pixel value of each pixel point is obtained, respectively determining whether each parameter value in the pixel value is in a preset range corresponding to each parameter.
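For illustration only, and not as part of the claimed method, the following Python sketch shows one possible implementation of the range check in step 120. It assumes the image is read with the Pillow library and that the preset pixel value range is expressed as per-parameter H, S and V intervals; the concrete interval values and function names below are hypothetical and would be chosen to match the marking color actually used.

```python
# A minimal sketch of step 120, assuming Pillow and an HSV preset range.
import colorsys
from PIL import Image

# Hypothetical preset range (H, S, V each in [0, 1]), e.g. for a yellow highlighter mark.
H_RANGE = (0.10, 0.20)
S_RANGE = (0.35, 1.00)
V_RANGE = (0.60, 1.00)

def in_preset_range(r, g, b):
    """Convert an RGB pixel to HSV and test each parameter against its preset range."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return (H_RANGE[0] <= h <= H_RANGE[1]
            and S_RANGE[0] <= s <= S_RANGE[1]
            and V_RANGE[0] <= v <= V_RANGE[1])

def pixels_in_range(image_path):
    """Return the coordinates of all pixel points whose pixel value lies in the preset range."""
    img = Image.open(image_path).convert("RGB")
    width, height = img.size
    pixels = img.load()
    hits = []
    for y in range(height):
        for x in range(width):
            if in_preset_range(*pixels[x, y]):
                hits.append((x, y))
    return hits
```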
And step 130, confirming at least one closed area formed by the pixel points.
Preferably, after confirming that pixel points with pixel values within a preset pixel value range exist in the image to be recognized, at least one closed region formed by the pixel points is confirmed.
The method for confirming at least one closed region formed by the pixel points is not limited in the present invention.
For example, a rectangular coordinate system can be established with the upper left corner of the image as the coordinate origin, the horizontal direction as the horizontal axis and the vertical direction as the vertical axis, and at least one closed region formed by the pixel points can be determined as follows: starting from the origin, traverse the pixel points of each row in turn and determine whether the pixel value of the currently traversed pixel point is within the preset pixel value range. If it is, determine whether the pixel values of the pixel points in a preset area around the currently traversed pixel point are within the preset pixel value range; if so, store the currently traversed pixel point in the same list as those pixel points, and if not, store the currently traversed pixel point in a new list. If the pixel value of the currently traversed pixel point is not within the preset pixel value range, move on to the next pixel point. This continues until all pixel points in the image to be identified have been traversed, after which the pixel points in one list correspond to the pixel points of one closed area.
The preset area can be set according to the actual situation. For example, if the pixel value of the currently traversed pixel point (m, n) is within the preset pixel value range, it can be determined whether the pixel values of the other pixel points in the rectangular region with upper-left corner (m-2, n-2), upper-right corner (m, n-2), lower-left corner (m-2, n) and lower-right corner (m, n) are within the preset pixel value range.
Fig. 2 is a schematic diagram of an image to be recognized according to this embodiment. In fig. 2, a square corresponds to a pixel, wherein the square with an oblique line represents a pixel having a pixel value within a predetermined pixel value range. Then, by the above method for identifying closed regions, three closed regions can be identified, which are a closed region composed of pixel points (0, 0), (1, 0), (2, 0), (3, 0), (0, 1), (1, 1) and (2, 1), a closed region composed of pixel points (9, 0), (10, 0), (11, 0), (12, 0), (9, 1), (10, 1), (11, 1) and (12, 1), and a closed region composed of pixel points (6, 5), (7, 5) and (6, 6).
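Purely as an illustrative sketch of step 130, the pixel coordinates returned by the previous sketch can be grouped into closed regions with a breadth-first flood fill. For simplicity this sketch uses a symmetric square neighbourhood rather than the upper-left rectangular preset area of the example above, and the helper name find_closed_regions is an assumption. Applied to the in-range pixel points of fig. 2 with neighbourhood=2, it yields the same three closed regions listed above.

```python
# A minimal sketch of step 130: group in-range pixel points into closed regions.
from collections import deque

def find_closed_regions(hits, neighbourhood=2):
    """Group in-range pixel coordinates into closed regions (lists of coordinates)."""
    remaining = set(hits)
    regions = []
    while remaining:
        seed = remaining.pop()
        region = [seed]
        queue = deque([seed])
        while queue:
            x, y = queue.popleft()
            # Treat two pixel points as connected if they lie within a square window
            # around each other; the window size plays the role of the preset area.
            for dx in range(-neighbourhood, neighbourhood + 1):
                for dy in range(-neighbourhood, neighbourhood + 1):
                    p = (x + dx, y + dy)
                    if p in remaining:
                        remaining.remove(p)
                        region.append(p)
                        queue.append(p)
        regions.append(region)
    return regions
```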
And 140, generating and displaying a preset color covering layer at the covering position corresponding to the closed area.
After the closed area is confirmed, a preset color covering layer is generated and displayed at a covering position corresponding to the closed area, so that the content of the designated area can be automatically and accurately covered.
For example, the method for determining the coverage position corresponding to the closed region may be: determining the horizontal axis minimum value, the horizontal axis maximum value, the vertical axis minimum value and the vertical axis maximum value of the closed area according to the coordinates of each pixel point in the closed area, determining a rectangular area corresponding to the closed area according to the horizontal axis minimum value, the horizontal axis maximum value, the vertical axis minimum value and the vertical axis maximum value of the closed area, and determining the rectangular area as the covering position corresponding to the closed area.
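A minimal sketch of this bounding-rectangle computation, reusing the region lists produced by the previous sketch, is given below; the function name covering_rectangle and the (a, c, b, d) return order are illustrative only.

```python
# A minimal sketch of determining the covering position in step 140.
def covering_rectangle(region):
    """Return (a, c, b, d): left, top, right, bottom of the rectangle covering the region."""
    xs = [x for x, _ in region]
    ys = [y for _, y in region]
    a, b = min(xs), max(xs)   # horizontal axis minimum / maximum
    c, d = min(ys), max(ys)   # vertical axis minimum / maximum
    return a, c, b, d
```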
For example, the preset color of the cover layer may be black, red or yellow.
Fig. 3 is a schematic diagram of generating a preset color mask layer according to this embodiment. After the closed regions are determined in the image to be recognized shown in fig. 2, a rectangular region determined by the horizontal axis minimum value, the horizontal axis maximum value, the vertical axis minimum value, and the vertical axis maximum value corresponding to each closed region is determined as the covering position corresponding to the closed region, a preset color mask layer is generated and displayed on the rectangular region, and a schematic diagram after the preset color mask layer is generated is shown in fig. 3, where a black region in fig. 3 represents the preset color mask layer.
Therefore, when the user needs to check how well the covered content has been mastered, the scheme provided by this embodiment can automatically and accurately cover the content in the designated area, automatically turning the image to be recognized into an electronic quiz and thus better helping the user check their mastery of the covered content.
And 150, finishing the processing of the image to be recognized.
This embodiment determines whether pixel points with pixel values within a preset pixel value range exist in an image to be identified; if yes, confirms at least one closed area formed by the pixel points; and generates and displays a preset color covering layer at the covering position corresponding to the closed area. This solves the problem in the prior art that the auxiliary tool used for blocking cannot match the position and size of the area that needs to be blocked and therefore cannot accurately cover the content to be blocked, realizes automatic and accurate covering of the content in the designated area, and makes it convenient for the user to check their mastery of the covered content.
Example two
Fig. 4 is a flowchart of an image processing method according to a second embodiment of the present invention. The embodiment is further optimized on the basis of the embodiment. The method provided by the embodiment specifically comprises the following steps:
and step 210, acquiring an image to be identified.
Step 220, determining whether a pixel point with a pixel value within a preset pixel value range exists in the image to be recognized, if so, executing step 230, otherwise, executing step 290.
Step 230, confirming at least one closed area formed by the pixel points.
And 240, generating and displaying a preset color covering layer at the covering position corresponding to the closed area.
Preferably, the step of generating and displaying the preset color mask layer at the covering position corresponding to the closed region comprises the following steps:
and 241, acquiring one closed area of the at least one closed area as a target closed area.
And step 242, judging whether the number of the pixel points in the target closed region is not less than a first preset threshold, if so, executing step 243, and if not, executing step 247.
Preferably, after a target closed region is obtained, it is first determined whether the number of pixel points in the target closed region is not less than a first preset threshold. If it is, the target closed region is processed further; otherwise, processing of the target closed region ends. The first preset threshold thus filters out closed regions that contain too few pixel points, which prevents regions that do not need to be covered from being covered by mistake and improves the accuracy of covering.
Illustratively, the first preset threshold may be 3, 6 or 10.
And 243, determining the horizontal axis minimum value, the horizontal axis maximum value, the vertical axis minimum value and the vertical axis maximum value of the target closed region according to the coordinates of each pixel point in the target closed region.
And 244, determining a rectangular area corresponding to the target closed area according to the horizontal axis minimum value, the horizontal axis maximum value, the vertical axis minimum value and the vertical axis maximum value of the target closed area.
After it is confirmed that the number of pixel points in the target closed region is not less than the first preset threshold, the horizontal axis minimum value a, the horizontal axis maximum value b, the vertical axis minimum value c and the vertical axis maximum value d of the target closed region can be determined according to the coordinates of the pixel points in the target closed region, and the rectangular region corresponding to the target closed region can then be determined from these values. For example, the rectangular area with upper-left corner (a, c), upper-right corner (b, c), lower-left corner (a, d) and lower-right corner (b, d) may be determined as the rectangular area corresponding to the target closed area.
Step 245, judging whether the ratio of the number of pixel points in the target closed region to the number of pixel points in the rectangular region is greater than a second preset threshold, if so, executing step 246, and if not, executing step 247.
And step 246, determining the rectangular area as a covering position corresponding to the target closed area.
And after determining a rectangular area corresponding to the target closed area, further determining whether the ratio of the number of the pixel points in the target closed area to the number of the pixel points in the rectangular area is greater than a second preset threshold, if so, determining the rectangular area as a covering position corresponding to the target closed area, and otherwise, finishing the processing of the target closed area.
And step 247, judging whether all the closed areas have been processed, if so, executing step 248, and if not, executing step 241.
And step 248, generating and displaying a preset color mask layer in the rectangular area.
After the covering position corresponding to the closed area is determined, a preset color covering layer is generated and displayed in the rectangular area determined as the covering position corresponding to the closed area, and therefore the content of the designated area is covered.
Fig. 5 is a schematic diagram of an image to be recognized according to this embodiment. In fig. 5, each square corresponds to one pixel point, and the squares with oblique lines represent pixel points whose pixel values are within the preset pixel value range. The confirmed closed regions in fig. 5 comprise four closed regions: a first closed region composed of pixel points (0, 0), (1, 0), (2, 0), (3, 0), (0, 1), (1, 1) and (2, 1); a second closed region composed of pixel points (12, 2), (13, 3), (14, 4) and (15, 5); a third closed region composed of pixel points (6, 5), (7, 5) and (6, 6); and a fourth closed region composed of pixel points (2, 10), (3, 10), (4, 10), (2, 11) and (3, 11). Taking a first preset threshold of 4 and a second preset threshold of 75% as an example: the number of pixel points in the first closed region is 7, which is not less than 4, and the ratio of the number of pixel points in the first closed region to the number of pixel points in its corresponding rectangular region is 87.5%, which is greater than 75%, so the rectangular region corresponding to the first closed region is determined as the covering position corresponding to the first closed region; the number of pixel points in the second closed region is 4, which is not less than 4, but the ratio of the number of pixel points in the second closed region to the number of pixel points in its corresponding rectangular region is 25%, which is not greater than 75%, so no covering position is generated for it; the number of pixel points in the third closed region is 3, which is less than 4, so it is filtered out; the number of pixel points in the fourth closed region is 5, which is not less than 4, and the ratio of the number of pixel points in the fourth closed region to the number of pixel points in its corresponding rectangular region is 83.3%, which is greater than 75%, so the rectangular region corresponding to the fourth closed region is determined as the covering position corresponding to the fourth closed region.
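For illustration, the filtering of steps 242 through 246 can be sketched as follows, reusing covering_rectangle from the earlier sketch; the threshold values 4 and 0.75 mirror the worked example above and are otherwise arbitrary. Applied to the four closed regions of fig. 5, this keeps the rectangles of the first and fourth closed regions and discards the second and third, matching the analysis above.

```python
# A minimal sketch of steps 242-246: filter closed regions before covering them.
def covering_positions(regions, first_threshold=4, second_threshold=0.75):
    """Keep only regions that are large enough and fill their rectangle densely enough."""
    positions = []
    for region in regions:
        if len(region) < first_threshold:                   # step 242
            continue
        a, c, b, d = covering_rectangle(region)             # steps 243-244
        rect_pixels = (b - a + 1) * (d - c + 1)
        if len(region) / rect_pixels <= second_threshold:   # step 245
            continue
        positions.append((a, c, b, d))                      # step 246
    return positions
```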
Fig. 6 is a schematic diagram of generating a mask layer of a preset color according to this embodiment. As shown in fig. 6, on the basis of confirming the covering position in fig. 5, a preset color mask layer is generated and displayed on a rectangular region corresponding to the first closed region and a rectangular region corresponding to the fourth closed region, and the preset color mask layer is represented by a black region in fig. 6.
Step 250, detecting whether a first preset touch operation occurs on the preset color mask layer, if so, executing step 260.
And step 260, hiding the preset color cover layer and displaying the original content corresponding to the covering position.
After the preset color masking layer is generated and displayed in the rectangular area, when the first preset touch operation is detected to occur on the preset color masking layer, the preset color masking layer is hidden and original content corresponding to the covering position is displayed, so that a user can conveniently check the covered content.
Illustratively, the first preset touch operation may be a single-click operation, a double-click operation or a slide operation.
Step 270, detecting whether a second preset touch operation occurs on the preset color mask layer, and if so, executing step 280.
And step 280, displaying a preset color mask layer.
When a second preset touch operation is detected to occur on the hidden preset color mask layer, the preset color mask layer can be displayed, and the content is covered again.
The second predetermined touch operation may be the same as the first predetermined touch operation or may be different from the first predetermined touch operation.
For example, the second preset touch operation may be a single-click operation, a double-click operation or a slide operation.
Fig. 7 is a schematic diagram of generating a mask layer of a preset color according to this embodiment. As shown in fig. 7, when a first preset touch operation is performed on a preset color mask layer on a rectangular region corresponding to the first closed region, the preset color mask layer is hidden and the original content corresponding to the rectangular region is displayed, and when a second preset touch operation is performed on the preset color mask layer again, the preset color mask layer is displayed again, as shown in fig. 6.
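Purely as a sketch of the toggle behaviour of steps 250 through 280, independent of any particular UI framework, the mask can keep a visibility flag that the touch handlers flip; the class and handler names below are hypothetical and the touch-event wiring is omitted.

```python
# A minimal sketch of the show/hide toggle in steps 250-280.
class PresetColorMask:
    """Hypothetical mask object holding a covering rectangle and a visibility flag."""

    def __init__(self, rectangle):
        self.rectangle = rectangle   # (a, c, b, d) covering position
        self.visible = True          # the mask is shown when first generated

    def on_first_preset_touch(self):
        # Step 260: hide the mask so the original content at the covering position shows.
        self.visible = False

    def on_second_preset_touch(self):
        # Step 280: display the mask again so the content is covered once more.
        self.visible = True
```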
For example, when a third preset touch operation is detected on the preset color mask layer, a corresponding editing area may pop up for the user to input information related to the covered content. For instance, when the covered content is a knowledge point to be recited, the user may input the corresponding content in the editing area and then compare it with the covered content, which makes it convenient for the user to check their mastery of the covered content.
The third preset touch operation is different from the first preset touch operation and the second preset touch operation.
And step 290, ending the processing of the image to be recognized.
In this embodiment, it is confirmed that the number of pixel points in the closed region is not less than a first preset threshold and that the ratio of the number of pixel points in the closed region to the number of pixel points in the rectangular region is greater than a second preset threshold; this filters the closed regions, avoids mistakenly covering content in the image to be recognized, and improves the accuracy of covering. If a first preset touch operation on the preset color mask layer is detected, the preset color mask layer is hidden and the original content corresponding to the covering position is displayed, which makes it convenient for the user to check the covered content. After the preset color mask layer is hidden and the original content corresponding to the covering position is displayed, if a second preset touch operation on the preset color mask layer is detected, the preset color mask layer is displayed and the content is covered again.
EXAMPLE III
Fig. 8 is a block diagram of an image processing apparatus according to a third embodiment of the present invention. The device is suitable for the condition that the display content of the image needs to be covered, and can be realized by software and/or hardware. The device includes: a pixel confirmation module 310, a closed region confirmation module 320, and a mask generation module 330, wherein,
the pixel point confirming module 310 is configured to determine whether a pixel point with a pixel value within a preset pixel value range exists in the image to be identified;
the closed region confirming module 320 is configured to confirm at least one closed region formed by pixel points if it is determined that the pixel points with the pixel values within the preset pixel value range exist in the image to be recognized;
and the cover layer generating module 330 is configured to generate and display a preset color cover layer at a covering position corresponding to the closed region.
In the foregoing scheme, optionally, the cover layer generating module includes:
the closed region determining unit is used for determining the minimum value of the horizontal axis, the maximum value of the horizontal axis, the minimum value of the vertical axis and the maximum value of the vertical axis of the closed region according to the coordinates of all pixel points in the closed region;
the rectangular area determining unit is used for determining a rectangular area corresponding to the closed area according to the transverse axis minimum value, the transverse axis maximum value, the longitudinal axis minimum value and the longitudinal axis maximum value of the closed area;
and the covering layer generating unit is used for determining the rectangular area as a covering position corresponding to the closed area, and generating and displaying a preset color covering layer in the rectangular area.
In the foregoing scheme, optionally, the cover layer generating module further includes:
and the number confirmation unit is used for confirming that the number of the pixel points in the closed region is not less than a first preset threshold value.
In the foregoing scheme, optionally, the cover layer generating module further includes:
and the proportion confirming unit is used for confirming that the proportion of the number of the pixel points in the closed region to the number of the pixel points in the rectangular region is greater than a second preset threshold value.
In the foregoing scheme, optionally, the method further includes:
and the covering layer hiding module is used for hiding the preset color covering layer and displaying the original content corresponding to the covering position if the first preset touch operation on the preset color covering layer is detected.
In the foregoing scheme, optionally, the method further includes:
and the covering layer display module is used for displaying the preset color covering layer if detecting that a second preset touch operation occurs on the preset color covering layer.
In this embodiment, the pixel point confirmation module determines whether pixel points with pixel values within a preset pixel value range exist in the image to be identified; the closed region confirmation module confirms at least one closed region formed by the pixel points if such pixel points exist; and the covering layer generation module generates and displays a preset color covering layer at the covering position corresponding to the closed area. This solves the problem in the prior art that the auxiliary tool used for blocking cannot match the position and size of the area that needs to be blocked and therefore cannot accurately cover the content to be blocked, realizes automatic and accurate covering of the content in the designated area, and makes it convenient for the user to check their mastery of the covered content.
Example four
Fig. 9 is a schematic structural diagram of an electronic apparatus according to a fourth embodiment of the present invention. As shown in fig. 9, the electronic apparatus includes a processor 410, a memory 420, an input device 430 and an output device 440; the number of processors 410 in the electronic device may be one or more, and one processor 410 is taken as an example in fig. 9; the processor 410, the memory 420, the input device 430 and the output device 440 in the electronic apparatus may be connected by a bus or in other ways, and connection by a bus is taken as an example in fig. 9.
The memory 420 serves as a computer-readable storage medium, and may be used for storing software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the method of image processing in any embodiment of the present invention (e.g., the pixel point confirming module 310, the closed region confirming module 320, and the mask generating module 330 in the image processing apparatus). The processor 410 executes various functional applications and data processing of the electronic device by executing software programs, instructions and modules stored in the memory 420, that is, implements the operations for the electronic device described above.
The memory 420 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created according to the use of the electronic device, and the like. Further, the memory 420 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the memory 420 may further include memory located remotely from the processor 410, which may be connected to the electronic device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 430 may be used to receive input image information and generate key signal inputs related to user settings and function control of the electronic apparatus. The output device 440 may include a display device such as a display screen.
EXAMPLE five
The fifth embodiment of the present invention further provides a storage medium containing computer-executable instructions, on which a computer program is stored, where the computer program, when executed by a processor, implements the method for image processing provided in any embodiment of the present invention.
From the above description of the embodiments, it is obvious for those skilled in the art that the present invention can be implemented by software and necessary general hardware, and certainly, can also be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the method according to any embodiment of the present invention.
It should be noted that, in the embodiment of the image processing apparatus, the included units and modules are only divided according to functional logic, but are not limited to the above division as long as the corresponding functions can be realized; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
The device can execute the method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects for executing the method. For technical details not described in detail in this embodiment, reference may be made to the method provided in any embodiment of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (12)

1. A method of image processing, comprising:
determining whether a pixel point with a pixel value within a preset pixel value range exists in an image to be identified, wherein the image to be identified has content to be covered, and the background of the content is marked as a color within the preset pixel value range;
if yes, confirming at least one closed area formed by the pixel points;
determining the minimum value of the horizontal axis, the maximum value of the horizontal axis, the minimum value of the vertical axis and the maximum value of the vertical axis of the closed area according to the coordinates of each pixel point in the closed area;
determining a rectangular area corresponding to the closed area according to the horizontal axis minimum value, the horizontal axis maximum value, the vertical axis minimum value and the vertical axis maximum value of the closed area;
and determining the rectangular area as a covering position corresponding to the closed area, and generating and displaying a preset color covering layer in the rectangular area.
2. The method according to claim 1, wherein before determining the horizontal axis minimum value, the horizontal axis maximum value, the vertical axis minimum value and the vertical axis maximum value of the closed region according to the coordinates of each pixel point in the closed region, the method further comprises:
and confirming that the number of the pixel points in the closed region is not less than a first preset threshold value.
3. The method according to claim 2, wherein before determining the rectangular area as the covering position corresponding to the closed area and generating and displaying a preset color mask layer in the rectangular area, the method further comprises:
and determining that the ratio of the number of the pixel points in the closed area to the number of the pixel points in the rectangular area is greater than a second preset threshold value.
4. The method according to claim 1, wherein after generating and displaying a preset color mask at a corresponding covering position of the closed area, the method further comprises:
if the first preset touch operation is detected to occur on the preset color mask layer, hiding the preset color mask layer and displaying the original content corresponding to the covering position.
5. The method of claim 4, wherein after hiding the pre-set color mask and displaying the original content corresponding to the overlay position, further comprising:
and if the second preset touch operation is detected to occur on the preset color mask layer, displaying the preset color mask layer.
6. An apparatus for image processing, comprising:
the device comprises a pixel point confirming module, a color matching module and a color matching module, wherein the pixel point confirming module is used for determining whether pixel points with pixel values within a preset pixel value range exist in an image to be identified, the image to be identified has content to be covered, and the background of the content is marked as a color within the preset pixel value range;
the closed region confirmation module is used for confirming at least one closed region formed by pixel points if the pixel points with the pixel values within the range of the preset pixel values exist in the image to be recognized;
the covering layer generating module is used for generating and displaying a covering layer with a preset color at a covering position corresponding to the closed area;
the cover layer generation module comprises:
the closed region determining unit is used for determining the minimum value of the horizontal axis, the maximum value of the horizontal axis, the minimum value of the vertical axis and the maximum value of the vertical axis of the closed region according to the coordinates of all pixel points in the closed region;
the rectangular area determining unit is used for determining a rectangular area corresponding to the closed area according to the transverse axis minimum value, the transverse axis maximum value, the longitudinal axis minimum value and the longitudinal axis maximum value of the closed area;
and the covering layer generating unit is used for determining the rectangular area as a covering position corresponding to the closed area, and generating and displaying a preset color covering layer in the rectangular area.
7. The apparatus of claim 6, wherein the skin generation module further comprises:
and the number confirmation unit is used for confirming that the number of the pixel points in the closed region is not less than a first preset threshold value.
8. The apparatus of claim 7, wherein the skin generation module further comprises:
and the proportion confirming unit is used for confirming that the proportion of the number of the pixel points in the closed region to the number of the pixel points in the rectangular region is greater than a second preset threshold value.
9. The apparatus of claim 6, further comprising:
and the covering layer hiding module is used for hiding the preset color covering layer and displaying the original content corresponding to the covering position if the first preset touch operation on the preset color covering layer is detected.
10. The apparatus of claim 9, further comprising:
and the covering layer display module is used for displaying the preset color covering layer if detecting that a second preset touch operation occurs on the preset color covering layer.
11. An electronic device, characterized in that the device comprises:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement a method of image processing according to any one of claims 1-5.
12. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method of image processing according to any one of claims 1 to 5.
CN201710555993.2A 2017-07-10 2017-07-10 Image processing method, device, equipment and storage medium Active CN107358228B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710555993.2A CN107358228B (en) 2017-07-10 2017-07-10 Image processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710555993.2A CN107358228B (en) 2017-07-10 2017-07-10 Image processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN107358228A CN107358228A (en) 2017-11-17
CN107358228B true CN107358228B (en) 2021-06-15

Family

ID=60292736

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710555993.2A Active CN107358228B (en) 2017-07-10 2017-07-10 Image processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN107358228B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109872275B (en) * 2017-12-04 2023-05-23 北京金山安全软件有限公司 Image processing method, device, electronic equipment and computer readable storage medium
CN112651056A (en) * 2019-10-11 2021-04-13 中国信息通信研究院 Anti-screenshot display method, device and system
CN111161374A (en) * 2019-12-17 2020-05-15 稿定(厦门)科技有限公司 Method and device for circle point drawing
KR102159048B1 (en) * 2019-12-26 2020-09-23 주식회사 폴라리스쓰리디 Method for generating scan path of autonomous mobile robot and computing device for executing the method
CN112036810A (en) * 2020-08-11 2020-12-04 广州番禺电缆集团有限公司 Cable monitoring method, device, equipment and storage medium based on intelligent equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016179310A1 (en) * 2015-05-04 2016-11-10 Smith Andrew Dennis Computer-assisted tumor response assessment and evaluation of the vascular tumor burden
US9679187B2 (en) * 2015-06-17 2017-06-13 Apple Inc. Finger biometric sensor assembly including direct bonding interface and related methods

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004265214A (en) * 2003-03-03 2004-09-24 Seiko Epson Corp Charging managing device and method and program used for the same
CN103973891B (en) * 2014-05-09 2016-06-01 平安付智能技术有限公司 For the data safety processing method of software interface
CN104077792A (en) * 2014-07-04 2014-10-01 厦门美图网科技有限公司 Image processing method with cartoon effect
CN106372126A (en) * 2016-08-24 2017-02-01 广东小天才科技有限公司 Photography-based question search method and apparatus

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016179310A1 (en) * 2015-05-04 2016-11-10 Smith Andrew Dennis Computer-assisted tumor response assessment and evaluation of the vascular tumor burden
US9679187B2 (en) * 2015-06-17 2017-06-13 Apple Inc. Finger biometric sensor assembly including direct bonding interface and related methods

Also Published As

Publication number Publication date
CN107358228A (en) 2017-11-17

Similar Documents

Publication Publication Date Title
CN107358228B (en) Image processing method, device, equipment and storage medium
KR102595704B1 (en) Image detection method, device, electronic device, storage medium, and program
JP6250901B2 (en) A robot system in which a CNC and a robot controller are connected via a communication network
CN103927719A (en) Picture processing method and device
CN110443212B (en) Positive sample acquisition method, device, equipment and storage medium for target detection
WO2021017272A1 (en) Pathology image annotation method and device, computer apparatus, and storage medium
WO2015074521A1 (en) Devices and methods for positioning based on image detection
EP0558054A2 (en) Image filing method
CN103927718A (en) Picture processing method and device
CN111275645A (en) Image defogging method, device and equipment based on artificial intelligence and storage medium
WO2019011342A1 (en) Cloth identification method and device, electronic device and storage medium
JP2018066943A (en) Land category change interpretation support device, land category change interpretation support method, and program
JP2023181346A (en) Programming device and program
CN110867243B (en) Image annotation method, device, computer system and readable storage medium
CN107481227B (en) Teaching blackboard writing image processing method and device, intelligent teaching equipment and storage medium
CN104995591A (en) Image processing device and program
US7120296B2 (en) Information processing method
CN105701784A (en) Image processing method capable of real time visualization
CN116168345B (en) Fire detection method and related equipment
CN112698904A (en) Interface rendering method, system, equipment and computer readable storage medium
CN110737417A (en) demonstration equipment and display control method and device of marking line thereof
CN105825161A (en) Image skin color detection method and system thereof
CN112613452B (en) Personnel line-crossing identification method, device, equipment and storage medium
US9678990B2 (en) Construction drawing evaluation systems and methods
CN109766530A (en) Generation method, device, storage medium and the electronic equipment of chart frame

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant