CN113496454A - Image processing method and device, computer readable medium and electronic equipment
- Publication number: CN113496454A (application CN202010193020.0A)
- Authority: CN (China)
- Legal status: Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
Abstract
Embodiments of the present application provide an image processing method and apparatus, a computer-readable medium, and an electronic device. The image processing method includes: acquiring an image to be processed, wherein the image to be processed comprises at least two image layers; identifying a target layer from the at least two layers according to position information of a target area on the image to be processed and element position information on the at least two layers, wherein at least part of the elements on the target layer are located in the target area; and splitting the image to be processed according to the target layer. The technical solution of the embodiments increases the processing speed of the image layers and thereby further improves image processing efficiency.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, a computer-readable medium, and an electronic device.
Background
An image may be generated by stacking a plurality of layers, and each layer may include elements such as text or graphics. In current technical schemes, a user must manually select and configure the layers in an image to obtain the required image; the operation is complex, and image processing efficiency is therefore low. How to increase the processing speed of image layers, and thereby ensure image processing efficiency, has become an urgent technical problem to be solved.
Disclosure of Invention
Embodiments of the present application provide an image processing method and apparatus, a computer-readable medium, and an electronic device, which can increase the processing speed of image layers at least to some extent and thereby ensure image processing efficiency.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned by practice of the application.
According to an aspect of an embodiment of the present application, there is provided an image processing method, including:
acquiring an image to be processed, wherein the image to be processed comprises at least two image layers;
identifying a target layer from the at least two layers according to the position information of a target area on the image to be processed and the element position information on the at least two layers, wherein at least part of elements on the target layer are located in the target area;
and splitting the image to be processed according to the target image layer.
According to an aspect of an embodiment of the present application, there is provided an image processing apparatus including:
the acquisition module is used for acquiring an image to be processed, wherein the image to be processed comprises at least two image layers;
the determining module is used for identifying a target layer from the at least two layers according to the position information of a target area on the image to be processed and the element position information on the at least two layers, wherein at least part of elements on the target layer are located in the target area;
and the processing module is used for splitting the image to be processed according to the target image layer.
Based on the foregoing, in some embodiments of the present application, the determining module is configured to: acquiring element coordinate information on the at least two image layers; and matching the element coordinate information with the coordinate information of the target area, and identifying a layer whose element coordinate information matches the coordinate information of the target area as the target layer.
Based on the foregoing, in some embodiments of the present application, the determining module is configured to: determining an element area range corresponding to each layer based on element coordinate information on each layer; determining an area range corresponding to the target area based on the coordinate information of the target area; and identifying the layer of which the element area range is within the area range corresponding to the target area as the target layer.
Based on the foregoing, in some embodiments of the present application, the determining module is configured to: determining, according to the element position information on the at least two layers, a specified layer which is the uppermost of the layers having at least part of their elements located in the target area, and acquiring element coordinate information on the specified layer; and traversing the layers below the specified layer based on the specified layer, and acquiring element coordinate information on the layers below the specified layer.
Based on the foregoing, in some embodiments of the present application, the processing module is configured to: and splitting the layer of the image to be processed according to the target layer to obtain a target layer set and a non-target layer set except the target layer.
Based on the foregoing solutions, in some embodiments of the present application, the processing module is further configured to: and generating and displaying corresponding preview images according to the target layer set and the non-target layer set respectively.
Based on the foregoing solutions, in some embodiments of the present application, the processing module is further configured to: combining layers included in the target layer set, and hiding layers included in the non-target layer set, to generate a first picture; or combining the layers included in the non-target layer set, and hiding the layers included in the target layer set, to generate a second picture; or combining the layers included in the target layer set to generate a third picture, and combining the layers included in the non-target layer set to generate a fourth picture.
Based on the foregoing solutions, in some embodiments of the present application, the processing module is further configured to: and generating a fifth picture according to the elements positioned in the target area on the target layer.
Based on the foregoing solutions, in some embodiments of the present application, the processing module is further configured to: acquiring a layer set to be combined; acquiring identification information corresponding to a target combined layer according to the request information for layer combination; acquiring the target combination layer from the layer set to be combined based on the identification information corresponding to the target combination layer; and combining the target combination layers to obtain a sixth picture.
Based on the foregoing solutions, in some embodiments of the present application, the processing module is further configured to: obtaining a target identification information set according to the identification information corresponding to the target combination layer; and matching the identification information of the layer to be combined in the layer set to be combined with the identification information in the target identification information set, and determining the layer to be combined with the identification information matched with the identification information in the target identification information set as the target combination layer.
Based on the foregoing solutions, in some embodiments of the present application, the processing module is further configured to: and modifying the display state of the layer to be combined except the target combined layer in the layer set to be combined into a hidden state.
Based on the foregoing solutions, in some embodiments of the present application, the processing module is further configured to: displaying a target area editing interface according to the request information for splitting the image to be processed; and determining the target area in the image to be processed according to the target area information detected on the target area editing interface.
Based on the foregoing solutions, in some embodiments of the present application, the processing module is further configured to: acquiring a file to be edited; and analyzing the file to be edited to obtain the image to be processed.
According to an aspect of embodiments of the present application, there is provided a computer-readable medium on which a computer program is stored, which, when executed by a processor, implements a method of processing an image as described in the above embodiments.
According to an aspect of an embodiment of the present application, there is provided an electronic device including: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of processing images as described in the above embodiments.
In the technical solutions provided in some embodiments of the present application, an image to be processed that includes at least two image layers is obtained. A target layer is identified from the at least two layers according to position information of a target area on the image to be processed and element position information on the at least two layers, where at least part of the elements on the target layer are located in the target area, and the image to be processed is then automatically split according to the target layer. Consequently, specific layers in the image do not need to be selected manually for splitting, which reduces the amount and difficulty of manual operation, increases the processing speed of the image layers, and further improves image processing efficiency.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
In the drawings:
FIG. 1 shows a schematic diagram of an exemplary system architecture to which aspects of embodiments of the present application may be applied;
FIG. 2 shows a flow diagram of a method of processing an image according to an embodiment of the present application;
FIG. 3 shows a flowchart of step S220 of the method of processing the image of FIG. 2 according to one embodiment of the present application;
FIG. 4 shows a flowchart of step S320 of the method of processing the image of FIG. 3 according to one embodiment of the present application;
FIG. 5 shows a flowchart of step S310 in the method of processing the image of FIG. 3 according to one embodiment of the present application;
FIG. 6 is a schematic flowchart illustrating layer combination further included in the image processing method according to an embodiment of the present application;
FIG. 7 shows a flowchart of step S630 of the method of processing the image of FIG. 6 according to one embodiment of the present application;
FIG. 8 is a schematic flow diagram illustrating the acquisition of a target region further included in the method of processing the image of FIG. 2 according to one embodiment of the present application;
FIG. 9 illustrates a schematic flow chart for acquiring an image to be processed, further included in the image processing method of FIG. 2, according to an embodiment of the present application;
FIG. 10 shows a flow diagram of a method of processing an image according to an embodiment of the present application;
FIGS. 11 and 12 are schematic diagrams illustrating a specific application scenario of a method for processing an image according to an embodiment of the present application;
FIG. 13 shows a block diagram of an apparatus for processing an image according to an embodiment of the present application;
FIG. 14 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the application. One skilled in the relevant art will recognize, however, that the subject matter of the present application can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the application.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Fig. 1 shows a schematic diagram of an exemplary system architecture to which the technical solution of the embodiments of the present application can be applied.
As shown in fig. 1, the system architecture may include a terminal device (e.g., one or more of a smartphone 101, a tablet computer 102, and a portable computer 103 shown in fig. 1, but may also be a desktop computer, etc.), a network 104, and a server 105. The network 104 serves as a medium for providing communication links between terminal devices and the server 105. Network 104 may include various connection types, such as wired communication links, wireless communication links, and so forth.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, server 105 may be a server cluster comprised of multiple servers, or the like.
A user may use a terminal device to interact with the server 105 over the network 104 to receive or transmit information or the like. The server 105 may be a server that provides various services. A user may obtain, through terminal device 103 (or terminal device 101 or 102), an image to be processed stored in server 105, where the image to be processed includes at least two layers, and identify a target layer from the at least two layers according to position information of a target area on the image to be processed and element position information on the at least two layers, where at least part of elements on the target layer are located in the target area, and split the image to be processed according to the target layer.
It should be noted that the image processing method provided in the embodiments of the present application is generally executed by a terminal device, and accordingly, the image processing apparatus is generally disposed in the terminal device. However, in other embodiments of the present application, the server 105 may also have functions similar to those of the terminal device, so as to execute the image processing method provided in the embodiments of the present application.
The implementation details of the technical solution of the embodiment of the present application are set forth in detail below:
fig. 2 shows a flow diagram of a method of processing an image according to an embodiment of the application. Referring to fig. 2, the image processing method at least includes steps S210 to S230, which are described in detail as follows:
in step S210, an image to be processed is obtained, where the image to be processed includes at least two image layers.
The image to be processed may be an image including multiple image layers, each image layer may have different elements such as characters or graphics, and the multiple image layers are stacked to obtain the image to be processed.
In an embodiment of the present application, the image to be processed may be stored in a database designated by the terminal device or the server in advance, for example, a database of images to be processed, and the like. The user can acquire the image to be processed by browsing data stored in a designated database of the terminal device or the server.
In another embodiment of the present application, the image to be processed may also be uploaded to a terminal device or a server by a user during processing. Specifically, a user may display a file uploading interface by triggering a specific area on the interface (for example, clicking an "upload" button on the interface, etc.), and the user may determine a to-be-processed image to be uploaded through the file uploading interface, for example, paste the to-be-processed image file into the file uploading interface, or browse a storage space of a terminal device or a server through the file uploading interface, so as to select the to-be-processed image file to be uploaded for uploading, and so on.
It should be noted that, the method for acquiring the to-be-processed image in the present application is not limited to the above-mentioned acquisition method, and other acquisition methods may be used according to actual implementation requirements, and the present application is not limited to this.
In step S220, according to the position information of the target area on the image to be processed and the element position information on the at least two image layers, a target image layer is identified from the at least two image layers, where at least part of elements on the target image layer are located in the target area.
The target area may be an area that needs to be processed on the image to be processed. It should be understood that the target area may be located at any position on the image to be processed, such as an upper position, a lower position, or a middle position of the image to be processed. Note that, in an example, the target area may be an area having a regular shape, such as a rectangle, a circle, a square, or the like; in another example, the target area may also be an area having an irregular shape, such as a polygon, etc., which is not particularly limited in this application.
The element position information may be information indicating a position of the element with respect to the image to be processed. For example, the element position information may be coordinate information or orientation information or the like with respect to the image to be processed. It should be understood that the element position information should be determined relative to the image to be processed to facilitate subsequent processing.
In an embodiment of the present application, the target area on the image to be processed may be determined manually according to actual usage requirements, for example, a user may define a certain area in the image to be processed as the target area, or may determine the target area by selecting a predetermined position, for example, selecting an upper half, a lower half, a left half, or a right half of the image to be processed as the target area, and so on.
In step S220, according to the position information of the target area on the image to be processed and the position information of the element on the layer included in the image to be processed, the relative relationship between the position of the element and the target area, such as whether there is an overlap between the two, whether the element is located in the target area, or whether the element is located outside the target area, may be determined. Thus, a target layer with at least part of elements located in the target area can be identified from layers included in the image to be processed.
It should be noted that at least part of the elements on the target layer are located in the target area, and may be that part of the elements are located in the target area, or that all the elements are located in the target area. In an example of the present application, a layer in which all elements on the layer are located in a target area may be used as a target layer; in another example of the present application, a layer in which part of elements on the layer are located in a target area may also be used as a target layer, and a person skilled in the art may configure the target layer according to actual implementation needs, which is not particularly limited in the present application.
In step S230, the image to be processed is split according to the target layer.
The splitting may refer to classifying the layers of the image to be processed, so as to divide the layers included in the image to be processed into different categories.
In this embodiment, according to the determined target layer, the layers included in the image to be processed may be divided, for example, the layers may be divided into a target layer and a non-target layer. Specifically, matching may be performed according to identification information (for example, a layer name or a layer number) of a target layer and identification information of a layer included in an image to be processed, and determining a layer with the same identification information as the target layer and determining a layer with different identification information as a non-target layer.
In the embodiment shown in fig. 2, an image to be processed that includes at least two layers is obtained, and a target layer on which at least part of the elements are located in a target area is identified according to position information of the target area on the image to be processed and element position information on the at least two layers, so that the image to be processed is split according to the target layer. A user can therefore define a target area in the image to be processed according to actual needs to obtain the target layer and split the image automatically. The operation is intuitive and convenient, the layers of the image to be processed do not need to be selected manually, the amount and difficulty of manual operation are reduced, the processing speed of the image layers is increased, and image processing efficiency is further improved.
Based on the embodiment shown in fig. 2, fig. 3 shows a flowchart of step S220 in the image processing method of fig. 2 according to an embodiment of the present application. Referring to fig. 3, the step S220 at least includes steps S310 to S320, which are described in detail as follows:
in step S310, coordinate information of elements on the at least two layers is obtained.
The element coordinate information may be coordinate information indicating the position of an element on the image to be processed. In one example, the element coordinate information may be the coordinates of the elements that the layer has on the image to be processed, such as the coordinates of each point in an element. In another example, the element coordinate information may be the extreme point coordinates of the elements in the layer, such as the coordinates of the points with the minimum and maximum abscissas and of the points with the minimum and maximum ordinates; from these extreme coordinates, the position and coverage of the elements in the image to be processed can be determined. It is therefore understood that the element coordinate information may be any form of coordinate data that can indicate the position and coverage of elements on the image to be processed, and those skilled in the art can select a corresponding form according to actual implementation needs, which is not limited in this application.
In this embodiment, element coordinate information of each element may be included in each layer, and the element coordinate information may be associated with each layer in the form of data. In an actual use process, the element coordinate information associated with each layer may be obtained according to the identification information (e.g., a layer name or a layer number) of the layer.
In step S320, the element coordinate information is matched with the coordinate information of the target area, and a layer where the element coordinate information is matched with the coordinate information of the target area is identified as the target layer.
In this embodiment, the obtained element coordinate information may be matched with the coordinate information of the target area to determine whether an element is located in the target area. If the element is located in the target area, its element coordinate information is considered to match the coordinate information of the target area, and a layer whose element coordinate information matches the coordinate information of the target area is determined to be the target layer.
In the embodiment shown in fig. 3, according to the matching between the element coordinate information and the coordinate information of the target area, whether the element is located in the target area can be accurately determined, so as to identify the target layer from the layers included in the image to be processed, and ensure the accuracy of the identification of the target layer.
Based on the embodiments shown in fig. 2 and fig. 3, fig. 4 shows a flowchart of step S320 in the image processing method of fig. 3 according to an embodiment of the present application. Referring to fig. 4, the step S320 at least includes steps S410 to S430, which are described in detail as follows:
in step S410, an element area range corresponding to each layer is determined based on the element coordinate information on each layer.
In this embodiment, according to the obtained element coordinate information on a layer, the element area range covered by all elements of the layer in the image to be processed may be determined. Specifically, the minimum abscissa and the minimum ordinate in the element coordinate information may be combined to obtain the start coordinate of the element area range corresponding to the layer; for example, if the minimum abscissa in the element coordinate information is 4 and the minimum ordinate is 8, the start coordinate of the layer is (4, 8). The width of the element area range corresponding to the layer is obtained by subtracting the minimum abscissa from the maximum abscissa in the element coordinate information, and the height is obtained by subtracting the minimum ordinate from the maximum ordinate. The element area range corresponding to the layer can thus be obtained from the start coordinate, the width, and the height.
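As a minimal illustrative sketch (not part of the claimed embodiments), the start coordinate, width, and height described above can be computed from a layer's element coordinates as follows; representing the element coordinate information as a list of (x, y) points is an assumption chosen for this example:

```python
# Illustrative sketch: compute the element area range (start coordinate,
# width, height) of a layer from its element coordinate information.
# Representing the coordinates as a list of (x, y) points is an
# assumption made for this example.

def element_area_range(points):
    """Return (start_x, start_y, width, height) covering all points."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    start_x, start_y = min(xs), min(ys)   # minimum abscissa and ordinate
    width = max(xs) - start_x             # maximum abscissa - minimum abscissa
    height = max(ys) - start_y            # maximum ordinate - minimum ordinate
    return start_x, start_y, width, height

# Elements spanning abscissas 4..10 and ordinates 8..20:
print(element_area_range([(4, 8), (10, 12), (6, 20)]))  # (4, 8, 6, 12)
```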
In step S420, an area range corresponding to the target area is determined based on the coordinate information of the target area.
In this embodiment, according to the coordinate information of the target area, the start coordinate, the width, and the height of the target area may be obtained to obtain the area range corresponding to the target area. The calculation method of the start coordinate, the width and the height of the target area is as described above, and is not described herein again.
In step S430, a layer whose element area range is within the area range corresponding to the target area is identified as the target layer.
In this embodiment, according to the obtained element area range corresponding to the layer and the area range corresponding to the target area, the element area range corresponding to each layer is compared with the area range corresponding to the target area, and if the element area range corresponding to the layer is within the area range corresponding to the target area, the layer is determined to be the target layer.
Specifically, the start coordinates, widths, and heights of the two ranges may be compared to determine whether the element area range corresponding to the layer is within the area range corresponding to the target area. Let the start coordinate of the element area range corresponding to the layer be (x1, y1), with width w1 and height h1, and let the start coordinate of the area range corresponding to the target area be (x0, y0), with width w0 and height h0. If x0 < x1, y0 < y1, x0 + w0 > x1 + w1, and y0 + h0 > y1 + h1, the element area range corresponding to the layer is within the area range corresponding to the target area, and the layer may be determined to be the target layer.
In other embodiments of the present application, it may also be determined whether the element area range corresponding to the layer is matched with the area range of the target area according to a comparison between the maximum value in the element coordinate information of the layer and the maximum value in the coordinate information of the target area.
It can be understood that, for a layer whose element area range is within the area range corresponding to the target area, either the element area range of a certain element on the layer or the element area ranges of all elements on the layer may be within the area range corresponding to the target area. This makes it convenient for the user to obtain the elements contained in the target area when the image to be processed is subsequently split. Those skilled in the art can configure this according to actual implementation requirements, which is not limited in this application.
In the embodiment shown in fig. 4, according to the element coordinate information on the layer and the coordinate information of the target area, an element area range corresponding to the layer and an area range corresponding to the target area are respectively determined, and the element area range corresponding to each layer is compared with the area range corresponding to the target area, so as to identify the target layer from which the element area range is within the area range corresponding to the target area. By comparing the element area range corresponding to the image layer with the area range corresponding to the target area, the target image layer can be accurately identified, and the accuracy of target image layer identification is improved, so that the subsequent splitting effect is ensured.
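The comparison described above can be expressed compactly in code. The following sketch reuses the (start coordinate, width, height) representation from the previous example and implements the stated inequalities; it is illustrative only, and the layer dictionary is an assumed data structure:

```python
# Illustrative sketch of the containment test: a layer is a target layer
# if its element area range lies within the area range of the target area,
# i.e. x0 < x1, y0 < y1, x0 + w0 > x1 + w1 and y0 + h0 > y1 + h1.

def is_within(element_range, target_range):
    x1, y1, w1, h1 = element_range   # element area range of the layer
    x0, y0, w0, h0 = target_range    # area range of the target area
    return (x0 < x1 and y0 < y1
            and x0 + w0 > x1 + w1
            and y0 + h0 > y1 + h1)

def identify_target_layers(area_ranges, target_range):
    """area_ranges: assumed mapping of layer identifier -> element area range."""
    return [layer_id for layer_id, rng in area_ranges.items()
            if is_within(rng, target_range)]

area_ranges = {"cloud_right": (60, 10, 20, 10), "tower": (40, 30, 50, 60)}
print(identify_target_layers(area_ranges, (55, 5, 40, 25)))  # ['cloud_right']
```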
Based on the embodiments shown in fig. 2 and fig. 3, fig. 5 shows a flowchart of step S310 in the image processing method of fig. 3 according to an embodiment of the present application. Referring to fig. 5, the step S310 at least includes steps S510 to S520, which are described in detail as follows:
in step S510, according to the element position information on the at least two layers, a specified layer in which at least some elements in the at least two layers are located in the target area and on the uppermost layer is determined, and the element coordinate information on the specified layer is obtained.
In this embodiment, according to the element position information on the layers of the image to be processed, the specified layer may be determined as the uppermost layer among the layers having at least part of their elements located in the target area. In practice, the layers that intersect the target area, that is, layers on which at least part of the elements overlap the target area, may be identified in advance according to the element position information on the layers. Then, according to these intersecting layers and the hierarchical relationship of the layers, the uppermost one is taken as the specified layer, and the element coordinate information on the specified layer is acquired.
In step S520, according to the specified layer, traversing layers below the specified layer, and acquiring element coordinate information on the layers below the specified layer.
In this embodiment, based on the determined specified layer, all layers below the specified layer are traversed, and element coordinate information on all layers below the specified layer is obtained.
In the embodiment shown in fig. 5, the element coordinate information on the specified layer is obtained according to the predetermined specified layer, and then all layers below the specified layer are traversed based on the specified layer and the hierarchical relationship of each layer, and the element coordinate information on all layers below the specified layer is obtained. Therefore, the element coordinate information of all layers in the image to be processed can be avoided from being acquired for subsequent matching calculation, the calculation amount is reduced, and the identification efficiency of the target layer is improved.
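Purely as an illustration of this traversal, with an assumed top-to-bottom list of layers and a simple rectangle-intersection helper, the selection of the specified layer and the layers below it might look like:

```python
# Illustrative sketch: find the uppermost layer whose elements overlap the
# target area (the specified layer), then collect element coordinate
# information only for that layer and the layers below it.

def overlaps(a, b):
    """Axis-aligned rectangle intersection; rectangles are (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def coords_from_specified_layer(layers, target_range):
    """layers: assumed list of (element_area_range, element_coords),
    ordered from the uppermost layer downward."""
    for i, (area_range, _) in enumerate(layers):
        if overlaps(area_range, target_range):
            # Specified layer found: traverse it and every layer below it.
            return [coords for _, coords in layers[i:]]
    return []  # no layer intersects the target area
```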
Based on the embodiment shown in fig. 2, in an embodiment of the present application, splitting the to-be-processed image according to the target layer includes:
and splitting the layer of the image to be processed according to the target layer to obtain a target layer set and a non-target layer set except the target layer.
In this embodiment, according to the determined target layer, the layers included in the image to be processed are split, and the layers included in the image to be processed are divided into a target layer and a non-target layer, so as to obtain a target layer set and a non-target layer set.
In an embodiment of the present application, a type identifier may be added to an image layer included in an image to be processed, so as to distinguish the image layer as a target image layer or a non-target image layer. It should be noted that the type identifier may be a letter identifier (for example, the letter "a" represents a target layer, the letter "B" represents a non-target layer, and the like), or the type identifier may be a number identifier (for example, the number "1" represents a target layer, and the number "2" represents a non-target layer), and the like, which is not particularly limited in this application.
According to the type identifier of each layer, the identifier information of the layer may be stored in different sets according to the corresponding type identifier, that is, the element in the target layer set is the identifier information of the layer with the type identifier as the target layer, and the element in the non-target layer set is the identifier information of the layer with the type identifier as the non-target layer, so as to achieve the purpose of splitting all layers of the image to be processed.
In the embodiment, the layers included in the image to be processed can be automatically split according to the determined target layer without manually confirming and selecting one by one, so that the manual operation amount is reduced, and the image processing efficiency is further improved.
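A minimal sketch of this splitting step, using the letter identifiers "A" (target layer) and "B" (non-target layer) from the example above and assuming string layer identification information:

```python
# Illustrative sketch: tag each layer with a type identifier and store its
# identification information in the matching set ("A" = target layer,
# "B" = non-target layer, following the example in the text).

def split_layers(all_layer_ids, target_layer_ids):
    target_set, non_target_set = set(), set()
    for layer_id in all_layer_ids:
        type_tag = "A" if layer_id in target_layer_ids else "B"
        (target_set if type_tag == "A" else non_target_set).add(layer_id)
    return target_set, non_target_set

targets, non_targets = split_layers(
    ["moon", "cloud_left", "cloud_right", "tower"], {"cloud_right"})
print(targets)      # {'cloud_right'}
print(non_targets)  # {'moon', 'cloud_left', 'tower'} (set order may vary)
```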
Based on the foregoing embodiment, in an embodiment of the present application, the image processing method further includes:
and generating and displaying corresponding preview images according to the target layer set and the non-target layer set respectively.
In this embodiment, preview images respectively corresponding to the two sets may be generated according to layers in the target layer set and layers in the non-target layer set. Specifically, the display state of the layer may be modified to generate a corresponding preview image, for example, a preview image corresponding to the target layer set is generated, and the display state of the layer belonging to the non-target layer set may be modified to a hidden state, so as to achieve the purpose of not displaying the layer in the non-target layer set, thereby obtaining the preview image corresponding to the target layer set, and so on.
In the embodiment, corresponding preview images are generated and displayed according to the split target layer set and non-target layer set, so that a user can intuitively check whether the layer splitting meets actual needs and is correct. This makes it convenient for the user to adjust and confirm the splitting result in time and ensures the accuracy of the splitting result.
Based on the foregoing embodiments, in some embodiments of the present application, the image processing method further includes:
combining layers included in the target layer set, and hiding layers included in the non-target layer set, to generate a first picture; or combining the layers included in the non-target layer set, and hiding the layers included in the target layer set, to generate a second picture; or combining the layers included in the target layer set to generate a third picture, and combining the layers included in the non-target layer set to generate a fourth picture.
In this embodiment, the layers included in the target layer set and the layers included in the non-target layer set are combined to generate a first picture corresponding to the target layer set and a second picture corresponding to the non-target layer set, which broadens the application range of the processing method. For example, if a user wants to acquire a picture of the target area, the first picture may be generated to obtain a picture composed of the layers in the target area; if the user wants to acquire a picture outside the target area, the second picture may be generated; and if the user needs both the layers in the target area and the layers outside it, the third picture and the fourth picture may be generated respectively to obtain the split pictures. Different use requirements of users can thus be met, further widening the application range of the image processing method.
Because the plurality of image layers are combined into the first picture, the second picture, or the third and fourth pictures, subsequent uses of the images, such as web page display, do not need to call the plurality of layers; only the corresponding picture needs to be called. This reduces layer call requests, facilitates implementation, and occupies less storage space.
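As one possible realization (an implementation choice assumed for this example, not mandated by the embodiments), hiding one layer set and flattening the other into a single picture could be done with the Pillow imaging library:

```python
# Illustrative sketch using the Pillow library (an assumed choice): skip
# the layers whose display state is hidden and composite the remaining
# layers, bottom layer first, into one output picture. All layer images
# are assumed to be RGBA and to share the size of the image to be processed.
from PIL import Image

def flatten(layer_images, hidden_ids):
    """layer_images: list of (layer_id, RGBA Image), bottom layer first."""
    canvas = None
    for layer_id, img in layer_images:
        if layer_id in hidden_ids:   # hidden state: do not display this layer
            continue
        if canvas is None:
            canvas = Image.new("RGBA", img.size, (0, 0, 0, 0))
        canvas = Image.alpha_composite(canvas, img)
    return canvas

# first_picture  = flatten(layer_images, hidden_ids=non_target_set)
# second_picture = flatten(layer_images, hidden_ids=target_set)
```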
Based on the embodiment shown in fig. 2, in an embodiment of the present application, the image processing method further includes:
and generating a fifth picture according to the elements positioned in the target area on the target layer.
In this embodiment, the elements on each layer located in the target area may be determined according to the element position information on each layer and the position information of the target area, and then the elements on each layer located in the target area are split and recombined from the layers, so as to generate the fifth picture. Therefore, the user can acquire the elements in the target area and generate the fifth picture without redrawing, and the image processing efficiency is improved.
It should be noted that the element located in the target region may be a part of the element located in the target region, or may be all of the element located in the target region, and those skilled in the art may perform the screening according to actual implementation needs, and the present application is not particularly limited to this.
Based on the embodiment shown in fig. 2, fig. 6 is a schematic flowchart illustrating layer combination further included in the image processing method according to an embodiment of the present application. Referring to fig. 6, the step of combining layers at least includes steps S610 to S640, which are described in detail as follows:
in step S610, a layer set to be combined is obtained.
The layer set to be combined may be a set of all layers in the image to be processed.
In this embodiment, the image to be processed may be analyzed in advance, and the layer included in the image to be processed is obtained, so as to form the layer set to be combined.
In step S620, according to the request information for layer combination, identification information corresponding to the target combination layer is obtained.
The request information for layer combination may be information for requesting layer combination. According to actual use requirements, a user can send request information for layer combination to the terminal device by triggering a specific area (for example, a key of "layer combination") on the interface.
In this embodiment, when request information for layer combination is received, the identification information of the target combination layers that the user wants to combine is obtained. Specifically, the terminal device may display, on the interface, the identification information of the layers to be combined in the layer set to be combined, for example, in the form of a drop-down list. It should be noted that the identification information may be information uniquely associated with a layer to be combined; according to the identification information, the corresponding layer to be combined can be determined.
The user can select the layer corresponding to the identification information as the target combined layer by browsing the identification information. And the terminal equipment acquires the identification information corresponding to the target combined layer according to the selection of the user (such as checking or clicking the corresponding identification information).
In step S630, based on the identification information of the target combination layer, the target combination layer is obtained from the to-be-combined layer set.
In this embodiment, according to the obtained identification information of the target combination layer, the identification information may be matched with a layer to be combined in the layer set to be combined, so as to determine, from the layer set to be combined, a layer to be combined corresponding to the identification information as the target combination layer, and obtain the target combination layer.
In step S640, the target combination layers are combined to obtain a sixth picture.
In the embodiment shown in fig. 6, the identification information of the target combination layers is obtained, the corresponding target combination layers are obtained from the layers to be combined according to the identification information, and the target combination layers are then combined to automatically generate the sixth picture. A user can thus select the corresponding layers in the image to be processed according to actual use requirements and generate the sixth picture actually required, which makes it convenient for the user to process the layers in the image to be processed and improves image processing efficiency.
Based on the embodiments shown in fig. 2 and fig. 6, fig. 7 shows a flowchart of step S630 in the image processing method of fig. 6 according to an embodiment of the present application. Referring to fig. 7, the step S630 at least includes steps S710 to S720, which are described in detail as follows:
in step S710, a target identification information set is obtained according to the identification information corresponding to the target combination layer.
In this embodiment, according to the obtained identification information corresponding to the target combination layer, the identification information may be integrated to obtain a target identification information set, where the target identification information set includes identification information of all target combination layers.
In step S720, matching the identification information of the layer to be combined in the layer set to be combined with the identification information in the target identification information set, and determining the layer to be combined with the identification information matched with the identification information in the target identification information set as the target combination layer.
In this step, the identification information of the layers to be combined in the layer set to be combined may be matched one by one with the identification information in the target identification information set. If two pieces of identification information are consistent, the layer to be combined is determined to be a target combination layer that the user wants to combine, and the target combination layer is obtained from the layer set to be combined.
In the embodiment shown in fig. 7, the target combination layers in the layer set to be combined are determined by comparing the target identification information set with the identification information of the layers to be combined. The target combination layers can thus be accurately identified from the layers to be combined, misidentification is avoided, the accuracy of target combination layer identification is ensured, and the subsequent effect of layer combination is guaranteed.
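A minimal sketch of this matching step, assuming hashable identification information and an assumed dictionary of layers to be combined:

```python
# Illustrative sketch: build the target identification information set and
# match it against the identification information of the layers to be
# combined; matching layers are the target combination layers.

def select_target_combination_layers(to_be_combined, requested_ids):
    """to_be_combined: assumed mapping of identification info -> layer."""
    target_id_set = set(requested_ids)   # target identification information set
    return [layer for layer_id, layer in to_be_combined.items()
            if layer_id in target_id_set]
```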
Based on the foregoing embodiment, in an embodiment of the present application, the processing method further includes:
and modifying the display state of the layer to be combined except the target combined layer in the layer set to be combined into a hidden state.
In this embodiment, the display state of the layers to be combined other than the target combination layers in the layer set to be combined is modified into a hidden state, so that a preview image of the layer combination can be shown when the image to be processed is displayed. The user can determine from the preview whether the layer combination is correct and, if not, edit it accordingly, for example by adding or removing a target combination layer. This avoids having to regenerate the sixth picture repeatedly when an error occurs and improves image processing efficiency.
Based on the embodiment shown in fig. 2, fig. 8 is a schematic flowchart illustrating a target region acquisition process further included in the image processing method of fig. 2 according to an embodiment of the present application. Referring to fig. 8, acquiring the target area at least includes steps S810 to S820, which are described in detail as follows:
in step S810, a target area editing interface is displayed according to the request information for splitting the image to be processed.
The request information for splitting the image to be processed may be information for requesting splitting of the image layer of the image to be processed. If the user needs to split the layer of the image to be processed, in an example, the user may send request information for splitting the image to be processed by triggering a specific area on the interface (for example, clicking a "layer splitting" key, etc.); in another example, the user may also trigger a corresponding physical key through an input device (e.g., a keyboard or a mouse) configured on the terminal device to send the request information for splitting the image to be processed, which is not particularly limited in this application.
The target area editing interface may be an interface for editing the target area. When request information for splitting the image to be processed is received, a target area editing interface can be displayed on the interface for a user to edit. In an example, the target area editing interface may provide a limiting frame in advance, and the user may determine the target area by adjusting the position and size of the limiting frame on the image to be processed; in another example, the target area editing interface may also determine the target area according to the motion trajectory of the user input device, for example, the user may draw a corresponding area in the image to be processed by using a mouse to serve as the target area, and the like, which is not particularly limited in this application.
In step S820, the target area in the image to be processed is determined according to the target area information detected on the target area editing interface.
The target area information may be information indicating a position of the target area in the image to be processed. And according to the target area information, the specific position of the target area in the image to be processed can be correspondingly determined.
In this embodiment, the target area editing interface may obtain target area information input by the user, for example, the size and coordinates of the bounding box or the coordinates of the trajectory moved by the mouse, and the terminal device may determine the specific position of the target area in the image to be processed according to the target area information.
In the embodiment shown in fig. 8, through the target area editing interface, a user can determine a corresponding target area according to actual use requirements to split the layers in the target area. The operation is simple and intuitive, the user can accurately adjust the position and size of the target area, the interaction matches the user's usage habits, and the efficiency of determining the target area is improved.
Fig. 9 is a schematic flowchart illustrating a process of acquiring an image to be processed, further included in the image processing method of fig. 2 according to an embodiment of the present application, based on the embodiment illustrated in fig. 2. Referring to fig. 9, acquiring the to-be-processed image at least includes steps S910 to S920, which are described in detail as follows:
in step S910, a file to be edited is acquired;
the file to be edited may be an image file that needs to be processed, for example, the file to be edited may be drawing paper created by a drawing staff or engineering drawing designed by an engineering staff.
In an embodiment of the present application, the file to be edited may be a file stored on the terminal device or the server, and the file to be edited may be obtained by reading the file stored in the terminal device or the server.
In step S920, the file to be edited is analyzed to obtain the image to be processed.
In this embodiment, according to the obtained file to be edited, the file to be edited may be parsed, so as to obtain image information of the file to be edited, for example, the image information may include, but is not limited to, layer information included in the file to be edited and element coordinate information on each layer. Based on the analysis result, the file to be edited can be displayed on the interface, namely the file to be edited is the image to be processed.
In the embodiment shown in fig. 9, the file to be edited is acquired and analyzed to generate the image to be processed, so that the user can process the file to be edited, the application range of the image processing method is expanded, and the image processing method is convenient for the user to use.
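The embodiments do not prescribe a file format or parser. Purely as an illustration, a layered file such as a PSD could be parsed with the open-source psd-tools package (an assumed choice) to recover the layer list and per-layer position information:

```python
# Illustrative sketch only: parse a layered PSD file with the open-source
# psd-tools package (an assumed choice; the embodiments fix no format) to
# obtain each layer's identification information and area range.
from psd_tools import PSDImage

def parse_file_to_edit(path):
    psd = PSDImage.open(path)
    image_to_process = []
    for layer in psd:   # stacking order follows the file
        left, top, right, bottom = layer.bbox
        image_to_process.append({
            "id": layer.name,
            "area_range": (left, top, right - left, bottom - top),
            "visible": layer.visible,
        })
    return image_to_process
```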
Based on the technical solution of the above embodiment, a specific application scenario of the embodiment of the present application is introduced as follows:
FIG. 10 shows a flow diagram of a method of processing an image according to an embodiment of the present application. Referring to fig. 10, the image processing method at least includes steps S1010 to S1100, which are described in detail as follows:
in step S1010, an image to be processed is acquired.
The image to be processed includes at least two image layers and may be obtained by parsing a file to be edited uploaded by a user, for example, a drawing by a designer or an engineering drawing by an engineer.
In step S1020, a combined layer or a split layer is determined.
In this step, it may be determined to perform layer combination or layer splitting according to the received command request, for example, if request information for performing layer combination is received, this indicates that an operation of combining layers is performed, and if request information for splitting an image to be processed is received, this indicates that an operation of splitting layers is performed. If the operation of combining layers is performed, the process proceeds to step S1080, and if the operation of splitting layers is performed, the process proceeds to step S1030.
In step S1030, a target area is determined.
In this step, if a request for splitting the image to be processed is received, a target area editing interface may be displayed on the interface, and the target area is determined according to target area information detected by the target area editing interface.
In step S1040, a target layer is determined according to the position information of the target area and the element position information on the layer of the image to be processed.
In this step, it may be determined whether at least part of the elements of the layer are located in the target area according to the element position information on the layer, and if so, it is determined that the layer is the target layer.
In step S1050, the image to be processed is split according to the target layer, so as to obtain a target layer set and a non-target layer set.
In step S1060, a first picture is generated according to the target layer included in the target layer set.
In step S1070, a second picture is generated according to the non-target layer included in the non-target layer set.
In step S1080, the identification information of the target combination layer is acquired.
In this step, the identification information corresponding to the target combination layer is obtained according to the request information for layer combination. Specifically, the user may browse the identification information and select the layer corresponding to certain identification information as the target combination layer; the terminal device then acquires the identification information corresponding to the target combination layer according to the user's selection (such as checking or clicking the corresponding identification information).
In step S1090, a target combination layer is obtained according to the identification information of the target combination layer.
In this step, according to the obtained identification information of the target combination layer, the layers in the image to be processed whose identification information matches that of the target combination layer are obtained to serve as the target combination layers.
In step S1100, the target combination layers are combined to obtain a sixth picture.
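A hedged sketch of this combination path (steps S1080 to S1100), again under the hypothetical model; `selected_ids` stands for the identification information gathered from the user's selection:

```python
from typing import Iterable

def combine_layers(candidates: List[Layer],
                   selected_ids: Iterable[str]) -> List[Layer]:
    wanted = set(selected_ids)
    # Layers outside the selection are switched to a hidden state
    # rather than removed, as described for the combination flow.
    for layer in candidates:
        layer.visible = layer.layer_id in wanted
    # The combined (sixth) picture is composed of the selected layers.
    return [l for l in candidates if l.layer_id in wanted]

# e.g. picture F in fig. 12 from layers D and E:
# picture_f = combine_layers(to_combine, ["D", "E"])
```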
In the embodiment shown in fig. 10, a user may choose to combine or split layers according to actual use requirements and obtain the layers to be split or combined, so that the required target picture is generated automatically. This greatly reduces the amount and difficulty of manual operation, increases the processing speed of the image layers, and further improves the image processing efficiency.
Fig. 11 and 12 are schematic diagrams illustrating a specific application scenario of the image processing method according to an embodiment of the present application (hereinafter, an image A to be processed is used as an example).
As shown in fig. 11, an image A to be processed is obtained, and a user may define a target area 1110 in the image A according to actual use requirements; the terminal device may then determine the target layer according to the target area and the element position information on the layers of the image to be processed (hereinafter, the case in which all elements on the target layer must be located in the target area is used as the example). In the image A, although the area range of the iron tower overlaps the boundary of the target area 1110, it is not completely located within the target area; therefore, the layer where the iron tower is located is not a target layer.
The moon and the cloud at the left of the image A do not touch the target area 1110 at all, so the layer where they are located is not a target layer either. Only the cloud at the right of the image A is completely within the target area 1110; therefore, the layer where that cloud is located is the target layer. The image A is then split according to the identified target layer to obtain a first picture B and a second picture C.
In this way, a user can split the image to be processed simply by delimiting a target area to obtain the required target picture; the operation is simple, the user is spared the trouble of selecting each specific layer to split, and the image processing efficiency is improved.
As shown in fig. 12, when a user needs to perform layer combination, target combination layers may be selected from the set of layers to be combined; for example, layer D and layer E in fig. 12 are selected as the target combination layers. According to the layer combination command, the terminal device combines layer D and layer E based on the relevant layer information, such as the element coordinate information on each layer, to obtain the required picture, shown as image F in fig. 12.
In this way, a user can select the layers to be combined from the set of layers to be combined as the target combination layers according to actual use requirements and combine them; this simplifies image operations, lets the user quickly obtain the required combined image, and improves the image processing efficiency.
Embodiments of the apparatus of the present application are described below, which may be used to perform the image processing method in the above-described embodiments of the present application. For details that are not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the image processing method described above in the present application.
Fig. 13 shows a block diagram of an apparatus for processing an image according to an embodiment of the present application.
Referring to fig. 13, an image processing apparatus according to an embodiment of the present application includes:
an obtaining module 1310 configured to obtain an image to be processed, where the image to be processed includes at least two image layers;
a determining module 1320, configured to identify a target layer from the at least two layers according to position information of a target area on the image to be processed and position information of elements on the at least two layers, where at least some elements on the target layer are located in the target area;
a processing module 1330, configured to split the image to be processed according to the target layer.
Based on the foregoing, in some embodiments of the present application, the determining module 1320 is configured to: acquiring element coordinate information on the at least two image layers; and matching the element coordinate information with the coordinate information of the target area, and identifying the layer matched with the element coordinate information and the coordinate information of the target area as the target layer.
Based on the foregoing, in some embodiments of the present application, the determining module 1320 is configured to: determining an element area range corresponding to each layer based on element coordinate information on each layer; determining an area range corresponding to the target area based on the coordinate information of the target area; and identifying the layer of which the element area range is within the area range corresponding to the target area as the target layer.
Based on the foregoing, in some embodiments of the present application, the determining module 1320 is configured to: determining a designated layer, in which at least part of elements in the at least two layers are located in the target area and located on the uppermost layer, according to the element position information on the at least two layers, and acquiring element coordinate information on the designated layer; and traversing the layers below the appointed layer based on the appointed layer, and acquiring element coordinate information on the layers below the appointed layer.
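One plausible realization of this top-down traversal, assuming `image.layers` is ordered top-most first as in the hypothetical model above:

```python
def collect_element_coordinates(image: ParsedImage, area: Rect):
    coords = []
    found_designated = False
    for layer in image.layers:            # top-most layer first
        if not found_designated:
            # The designated layer: the uppermost layer with at least
            # one element located in the target area.
            if any(element_in_area(e, area) for e in layer.elements):
                found_designated = True
            else:
                continue
        # From the designated layer downwards, gather coordinates.
        coords.append((layer.layer_id,
                       [(e.x0, e.y0, e.x1, e.y1) for e in layer.elements]))
    return coords
```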
Based on the foregoing, in some embodiments of the present application, the processing module 1330 is configured to: and splitting the layer of the image to be processed according to the target layer to obtain a target layer set and a non-target layer set except the target layer.
Based on the foregoing, in some embodiments of the present application, the processing module 1330 is further configured to: and generating and displaying corresponding preview images according to the target layer set and the non-target layer set respectively.
Based on the foregoing, in some embodiments of the present application, the processing module 1330 is further configured to: combining layers included in the target layer set, and hiding layers included in the non-target layer set to generate a first picture; or combining the layers included in the non-target layer set, and hiding the layers included in the target layer set to generate a second picture; or combining the layers included in the target layer set to generate a third picture, and combining the layers included in the non-target layer set to generate a fourth picture.
Based on the foregoing, in some embodiments of the present application, the processing module 1330 is further configured to: and generating a fifth picture according to the elements positioned in the target area on the target layer.
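A brief sketch of this variant under the same hypothetical model: only the elements that lie inside the target area are kept, and these alone make up the fifth picture.

```python
def fifth_picture_elements(targets: List[Layer], area: Rect) -> List[Element]:
    # Collect only the in-area elements of the target layers.
    return [e for layer in targets
            for e in layer.elements
            if element_in_area(e, area)]
```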
Based on the foregoing, in some embodiments of the present application, the processing module 1330 is further configured to: acquiring a layer set to be combined; acquiring identification information corresponding to a target combination layer according to the request information for layer combination; acquiring the target combination layer from the layer set to be combined based on the identification information corresponding to the target combination layer; and combining the target combination layers to obtain a sixth picture.
Based on the foregoing, in some embodiments of the present application, the processing module 1330 is further configured to: obtaining a target identification information set according to the identification information corresponding to the target combination layer; and matching the identification information of the layer to be combined in the layer set to be combined with the identification information in the target identification information set, and determining the layer to be combined with the identification information matched with the identification information in the target identification information set as the target combination layer.
Based on the foregoing, in some embodiments of the present application, the processing module 1330 is further configured to: and modifying the display state of the layer to be combined except the target combined layer in the layer set to be combined into a hidden state.
Based on the foregoing, in some embodiments of the present application, the processing module 1330 is further configured to: displaying a target area editing interface according to the request information for splitting the image to be processed; and determining the target area in the image to be processed according to the target area information detected on the target area editing interface.
Based on the foregoing, in some embodiments of the present application, the processing module 1330 is further configured to: acquiring a file to be edited; and analyzing the file to be edited to obtain the image to be processed.
Fig. 14 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
It should be noted that the computer system of the electronic device shown in fig. 14 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 14, the computer system includes a Central Processing Unit (CPU) 1401, which can perform various appropriate actions and processes, such as performing the methods described in the above embodiments, according to a program stored in a Read-Only Memory (ROM) 1402 or a program loaded from a storage portion 1408 into a Random Access Memory (RAM) 1403. The RAM 1403 also stores various programs and data necessary for system operation. The CPU 1401, the ROM 1402, and the RAM 1403 are connected to one another via a bus 1404. An Input/Output (I/O) interface 1405 is also connected to the bus 1404.
The following components are connected to the I/O interface 1405: an input portion 1406 including a keyboard, a mouse, and the like; an output portion 1407 including a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD), a speaker, and the like; a storage portion 1408 including a hard disk and the like; and a communication section 1409 including a network interface card such as a LAN (Local Area Network) card, a modem, and the like. The communication section 1409 performs communication processing via a network such as the Internet. A drive 1410 is also connected to the I/O interface 1405 as necessary. A removable medium 1411, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1410 as necessary, so that a computer program read out therefrom is installed into the storage portion 1408 as needed.
In particular, according to embodiments of the application, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated by the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 1409 and/or installed from the removable medium 1411. When the computer program is executed by the Central Processing Unit (CPU) 1401, the various functions defined in the system of the present application are executed.
It should be noted that the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash Memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with a computer program embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. The computer program embodied on the computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or by hardware, and the described units may also be disposed in a processor. The names of the units do not, in some cases, constitute a limitation on the units themselves.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method described in the above embodiments.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the application. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which can be a personal computer, a server, a touch terminal, or a network device, etc.) to execute the method according to the embodiments of the present application.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the embodiments disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
Claims (15)
1. A method of processing an image, comprising:
acquiring an image to be processed, wherein the image to be processed comprises at least two image layers;
identifying a target layer from the at least two layers according to the position information of a target area on the image to be processed and the element position information on the at least two layers, wherein at least part of elements on the target layer are located in the target area;
and splitting the image to be processed according to the target image layer.
2. The processing method according to claim 1, wherein identifying a target layer from the at least two layers according to the position information of the target area on the image to be processed and the element position information on the at least two layers comprises:
acquiring element coordinate information on the at least two image layers;
and matching the element coordinate information with the coordinate information of the target area, and identifying the layer matched with the element coordinate information and the coordinate information of the target area as the target layer.
3. The processing method according to claim 2, wherein matching the element coordinate information with the coordinate information of the target area, and identifying a layer whose element coordinate information matches the coordinate information of the target area as the target layer comprises:
determining an element area range corresponding to each layer based on element coordinate information on each layer;
determining an area range corresponding to the target area based on the coordinate information of the target area;
and identifying the layer of which the element area range is within the area range corresponding to the target area as the target layer.
4. The processing method according to claim 2, wherein obtaining element coordinate information on the at least two image layers comprises:
determining a designated layer, in which at least part of elements in the at least two layers are located in the target area and located on the uppermost layer, according to the element position information on the at least two layers, and acquiring element coordinate information on the designated layer;
and traversing the layers below the appointed layer based on the appointed layer, and acquiring element coordinate information on the layers below the appointed layer.
5. The processing method according to claim 1, wherein splitting the image to be processed according to the target image layer comprises:
and splitting the layer of the image to be processed according to the target layer to obtain a target layer set and a non-target layer set except the target layer.
6. The processing method according to claim 5, further comprising:
combining layers included in the target layer set, and hiding layers included in the non-target layer set to generate a first picture; or
combining the layers included in the non-target layer set, and hiding the layers included in the target layer set to generate a second picture; or
combining the layers included in the target layer set to generate a third picture, and combining the layers included in the non-target layer set to generate a fourth picture.
7. The processing method according to claim 1, further comprising:
and generating a fifth picture according to the elements positioned in the target area on the target layer.
8. The processing method according to claim 1, further comprising:
acquiring a layer set to be combined;
acquiring identification information corresponding to a target combination layer according to the request information for layer combination;
acquiring the target combination layer from the layer set to be combined based on the identification information corresponding to the target combination layer;
and combining the target combination layers to obtain a sixth picture.
9. The processing method according to claim 8, wherein obtaining the target combination layer from the to-be-combined layer set based on the identification information corresponding to the target combination layer comprises:
obtaining a target identification information set according to the identification information corresponding to the target combination layer;
and matching the identification information of the layer to be combined in the layer set to be combined with the identification information in the target identification information set, and determining the layer to be combined with the identification information matched with the identification information in the target identification information set as the target combination layer.
10. The processing method according to claim 8, further comprising:
and modifying the display state of the layer to be combined except the target combined layer in the layer set to be combined into a hidden state.
11. The processing method according to claim 1, further comprising:
displaying a target area editing interface according to the request information for splitting the image to be processed;
and determining the target area in the image to be processed according to the target area information detected on the target area editing interface.
12. The processing method according to claim 1, wherein acquiring the image to be processed comprises:
acquiring a file to be edited;
and analyzing the file to be edited to obtain the image to be processed.
13. An apparatus for processing an image, comprising:
an obtaining module, configured to obtain an image to be processed, wherein the image to be processed comprises at least two image layers;
a determining module, configured to identify a target layer from the at least two layers according to the position information of a target area on the image to be processed and the element position information on the at least two layers, wherein at least part of the elements on the target layer are located in the target area;
and a processing module, configured to split the image to be processed according to the target layer.
14. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out a method of processing an image according to any one of claims 1 to 12.
15. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to carry out a method of processing an image according to any one of claims 1 to 12.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202010193020.0A | 2020-03-18 | 2020-03-18 | Image processing method and device, computer readable medium and electronic equipment |
Publications (1)

| Publication Number | Publication Date |
| --- | --- |
| CN113496454A | 2021-10-12 |
Family
ID=77993406
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| CN202010193020.0A | Image processing method and device, computer readable medium and electronic equipment (pending) | 2020-03-18 | 2020-03-18 |

Country Status (1)

| Country | Link |
| --- | --- |
| CN | CN113496454A (en) |
Cited By (3)

| Publication number | Priority date | Publication date | Assignee | Title |
| --- | --- | --- | --- | --- |
| CN114510169A | 2022-01-19 | 2022-05-17 | 中国平安人寿保险股份有限公司 | Image processing method, device, equipment and storage medium |
| CN114554089A | 2022-02-21 | 2022-05-27 | 阿里巴巴(中国)有限公司 | Video processing method, device, equipment, storage medium and computer program product |
| CN114554089B | 2022-02-21 | 2023-11-28 | 神力视界(深圳)文化科技有限公司 | Video processing method, device, equipment and storage medium |
Legal Events

| Date | Code | Title | Description |
| --- | --- | --- | --- |
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |