CN114677432A - Image processing method, device and storage medium - Google Patents

Image processing method, device and storage medium Download PDF

Info

Publication number
CN114677432A
CN114677432A
Authority
CN
China
Prior art keywords
size
elements
original image
target
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210293355.9A
Other languages
Chinese (zh)
Inventor
郭凯 (Guo Kai)
黄荣军 (Huang Rongjun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gaoding Xiamen Technology Co Ltd
Original Assignee
Gaoding Xiamen Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gaoding Xiamen Technology Co Ltd
Priority to CN202210293355.9A
Publication of CN114677432A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4084 Scaling of whole images or parts thereof, e.g. expanding or contracting in the transform domain, e.g. fast Fourier transform [FFT] domain scaling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Editing Of Facsimile Originals (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the invention provides an image processing method, an image processing device and a storage medium, belongs to the technical field of image processing, and solves the problems of low generation efficiency and inflexible size adjustment when media files must meet various size requirements. The method comprises the following steps: acquiring an original image of a media file and a target size of a target image to be generated; determining the attribute of each element according to the position of each element in the original image; scaling the canvas size and the element sizes of the target image according to the size of the original image and the target size; and adjusting the position of each scaled element in the target image according to the attribute of each element to obtain the target image. The embodiment of the invention is suitable for generating images of different sizes from a media file.

Description

Image processing method, device and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, and a storage medium.
Background
With the rapid development of the internet, distributing media files has become an indispensable part of the work of every company or enterprise. Generally, a single media file needs to be delivered on multiple platforms and multiple electronic devices, and because the required delivery size differs from one delivery scene to another, the same media file can adapt to different delivery scenes only if it is provided in several sizes. Currently, the size of a media file can be adjusted manually, based on the original media file, to meet the size requirement of the corresponding scene, but the efficiency of manual adjustment is low. If the target media file is instead generated automatically by a device, the size of the generated media file is usually fixed, visual aesthetics are lacking, and the size cannot be adjusted flexibly according to user requirements.
Disclosure of Invention
An object of embodiments of the present invention is to provide an image processing method, an image processing apparatus, and a storage medium, so as to solve at least the problems of low generation efficiency and inflexible size adjustment when media files must meet various size requirements.
In order to achieve the above object, an embodiment of the present invention provides an image processing method, including: acquiring an original image of a media file and a target size of a target image to be generated; determining the attribute of each element according to the position of each element in the original image; scaling the canvas size and the element sizes of the target image according to the size of the original image and the target size; and adjusting the position of each element after zooming in the target image according to the attribute of each element to obtain the target image.
Further, the determining the attribute of each element according to the position of each element in the original image includes: determining an element whose size differs from the canvas size of the original image by a value within a set range as a background element; determining elements located completely within the canvas of the original image as normal elements; determining an element with at least one edge exceeding the canvas edge of the original image as an overflow element; and determining an element with at least one edge coinciding with the canvas edge of the original image as a welt element.
Further, before determining the attribute of each element according to the position of each element in the original image, the method further includes: acquiring a real text area in the original image by using a visual direct-viewing method; and determining the real text area as the real position of the element in which the text area is located.
Further, after determining the attributes of the elements according to the positions of the elements in the original image, the method further includes: generating axes to be aligned for each normal element in the original image in a one-dimensional direction, and determining the pixel value of each axis to be aligned; counting the number of axes to be aligned that have the same pixel value in the same direction; and taking the axis to be aligned with the largest count as the actual alignment axis of the corresponding normal element.
Further, the scaling the canvas size and the element sizes of the target image according to the size of the original image and the target size comprises: setting the canvas size of the target image to the target size; when the element is a background element, scaling the background element by the maximum value of the ratios between the target size and the canvas size of the original image; when the element is a normal element, scaling the size of the normal element and the pixel value of the actual alignment axis according to the proportion between the target size and the size of the original image; and when the element is an overflow element or a welt element, scaling the size of the overflow element or welt element according to the proportion between the target size and the size of the original image.
Further, the adjusting, according to the attribute of each element, the position of each element in the target image after scaling includes: traversing all normal elements on the actual alignment axis, determining the offset pixel values of the normal elements that overflow the canvas, and moving the normal elements into the canvas of the target image along their actual alignment axes according to the offset pixel values; for the scaled overflow element, adjusting the position of the overflow element in the target image according to the area of the overflow element within the canvas of the original image; and, for the scaled welt element, adjusting the position of the welt element in the target image according to the edge of the welt element that coincides with the canvas edge of the original image.
Further, after the determining the attributes of the respective elements, the method further includes: and traversing the overlapping relation of all normal elements in the original image.
Further, after adjusting the positions of the scaled elements in the target image, the method further includes: checking whether any new overlapping relation exists among the normal elements; and, when a new overlapping relation exists, reducing the normal elements in the new overlapping relation along their actual alignment axes by a preset proportion at most a specified number of times to obtain the target image.
Accordingly, an embodiment of the present invention further provides an image processing apparatus, including: the acquisition module is used for acquiring an original image of the media file and the target size of a target image to be generated; the attribute determining module is used for determining the attribute of each element according to the position of each element in the original image; a scaling module, configured to scale a canvas size and each element size of the target image according to the size of the original image and the target size; and the adjusting module is used for adjusting the position of each element in the target image after zooming according to the attribute of each element and obtaining the target image.
Accordingly, the embodiment of the present invention also provides a machine-readable storage medium, which stores instructions for causing a machine to execute the image processing method as described above.
According to the above technical solution, the attribute of each element is determined from its position in the original image of the media file provided by the user, the canvas size of the target image and the size of each element are scaled according to the size of the original image and the target size of the target image to be generated, and the position of each scaled element in the target image is then adjusted according to its attribute to obtain the target image. With the embodiment of the invention, a target image of the target size can be generated automatically once the user provides the original image of the media file and the target size. Multi-size scaling is thus realized, design work is simplified, and production efficiency is improved; the low efficiency of manual adjustment in the prior art is overcome, and so is the problem that automatic generation by a device cannot meet user requirements.
Additional features and advantages of embodiments of the invention will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the embodiments of the invention without limiting the embodiments of the invention. In the drawings:
fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present invention;
FIG. 2 is an exemplary diagram of elements of different attributes provided by embodiments of the present invention;
FIG. 3 is a diagram of exemplary text elements provided by an embodiment of the present invention;
FIG. 4 is a diagram illustrating an example of a processing result for a normal element in the prior art;
FIG. 5 is an exemplary illustration of an alignment axis of a normal element provided by an embodiment of the invention;
FIG. 6 is an exemplary diagram of normal element overflow provided by embodiments of the present invention;
FIG. 7 is an exemplary diagram of an overlapping relationship provided by an embodiment of the invention;
fig. 8 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating embodiments of the invention, are given by way of illustration and explanation only, not limitation.
The applicant has found that when a media file of one theme needs to be applied to different scenes, it usually has to be adjusted manually or generated automatically by a device, and neither approach can guarantee both efficiency and satisfaction of user requirements.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present invention. As shown in fig. 1, the method comprises the steps of:
Step 101, acquiring an original image of a media file and a target size of a target image to be generated;
Step 102, determining the attribute of each element according to the position of each element in the original image;
Step 103, scaling the canvas size and the element sizes of the target image according to the size of the original image and the target size;
Step 104, adjusting the position of each scaled element in the target image according to the attribute of each element, and obtaining the target image.
In the embodiment of the invention, the original image of the media file is processed on the web side. The media file is the file that needs to be processed into the target image, and may be, for example, an advertisement on a website page. In addition, the original image is in PSD format and contains information such as pictures and text.
In embodiments of the present invention, the original image of the media file may be obtained by any suitable means, for example, from a gallery, or uploaded by a user, etc. The number of target sizes of the target image can also be set according to actual needs, and for example, the target size may be one size or multiple sizes.
When the original image is acquired, each element in it exists much like a container box, so in step 102 the attribute of each element can be determined from the position of the element in the original image.
Specifically, an element whose size differs from the canvas size of the original image by a value within a set range is determined to be a background element, for example when the width and height of the element differ from the width and height of the canvas by no more than 1%, and/or when the error between the element and the canvas is within 3 pixels. Elements located completely within the canvas of the original image are determined to be normal elements. An element with at least one edge beyond the canvas edge of the original image is determined to be an overflow element; one edge or several edges may exceed the canvas edge. An element with at least one edge coinciding with the canvas edge of the original image is determined to be a welt element. For example, as shown in fig. 2, the difference between the background element and the canvas is hardly visible; one edge of one overflow element exceeds the canvas edge, while two edges of the other overflow element exceed it; and one edge of one welt element coincides with the canvas edge, while two edges of the other welt element coincide with it.
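This classification can be sketched as follows; an illustrative TypeScript sketch only, in which the type and function names (Rect, classifyElement, and so on) are assumptions for illustration and not part of the patent.

```typescript
// Sketch of the attribute rules above: background when the element size is within the set
// range of the canvas size (1% or about 3 px), overflow when an edge crosses the canvas
// edge, welt when an edge coincides with it, and normal when fully inside the canvas.
interface Rect { left: number; top: number; width: number; height: number; }
type Attribute = "background" | "normal" | "overflow" | "welt";

function classifyElement(el: Rect, canvas: { width: number; height: number }): Attribute {
  const nearCanvasSize =
    (Math.abs(el.width - canvas.width) / canvas.width <= 0.01 &&
     Math.abs(el.height - canvas.height) / canvas.height <= 0.01) ||
    (Math.abs(el.width - canvas.width) <= 3 && Math.abs(el.height - canvas.height) <= 3);
  if (nearCanvasSize) return "background";

  const right = el.left + el.width;
  const bottom = el.top + el.height;
  // at least one edge beyond the canvas edge
  if (el.left < 0 || el.top < 0 || right > canvas.width || bottom > canvas.height) {
    return "overflow";
  }
  // at least one edge exactly on the canvas edge
  if (el.left === 0 || el.top === 0 || right === canvas.width || bottom === canvas.height) {
    return "welt";
  }
  return "normal"; // completely inside the canvas
}
```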
In one implementation of the embodiment of the present invention, a special kind of element, namely a text element, exists in the original image of the media file. Because effects such as special effects or borders may be applied when text is entered, the text area becomes the cross-hatched area shown in fig. 3, and determining the attribute of the element from the cross-hatched area introduces a deviation. As shown in fig. 3, the element "I am text 1" is determined to be a normal element from the position of its cross-hatched area, while "I am text 2" would be determined to be an overflow element from the position of its cross-hatched area; however, the diagonally hatched area of "I am text 2" is the real area of the text, so its attribute should be a normal element. Therefore, since the embodiment of the present invention is applied on the web side, before the attribute of each element is determined from its position in the original image via the browser's built-in getBoundingClientRect application program interface, the area actually occupied by the text is measured by a visual direct-viewing method so as to obtain the real area of the text in the original image, and the real area of the text is taken as the real position of the element containing it. That is, the real position of "I am text 2" in fig. 3 is the position of the diagonally hatched area, which is also the real position of the element containing the text, so the attribute of that element is determined to be a normal element. In this way, before the attribute of an element is determined, the real position of a text element in the original image can be determined by the visual direct-viewing method, and accurate attribute information can then be obtained in the subsequent attribute determination.
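How the real text area is measured can be sketched as follows; this is only an assumed illustration, since the description names getBoundingClientRect but does not spell out the measurement, and measuring a Range over the rendered text content is one possible way to exclude the extra area added by effects or borders. The function name is illustrative.

```typescript
// Measure the bounds of the rendered glyphs rather than the text layer's container box.
function measureRealTextRect(textElement: HTMLElement): DOMRect {
  const range = document.createRange();
  range.selectNodeContents(textElement);      // only the rendered text content
  const rect = range.getBoundingClientRect(); // bounds of the glyphs, not the container
  range.detach();
  return rect;
}
```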
For step 103, the canvas size of the target image and the sizes of the elements are scaled according to the size of the original image and the target size. The scaling of the canvas size, background element, normal element, overflow element, and welt element is described below, respectively.
The canvas size of the target image is set according to the target size; for example, when the target size is 50 × 100, the canvas size of the target image is 50 × 100.
A background element is scaled by the maximum value of the ratios between the target size and the canvas size of the original image. For example, when the target size is 50 × 100, the canvas size of the original image is 100 × 50, and the size of the background element in the original image is 100 × 50, the width and height ratios between the target size and the canvas size of the original image are 0.5 and 2 respectively, and the maximum ratio, 2, is used to scale the background element, so the background element is scaled to 200 × 100. It should be emphasized that a background element is allowed to overflow, so the scaled background element may overflow the canvas of the target image.
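A minimal sketch of this background rule, with the numbers from the example; the names are illustrative and not part of the patent.

```typescript
interface Size { width: number; height: number; }

// Scale the background by the larger of the width and height ratios so that it always
// covers the target canvas; overflow of the background element is allowed.
function scaleBackground(bg: Size, original: Size, target: Size): Size {
  const ratio = Math.max(target.width / original.width, target.height / original.height);
  return { width: bg.width * ratio, height: bg.height * ratio };
}

// Example from the description: target 50 x 100, original canvas 100 x 50, background
// 100 x 50 -> ratios 0.5 and 2, maximum 2 -> the background becomes 200 x 100.
```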
As for normal elements, in the prior art they are scaled about their respective centers of gravity, so that, as shown in fig. 4, normal elements that are left-aligned in the original image no longer satisfy the left-alignment condition after image processing. Therefore, the embodiment of the present invention introduces the concept of alignment axes for normal elements. After the attributes of the elements have been determined from their positions in the original image, axes to be aligned are generated for each normal element in the original image in a one-dimensional direction; taking the normal elements in fig. 5 as an example, a left alignment axis, a center X alignment axis and a right alignment axis (or an upper alignment axis, a center Y alignment axis and a lower alignment axis) are generated in the one-dimensional direction. To facilitate the subsequent movement of the normal elements, the actual alignment axis of each normal element needs to be determined. After the axes to be aligned of each normal element are generated, the pixel value of each axis to be aligned is determined; for a left alignment axis, a center X alignment axis and a right alignment axis, the pixel value of each axis to be aligned is its pixel value in the X direction. For example, suppose there are 8 normal elements in the original image; the pixel values of their axes to be aligned are shown in table 1 below.
Then the number of axes to be aligned having the same pixel value in the same direction is counted, and the axis to be aligned with the largest count is taken as the actual alignment axis of the corresponding normal element; the normal elements in table 1 are taken as an example below.
TABLE 1
Normal element | Left alignment axis | Center X alignment axis | Right alignment axis
1 | 20 | 40 | 60
2 | 20 | - | -
3 | 20 | 60 | 100
4 | 20 | 50 | 100
5 | 40 | 60 | 80
6 | 60 | 80 | 100
7 | 60 | 70 | 80
8 | - | 100 | -
(a dash marks a pixel value not stated in the description)
Starting from the pixel value 20 of the left alignment axis of normal element 1, check whether other left alignment axes with the same pixel value exist: the left alignment axes of normal elements 2, 3 and 4 are found to lie at the same pixel value as that of normal element 1, so the number of left alignment axes at pixel value 20 is 4. Next, look at the pixel value 40 of the center X alignment axis of normal element 1: no other center X alignment axis lies at this pixel value, so its count is 1. Likewise, no other right alignment axis lies at the pixel value 60 of the right alignment axis of normal element 1, so its count is also 1. In summary, the left alignment axis, having the largest count, is taken as the actual alignment axis of normal element 1.
Then the pixel value 20 of the left alignment axis of normal element 2 is examined; the situation is similar to that of normal element 1, so its left alignment axis is likewise taken as its actual alignment axis.
For the normal element 3, the number of left alignment axes with the same pixel value 20 is 4; the number of the central X alignment axes with the same pixel value 60 as the central X alignment axis is 2; the number of right alignment axes having the same pixel value 100 as the right alignment axis is also 2. Therefore, the left alignment axis having the largest number is used as its actual alignment axis.
For the normal element 4, the number of left alignment axes with the same pixel value 20 is 4; the number of central X alignment axes with the same pixel value 50 as the central X alignment axes is 1; the number of right alignment axes with the same pixel value 100 is 3. Therefore, the left alignment axis having the largest number is used as its actual alignment axis.
Since the pixel values of the actual alignment axes of normal elements 1, 2, 3 and 4 are all 20, these 4 normal elements are group relation elements, and after the subsequent image processing is finished they should remain left-aligned with each other.
For the normal element 5, the number of left alignment axes having the same pixel value 40 as the left alignment axis is 1, the number of center X alignment axes having the same pixel value 60 as the center X alignment axis is 2, and the number of right alignment axes having the same pixel value 80 as the right alignment axis is 3, so that the right alignment axis having the largest number is used as the actual alignment axis of the normal element 5.
For normal element 6, the number of left alignment axes with the same pixel value 60 is 2, the number of center X alignment axes with the same pixel value 80 is 1, and the number of right alignment axes with the same pixel value 100 is 2. When the candidate axes with the largest count are tied, the actual alignment axis is determined from the position of the normal element on the canvas of the original image: if the normal element lies completely or mostly on the left side of the canvas, its left alignment axis is taken as its actual alignment axis; if it lies completely or mostly on the right side of the canvas, its right alignment axis is taken as its actual alignment axis; and if the parts of the normal element on the two sides of the canvas are equal, its center X alignment axis is taken as its actual alignment axis. Thus, taking the example that normal element 6 lies entirely or mostly on the left side of the canvas, its left alignment axis is taken as its actual alignment axis.
For the normal element 7, the number of left alignment axes having the same pixel value 60 as that of the left alignment axis is 2, the number of center X alignment axes having the same pixel value 70 as that of the center X alignment axis is 1, and the number of right alignment axes having the same pixel value 80 as that of the right alignment axis is 3. Therefore, the right alignment axis having the largest number is set as the actual alignment axis of the normal element 7.
Since the pixel values of the actual alignment axes of normal elements 5 and 7 are both 80, these 2 normal elements are group relation elements, and after the subsequent image processing is finished they should remain right-aligned with each other.
For normal element 8, the count of axes sharing the same pixel value is 1 for all of its axes to be aligned. When the counts of the candidate axes are equal, the actual alignment axis is again determined from the position of the normal element on the canvas of the original image: if the normal element lies completely or mostly on the left side of the canvas, its left alignment axis is taken as its actual alignment axis; if it lies completely or mostly on the right side of the canvas, its right alignment axis is taken as its actual alignment axis; and if the parts of the normal element on the two sides of the canvas are equal, its center X alignment axis is taken as its actual alignment axis. Taking as an example that the portions of normal element 8 on the two sides of the canvas are equal, its center X alignment axis is taken as its actual alignment axis. Finally, the actual alignment axes of all normal elements are shown in table 2.
TABLE 2
Normal elements | Actual alignment axis
1, 2, 3, 4 | Left alignment axis, pixel value 20
5, 7 | Right alignment axis, pixel value 80
6 | Left alignment axis, pixel value 60
8 | Center X alignment axis, pixel value 100
The processing for determining the actual alignment axis from the axes to be aligned generated in the Y direction is similar to the above and is not described again here.
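The vote over the axes to be aligned can be sketched as follows for the X direction; this is a simplified illustration in which the names are assumptions, and the handling of ties is reduced to the canvas-side rule described above.

```typescript
interface Box { left: number; width: number; }
type AxisKind = "left" | "centerX" | "right";
interface Axis { kind: AxisKind; pixel: number; }

// `all` contains all normal elements, including `el` itself, so each count includes the
// element's own axis, as in the worked example above.
function actualAlignmentAxis(el: Box, all: Box[], canvasWidth: number): Axis {
  const axesOf = (b: Box): Axis[] => [
    { kind: "left", pixel: b.left },
    { kind: "centerX", pixel: b.left + b.width / 2 },
    { kind: "right", pixel: b.left + b.width },
  ];
  const allAxes = all.flatMap(axesOf);
  const counted = axesOf(el).map(a => ({
    axis: a,
    count: allAxes.filter(o => o.kind === a.kind && o.pixel === a.pixel).length,
  }));
  counted.sort((a, b) => b.count - a.count);
  if (counted[0].count > counted[1].count) return counted[0].axis; // clear winner

  // tie: decide by which side of the canvas the element mostly lies on
  const center = el.left + el.width / 2;
  if (center < canvasWidth / 2) return axesOf(el)[0]; // mostly on the left -> left axis
  if (center > canvasWidth / 2) return axesOf(el)[2]; // mostly on the right -> right axis
  return axesOf(el)[1];                               // balanced -> center X axis
}
```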
Through the above implementation, the actual alignment axis of each normal element is obtained. When a normal element is scaled, its size and the pixel value of its actual alignment axis are scaled according to the proportion between the size of the original image and the target size. For scaling the size of a normal element, the width-height product of the target size, P_target = W_target × H_target, and the width-height product of the original image, P_original = W_original × H_original, are calculated first, and the scaling ratio is then the square root of their ratio:

ratio = √(P_target / P_original) = √((W_target × H_target) / (W_original × H_original))
The size of the normal element is then scaled by this ratio, i.e. the width and the height of the normal element are each multiplied by it. In addition, the pixel value at which the actual alignment axis of the normal element lies must also be scaled. For example, if the axes to be aligned were generated in the X direction, the resulting actual alignment axis of the normal element is likewise a left alignment axis, center X alignment axis or right alignment axis in the X direction, and only the ratio in the X direction, i.e. the width ratio, is considered when calculating the proportion between the size of the original image and the target size. If the width of the original image is 100, the pixel value of the actual alignment axis of a certain normal element in the original image is 10, and the width of the target size is 50, then the ratio between the original size and the target size is 0.5, and the pixel value of the actual alignment axis of that normal element in the target image is 5. The processing of the axes to be aligned generated in the Y direction is similar and is not described again here.
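A short sketch of this normal-element scaling (X-direction axis only; names are illustrative).

```typescript
interface Size { width: number; height: number; }

// Width and height are scaled by the square root of the area ratio; the alignment-axis
// pixel value is scaled by the per-direction ratio (the width ratio for an X-direction axis).
function scaleNormalElement(
  el: Size, axisPixelX: number, original: Size, target: Size
): { size: Size; axisPixelX: number } {
  const r = Math.sqrt((target.width * target.height) / (original.width * original.height));
  return {
    size: { width: el.width * r, height: el.height * r },
    axisPixelX: axisPixelX * (target.width / original.width), // e.g. 10 * (50 / 100) = 5
  };
}
```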
When an element is an overflow element or a welt element, its size is scaled according to the proportion between the size of the original image and the target size. Similarly, the width-height product of the target size, P_target = W_target × H_target, and the width-height product of the original image, P_original = W_original × H_original, are calculated first, and the scaling ratio is the square root of their ratio:

ratio = √(P_target / P_original) = √((W_target × H_target) / (W_original × H_original))
The sizes of overflow elements and welt elements are then scaled by this ratio, i.e. their widths and heights are each multiplied by it. A special case can arise for welt elements: after scaling, a welt element may be larger than the canvas, so it must be checked whether the width or height of the scaled welt element exceeds the width or height of the target size. If either does, the scaled width and height are multiplied by a predetermined coefficient, for example 0.9, to forcibly reduce the welt element.
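A sketch of the overflow/welt scaling, including the forced reduction of oversized welt elements; the 0.9 coefficient is the example value from the description, and the names are illustrative.

```typescript
interface Size { width: number; height: number; }
const WELT_SHRINK = 0.9; // predetermined coefficient from the example above

function scaleOverflowOrWelt(el: Size, original: Size, target: Size, isWelt: boolean): Size {
  const r = Math.sqrt((target.width * target.height) / (original.width * original.height));
  let scaled = { width: el.width * r, height: el.height * r };
  // welt elements must not end up larger than the target canvas in either dimension
  if (isWelt && (scaled.width > target.width || scaled.height > target.height)) {
    scaled = { width: scaled.width * WELT_SHRINK, height: scaled.height * WELT_SHRINK };
  }
  return scaled;
}
```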
After the elements have been scaled, they may still fail to meet the user's requirements, so step 104 is needed to adjust the positions of the scaled elements in the target image according to the attributes of the different elements. The canvas and the background elements are scaled exactly and do not need to be adjusted. The following describes the situations that may arise after scaling of normal elements, overflow elements and welt elements in which the user's requirements are not satisfied.
First, all normal elements on each actual alignment axis are traversed, the offset pixel values of the normal elements that overflow the canvas are determined, and the normal elements are moved into the canvas of the target image along their actual alignment axes according to these offset pixel values. For example, if a normal element overflows the canvas by a pixel value of 50 and its actual alignment axis is the right alignment axis, it is moved by 50 pixels along its right alignment axis into the canvas of the target image. It should also be noted that when a normal element belongs to a group relation with other normal elements, the maximum offset within the group is taken as the reference when determining its offset pixel value. Taking the group relation elements A, B and C in fig. 6 as an example, their actual alignment axis is the right alignment axis and, after scaling, all of them overflow the canvas to a greater or lesser extent; the group is therefore moved into the canvas as a whole by the maximum offset, i.e. the pixel value f by which element C overflows. In other words, for a group of relation elements, all normal elements in the group are shifted by the maximum offset pixel value occurring in the group.
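A minimal sketch of this shift for a group of right-aligned normal elements that overflow the right canvas edge (one-dimensional case; names are illustrative).

```typescript
interface Placed { left: number; width: number; }

// Move the whole group into the canvas by the largest overflow found in the group,
// as with elements A, B and C in fig. 6.
function shiftGroupIntoCanvas(group: Placed[], canvasWidth: number): Placed[] {
  const overflows = group.map(e => Math.max(0, e.left + e.width - canvasWidth));
  const shift = Math.max(...overflows); // maximum offset pixel value in the group
  return group.map(e => ({ ...e, left: e.left - shift }));
}
```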
For the scaled overflow element, its position in the target image is adjusted according to the area of the overflow element that lies within the canvas of the original image. For example, when the area of the overflow element inside the canvas of the original image is 40% of its total area, the scaled overflow element must likewise have 40% of its total area inside the canvas of the target image, and its position in the target image is adjusted by this criterion.
For the scaled welt element, its position in the target image is adjusted according to the edge of the welt element that coincides with the canvas edge of the original image. For example, when the right edge of the welt element coincides with the canvas edge of the original image, its right edge should also coincide with the canvas edge of the target image after scaling, and its position in the target image is adjusted by this criterion.
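The position adjustment for overflow and welt elements can be sketched as follows for the X direction; the right-edge orientation and the names are assumptions for illustration.

```typescript
interface Placed { left: number; width: number; }

// Overflow element hanging off the right edge: keep the same visible fraction (e.g. 40%)
// inside the target canvas as in the original canvas.
function placeOverflowX(el: Placed, canvasWidth: number, visibleFraction: number): Placed {
  const visibleWidth = el.width * visibleFraction;
  return { ...el, left: canvasWidth - visibleWidth }; // the rest overflows to the right
}

// Welt element whose right edge coincided with the original canvas edge: keep it coincident.
function placeWeltRightX(el: Placed, canvasWidth: number): Placed {
  return { ...el, left: canvasWidth - el.width };
}
```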
According to the embodiment of the invention, a target image of the target size can be generated automatically once the user provides the original image of the media file and the target size. Multi-size scaling is thus realized, design work is simplified, and production efficiency is improved; the low efficiency of manual adjustment in the prior art is overcome, and so is the problem that automatic generation by a device cannot meet user requirements.
In addition, in one implementation of the embodiment of the present invention, because of the size scaling performed while processing the original image, the space of the canvas may change drastically, and elements that previously did not overlap may come to overlap. If natural scenery elements overlap, the effect is not affected, but for figure (person) elements, text elements and even merchants' trademarks, overlap should be avoided as far as possible. Therefore, in the embodiment of the present invention, after the attributes of the elements are determined, the overlapping relations of all normal elements in the original image are traversed, so that it is known which normal elements overlap each other and which do not. After the positions of the scaled elements have been adjusted in the target image, it is checked again whether any new overlapping relation exists among the normal elements. When a new overlapping relation exists, the normal elements in the new overlapping relation are reduced along their actual alignment axes by a preset proportion, at most a specified number of times, to obtain the target image. The preset proportion can be set according to user requirements, for example 10%, and the specified number of times is at most 3; that is, the normal elements in the new overlapping relation are reduced by 10% along their respective actual alignment axes up to 3 times. The number of reductions is limited to 3 because the normal elements should not be reduced excessively merely in pursuit of non-overlap. As shown in fig. 7, the left side shows the overlapping relations among the normal elements A, B, C, D and E in the original image, where the normal elements C, D and E are group relation elements that overlap with the normal element B. The right side shows the scaled image, in which a new overlapping relation appears between normal element A and normal element B, so only this overlapping relation needs to be processed. That is, in the example of fig. 7, only normal element A and normal element B need to be gradually reduced by the preset proportion along their respective actual alignment axes: after the 1st reduction, if they still overlap, the reduction by the preset proportion continues; after the 2nd reduction, if they no longer overlap, the image at that point can be used as the target image. If the overlapping relation still exists after the 3rd reduction, no further reduction is performed and the image at that point is taken directly as the target image.
In addition, when group relation elements are involved in the new overlapping relation, the normal elements in the new overlapping relation are reduced along the actual alignment axis of the group relation elements by the preset proportion, at most the specified number of times, to obtain the target image, so that the group relation elements remain in their group relation after the overlap is resolved.
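A sketch of the overlap check and stepwise reduction (at most 3 reductions of 10%); shrinking about the element centre is a simplification used here for brevity, whereas the description reduces the elements along their actual alignment axes so that aligned groups stay aligned.

```typescript
interface Box { left: number; top: number; width: number; height: number; }

function overlaps(a: Box, b: Box): boolean {
  return a.left < b.left + b.width && b.left < a.left + a.width &&
         a.top < b.top + b.height && b.top < a.top + a.height;
}

function shrinkAboutCenter(box: Box, ratio = 0.9): Box {
  return {
    left: box.left + (box.width * (1 - ratio)) / 2,
    top: box.top + (box.height * (1 - ratio)) / 2,
    width: box.width * ratio,
    height: box.height * ratio,
  };
}

function repairNewOverlap(a: Box, b: Box): [Box, Box] {
  for (let i = 0; i < 3 && overlaps(a, b); i++) { // at most 3 reductions of 10% each
    a = shrinkAboutCenter(a);
    b = shrinkAboutCenter(b);
  }
  return [a, b]; // taken as-is even if an overlap remains after the 3rd reduction
}
```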
Through this implementation, important elements (for example, normal elements) are prevented from overlapping as a result of scaling during image processing, so the problem of important elements being covered and spoiling the display effect is avoided.
Correspondingly, fig. 8 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention. As shown in fig. 8, the apparatus 80 includes: an obtaining module 81, configured to obtain an original image of a media file and a target size of a target image to be generated; an attribute determining module 82, configured to determine an attribute of each element according to a position of each element in the original image; a scaling module 83, configured to scale the canvas size and the element sizes of the target image according to the size of the original image and the target size; and an adjusting module 84, configured to adjust, according to the attribute of each element, a position of each element in the target image after scaling, and obtain the target image.
Further, the attribute determination module is specifically configured to: determine an element whose size differs from the canvas size of the original image by a value within a set range as a background element; determine elements located completely within the canvas of the original image as normal elements; determine an element with at least one edge exceeding the canvas edge of the original image as an overflow element; and determine an element with at least one edge coinciding with the canvas edge of the original image as a welt element.
Further, the attribute determination module is further configured to: before determining the attribute of each element according to the position of each element in the original image, acquiring a real text area in the original image by using a visual direct-viewing method; and determining the real text area as the real position of the element in which the text area is located.
Further, the apparatus further includes an alignment axis generating module 85, configured to, after the attribute of each element has been determined from its position in the original image, generate axes to be aligned for each normal element in the original image in a one-dimensional direction and determine the pixel value of each axis to be aligned; count the number of axes to be aligned that have the same pixel value in the same direction; and take the axis to be aligned with the largest count as the actual alignment axis of the corresponding normal element.
Further, the scaling module is specifically configured to: setting a canvas size of the target image at the target size; when the element is a background element, scaling the background element by the maximum value of the proportion between the target size and the canvas size of the original image; when the element is a normal element, scaling the size of the normal element and the pixel value of the actual alignment axis according to the proportion between the target size and the size of the original image; and when the elements are overflow elements and welt elements, scaling the sizes of the overflow elements and the welt elements according to the proportion between the target size and the size of the original image.
Further, the adjusting module is further configured to: traverse all normal elements on the actual alignment axis, determine the offset pixel values of the normal elements that overflow the canvas, and move the normal elements into the canvas of the target image along their actual alignment axes according to the offset pixel values; for the scaled overflow element, adjust the position of the overflow element in the target image according to the area of the overflow element within the canvas of the original image; and, for the scaled welt element, adjust the position of the welt element in the target image according to the edge of the welt element that coincides with the canvas edge of the original image.
Further, the apparatus further includes an overlap relation determining module 86, configured to traverse the overlap relation of all normal elements in the original image after determining the attribute of each element.
Further, the adjusting module is further configured to: after the positions of the scaled elements have been adjusted in the target image, check whether any new overlapping relation exists among the normal elements; and, when a new overlapping relation exists, reduce the normal elements in the new overlapping relation along their actual alignment axes by a preset proportion at most a specified number of times to obtain the target image.
The specific working principle and benefits of the image processing apparatus provided by the embodiment of the present invention are similar to those of the image processing method provided by the embodiment of the present invention, and will not be described herein again.
In addition, another aspect of the embodiments of the present invention also provides a machine-readable storage medium, on which instructions are stored, the instructions being used for causing a machine to execute the image processing method according to the above-mentioned embodiments.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal and a carrier wave.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art to which the present application pertains. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. An image processing method, characterized by comprising:
acquiring an original image of a media file and a target size of a target image to be generated;
determining the attribute of each element according to the position of each element in the original image;
scaling the canvas size and the element sizes of the target image according to the size of the original image and the target size;
and adjusting the position of each element after zooming in the target image according to the attribute of each element to obtain the target image.
2. The method according to claim 1, wherein the determining the attributes of the elements according to the positions of the elements in the original image comprises:
determining an element with a difference value between the size and the canvas size of the original image within a set range as a background element;
determining elements completely distributed in the canvas of the original image as normal elements;
determining an element with at least one edge exceeding the canvas edge of the original image as an overflow element;
and determining an element with at least one edge coinciding with the canvas edge of the original image as a welt element.
3. The image processing method according to claim 1, wherein before said determining attributes of respective elements from their positions in the original image, the method further comprises:
acquiring a real text area in the original image by using a visual direct-viewing method;
and determining the real text area as the real position of the element in which the text area is located.
4. The image processing method according to claim 2, wherein after said determining attributes of respective elements according to their positions in the original image, the method further comprises:
generating an axis to be aligned of each normal element in the original image in a one-dimensional direction, and determining a pixel value of each axis to be aligned;
counting the number of the axes to be aligned with the same pixel value in the same direction;
and taking the axis to be aligned with the largest count as the actual alignment axis of the corresponding normal element.
5. The image processing method of claim 4, wherein the scaling the canvas size and the respective element sizes of the target image according to the size of the original image and the target size comprises:
setting a canvas size of the target image at the target size;
when the element is a background element, scaling the background element by a maximum value of a ratio between the target size and a canvas size of the original image;
when the element is a normal element, scaling the size of the normal element and the pixel value of the actual alignment axis according to the proportion between the target size and the size of the original image;
and when the element is an overflow element or a welt element, scaling the size of the overflow element or welt element according to the proportion between the target size and the size of the original image.
6. The method according to claim 2, wherein the adjusting the position of the scaled elements in the target image according to the attributes of the elements comprises:
traversing all normal elements on the actual alignment axis, determining the shifted pixel values of the normal elements overflowing the canvas, and moving the normal elements into the canvas of the target image along the actual alignment axis according to the shifted pixel values;
for the scaled overflow element, adjusting the position of the overflow element in the target image according to the area of the overflow element in the canvas of the original image;
and, for the scaled welt element, adjusting the position of the welt element in the target image according to the edge of the welt element that coincides with the canvas edge of the original image.
7. The image processing method according to claim 4, wherein after said determining the attributes of the respective elements, the method further comprises:
and traversing and recording the overlapping relation of all normal elements in the original image.
8. The method according to claim 7, wherein after the adjusting the position of the scaled elements in the target image, the method further comprises:
checking whether all normal elements have new overlapping relations;
and when a new overlapping relation exists, reducing the normal elements in the new overlapping relation along their actual alignment axes by a preset proportion at most a specified number of times to obtain the target image.
9. An image processing apparatus characterized by comprising:
the acquisition module is used for acquiring an original image of the media file and the target size of a target image to be generated;
the attribute determining module is used for determining the attribute of each element according to the position of each element in the original image;
a scaling module, configured to scale a canvas size and each element size of the target image according to the size of the original image and the target size;
and the adjusting module is used for adjusting the position of each element in the target image after zooming according to the attribute of each element and obtaining the target image.
10. A machine-readable storage medium having stored thereon instructions for causing a machine to perform the image processing method of any one of claims 1 to 8.
CN202210293355.9A 2022-03-23 2022-03-23 Image processing method, device and storage medium Pending CN114677432A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210293355.9A CN114677432A (en) 2022-03-23 2022-03-23 Image processing method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210293355.9A CN114677432A (en) 2022-03-23 2022-03-23 Image processing method, device and storage medium

Publications (1)

Publication Number Publication Date
CN114677432A (en) 2022-06-28

Family

ID=82075292

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210293355.9A Pending CN114677432A (en) 2022-03-23 2022-03-23 Image processing method, device and storage medium

Country Status (1)

Country Link
CN (1) CN114677432A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024051632A1 (en) * 2022-09-09 2024-03-14 北京沃东天骏信息技术有限公司 Image processing method and apparatus, medium, and device
CN117808933A (en) * 2024-02-29 2024-04-02 成都索贝数码科技股份有限公司 Image element decomposition and reconstruction method and device
CN117808933B (en) * 2024-02-29 2024-05-24 成都索贝数码科技股份有限公司 Image element decomposition and reconstruction method and device


Similar Documents

Publication Publication Date Title
WO2020192391A1 (en) Ocr-based image conversion method and apparatus, device and readable storage medium
US9983760B2 (en) Apparatus, method and computer readable recording medium for arranging a plurality of items automatically in a canvas
US11657510B2 (en) Automatic sizing and placement of text within a digital image
CN109741287B (en) Image-oriented filtering method and device
CN112348836A (en) Method and device for automatically extracting building outline
US10389936B2 (en) Focus stacking of captured images
US20160203381A1 (en) Method and apparatus for adsorbing straight line/line segment, method and apparatus for constructing polygon
CN114677432A (en) Image processing method, device and storage medium
CN104219428A (en) Camera installation device
CN106909869A (en) A kind of sampling grid partitioning method and device of matrix two-dimensional code
CN111899352A (en) Part model processing method, system, equipment and storage medium based on CATIA
CN108460003B (en) Text data processing method and device
CN111640109A (en) Model detection method and system
CN116311135A (en) Data dimension reduction method, data dimension reduction system and controller for semantic information
CN112784541A (en) Method and device for replacing picture in document, electronic equipment and storage medium
CN111122390A (en) Interface tension measuring method based on artificial intelligence model
CN107527323A (en) The scaling method and device of lens distortion
CN113409375B (en) Image processing method, image processing apparatus, and non-volatile storage medium
CN114817209A (en) Monitoring rule processing method and device, processor and electronic equipment
CN112906708A (en) Picture processing method and device, electronic equipment and computer storage medium
CN114612926A (en) Method and device for counting number of people in scene, computer equipment and storage medium
CN112863165B (en) Logistics enterprise fleet management method and system based on 5G
CN110874524B (en) Method, system and device for changing visual effect of document
CN108256530A (en) Image-recognizing method, device and equipment
CN113392811B (en) Table extraction method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination