Detailed Description
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure. It is obvious that the described embodiments are only some, rather than all, of the embodiments of the present disclosure. All other embodiments obtained by a person skilled in the art based on the embodiments in the present specification without inventive effort shall fall within the scope of protection of the present specification.
Please refer to fig. 1, fig. 2, fig. 3, fig. 4, and fig. 5. The embodiment of the specification provides an image generation method, which is executed by a terminal device. The terminal device may be a mobile device, such as a smartphone, a tablet, a portable computer, a personal digital assistant (PDA), an in-vehicle device, a POS machine, or a smart wearable device. Alternatively, the terminal device may be a desktop device, such as a television, a server, an industrial personal computer, or a smart self-service kiosk. The image generation method of the present embodiment may include the following steps.
Step S10: acquiring a plurality of images selected by a user.
In the present embodiment, the storage format of the image includes, but is not limited to, bitmap format (BMP), joint photographic experts group format (JPEG), and Tagged Image File Format (TIFF). The color space of the image includes, but is not limited to, YUV color space, YCbCr color space, RGB color space, and HSL color space, etc. The plurality of images may be all the same size, may be partially the same size, or may be all different sizes.
The terminal device may provide an image selection interface on which image identifiers of a plurality of images are displayed. An image identifier may be used to identify an image and may include, for example, the name of the image. The terminal device may obtain image selection instructions directed at a plurality of image identifiers, and may obtain the images corresponding to the image identifiers pointed to by the image selection instructions. The terminal device may display the image identifiers of the multiple images in one window or in multiple windows. An image selection instruction may be triggered by a user operation. For example, the terminal device may generate the image selection instruction upon detecting that an image identifier is pressed, long-pressed (pressed for more than a predetermined time), clicked, double-clicked, or swiped.
Of course, other image information may also be displayed on the image selection interface. The terminal device may likewise obtain an image selection instruction directed at such other image information, and obtain the images corresponding to the other image information pointed to by the instruction. For example, thumbnails of the images may also be displayed on the image selection interface; a thumbnail is a reduced-size rendering of an image's content.
Step S12: a first canvas is created.
In this embodiment, the first canvas may be used to carry the image content of the plurality of images. The height of the first canvas may be greater than or equal to the sum of the heights of the plurality of images; the width of the first canvas may be greater than or equal to the maximum of the widths of the plurality of images.
The terminal device may create the first canvas directly after acquiring the plurality of images selected by the user. Alternatively, the terminal device may first acquire the plurality of images selected by the user, then collect an image merging instruction, and then create the first canvas. The image merging instruction may be triggered by a user operation. For example, the terminal device may generate the image merging instruction upon detecting that any combination of one or more keys is pressed, long-pressed, clicked, double-clicked, or swiped; the keys may be virtual keys, physical keys, or the like. As another example, the terminal device may recognize a preset gesture and then generate the image merging instruction; the preset gesture may be, for example, a leftward or rightward swipe.
The terminal device may create the first canvas according to a first preset rule. The first preset rule may be: take SumT as the height of the first canvas and MaxW as the width of the first canvas, where SumT is the sum of the heights of the plurality of images and MaxW is the largest of the widths of the plurality of images. Alternatively, the first preset rule may be: take SumT + (N − 1) × G as the height of the first canvas and MaxW as the width, where N is the number of the plurality of images and G is the image gap in the longitudinal direction. Of course, the first preset rule may also be other contents, which are not described herein again.
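The first preset rule above can be sketched in Python as follows. This is only an illustrative sketch, not the embodiment's implementation: images are assumed to be represented by (width, height) tuples, and the helper name `canvas_size` is hypothetical.

```python
# Sketch of the first preset rule: size a canvas to carry several images
# stacked vertically, with an optional longitudinal gap G between them.

def canvas_size(sizes, gap=0):
    """Return (width, height) of the canvas.

    height = SumT (sum of image heights) + (N - 1) * gap;
    width  = MaxW (the largest image width).
    """
    n = len(sizes)
    height = sum(h for _, h in sizes) + max(n - 1, 0) * gap
    width = max(w for w, _ in sizes)
    return width, height

# Three images of different sizes, no gap:
print(canvas_size([(600, 900), (800, 400), (600, 300)]))          # (800, 1600)
# The same images with a 10-pixel longitudinal gap:
print(canvas_size([(600, 900), (800, 400), (600, 300)], gap=10))  # (800, 1620)
```

Note that the gap term only applies between images, hence the (N − 1) factor.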
Step S14: drawing image content of at least one image in the first canvas.
In this embodiment, the terminal device may draw the image contents of the at least one image in the first canvas one by one. During drawing, for each image to be drawn, the terminal device may determine the drawing area on the first canvas corresponding to that image, and may render the image content of the image in the drawing area. The height of the drawing area may be greater than or equal to the height of the image, and the width of the drawing area may be greater than or equal to the width of the image.
The terminal device may determine, according to a second preset rule, the drawing area on the first canvas corresponding to the image. The second preset rule may be associated with the first preset rule. For example, when the first preset rule is to take SumT as the height of the first canvas and MaxW as its width, the second preset rule may be: take the position at a longitudinal distance of SumY from the upper boundary of the first canvas as the upper boundary of the drawing area; take the position at a longitudinal distance of SumY + Hᵢ from the upper boundary of the first canvas as the lower boundary of the drawing area; take a position at a lateral distance from the left boundary of the first canvas as the left boundary of the drawing area; and take the position a further Wᵢ to the right as the right boundary of the drawing area. SumY may be the sum of the heights of the images, among the plurality of images, that have already been drawn; Hᵢ may be the height of the image; Wᵢ may be the width of the image. Of course, the second preset rule may be other contents. For example, the second preset rule may be: take the position at a longitudinal distance of SumY from the upper boundary of the first canvas as the upper boundary of the drawing area; take the position at a longitudinal distance of SumY + Hᵢ as the lower boundary of the drawing area; take the left boundary of the first canvas as the left boundary of the drawing area; and take the position at a lateral distance of Wᵢ from the left boundary of the first canvas as the right boundary of the drawing area. Of course, those skilled in the art will appreciate that the above first preset rule and second preset rule are only examples; in practice, each may be other contents.
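The left-aligned variant of the second preset rule can be sketched as follows. This is an illustrative sketch only; `drawing_areas` is a hypothetical helper, and each area is returned as a (left, top, right, bottom) tuple in pixels.

```python
def drawing_areas(sizes):
    """For each image (width, height), compute its drawing area on the
    canvas: rows [SumY, SumY + H_i) and columns [0, W_i), where SumY is
    the accumulated height of the images already drawn."""
    areas = []
    sum_y = 0
    for w, h in sizes:
        areas.append((0, sum_y, w, sum_y + h))  # (left, top, right, bottom)
        sum_y += h                              # SumY grows by H_i
    return areas

print(drawing_areas([(600, 900), (800, 400)]))
# [(0, 0, 600, 900), (0, 900, 800, 1300)]
```

The second image starts at row 900 because SumY, the summed height of the drawn images, is then 900.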
Each pixel point of the image can correspond to one pixel point in the drawing area. The terminal device may use the attribute value of each pixel point of the image as the attribute value of the corresponding pixel point in the drawing area; thereby enabling the image content of the image to be rendered in the drawing area. The attribute value of the pixel point can be used to represent the color of the pixel point. According to the difference of the color space of the image, the attribute values of the pixel points can be different. For example, when the color space of the image is an RGB color space, the attribute value of the pixel point may be a 24-bit encoded RGB value. Of course, when the color space of the image is other color spaces, the attribute value of the pixel point may also be other encoding values.
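The pixel-by-pixel copying described above can be sketched as follows, assuming for illustration that a canvas is a nested list of (R, G, B) attribute tuples; the helper name `draw_image` is hypothetical.

```python
def draw_image(canvas, image, top, left=0):
    """Copy each pixel's attribute value (here an (R, G, B) tuple) to the
    corresponding pixel of the drawing area on the canvas."""
    for y, row in enumerate(image):
        for x, rgb in enumerate(row):
            canvas[top + y][left + x] = rgb

canvas = [[(255, 255, 255)] * 4 for _ in range(4)]  # 4x4 white canvas
image = [[(255, 0, 0)] * 2 for _ in range(2)]       # 2x2 red image
draw_image(canvas, image, top=1, left=1)
print(canvas[1][1])  # (255, 0, 0) - inside the drawing area
print(canvas[0][0])  # (255, 255, 255) - outside, unchanged
```

With 24-bit RGB, each attribute value encodes one pixel's color in 3 bytes; other color spaces would simply use different attribute values.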
Step S16: a target image is generated based on the first canvas.
In this embodiment, the target image may include image contents of the plurality of images. The image content of the plurality of images may be arranged vertically in the target image. The image content of the plurality of images may be able to remain intact in the target image.
In an implementation manner of this embodiment, the terminal device may draw all image contents of the plurality of images in the first canvas; the first canvas may then be stored as the target image.
In another implementation of this embodiment, limited by the capacity of the memory space of the terminal device, the terminal device may be unable to draw all the image contents of the plurality of images in the first canvas. In that case, to prevent memory overflow, the terminal device may, during rendering, compare the data amount of each image to be rendered with the remaining capacity of the memory space.
When the data amount of the image is less than or equal to the remaining capacity, the terminal device may draw the image content of the image in the first canvas. When the data amount of the image is greater than the remaining capacity, the terminal device considers that the memory is likely to overflow, and may generate a first image based on the image content already drawn in the first canvas, thereby releasing memory space. The data amount of an image may be understood as the memory space required for reading the image into memory. For example, an image of 900 × 600 pixels in the RGB color space has a data amount of 900 × 600 × 3 = 1,620,000 bytes, so reading it into memory requires 1,620,000 bytes.

The first image may include the image content of the images, among the plurality of images, that have already been drawn. The terminal device may store the first canvas directly as the first image, so that the first image is the same size as the first canvas. Alternatively, given that the terminal device has drawn only the image content of a portion of the plurality of images in the first canvas, the first canvas may include a blank area; the terminal device may remove the blank area and store the first canvas with the blank area removed as the first image, so that the first image is smaller than the first canvas.

The terminal device may then create a second canvas; image content of at least one undrawn image may be drawn in the second canvas; a second image may be generated based on the second canvas; and a target image may be generated based on the first image and the second image. The second canvas may be used to carry image content of the undrawn images of the plurality of images.
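The data-amount check can be sketched as follows; `image_data_size` is a hypothetical helper, and the 3-bytes-per-pixel factor corresponds to the 24-bit RGB case in the example above.

```python
def image_data_size(width, height, bytes_per_pixel=3):
    """Memory needed to read the decoded image into memory,
    e.g. 3 bytes per pixel for 24-bit RGB."""
    return width * height * bytes_per_pixel

size = image_data_size(900, 600)
print(size)               # 1620000 bytes, as in the 900 x 600 RGB example
print(size <= 2_000_000)  # True: fits within a 2,000,000-byte remaining capacity
print(size <= 1_000_000)  # False: drawing it would risk memory overflow
```

When the check fails, the canvas drawn so far is stored as an image and the occupied memory is released before continuing.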
The height of the second canvas may be greater than or equal to the sum of the heights of the undrawn images of the plurality of images; the determination of the second canvas height may be as described above for the determination of the first canvas height. For example, the terminal device may use the sum of the heights of the undrawn images in the plurality of images as the height of the second canvas. In addition, the process of drawing the image content of the image on the second canvas by the terminal device may be similar to the process of drawing the image content of the image on the first canvas.
Further, the terminal device may draw the image contents of all the undrawn images of the plurality of images in the second canvas, and may then store the second canvas as a second image. The second image may include the image content of the undrawn images of the plurality of images. The terminal device may merge the second image and the first image into the target image; the merging may, for example, comprise splicing (stitching) the images together.
Further, limited by the capacity of the memory space of the terminal device, the terminal device may also be unable to draw all the image contents of the undrawn images in the second canvas. To prevent memory overflow, the terminal device may, during drawing, again compare the data amount of each image to be drawn with the remaining capacity of the memory space. When the data amount of the image is less than or equal to the remaining capacity, the terminal device may draw the image content of the image in the second canvas. When the data amount of the image is greater than the remaining capacity, the terminal device considers that the memory is likely to overflow, and may generate the second image based on the image content already drawn in the second canvas, thereby releasing memory space. The terminal device may then create a third canvas; image content of at least one undrawn image may be drawn in the third canvas; a third image may be generated based on the third canvas; and the target image may be generated based on the first image, the second image, and the third image. It can be seen that in this embodiment, the terminal device may create three or more canvases according to the capacity of the memory space and the data amounts of the plurality of images; the image contents of the plurality of images may be drawn in the three or more canvases respectively; the three or more canvases may be stored as images respectively; and the target image may be generated based on the stored images.
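The multi-canvas strategy above can be sketched as a batching loop. This is an illustrative sketch only: `batch_images` is a hypothetical helper, each image's data amount is taken as width × height × 3 bytes (24-bit RGB), and the edge case of a single image larger than the whole capacity is not handled.

```python
def batch_images(sizes, capacity):
    """Walk the images in order; whenever the next image's data amount
    exceeds the remaining capacity, store the current canvas as an image
    (releasing its memory) and start a new canvas. Returns the batches,
    one list of (width, height) tuples per canvas."""
    batches, current, remaining = [], [], capacity
    for w, h in sizes:
        need = w * h * 3                       # data amount in bytes
        if current and need > remaining:
            batches.append(current)            # store this canvas as an image
            current, remaining = [], capacity  # memory space is released
        current.append((w, h))
        remaining -= need
    if current:
        batches.append(current)
    return batches

# With a 2,000,000-byte budget, three 900 x 600 RGB images (1,620,000
# bytes each) end up on three separate canvases:
print(batch_images([(900, 600)] * 3, capacity=2_000_000))
```

The target image is then generated from the stored per-canvas images, e.g. by splicing them vertically in order.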
In this embodiment, the terminal device may obtain a plurality of images selected by a user; a first canvas may be created for carrying the image content of the plurality of images; image content of at least one image may be drawn in the first canvas; and a target image may be generated based on the first canvas. Thus, the terminal device can combine the image contents of a plurality of images into one image while keeping the image contents complete.
Please refer to fig. 6. The embodiment of the specification also provides an image generation device. The image generation apparatus may include the following units.
The acquiring unit 20 is configured to acquire a plurality of images selected by a user;
a creating unit 22 for creating a first canvas; the first canvas is used for bearing image contents of the plurality of images;
a drawing unit 24 for drawing image content of at least one image in the first canvas;
a generating unit 26 for generating a target image based on the first canvas.
Please refer to fig. 7. The embodiment of the specification further provides a terminal device. The terminal device may include a memory and a processor.
In this embodiment, the memory may be implemented in any suitable manner. For example, the memory may be a read-only memory, a mechanical hard disk, a solid-state drive, a USB flash drive, or the like. The memory may be used to store computer instructions.
In this embodiment, the processor may be implemented in any suitable manner. For example, the processor may take the form of, for example, a microprocessor or processor and a computer-readable medium that stores computer-readable program code (e.g., software or firmware) executable by the (micro) processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, an embedded microcontroller, and so forth. The processor may execute the computer instructions to perform the steps of: acquiring a plurality of images selected by a user; creating a first canvas; the first canvas is used for bearing image contents of the plurality of images; drawing image content of at least one image in the first canvas; a target image is generated based on the first canvas.
It should be noted that, in the present specification, each embodiment is described in a progressive manner, and the same or similar parts in each embodiment may be referred to each other, and each embodiment focuses on differences from other embodiments. Particularly, for the embodiment of the image generation apparatus and the embodiment of the terminal device, since they are basically similar to the embodiment of the image generation method, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the embodiment of the image generation method.
In addition, it is understood that one skilled in the art, after reading this specification document, may conceive of any combination of some or all of the embodiments listed in this specification without the need for inventive faculty, which combinations are also within the scope of the disclosure and protection of this specification.
In the 1990s, an improvement in a technology could be clearly distinguished as an improvement in hardware (for example, an improvement in a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement in a method flow). However, as technology develops, many of today's improvements in method flows can be regarded as direct improvements in hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement in a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (PLD), such as a field-programmable gate array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a PLD by programming it, without asking a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, instead of manually making integrated circuit chips, such programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development, and the source code to be compiled must be written in a particular programming language called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); at present, VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used.
It will also be apparent to those skilled in the art that a hardware circuit implementing a logical method flow can be readily obtained by merely programming the method flow, in one of the hardware description languages described above, into an integrated circuit.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
From the above description of the embodiments, it is clear to those skilled in the art that the present specification can be implemented by software plus a necessary general hardware platform. Based on such understanding, the technical solutions of the present specification may be essentially or partially implemented in the form of software products, which may be stored in a storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and include instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments of the present specification.
The description is operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
This description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
While the specification has been described by way of embodiments, those skilled in the art will appreciate that there are numerous variations and permutations of the specification that do not depart from its spirit, and it is intended that the appended claims cover such variations and modifications.