CN115689888A - Image processing method, image processing device, electronic equipment and storage medium


Info

Publication number
CN115689888A
Authority
CN
China
Legal status: Pending
Application number
CN202211342493.8A
Other languages
Chinese (zh)
Inventor
付新宇 (Fu Xinyu)
韦桂锋 (Wei Guifeng)
Current Assignee
Xian Novastar Electronic Technology Co Ltd
Original Assignee
Xian Novastar Electronic Technology Co Ltd
Application filed by Xian Novastar Electronic Technology Co Ltd filed Critical Xian Novastar Electronic Technology Co Ltd

Landscapes

  • Image Processing (AREA)

Abstract

The application is applicable to the technical field of image processing, and provides an image processing method, an image processing device, an electronic device and a storage medium. The method includes the following steps: acquiring a plurality of images to be processed through a target storage component, where the plurality of images to be processed correspond to multiple video sources; performing a splicing operation on the plurality of images to be processed through the target storage component to obtain a target spliced image, where the target spliced image is located in the target storage component; and performing a zooming operation on the target spliced image through a zooming module to obtain a target display image, where the image parameters of the target display image are the same as the target display parameters of the display device, and the zooming module is connected with the target storage component and is used for zooming the target spliced image. The embodiment thereby solves the problem in the related art that the user experience is poor due to the high delay of the image in the display process.

Description

Image processing method, image processing device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
In the related art, when an LED (light emitting diode) screen is used to display an image (or a video), most front-end video sources adopt a standard resolution while the LED screen generally does not; as a result, the image (or video) may not be displayed normally on the LED screen. For example, there may be a black border between the image and the LED screen, or the image size may exceed the display size of the LED screen.
In order to solve the above technical problem, in the related art, the original image needs to be stitched and scaled into an image that matches the screen size before being displayed, and in the process of stitching and scaling the original image, a DDR (Double Data Rate Synchronous Dynamic Random Access Memory) needs to be used multiple times to store the stitched data and the scaled data. Each pass through DDR storage adds a delay of at least 1 frame, so an image can only be displayed on the LED screen after a delay of at least 2 frames, which affects the user experience.
Therefore, the image processing method in the related art suffers from poor user experience caused by the high delay of the image in the display process.
Disclosure of Invention
In view of this, embodiments of the present application provide an image processing method, an image processing apparatus, an electronic device, and a storage medium, to solve the problem that the image processing method in the related art causes poor user experience due to the high delay of the image in the display process.
A first aspect of an embodiment of the present application provides a method, including: acquiring a plurality of images to be processed through a target storage component, wherein the plurality of images to be processed correspond to a plurality of video sources; executing splicing operation on the multiple images to be processed through the target storage component to obtain a target spliced image, wherein the target spliced image is located in the target storage component; and performing zooming operation on the target spliced image through a zooming module to obtain a target display image, wherein the image parameters of the target display image are the same as the target display parameters of display equipment, the zooming module is connected with the target storage component, and the zooming module is used for zooming the target spliced image.
In an exemplary embodiment, the acquiring a plurality of images to be processed by the target storage means includes: acquiring images sent by each video source in the multiple video sources through the target storage component to obtain the multiple images to be processed; and respectively storing the plurality of images to be processed in a plurality of storage areas in the target storage component through the target storage component, wherein the plurality of storage areas correspond to the multi-channel video source.
In an exemplary embodiment, the performing, by the target storage component, a stitching operation on the plurality of images to be processed to obtain a target stitched image includes: acquiring a preset splicing format corresponding to the multiple video sources through the target storage component, wherein the preset splicing format is used for indicating splicing positions of the multiple images to be processed in the target spliced image; and executing the splicing operation on the plurality of images to be processed through the target storage component according to the preset splicing format to obtain the target spliced image.
In an exemplary embodiment, the performing, by the scaling module, a scaling operation on the target stitched image to obtain a target display image includes: calculating a first image parameter of the target stitched image through the scaling module; under the condition that the first image parameter is smaller than the target display parameter, executing an amplification operation on the target spliced image through the scaling module to obtain the target display image, wherein the scaling operation comprises the amplification operation; and under the condition that the first image parameter is larger than the target display parameter, executing a reducing operation on the target spliced image through the scaling module to obtain the target display image, wherein the scaling operation comprises the reducing operation.
In an exemplary embodiment, the performing, by the scaling module, a magnification operation on the target stitched image to obtain the target display image includes: calculating the number of image pixels included in the target spliced image through the scaling module to obtain a first pixel number; calculating the number of image pixels included in the target display image through the scaling module to obtain a second number of pixels; and inserting pixel points of a first target number into the target spliced image through the scaling module to obtain the target display image, wherein the first target number is a difference value obtained by subtracting the first pixel number from the second pixel number.
In an exemplary embodiment, the inserting, by the scaling module, a first target number of pixel points into the target stitched image to obtain the target display image includes: inserting one row of pixel points to be interpolated into the target stitched image every first preset number of rows of pixel points through the scaling module, to obtain the target display image; or inserting one column of pixel points to be interpolated into the target stitched image every second preset number of columns of pixel points through the scaling module, to obtain the target display image, where the number of the pixel points to be interpolated in the target display image is the first target number.
In an exemplary embodiment, the performing, by the scaling module, a reduction operation on the target stitched image to obtain the target display image includes: calculating the number of image pixels included in the target spliced image through the scaling module to obtain a third number of pixels; calculating the number of image pixels included in the target display image through the scaling module to obtain a fourth pixel number; and deleting pixel points of a second target number in the target spliced image through the scaling module to obtain the target display image, wherein the second target number is a difference value obtained by subtracting the fourth pixel number from the third pixel number.
A second aspect of an embodiment of the present application provides an apparatus, including: the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring a plurality of images to be processed through a target storage component, and the images to be processed correspond to a plurality of video sources; the splicing unit is used for executing splicing operation on the multiple images to be processed through the target storage component to obtain a target spliced image, wherein the target spliced image is positioned in the target storage component; and the zooming unit is used for performing zooming operation on the target spliced image through a zooming module to obtain a target display image, wherein the image parameters of the target display image are the same as the target display parameters of the display device, the zooming module is connected with the target storage component, and the zooming module is used for zooming the target spliced image.
In one exemplary embodiment, the obtaining unit includes: the first acquisition module is used for acquiring images sent by each video source in the multiple video sources through the target storage component to obtain the multiple images to be processed; and the storage module is used for respectively storing the plurality of images to be processed in a plurality of storage areas in the target storage component through the target storage component, wherein the plurality of storage areas correspond to the plurality of video sources.
In one exemplary embodiment, the splicing unit includes: a second obtaining module, configured to obtain, by using the target storage component, a preset stitching format corresponding to the multiple video sources, where the preset stitching format is used to indicate stitching positions of the multiple images to be processed in the target stitched image; and the splicing module is used for executing the splicing operation on the plurality of images to be processed through the target storage component according to the preset splicing format to obtain the target spliced image.
In one exemplary embodiment, the scaling unit includes: a calculation module, configured to calculate a first image parameter of the target stitched image through the scaling module; an amplification module, configured to perform an amplification operation on the target stitched image through the scaling module to obtain the target display image under the condition that the first image parameter is smaller than the target display parameter, where the scaling operation comprises the amplification operation; and a reduction module, configured to perform a reduction operation on the target stitched image through the scaling module to obtain the target display image under the condition that the first image parameter is larger than the target display parameter, where the scaling operation comprises the reduction operation.
In one exemplary embodiment, the amplification module includes: the first calculation submodule is used for calculating the number of image pixels included in the target spliced image through the scaling module to obtain a first pixel number; the second calculating submodule is used for calculating the number of image pixels included in the target display image through the scaling module to obtain a second number of pixels; and the inserting sub-module is used for inserting pixel points of a first target number into the target spliced image through the zooming module to obtain the target display image, wherein the first target number is a difference value obtained by subtracting the first pixel number from the second pixel number.
In one exemplary embodiment, the insertion sub-module includes: the first inserting subunit is used for correspondingly inserting a line of pixel points to be interpolated every other first preset number of lines of pixel points in the target spliced image through the zooming module to obtain the target display image; or, a second inserting subunit, configured to insert a column of the pixels to be interpolated every second preset number of columns of pixels in the target stitched image through the scaling module, to obtain the target display image, where the number of the pixels to be interpolated in the target display image is the first target number.
In one exemplary embodiment, the reduction module includes: the third calculation sub-module is used for calculating the number of image pixels included in the target spliced image through the scaling module to obtain a third number of pixels; the fourth calculating submodule is used for calculating the number of image pixels included in the target display image through the scaling module to obtain a fourth pixel number; and the deleting submodule is used for deleting pixel points of a second target number in the target spliced image through the zooming module to obtain the target display image, wherein the second target number is a difference value obtained by subtracting the fourth pixel number from the third pixel number.
A third aspect of embodiments of the present application provides an electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method according to the first aspect.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, in which a computer program is stored, which, when executed by a processor, implements the method according to the first aspect as described above.
A fifth aspect of embodiments of the present application provides a computer program product, which, when run on an electronic device, causes the electronic device to perform the method of any one of the first aspect.
Compared with the prior art, the embodiments of the present application have the following beneficial effects: a plurality of images to be processed are spliced inside a target storage component, and the spliced image is zoomed by a zooming module connected with the target storage component. Specifically, the plurality of images to be processed, which correspond to multiple video sources, are acquired through the target storage component; a splicing operation is performed on the plurality of images to be processed through the target storage component to obtain a target spliced image, where the target spliced image is located in the target storage component; and a zooming operation is performed on the target spliced image through the zooming module to obtain a target display image, where the image parameters of the target display image are the same as the target display parameters of the display device. Each time the target storage component stores an image, a certain delay is generated, and in the related art the splicing and zooming of a plurality of images to be processed require at least two passes through the storage component. In the embodiments of the present application, the splicing is performed inside the target storage component and the zooming module reads the spliced image directly from it, so only one pass through the storage component is needed; the display delay is thereby reduced from at least two frames to one frame, which improves the user experience.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the embodiments or the description of the prior art will be briefly described below.
FIG. 1 is a schematic diagram of a hardware environment for an alternative image processing method according to an embodiment of the application;
FIG. 2 is a schematic diagram of an alternative scaling and displaying of an image according to an embodiment of the application;
FIG. 3 is a schematic diagram of an alternative image stitching according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an alternative scaling and display of an image according to an embodiment of the application;
FIG. 5 is a schematic flow chart diagram of an alternative image processing method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of an alternative method for scaling and displaying an image according to an embodiment of the present application;
FIG. 7 is a schematic diagram of another alternative image stitching according to an embodiment of the present application;
FIG. 8 is a schematic diagram illustrating a method for determining a color value of an insertion pixel according to an embodiment of the present disclosure;
FIG. 9 is a block diagram of an alternative image processing apparatus according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an alternative electronic device according to an embodiment of the application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
In addition, in the description of the present application, the terms "first," "second," "third," and the like are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
According to an aspect of an embodiment of the present application, an image processing method is provided. Optionally, in this embodiment, the image processing method described above may be applied to a hardware environment constituted by the terminal device 102 and the server 104 as shown in fig. 1. As shown in fig. 1, the terminal device 102 is connected to the server 104 through a network. The server 104 may be configured to provide services (e.g., application services) for the terminal device or a client installed on the terminal device, and a database may be configured on the server or separately from the server to provide data storage services for the server 104.
The network may include, but is not limited to, at least one of: a wired network and a wireless network. The wired network may include, but is not limited to, at least one of: a wide area network, a metropolitan area network, and a local area network. The wireless network may include, but is not limited to, at least one of: WIFI (Wireless Fidelity) and Bluetooth. The terminal device 102 may be, but is not limited to, a smart phone, a smart computer, a smart tablet, and the like.
As shown in fig. 2, in a conventional scheme, multiple video sources are first spliced into one large image, which is then reduced or enlarged to match the resolution of the LED screen. In the whole image processing process, the splicing and the zooming each require one pass through DDR storage, and each pass through DDR storage adds a delay of at least 1 frame, so the image can only be displayed on the LED screen after a delay of at least 2 frames. As shown in fig. 3, the images to be spliced need to be stored in the DDR first and are then spliced outside the DDR.
As shown in fig. 4, in another conventional scheme, the images output by multiple video sources are first scaled to a suitable size and are then spliced to match the resolution of the LED screen. In this scheme, the splicing and the zooming likewise each require one pass through DDR storage, so the whole image processing also needs a delay of at least 2 frames before the image is displayed on the LED screen.
In order to solve the problem that two passes through DDR storage are required in the process of splicing and zooming images, which causes two frames of delay, this embodiment does not use the scheme of reading the images from the DDR and then splicing them. Instead, the multiple images are spliced into one complete image directly inside the DDR, and this image is used as the cached image data of the subsequent scaling module. The scaling module, which integrates the reduction and amplification functions, calls the cached image data in real time for processing, so that the splicing and the scaling share one storage space (i.e., one common DDR), and finally the splicing and the scaling together incur only one frame of delay.
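As an illustration of this shared-buffer idea, the following Python sketch (an assumption for illustration only, not the patent's DDR implementation; the class and function names are hypothetical) shows the source images being written directly into one shared frame buffer, from which the scaling step then reads, so the frame passes through storage only once:

    import numpy as np

    class SharedFrameBuffer:
        """Stands in for the single DDR: sources write their tiles straight into it,
        and the scaler reads the stitched result back from the same buffer."""
        def __init__(self, height, width):
            self.data = np.zeros((height, width, 3), dtype=np.uint8)

        def write_tile(self, frame, top, left):
            h, w = frame.shape[:2]
            self.data[top:top + h, left:left + w] = frame   # stitching happens in place

        def read(self):
            return self.data

    def process_frame(tiles, buffer, scale_fn, screen_size):
        # tiles: list of (image, (top, left)) pairs, one per video source
        for frame, (top, left) in tiles:
            buffer.write_tile(frame, top, left)
        return scale_fn(buffer.read(), screen_size)         # scaler pulls the cached stitched image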
The image processing method according to the embodiment of the present application may be executed by the server 104, the terminal device 102, or both the server 104 and the terminal device 102. Taking the terminal device 102 as an example to execute the image processing method in the present embodiment, fig. 5 is a schematic flowchart of an alternative image processing method according to the embodiment of the present application, and as shown in fig. 5, the flowchart of the method may include the following steps:
step S502, a plurality of images to be processed are obtained through the target storage component, wherein the plurality of images to be processed correspond to the multipath video source.
The image processing method in this embodiment may be applied to a scene in which multiple images to be processed are subjected to image stitching and zooming, where the multiple images to be processed may be images sent by each video source in multiple video sources, video image frames sent by each video source in multiple video sources, or other types of images, which is not limited in this embodiment.
In this embodiment, a plurality of to-be-processed images, which correspond to a plurality of video sources, may be acquired by the target storage section. Alternatively, the target storage unit may be a target storage unit in the image display device, or may be a target storage unit in an image processing device connected to the image display device. For example, a plurality of images to be processed may be obtained through a DDR storage component in an LED (Light Emitting Diode) display device (that is, the plurality of images to be processed are directly spliced and scaled on the LED display device, and the scaled images are displayed), or the plurality of images to be processed may be obtained through the DDR storage component in an image processing device connected to the LED display device (that is, the plurality of images to be processed are spliced and scaled in the image processing device first, and then the scaled images are displayed on the LED display device).
Optionally, the correspondence between the multiple to-be-processed images and the multiple video sources may be that the multiple to-be-processed images correspond to the multiple video sources, or that the multiple to-be-processed images correspond to a certain video source in the multiple video sources, or that the multiple to-be-processed images correspond to some video sources in the multiple video sources, which is not limited in this embodiment.
Optionally, the target storage component may be a DDR storage component, a RAM (random access memory), or another type of storage component, and the type of the target storage component is not limited in this embodiment.
Alternatively, the above-mentioned process of acquiring a plurality of images to be processed through the target storage component may be: acquiring the images to be processed sent by the multiple video sources through the target storage component to obtain the plurality of images to be processed. The multiple video sources may send the images to be processed simultaneously or sequentially. For example, when there are 4 video sources, namely source 1, source 2, source 3, and source 4, the four video sources may simultaneously send images to the DDR storage component, resulting in image to be processed 1 (i.e., the image to be processed sent by source 1), image to be processed 2 (sent by source 2), image to be processed 3 (sent by source 3), and image to be processed 4 (sent by source 4).
It should be noted that a video source can transmit not only video but also images, because a video can be regarded as a segment formed by consecutive image frames; therefore, transmitting video is essentially transmitting image frames one frame at a time (i.e., transmitting images).
Step S504, executing a splicing operation on the plurality of images to be processed through the target storage component to obtain a target spliced image, wherein the target spliced image is located in the target storage component.
In the related art, in order to improve image transmission efficiency, the same image may be divided into a plurality of blocks, transmitted, and then the blocks are spliced back into the original image; alternatively, in order to display more information on the display unit, a plurality of images are usually combined into one image and the combined image is displayed, so that the information of the plurality of images can be shown on the display unit.
Therefore, after the plurality of images to be processed are acquired by the target storage unit, the target stitched image can be obtained by performing a stitching operation on the plurality of images to be processed by the target storage unit, and the target stitched image is stored in the target storage unit.
Optionally, the process of obtaining the target stitched image by performing the stitching operation on the multiple images to be processed by the target storage component may be: and performing splicing operation on the multiple images to be processed by using a splicing algorithm to obtain a target spliced image, wherein the splicing algorithm can be a splicing algorithm based on region correlation or a splicing algorithm based on feature correlation.
For example, the region-based stitching algorithm uses the gray values of the images to be stitched: it calculates, by a least squares method or another mathematical method, the difference between the gray values of a region in the image to be registered and a region of the same size in the reference image, and judges the degree of similarity of the overlapping regions of the images to be stitched by comparing these differences, thereby obtaining the range and position of the overlapping regions and realizing image stitching. The feature-based registration method does not directly use the pixel values of the images; instead, it derives features from the pixels and then searches for and matches the corresponding feature regions of the overlapping parts of the images by taking the image features as a standard, so as to realize image stitching. The type of the stitching algorithm is not limited in this embodiment.
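As a toy illustration of the region-correlation idea (assuming 2-D grayscale numpy arrays and a purely horizontal overlap; the sum of squared differences below stands in for the least-squares comparison, and the function names are hypothetical):

    import numpy as np

    def find_overlap(left, right, max_overlap=64):
        """Return the overlap width whose mean squared grey-value difference is smallest."""
        best_w, best_err = 1, float("inf")
        for w in range(1, max_overlap + 1):
            diff = left[:, -w:].astype(np.int32) - right[:, :w].astype(np.int32)
            err = np.mean(diff ** 2)
            if err < best_err:
                best_w, best_err = w, err
        return best_w

    def stitch_pair(left, right):
        w = find_overlap(left, right)
        return np.hstack([left, right[:, w:]])   # keep the right image minus the duplicated strip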
Step S506, performing a zooming operation on the target spliced image through a zooming module to obtain a target display image, wherein the image parameters of the target display image are the same as the target display parameters of the display device, and the zooming module is connected with the target storage component and is used for zooming the target spliced image.
Because an LED screen (an example of the display device) can be spliced, in actual use multiple LED screens are often built into one screen with a particularly high resolution. However, the commonly used video interfaces currently support at most 4K resolution, so multiple video interfaces often need to be spliced into one large image for display on the LED screen. At the same time, because LED screens can be spliced and built arbitrarily, in many cases the LED screen does not have a standard common resolution, while most front-end video sources do; therefore, the video sources often need to be zoomed to fit the size of the LED screen.
Therefore, to better enable the stitched image to be displayed on the LED screen, the stitched image may be scaled so that the stitched image may fill the LED screen.
Optionally, a zooming operation may be performed on the target spliced image through the zooming module to obtain the target display image, where the image parameters of the target display image are the same as the target display parameters of the display device. The image parameter may be an image size, and the target display parameter may be the display size of the display device. For example, when the size of the spliced image is 960px × 540px (960 pixels in the horizontal direction and 540 pixels in the vertical direction) and the display size of the LED screen is 1920px × 1080px, the spliced image needs to be enlarged to 1920px × 1080px before it can be displayed on the LED screen completely.
Optionally, MOSAIC splicing and zooming are often used together in actual scenes to fully cover an ultra-high-resolution LED screen. Both the splicing and the zooming delay the video sources separately, so the final on-screen display of the image is delayed considerably (a delay of 2 frames is usually needed), which is unfriendly to scenes such as live broadcast and movie shooting.
Through the above steps S502 to S506, as shown in fig. 6, instead of using a scheme of reading the images from the DDR and then splicing them, the multiple images are spliced into one complete image directly inside the DDR; this image is used as the cached image data of the subsequent scaling module, and the scaling module, which integrates the reduction and amplification functions, calls the cached image data in real time for processing. In this way, the splicing and the scaling share one storage space (i.e., one common DDR), and finally the splicing and the scaling together incur only one frame of delay.
In one exemplary embodiment, acquiring a plurality of images to be processed by a target storage component includes: acquiring an image sent by each video source in a plurality of video sources through a target storage component to obtain a plurality of images to be processed; and respectively storing a plurality of images to be processed in a plurality of storage areas in the target storage component through the target storage component, wherein the plurality of storage areas correspond to the multi-channel video source.
In the process of obtaining the target stitched image, if the plurality of images to be processed are all stored in the same place, the error rate of the stitching operation performed on them increases. For example, suppose the images to be processed acquired by the target storage component are image 1, image 1', image 2, image 3, and image 4, where image 1 and image 1' are both images sent by video source 1, image 2 is sent by video source 2, image 3 is sent by video source 3, and image 4 is sent by video source 4. In the process of stitching these images, image 1', image 2, and image 3 might be stitched together by mistake, instead of image 1, image 2, image 3, and image 4.
Optionally, the target storage component may first obtain an image sent by each video source of the multiple video sources to obtain multiple images to be processed, and then store the multiple images to be processed in multiple storage areas of the target storage component respectively by the target storage component, where the multiple storage areas correspond to the multiple video sources.
Optionally, the correspondence between the plurality of storage areas and the multiple video sources may be that the storage areas correspond to the video sources one to one, or that several storage areas correspond to one video source.
Optionally, the multiple images to be processed are stored according to their video sources, so that in the process of splicing them, images from the same video source are never spliced together (as long as each splicing takes only one image from each of the storage areas).
Optionally, when the plurality of images to be processed include several images from the same video source, these images may be stored in the corresponding storage area in the order in which the target storage component acquired them. For example, when image 1 and image 1' are both images sent by video source 1 and image 1 was acquired earlier than image 1', image 1 may be saved in the storage area corresponding to video source 1 first, and image 1' may be saved afterwards.
Optionally, the plurality of images to be processed may be stored in the plurality of storage areas as queues (i.e., within the same storage area, the image stored first is spliced first). For example, when storage area 1 holds image 1, image 1', image 1'', storage area 2 holds image 2, image 2', image 2'', storage area 3 holds image 3, image 3', image 3'', and storage area 4 holds image 4, image 4', image 4'', the splicing operation may first be performed on images 1, 2, 3, and 4 to obtain a spliced image, then on images 1', 2', 3', and 4', and then on images 1'', 2'', 3'', and 4''.
Through this embodiment, the image sent by each of the multiple video sources is acquired first to obtain the plurality of images to be processed, and the images are stored in the storage areas corresponding to their respective video sources; this reduces the probability of errors during the splicing of the plurality of images to be processed and improves the accuracy of the generated target spliced image.
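A minimal sketch of the per-source storage areas and queue behaviour described above, assuming one FIFO queue per video source (the class and method names are hypothetical):

    from collections import deque

    class SourceBuffers:
        """One FIFO storage area per video source, so two frames from the same
        source are never spliced into the same output frame."""
        def __init__(self, source_ids):
            self.areas = {sid: deque() for sid in source_ids}

        def store(self, source_id, frame):
            self.areas[source_id].append(frame)        # e.g. image 1, then image 1'

        def next_frame_set(self):
            # take exactly one (the oldest) frame from every storage area
            if all(self.areas.values()):
                return {sid: q.popleft() for sid, q in self.areas.items()}
            return None                                # wait until every source has queued a frame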
In an exemplary embodiment, performing a stitching operation on a plurality of images to be processed by a target storage component to obtain a target stitched image includes: acquiring a preset splicing format corresponding to a plurality of video sources through a target storage component, wherein the preset splicing format is used for indicating splicing positions of a plurality of images to be processed in a target spliced image; and executing splicing operation on the plurality of images to be processed through the target storage component according to a preset splicing format to obtain a target spliced image.
In the process of splicing the multiple images to be processed, they can be spliced in several ways. For example, when the images to be processed are image 1, image 2, image 3, and image 4, the images may be spliced in a single row (a "一" shape), in a single column (an "I" shape), or in a 2 × 2 grid (a "田" shape), which is not limited in this embodiment.
Therefore, in the process of splicing the multiple images to be processed, the images to be processed can be spliced according to the preset splicing format corresponding to the multiple video sources, and optionally, the preset splicing format corresponding to the multiple video sources can be obtained through the target storage component, wherein the preset splicing format is used for indicating the splicing positions of the multiple images to be processed in the target spliced image; and then, executing splicing operation on the plurality of images to be processed through a target storage component according to a preset splicing format to obtain a target spliced image.
For example, when the obtained preset stitching format is the structure shown in fig. 7, an image corresponding to the video source 1 in the multiple images to be processed may be stitched at the upper left portion of the target stitched image, an image corresponding to the video source 2 may be stitched at the upper right portion of the target stitched image, an image corresponding to the video source 3 may be stitched at the lower left portion of the target stitched image, and an image corresponding to the video source 4 may be stitched at the lower right portion of the target stitched image.
Optionally, the process of obtaining the target stitched image by executing the stitching operation on the multiple images to be processed by the target storage component according to the preset stitching format may be: and according to a preset splicing format, splicing the multiple images to be processed according to the positions of the corresponding video sources in the target spliced image to obtain the target spliced image.
Through this embodiment, the plurality of images to be processed are spliced according to the preset splicing format corresponding to the multiple video sources, which improves the accuracy of the generated target spliced image.
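A minimal sketch of the format-driven placement described above, assuming the 2 × 2 layout of fig. 7, equally sized tiles, and RGB numpy arrays (the layout table and names are hypothetical):

    import numpy as np

    # hypothetical preset stitching format for fig. 7: source -> (grid row, grid column)
    PRESET_FORMAT = {"source_1": (0, 0), "source_2": (0, 1),
                     "source_3": (1, 0), "source_4": (1, 1)}

    def stitch(images, fmt, tile_h, tile_w):
        """Place each source's image at the position the preset format assigns to it."""
        canvas = np.zeros((2 * tile_h, 2 * tile_w, 3), dtype=np.uint8)
        for src, (row, col) in fmt.items():
            canvas[row * tile_h:(row + 1) * tile_h,
                   col * tile_w:(col + 1) * tile_w] = images[src]
        return canvas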
In an exemplary embodiment, the obtaining of the target display image by performing a zooming operation on the target stitched image through the scaling module includes: calculating a first image parameter of the target stitched image through the scaling module; under the condition that the first image parameter is smaller than the target display parameter, executing an amplification operation on the target stitched image through the scaling module to obtain the target display image, wherein the scaling operation comprises the amplification operation; and under the condition that the first image parameter is larger than the target display parameter, executing a reduction operation on the target stitched image through the scaling module to obtain the target display image, wherein the scaling operation comprises the reduction operation.
In this embodiment, the first image parameter of the target stitched image may be calculated by the scaling module. Optionally, the process of calculating the first image parameter of the target stitched image by the scaling module may be: and respectively calculating the number of pixel points contained in the long edge of the target spliced image and the number of pixel points contained in the short edge of the target spliced image to obtain a first image parameter of the target spliced image, wherein the first image parameter comprises the number of the long edge pixel points and the number of the short edge pixel points.
After the first image parameter is calculated, the operation to be executed on the target stitched image can be determined according to the first image parameter and the target display parameter. Optionally, this determination may be made as follows: under the condition that the first image parameter is smaller than the target display parameter, an amplification operation is executed on the target stitched image through the scaling module to obtain the target display image, where the scaling operation comprises the amplification operation; and under the condition that the first image parameter is larger than the target display parameter, a reduction operation is executed on the target stitched image through the scaling module to obtain the target display image, where the scaling operation comprises the reduction operation.
Optionally, the first image parameter being greater than the target display parameter means: the number of long-edge pixels of the target stitched image is greater than the number of long-edge pixels included in the target display parameter, and the number of short-edge pixels of the target stitched image is greater than the number of short-edge pixels included in the target display parameter. The first image parameter being smaller than the target display parameter means: the number of long-edge pixels of the target stitched image is smaller than the number of long-edge pixels included in the target display parameter, and the number of short-edge pixels of the target stitched image is smaller than the number of short-edge pixels included in the target display parameter. For example, when the image parameter of the target stitched image is 960px × 540px and the display size of the LED screen (i.e., the target display parameter) is 1920px × 1080px, the first image parameter may be considered smaller than the target display parameter; when the image parameter of the target stitched image is 1920px × 1080px and the display size of the LED screen (i.e., the target display parameter) is 960px × 540px, the first image parameter may be considered larger than the target display parameter.
Optionally, when the first image parameter is not completely greater than or less than the target display parameter (for example, when the number of long-side pixels in the first image parameter is greater than the number of long-side pixels in the target display parameter, and the number of short-side pixels in the first image parameter is less than the number of short-side pixels in the target display parameter), the zooming module may first perform an enlarging operation on the target stitched image, and then perform a reducing operation on the target stitched image, or the zooming module may first perform a reducing operation on the target stitched image, and then perform an enlarging operation on the target stitched image, which is not limited in this embodiment.
Illustratively, when the image parameter of the target stitched image is 2880px × 540px and the display size of the LED screen (i.e., the target display parameter) is 1920px × 1080px, the reduction operation may first be performed on the target stitched image through the scaling module so that its image parameter becomes 1920px × 540px, and then the amplification operation may be performed on it through the scaling module so that its image parameter becomes 1920px × 1080px.
It should be noted that, when the first image parameter is consistent with the target display parameter, the target stitched image may be directly used as the target display image without performing a reduction operation or an enlargement operation on the target stitched image, so as to reduce consumption of computing resources.
Through this embodiment, the operation to be executed on the target stitched image is determined according to the image parameters of the target stitched image and the display parameters of the display device, so the target display image can be generated more efficiently and the accuracy of the generated target display image is also improved.
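A small sketch of the comparison logic, assuming the image parameters are given as (width, height) pixel counts (the names are illustrative, not the patent's):

    def choose_scaling_ops(stitched_size, display_size):
        """Decide which operations the scaling module applies, per dimension."""
        ops = []
        for axis, src, dst in (("width", stitched_size[0], display_size[0]),
                               ("height", stitched_size[1], display_size[1])):
            if src < dst:
                ops.append(("enlarge", axis))
            elif src > dst:
                ops.append(("reduce", axis))
        return ops   # an empty list means the sizes already match

    # 2880 x 540 stitched image on a 1920 x 1080 screen: reduce the width, enlarge the height
    print(choose_scaling_ops((2880, 540), (1920, 1080)))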
In an exemplary embodiment, the obtaining of the target display image by performing a zoom-in operation on the target stitched image by the zoom module includes: calculating the number of image pixels included in the target spliced image through a scaling module to obtain a first pixel number; calculating the number of image pixels included in the target display image through a scaling module to obtain a second number of pixels; and inserting pixel points of a first target number into the target spliced image through a scaling module to obtain a target display image, wherein the first target number is a difference value obtained by subtracting the first pixel number from the second pixel number.
Optionally, the process of calculating the number of image pixels included in the target stitched image through the scaling module to obtain the first number of pixels may be: the method comprises the steps of calculating the number of pixels of a long edge of a target spliced image through a scaling module to obtain the number of pixels of the long edge, calculating the number of pixels of a short edge of the target spliced image through the scaling module to obtain the number of pixels of the short edge, and determining the product of the number of pixels of the long edge and the number of pixels of the short edge as a first pixel number.
The process of calculating the number of image pixels included in the target display image through the scaling module to obtain the second pixel number is similar to the process of calculating the number of image pixels included in the target stitched image through the scaling module to obtain the first pixel number, and is therefore not described again in this embodiment.
After the first pixel number and the second pixel number are determined, a first target number of pixel points can be inserted into the target stitched image through the scaling module to obtain the target display image, where the first target number is the difference obtained by subtracting the first pixel number from the second pixel number. For example, when the image parameter of the target stitched image is 960px × 540px, the first pixel number is 518,400 (i.e., 960 × 540); when the display size of the LED screen is 1920px × 1080px, the second pixel number is 2,073,600 (i.e., 1920 × 1080); therefore 1,555,200 pixel points (i.e., 2,073,600 - 518,400) may be added to the target stitched image to obtain the target display image.
Optionally, the process of inserting the first target number of pixel points into the target mosaic image through the scaling module to obtain the target display image may be: and inserting pixel points of a first target number around each pixel point of the target spliced image to obtain a target display image. The periphery of each pixel point may be above each pixel point, or in other directions of each pixel point, which is not limited in this embodiment.
It should be noted that the color value of a pixel point inserted into the target stitched image may be determined according to the color values of its adjacent pixel points; for example, as shown in fig. 8, the average of the color values of the pixel points around the inserted pixel point may be taken as the color value of the inserted pixel point. When the color value of pixel point A (the pixel point above the inserted pixel point) is 7, the color value of pixel point B (below it) is 10, the color value of pixel point C (to its left) is 8, and the color value of pixel point D (to its right) is 7, the color value of the inserted pixel point E can be determined to be 8 (i.e., (7 + 10 + 8 + 7)/4).
According to this embodiment, the difference in the number of pixel points between the target stitched image and the target display image is determined as the first target number, and the first target number of pixel points are inserted into the target stitched image to obtain the target display image; this improves the accuracy of the generated target display image and achieves the technical effect of improving the user experience.
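As a concrete illustration of the neighbour-averaging rule of fig. 8 (a simplified sketch; the function name is hypothetical, and real interpolators may weight the neighbours differently):

    def interpolated_value(up, down, left, right):
        # colour of an inserted pixel = average of its four neighbours
        return (up + down + left + right) // 4

    # the example from the description: neighbours 7, 10, 8 and 7 give (7 + 10 + 8 + 7) // 4 = 8
    print(interpolated_value(7, 10, 8, 7))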
In an exemplary embodiment, inserting, by a scaling module, a first target number of pixel points in a target mosaic image to obtain a target display image includes: inserting a line of pixel points to be interpolated into the target spliced image at intervals of a first preset number of lines of pixel points through a zooming module to obtain a target display image; or, a row of pixels to be interpolated is correspondingly inserted every second preset number of rows of pixels in the target spliced image through the scaling module, so that a target display image is obtained, wherein the number of the pixels to be interpolated in the target display image is the first target number.
In this embodiment, the process of inserting a line of pixels to be interpolated into the target mosaic image by the scaling module every first preset number of lines of pixels correspondingly may be: the method comprises the steps of firstly determining the number of rows of pixels to be interpolated which need to be inserted in a target splicing image and the number of pixels to be interpolated in each row according to a first target number, then determining a first preset number according to the number of rows of pixels to be interpolated and the number of rows of pixels included in the target splicing image, and finally inserting a row of pixels to be interpolated in the target splicing image at intervals of the pixels of the first preset number rows.
Illustratively, when the image parameter of the target stitched image is 1920px × 540px and the display size of the LED screen is 1920px × 1080px, it can be determined that 540 rows of pixel points need to be inserted into the target stitched image and that each row contains 1920 pixel points, so one row of pixel points may be inserted every (540/(1080 - 540) = 1) row of the target stitched image. When the image parameter of the target stitched image is 1920px × 810px and the display size of the LED screen is 1920px × 1080px, it can be determined that 270 rows of pixel points need to be inserted into the target stitched image and that each row contains 1920 pixel points, so one row of pixel points may be inserted every (810/(1080 - 810) = 3) rows of the target stitched image.
Optionally, the process of obtaining the target display image by inserting a row of pixels to be interpolated every second preset number of rows of pixels in the target stitched image through the scaling module is similar to the process of obtaining the target display image by inserting a row of pixels to be interpolated every first preset number of rows of pixels in the target stitched image through the scaling module, and this is not repeated in this embodiment.
It should be noted that the two processes above (inserting one row of pixel points to be interpolated every first preset number of rows of pixel points in the target stitched image through the scaling module, and inserting one column of pixel points to be interpolated every second preset number of columns of pixel points) may be applied together to obtain the target display image. For example, when the image parameter of the target stitched image is 1440px × 540px and the display size of the LED screen is 1920px × 1080px, one row of pixel points may be inserted every (540/(1080 - 540) = 1) row of the target stitched image, with each inserted row containing 1440 pixel points, so that the image becomes 1440px × 1080px; then one column of pixel points may be inserted every (1440/(1920 - 1440) = 3) columns of this image, with each inserted column containing 1080 pixel points, so that the image becomes 1920px × 1080px, i.e., the target display image.
According to this embodiment, pixel points are inserted into the target stitched image by rows or columns to convert it into the target display image, which simplifies the process of generating the target display image from the target stitched image and saves the resources consumed in generating it.
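A sketch of the row-insertion rule above, assuming the source height divides evenly as in the examples; each inserted row is taken as the average of its two neighbouring rows, and column insertion works the same way on the transposed image (the function name is hypothetical):

    import numpy as np

    def insert_rows(img, dst_rows):
        """Insert one interpolated row after every `interval` source rows."""
        src_rows = img.shape[0]
        interval = src_rows // (dst_rows - src_rows)   # e.g. 540 // (1080 - 540) = 1, 810 // 270 = 3
        out = []
        for r in range(src_rows):
            out.append(img[r])
            if (r + 1) % interval == 0 and len(out) < dst_rows:
                below = img[min(r + 1, src_rows - 1)]
                mean = (img[r].astype(np.uint16) + below.astype(np.uint16)) // 2
                out.append(mean.astype(img.dtype))
        return np.stack(out)

    # 1920 x 540 -> 1920 x 1080: one new row after every source row
    print(insert_rows(np.zeros((540, 1920), dtype=np.uint8), 1080).shape)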
In an exemplary embodiment, the performing, by the scaling module, a reduction operation on the target stitched image to obtain the target display image includes: calculating the number of image pixels included in the target spliced image through a scaling module to obtain a third number of pixels; calculating the number of image pixels included in the target display image through a scaling module to obtain a fourth pixel number; and deleting pixel points of a second target number in the target spliced image through a scaling module to obtain a target display image, wherein the second target number is a difference value obtained by subtracting a fourth pixel number from a third pixel number.
When the first image parameter of the target stitched image is greater than the target display parameter, in order to enable the target stitched image to be completely displayed on the display device, the first image parameter of the target display image needs to be reduced to the target display parameter. For example, when the image parameter of the target stitched image is 3840pxX2160px and the target display parameter of the display device is 1920pxX1080px, it is necessary to reduce the number of long-edge pixels as well as the number of short-edge pixels of the target stitched image by half (i.e., from 3840pxX2160px to 1920pxX1080 px).
Optionally, the number of image pixels included in the target stitched image may be calculated through the scaling module to obtain the third pixel number, and the number of image pixels included in the target display image may be calculated through the scaling module to obtain the fourth pixel number; the scaling module then deletes a second target number of pixel points from the target stitched image to obtain the target display image, where the second target number is the difference obtained by subtracting the fourth pixel number from the third pixel number.
The process of calculating, by the scaling module, the number of image pixels included in the target stitched image to obtain the third number of pixels and the process of calculating the number of image pixels included in the target display image to obtain the fourth number of pixels are similar to the process of calculating the number of image pixels included in the target stitched image to obtain the first number of pixels, and are not described again in this embodiment.
Optionally, the process of deleting, by the scaling module, the second target number of pixel points from the target stitched image to obtain the target display image may be: deleting, by the scaling module, a row of pixel points every third preset number of rows of pixel points in the target stitched image to obtain the target display image; or deleting, by the scaling module, a column of pixel points every fourth preset number of columns of pixel points in the target stitched image to obtain the target display image. This is not limited in this embodiment.
Illustratively, when the image parameter of the target stitched image is 3840px×2160px and the display size of the LED screen is 1920px×1080px, one row of pixel points may be deleted for every row kept in the target stitched image, so that the image after row deletion is 3840px×1080px; one column of pixel points may then be deleted for every column kept in that image, so that the image after column deletion is 1920px×1080px, that is, the target display image.
According to this embodiment, the difference in the number of pixel points between the target stitched image and the target display image is determined as the second target number, and the second target number of pixel points is deleted from the target stitched image to obtain the target display image, which improves the accuracy of the generated target display image and thereby improves the use experience of the user.
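For the reduction direction, a matching sketch under the same assumptions (NumPy as a stand-in for the hardware, intervals that divide evenly, names invented for illustration) simply drops every N-th row and column:

```python
import numpy as np

def downscale_by_deletion(img: np.ndarray, dst_h: int, dst_w: int) -> np.ndarray:
    """Shrink img (H x W [x C]) to dst_h x dst_w by deleting one row every
    `row_step` rows and one column every `col_step` columns."""
    src_h, src_w = img.shape[:2]

    if dst_h < src_h:
        row_step = src_h // (src_h - dst_h)           # 2160 // (2160 - 1080) = 2
        keep_rows = [i for i in range(src_h) if (i + 1) % row_step != 0]
        img = img[keep_rows, :]

    if dst_w < src_w:
        col_step = src_w // (src_w - dst_w)           # 3840 // (3840 - 1920) = 2
        keep_cols = [j for j in range(src_w) if (j + 1) % col_step != 0]
        img = img[:, keep_cols]

    return img

stitched = np.zeros((2160, 3840, 3), dtype=np.uint8)   # target stitched image
display = downscale_by_deletion(stitched, 1080, 1920)  # target display image
assert display.shape[:2] == (1080, 1920)
```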
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
According to another aspect of the embodiments of the present application, an image processing apparatus for implementing the above image processing method is also provided. Fig. 9 is a block diagram of an image processing apparatus according to an embodiment of the present application; it corresponds to the image processing method described in the above embodiments, and, for convenience of description, only the portions related to the embodiments of the present application are shown. As shown in Fig. 9, the apparatus may include:
an obtaining unit 902, configured to obtain, by a target storage component, a plurality of to-be-processed images, where the plurality of to-be-processed images correspond to multiple video sources;
a stitching unit 904, connected to the obtaining unit 902, configured to perform a stitching operation on the multiple images to be processed through the target storage component to obtain a target stitched image, where the target stitched image is located in the target storage component;
and a scaling unit 906, connected to the stitching unit 904, configured to perform a scaling operation on the target stitched image through a scaling module to obtain a target display image, where an image parameter of the target display image is the same as a target display parameter of the display device, the scaling module is connected to the target storage component, and the scaling module is configured to scale the target stitched image.
It should be noted that the obtaining unit 902 in this embodiment may be configured to perform step S502, the stitching unit 904 may be configured to perform step S504, and the scaling unit 906 may be configured to perform step S506.
Through the above modules, a plurality of images to be processed are acquired through the target storage component, where the plurality of images to be processed correspond to multiple video sources; a stitching operation is performed on the plurality of images to be processed through the target storage component to obtain a target stitched image located in the target storage component; and a scaling operation is performed on the target stitched image through the scaling module to obtain a target display image whose image parameter is the same as the target display parameter of the display device, the scaling module being connected to the target storage component and configured to scale the target stitched image. This solves the problem in the related art that high delay of the image during display leads to a poor use experience, and thereby improves the use experience of the user.
In one exemplary embodiment, the obtaining unit includes:
the first acquisition module is used for acquiring images sent by each video source in the multiple video sources through the target storage component to obtain a plurality of images to be processed;
and the storage module is used for storing, through the target storage component, the plurality of images to be processed respectively in a plurality of storage areas in the target storage component, wherein the plurality of storage areas correspond to the multiple video sources.
In one exemplary embodiment, the stitching unit includes:
the second acquisition module is used for acquiring, through the target storage component, a preset stitching format corresponding to the multiple video sources, wherein the preset stitching format is used for indicating the stitching positions of the plurality of images to be processed in the target stitched image;
and the stitching module is used for performing the stitching operation on the plurality of images to be processed through the target storage component according to the preset stitching format to obtain the target stitched image; a minimal sketch of this placement is given below.
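One possible software reading of the preset stitching format is a per-source (top, left) offset inside the target stitched image; the sketch below (NumPy again, with a hypothetical 2×2 layout of four 960px×540px sources invented purely for illustration) shows how the stitching module could place each image to be processed at its indicated position:

```python
import numpy as np

# Hypothetical preset stitching format: source index -> (top, left) offset
PRESET_FORMAT = {0: (0, 0), 1: (0, 960), 2: (540, 0), 3: (540, 960)}

def stitch(images: list[np.ndarray], canvas_h: int, canvas_w: int) -> np.ndarray:
    """Copy every image to be processed into its stitching position."""
    canvas = np.zeros((canvas_h, canvas_w, 3), dtype=np.uint8)
    for idx, img in enumerate(images):
        top, left = PRESET_FORMAT[idx]
        canvas[top:top + img.shape[0], left:left + img.shape[1]] = img
    return canvas

# Four 960x540 frames, one per video source, stitched into a 1920x1080 image
frames = [np.zeros((540, 960, 3), dtype=np.uint8) for _ in range(4)]
target_stitched = stitch(frames, 1080, 1920)
assert target_stitched.shape[:2] == (1080, 1920)
```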
In one exemplary embodiment, the scaling unit includes:
the calculation module is used for calculating a first image parameter of the target stitched image through the scaling module;
the amplification module is used for performing, through the scaling module, an amplification operation on the target stitched image to obtain the target display image under the condition that the first image parameter is smaller than the target display parameter, wherein the scaling operation comprises the amplification operation;
and the reduction module is used for performing, through the scaling module, a reduction operation on the target stitched image to obtain the target display image under the condition that the first image parameter is larger than the target display parameter, wherein the scaling operation comprises the reduction operation; a sketch of the selection between the two operations is given after this list.
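Taken together, the decision made by the scaling unit reduces to comparing the first image parameter with the target display parameter. The sketch below reuses the illustrative upscale_by_insertion and downscale_by_deletion helpers defined in the earlier sketches; the handling of the mixed case, where one dimension is larger and the other smaller, is an assumption of this sketch, since the text does not spell it out.

```python
def scale_to_display(stitched, display_h: int, display_w: int):
    """Pick amplification or reduction by comparing the first image parameter
    of the stitched image with the target display parameter."""
    h, w = stitched.shape[:2]
    if (h, w) == (display_h, display_w):
        return stitched                                    # already matches the display
    if h <= display_h and w <= display_w:
        return upscale_by_insertion(stitched, display_h, display_w)
    if h >= display_h and w >= display_w:
        return downscale_by_deletion(stitched, display_h, display_w)
    # Mixed case: enlarge the smaller axis first, then reduce the larger one
    # (one possible reading, not stated in the original text).
    tmp = upscale_by_insertion(stitched, max(h, display_h), max(w, display_w))
    return downscale_by_deletion(tmp, display_h, display_w)
```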
In one exemplary embodiment, the amplification module includes:
the first calculation submodule is used for calculating the number of image pixels included in the target stitched image through the scaling module to obtain a first pixel number;
the second calculation submodule is used for calculating the number of image pixels included in the target display image through the scaling module to obtain a second pixel number;
and the inserting submodule is used for inserting a first target number of pixel points into the target stitched image through the scaling module to obtain the target display image, wherein the first target number is a difference value obtained by subtracting the first pixel number from the second pixel number.
In one exemplary embodiment, the inserting submodule includes:
the first inserting subunit is used for inserting, through the scaling module, a row of pixels to be interpolated every first preset number of rows of pixels in the target stitched image to obtain the target display image; or
the second inserting subunit is used for inserting, through the scaling module, a column of pixels to be interpolated every second preset number of columns of pixels in the target stitched image to obtain the target display image, wherein the number of the pixels to be interpolated in the target display image is the first target number.
In one exemplary embodiment, the reduction module includes:
the third calculation submodule is used for calculating the number of image pixels included in the target stitched image through the scaling module to obtain a third number of pixels;
the fourth calculation submodule is used for calculating the number of image pixels included in the target display image through the scaling module to obtain a fourth number of pixels;
and the deleting submodule is used for deleting, through the scaling module, a second target number of pixel points from the target stitched image to obtain the target display image, wherein the second target number is a difference value obtained by subtracting the fourth number of pixels from the third number of pixels.
It should be noted that the information interaction and execution processes between the above devices/units are based on the same concept as the method embodiments of the present application; for their specific functions and technical effects, reference may be made to the method embodiments, and details are not described here again.
Fig. 10 is a schematic structural diagram of an alternative electronic device according to an embodiment of the application. The electronic device may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing device.
As shown in fig. 10, the electronic apparatus of this embodiment includes: a processor 11, a memory 12, and a computer program 13 stored in the memory 12 and executable on the processor 11. When executing the computer program 13, the processor 11 implements steps S502, S504 and S506 in the above image processing method embodiment, or implements the functions of the modules/units in the above apparatus embodiments, such as the functions of the obtaining unit 902, the stitching unit 904 and the scaling unit 906 shown in fig. 9.
Illustratively, the computer program 13 may be partitioned into one or more modules/units, which are stored in the memory 12 and executed by the processor 11 to implement the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and these segments are used to describe the execution process of the computer program 13 in the electronic device.
Those skilled in the art will appreciate that fig. 10 is merely an example of an electronic device and is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or different components, e.g., the electronic device may also include input-output devices, network access devices, buses, etc.
The Processor 11 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, a discrete hardware component, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 12 may be an internal storage unit of the electronic device, such as a hard disk or a memory of the electronic device. The memory 12 may also be an external storage device of the electronic device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided on the electronic device. Further, the memory 12 may also include both an internal storage unit and an external storage device of the electronic device. The memory 12 is used for storing the computer program and other programs and data required by the electronic device. The memory 12 may also be used to temporarily store data that has been output or is to be output.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
The embodiments of the present application further provide a computer program product which, when run on a terminal device, enables the terminal device to implement the steps in the above method embodiments.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the processes in the methods of the above embodiments can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to an apparatus/terminal device, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random-Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals or telecommunications signals.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. An image processing method applied to the field of LEDs is characterized by comprising the following steps:
acquiring a plurality of images to be processed through a target storage component, wherein the plurality of images to be processed correspond to a plurality of video sources;
executing a stitching operation on the plurality of images to be processed through the target storage component to obtain a target stitched image, wherein the target stitched image is located in the target storage component;
and performing a scaling operation on the target stitched image through a scaling module to obtain a target display image, wherein an image parameter of the target display image is the same as a target display parameter of a display device, the scaling module is connected with the target storage component, and the scaling module is used for scaling the target stitched image.
2. The image processing method according to claim 1, wherein the acquiring a plurality of images to be processed through the target storage component comprises:
acquiring, through the target storage component, images sent by each video source in the plurality of video sources to obtain the plurality of images to be processed;
and storing, through the target storage component, the plurality of images to be processed respectively in a plurality of storage areas in the target storage component, wherein the plurality of storage areas correspond to the plurality of video sources.
3. The image processing method according to claim 2, wherein the performing, through the target storage component, a stitching operation on the plurality of images to be processed to obtain a target stitched image comprises:
acquiring, through the target storage component, a preset stitching format corresponding to the plurality of video sources, wherein the preset stitching format is used for indicating stitching positions of the plurality of images to be processed in the target stitched image;
and executing the stitching operation on the plurality of images to be processed through the target storage component according to the preset stitching format to obtain the target stitched image.
4. The image processing method according to any one of claims 1 to 3, wherein the performing, by the scaling module, the scaling operation on the target stitched image to obtain the target display image comprises:
calculating a first image parameter of the target stitched image through the scaling module;
under the condition that the first image parameter is smaller than the target display parameter, executing an amplification operation on the target stitched image through the scaling module to obtain the target display image, wherein the scaling operation comprises the amplification operation;
and under the condition that the first image parameter is larger than the target display parameter, executing a reduction operation on the target stitched image through the scaling module to obtain the target display image, wherein the scaling operation comprises the reduction operation.
5. The image processing method of claim 4, wherein the performing, by the scaling module, the amplification operation on the target stitched image to obtain the target display image comprises:
calculating the number of image pixels included in the target stitched image through the scaling module to obtain a first pixel number;
calculating the number of image pixels included in the target display image through the scaling module to obtain a second pixel number;
and inserting pixel points of a first target number into the target stitched image through the scaling module to obtain the target display image, wherein the first target number is a difference value obtained by subtracting the first pixel number from the second pixel number.
6. The image processing method of claim 5, wherein the inserting, by the scaling module, a first target number of pixel points in the target stitched image to obtain the target display image comprises:
inserting, through the scaling module, a row of pixel points to be interpolated every first preset number of rows of pixel points in the target stitched image to obtain the target display image; or
inserting, through the scaling module, a column of pixel points to be interpolated every second preset number of columns of pixel points in the target stitched image to obtain the target display image, wherein the number of the pixel points to be interpolated in the target display image is the first target number.
7. The image processing method according to claim 4, wherein the performing, by the scaling module, the reduction operation on the target stitched image to obtain the target display image comprises:
calculating the number of image pixels included in the target stitched image through the scaling module to obtain a third number of pixels;
calculating the number of image pixels included in the target display image through the scaling module to obtain a fourth number of pixels;
and deleting pixel points of a second target number from the target stitched image through the scaling module to obtain the target display image, wherein the second target number is a difference value obtained by subtracting the fourth number of pixels from the third number of pixels.
8. An image processing apparatus applied to the field of LEDs, comprising:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring a plurality of images to be processed through a target storage component, and the plurality of images to be processed correspond to a plurality of video sources;
the splicing unit is used for executing splicing operation on the multiple images to be processed through the target storage component to obtain a target spliced image, wherein the target spliced image is positioned in the target storage component;
and the zooming unit is used for performing zooming operation on the target spliced image through a zooming module to obtain a target display image, wherein the image parameters of the target display image are the same as the target display parameters of the display device, the zooming module is connected with the target storage component, and the zooming module is used for zooming the target spliced image.
9. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202211342493.8A 2022-10-31 2022-10-31 Image processing method, image processing device, electronic equipment and storage medium Pending CN115689888A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211342493.8A CN115689888A (en) 2022-10-31 2022-10-31 Image processing method, image processing device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211342493.8A CN115689888A (en) 2022-10-31 2022-10-31 Image processing method, image processing device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115689888A true CN115689888A (en) 2023-02-03

Family

ID=85046809

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211342493.8A Pending CN115689888A (en) 2022-10-31 2022-10-31 Image processing method, image processing device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115689888A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination