WO2019196589A1 - Image processing method, device and apparatus; image fitting method and device; display method and apparatus; and computer-readable medium - Google Patents


Info

Publication number
WO2019196589A1
WO2019196589A1 (PCT/CN2019/078015, CN2019078015W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
sub
display
images
stitched
Prior art date
Application number
PCT/CN2019/078015
Other languages
English (en)
French (fr)
Inventor
王雪丰
孙玉坤
张浩
陈丽莉
苗京花
赵斌
李茜
王立新
索健文
李文宇
彭金豹
范清文
陆原介
刘亚丽
王晨如
孙建康
Original Assignee
BOE Technology Group Co., Ltd.
Beijing BOE Optoelectronics Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co., Ltd. and Beijing BOE Optoelectronics Technology Co., Ltd.
Priority to US16/641,537 (granted as US11783445B2)
Publication of WO2019196589A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing

Definitions

  • the present disclosure relates to the field of image processing, and more particularly to an image processing method, device and apparatus, an image fitting method and device, a display method and apparatus, and a computer-readable medium.
  • due to the concentration on the retina of the cones responsible for perceiving color and detail, the human eye resolves fine detail only at the center of the gaze region, which corresponds to a viewing angle of roughly 5 degrees relative to the gazed image; anything outside the gaze region is perceived as blurred.
  • the effective observation area of the human eye therefore approximates a circle. That is, for an image (in particular a high-resolution image), only the content in the central circular area is effectively captured by the eye, while content in the edge area outside that circle falls outside the eye's effective observation area.
  • however, current image processors can only output rectangular images, and only rectangular images can be transmitted in the channel. In the prior art a rectangular image must therefore be transmitted, yet in that transmitted rectangle only the central circular area is effectively observed by the user (also referred to as the effective image), while the image in the edge region outside the circle is not effectively observed (also referred to as the useless image). Part of the rectangular image transmitted in the channel (the edge-region image) thus wastes channel bandwidth to some extent.
  • an image processing method including: dividing an input image into regions to obtain a plurality of sub-images; determining a part of the plurality of sub-images as an image to be output; splicing each sub-image in the image to be output to obtain a stitched image; and transmitting the stitched image, wherein the size of the stitched image is smaller than the size of the input image.
  • the stitched image is a rectangular image.
  • the method further includes: filling a vacant area of the stitched image to form a rectangular image.
  • the splicing of each sub-image in the image to be output includes: determining the sub-image having the largest area among the sub-images, and moving the other sub-images with respect to that largest sub-image.
  • the plurality of sub-images is divided based on a shape of a display area of the display device.
  • an image fitting method comprising: receiving a stitched image obtained according to the image processing method described above; extracting each sub-image in the image to be output from the stitched image; and fitting the sub-images to obtain a display image, wherein fitting each sub-image means obtaining the display image by an operation that reverses the splicing process used to obtain the stitched image.
  • the image fitting method further includes: calculating, for each sub-image in the image to be output, a fitting parameter of that sub-image; and fitting each sub-image based on its fitting parameter.
  • the fitting parameter includes an area ratio parameter and an offset parameter; the area ratio parameter includes the width ratio and height ratio of the sub-image with respect to the display area of the display device, and the offset parameter includes the starting position of the sub-image in the display area.
  • fitting each sub-image includes fitting according to the shape of the display area of the display device.
  • an image display method comprising: dividing an input image into regions to obtain a plurality of sub-images; determining a part of the plurality of sub-images as an image to be output; splicing each sub-image in the image to be output to obtain a stitched image; transmitting the stitched image, wherein the stitched image is smaller than the input image; receiving the stitched image; and reversing the splicing process to obtain a display image.
  • an image processing apparatus comprising: a region dividing unit configured to divide an input image into regions to obtain a plurality of sub-images; a determining unit configured to determine a part of the plurality of sub-images as an image to be output; and an output unit configured to splice each sub-image in the image to be output to obtain a stitched image and to transmit the stitched image, wherein the stitched image is smaller than the input image.
  • an image fitting apparatus comprising: a receiving unit configured to receive a stitched image obtained according to the image processing method according to claim 1; and a fitting unit configured to extract each sub-image in the image to be output from the stitched image and to fit the sub-images to obtain a display image, wherein fitting the sub-images means obtaining the display image by an operation that reverses the splicing process used to obtain the stitched image.
  • an image processing device comprising one or more processors and one or more memories, the processors configured to execute computer instructions to perform the image processing method described above, or the image fitting method described above.
  • a display device including a display screen and at least one processor, the processor configured to receive a stitched image obtained according to an image processing method of the present disclosure and to perform the image fitting method to obtain a display image; the display screen is configured to display the display image.
  • the display device further includes one or more sensors configured to track and determine gaze point data of a user within a display area of the display device, and at least one processor of the display device is further configured to transmit the gaze point data to the image processing apparatus described above.
  • the at least one processor of the display device is further configured to acquire a shape of a display area of the display device and transmit the shape to the image processing device as described above.
  • the shape of the display area of the display device includes a non-rectangular shape.
  • a computer readable storage medium configured to store computer instructions that, when executed by a processor, perform an image processing method as described above, or perform an image as described above Fit method.
  • FIG. 1 illustrates a flow chart of an image processing method in accordance with an embodiment of the present disclosure.
  • FIG. 2 illustrates a schematic diagram of a process of region division and splicing of an image in an image processing method according to an embodiment of the present disclosure.
  • FIG. 3 illustrates a schematic diagram of a model that completes the splicing process shown in FIG. 2, in an image processing method according to an embodiment of the present disclosure.
  • FIG. 4 shows a schematic diagram of the stitched image of FIG. 2 output in accordance with an embodiment of the present disclosure.
  • FIG. 5 illustrates a schematic diagram of an image displayed on a display device in accordance with an embodiment of the present disclosure.
  • FIG. 6 shows a schematic diagram of an image stitching process according to another embodiment of the present disclosure.
  • FIG. 7 shows a stitched image obtained according to the splicing process shown in FIG. 6.
  • FIG. 8 shows a schematic diagram of an image displayed on a display device in accordance with an embodiment of the present disclosure.
  • FIG. 9A shows a schematic diagram of an image processing device in accordance with an embodiment of the present disclosure.
  • FIG. 9B shows a schematic diagram of an image pasting apparatus in accordance with an embodiment of the present disclosure.
  • FIG. 10 shows a schematic diagram of an image processing system in accordance with an embodiment of the present disclosure.
  • FIG. 11 shows a schematic diagram of a display device included in an image processing system according to an embodiment of the present disclosure.
  • FIG. 12 shows a schematic diagram of an image processing apparatus according to an embodiment of the present disclosure.
  • FIG. 13 shows a schematic diagram of a display device in accordance with an embodiment of the present disclosure.
  • FIG. 14 illustrates a flow chart of an image fitting method in accordance with an embodiment of the present disclosure.
  • FIG. 15 shows a flowchart of an image display method according to an embodiment of the present disclosure.
  • a rendering engine is employed to create and retrieve images and to perform image processing operations.
  • the rendering engine may be, for example, a Unity rendering engine, or other image processing tools, and the disclosure does not limit it.
  • the display area to be rendered for the rendering engine may be the entire scene (Scene) targeted by the rendering engine that performs the rendering.
  • the construction of the scene can be performed by the rendering engine according to the scene-construction data (attribute data (Attributes), uniform data (Uniforms), texture data (Texture), and the like).
  • the rendering engine can perform the construction of the scene through a shader of the graphics processor or a central processing unit or other logic operation circuit that can perform rendering operations.
  • a plurality of virtual cameras is arranged, and various parameters can be set so that each virtual camera obtains a desired image for providing a viewable scene view during rendering.
  • the virtual camera may be, for example, an orthographic projection camera or a perspective projection camera.
  • the virtual camera imports the captured image into the rendering engine; the parameters of the virtual camera are set and its angle adjusted so as to capture the image to be processed as an input image.
  • the required algorithms can be designed. These algorithms can be implemented as software products executed by the processor running the rendering engine, or can be solidified in hardware dedicated to the required image processing.
  • the process of performing the required image processing can be abstracted into a model that performs corresponding image processing functions.
  • the model that divides the image can be referred to as a partition model, the model that stitches the image as a stitching model, the model that fits the image as a fitting model, and so on.
  • the image of the central region of interest to the human eye (which may be the central region of the entire image, or the central region of the partial image region at which the human eye is gazing) is the area on which the eye can effectively focus.
  • An image that is not effectively focused by the human eye, that is, an image outside the center area may be referred to as a useless image.
  • FIG. 1 shows a flowchart of the image processing method.
  • step S101 the input image is divided into regions to obtain a plurality of sub-images.
  • the division into sub-images may be based on the shape of the display area of the display device.
  • the sub-images may also be divided based on acquired gaze point data of the user in the display area.
  • step S102 a part of the plurality of sub-images is determined as an image to be output.
  • the sub-images that are not determined as the image to be output are discarded (referred to as discarded images); that is, only the sub-images in the image to be output need to be transmitted, and the discarded images are not transmitted.
  • step S103 each sub-image in the image to be output is spliced to obtain a spliced image.
  • the shape of the image to be output may be non-rectangular, such as a hexagon or an octagon, which approximates a circle more closely than a rectangular image to be output does.
  • the display of such a non-rectangular image is also referred to as an "abnormal display".
  • each sub-image in such a non-rectangular image may be appropriately moved and spliced to form a rectangular image suitable for channel transmission.
  • in step S104 the stitched image is transmitted. Since only a part of the sub-images is determined as the image to be output in step S102, and the stitched image obtained in step S103 is based only on those determined sub-images (the other sub-images being discarded and not transmitted), the size of the stitched image is smaller than the size of the input image.
  • the sub-images that do not need to be displayed on the display screen for the user to view are discarded, and appropriate stitching is performed on the sub-images determined as the image to be output.
  • a stitched image suitable for channel transmission and smaller than the original input image size can be obtained.
  • channel resources can be saved.
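  • The pipeline of steps S101-S103 can be illustrated with a minimal sketch (hypothetical code, not the patent's implementation): a partition function assigns each pixel a region id, a subset of regions is kept as the image to be output, and each kept region is translated by a fixed per-region offset so the kept regions tile a smaller rectangle. The toy region layout and offsets below are invented for illustration.

```python
def partition(width, height, region_of):
    """S101: divide the input image into sub-images (lists of pixel
    coordinates), keyed by the region id returned by `region_of`."""
    subs = {}
    for y in range(height):
        for x in range(width):
            subs.setdefault(region_of(x, y), []).append((x, y))
    return subs

def stitch(subs, keep, offsets):
    """S102 + S103: keep only the regions in `keep` and translate each by
    its per-region offset, yielding the pixel set of the stitched image."""
    stitched = set()
    for rid in keep:
        dx, dy = offsets.get(rid, (0, 0))
        for (x, y) in subs[rid]:
            stitched.add((x + dx, y + dy))
    return stitched

# Toy 6x4 input split into three vertical bands; keep the two outer bands
# and slide the right band (region 2) next to the left one (region 0).
subs = partition(6, 4, lambda x, y: x // 2)        # regions 0, 1, 2
out = stitch(subs, keep=[0, 2], offsets={0: (0, 0), 2: (-2, 0)})
print(len(out))   # 16 pixels: the kept bands tile a 4x4 stitched image
```

The discarded middle band (region 1) plays the role of the discarded sub-images: it is never transmitted.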
  • FIG. 14 shows a flowchart of the image fitting method.
  • a stitched image is received.
  • the stitched image may be a stitched image obtained according to the image processing method as described above.
  • the stitched image is obtained by stitching each sub-image in the image to be output.
  • each sub-image in the image to be output is extracted from the stitched image.
  • the respective sub-images are fitted to obtain a display image.
  • fitting each sub-image refers to obtaining the display image by an operation that reverses the splicing process used to obtain the stitched image.
  • the image fitting method may further include calculating, for each sub-image in the image to be output, a fitting parameter of that sub-image, and then fitting each sub-image based on its fitting parameter.
  • the fitting parameter may include an area ratio parameter and an offset parameter; the area ratio parameter includes the width ratio and height ratio of the sub-image with respect to the display area of the display device, and the offset parameter includes the starting position of the sub-image in the display area.
  • fitting each sub-image may include, for example, fitting according to the shape of the display area of the display device.
  • the shape of the display device matches the shape of the image to be output, so that the output image can be placed on the display screen most efficiently. The stitched image therefore needs to be restored to the original image to be output once it reaches the display device side, so that it can be completely displayed on the display screen. Since splicing converts the image to be output into the stitched image, fitting restores the stitched image to the image to be output; that is, fitting the sub-images means reversing the splicing process used to obtain the stitched image, thereby obtaining the display image.
  • the so-called reverse operation may mean that the adjustment of the position of each sub-image by the fitting process is opposite to the adjustment of the position of the corresponding sub-image by the splicing process.
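  • This reverse operation can be sketched as follows (hypothetical code; the offset values for sub-images 101-103 are invented for illustration): if splicing translated a sub-image by (dx, dy), fitting translates it back by (-dx, -dy), so the round trip is the identity.

```python
# Hypothetical splicing translations for sub-images 101-103 of FIG. 2.
splice_offsets = {101: (0, 0), 102: (-300, -200), 103: (300, 200)}

def fit_offsets(offsets):
    """Invert every splicing translation to restore original positions."""
    return {rid: (-dx, -dy) for rid, (dx, dy) in offsets.items()}

def apply(offset, point):
    """Translate a pixel coordinate by an offset."""
    dx, dy = offset
    x, y = point
    return (x + dx, y + dy)

inv = fit_offsets(splice_offsets)
p = (500, 400)                 # a pixel of sub-image 102 before splicing
restored = apply(inv[102], apply(splice_offsets[102], p))
print(restored)                # (500, 400): fitting undoes splicing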
  • steps S101 to S104 may be performed in a rendering engine, and steps S201 to S203 may be performed in a display device.
  • steps S201 to S203 may be performed by a driver integrated circuit (Driver IC) of the display device.
  • FIG. 2 shows a schematic diagram of performing sub-image region division and splicing the image to be output to form a stitched image, in the image processing method according to the present disclosure.
  • the rectangular image on the left side is the original input image 100, and the size of the input image may be 1920*1080.
  • the input image can be divided into seven sub-images according to the area distribution, such as the sub-images 101-107 in the left image in FIG.
  • each sub-image numbered 101-103 shown in FIG. 2 is determined as the image to be output; together they form a hexagonal image. The other four sub-images 104-107 are discarded images, i.e., they are not transmitted to the display device.
  • the rectangular image on the right side in FIG. 2 is a stitched image 200 obtained by stitching, and the size of the stitched image is 1380*1080.
  • the process of splicing the sub-images 101-103 in the image to be output to form the stitched image 200 can be expressed as follows: the sub-image numbered 102 in the left image is translated to the lower left relative to the sub-image numbered 101, and the sub-image numbered 103 in the left image is translated to the upper left relative to the sub-image numbered 101; the moved sub-images 102 and 103 are then combined with the sub-image numbered 101, yielding the rectangular stitched image 200 on the right of FIG. 2.
  • transmitting the stitched image instead of the input image thus saves 28.125% of the transmission channel bandwidth.
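  • As a quick arithmetic check of that figure, using the sizes given above (1920*1080 input, 1380*1080 stitched image):

```python
# Bandwidth saved = 1 - (stitched pixel count) / (input pixel count).
saving = 1 - (1380 * 1080) / (1920 * 1080)
print(f"{saving:.3%}")   # 28.125%, matching the figure stated for FIG. 2
```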
  • the process of splicing an image to be output (a non-rectangular image) comprising sub-images 101-103 to form a stitched image may involve a plurality of different moving paths for each sub-image, and may include a variety of movement methods such as translation and rotation.
  • the sub-image with the largest area among the sub-images of the image to be output may be determined first (such as sub-image 101 in FIG. 2); that largest sub-image is then held fixed, and the other sub-images (such as sub-images 102 and 103 in FIG. 2) are moved relative to it to obtain the stitched image.
  • the sub-image having the largest area among the sub-images may include the gaze point area of the user.
  • the stitching process can be implemented by a model.
  • the model can be pre-established as needed, for example using modeling software, and then imported into the rendering engine as a file for stitching the image.
  • FIG. 3 is a schematic diagram 300 of a model that completes the splicing process shown in FIG. 2, in an image processing method according to an embodiment of the present disclosure.
  • the model may be pre-established to complete the sub-image region partitioning and splicing process shown in FIG. 2. In this way, processing each frame only requires calling the model in real time, avoiding re-computing the stitching paths for every frame.
  • the modeling process requires that each sub-image to be stitched be provided to the modeling software, and that the size and shape of the desired output stitched image be preset. That is, the shape and size of each sub-image and the relative positional relationships between the sub-images are fixed, and the size and shape of the desired stitched image are also fixed, so the movement trajectory of each sub-image is likewise determined.
  • the model created in FIG. 3 is for generating a stitched image of a selected size based on the three sub-images numbered 101-103 in FIG. 2.
  • the stitched image of the selected size may be a rectangular image. It should be understood that the image of the selected size may also be an image of other shapes, and the disclosure does not limit it.
  • the rendering engine provides each sub-image in the image to be output to the model and acquires the stitched image generated by the model for transmission to the display device.
  • FIG. 4 is a schematic diagram 400 showing the stitched image of FIG. 2 outputted in accordance with an embodiment of the present disclosure.
  • the top and bottom views shown in FIG. 4 are, respectively, the Scene view 411 and the Game view 412 of the stitched image viewable in the rendering engine. The Scene view is a 3D view, in which the stitched image can be viewed from different perspectives by adjusting the angle of the virtual camera; the Game view is a flat (2D) view of the 3D image.
  • the stitched image is appropriately adjusted by inspecting the Game view, so that the two-dimensional stitched image (a rectangular image) is transmitted in the channel.
  • each sub-image, together with its fitting parameters, is provided to the modeling software.
  • the fitting parameters of each sub-image are determined in advance and serve two functions: first, they are used to establish the model, determining the initial relative position of each sub-image, from which the movement trajectories that form the desired stitched image are calculated; second, they can be used on the display device side (for example, by the display device) to restore the stitched image received from the channel into a display image identical to the original image to be output.
  • the fitting parameter reflects the relative positional relationship of each sub-image in the original image to be output
  • the fitting parameter can be determined according to the left rectangle image in FIG. 2.
  • the fit parameter may include an area ratio parameter Scale and an offset parameter Offset.
  • the area where the original input image is located is referred to as the display area (for example, the scene where the original input image is located). The area ratio parameter Scale may include the width ratio and height ratio of the sub-image with respect to the display area, and the offset parameter may include the specific location of the sub-image in the display area.
  • the offset parameter of a sub-image may be determined by enclosing the sub-image in a minimum bounding rectangle and taking the offset ratios, in the width and height directions respectively, of the lower-left corner of that rectangle relative to the lower-left corner of the display area.
  • the point at the lower-left corner of the display area (i.e., of the original input image) is set as the reference point (0, 0). It is easy to understand that selecting any other position as the reference point is equally feasible.
  • Vector2 represents a two-dimensional vector.
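  • The Scale and Offset parameters described above can be sketched as follows (hypothetical code; the function name, the `sub_rect` representation, and the example strip are illustrative, not from the patent): Scale is the sub-image's width and height as fractions of the display area, and Offset is the lower-left corner of its minimum bounding rectangle as fractions of the display area, measured from the reference point (0, 0).

```python
def fitting_params(sub_rect, display_w, display_h):
    """sub_rect = (x, y, w, h): minimum bounding rectangle of the
    sub-image, with (x, y) its lower-left corner in display coordinates
    relative to the display area's lower-left reference point (0, 0)."""
    x, y, w, h = sub_rect
    scale = (w / display_w, h / display_h)    # width ratio, height ratio
    offset = (x / display_w, y / display_h)   # starting position (a Vector2)
    return scale, offset

# Hypothetical sub-image occupying the left 540x1080 strip of a
# 1920x1080 display area:
scale, offset = fitting_params((0, 0, 540, 1080), 1920, 1080)
print(scale, offset)   # (0.28125, 1.0) (0.0, 0.0)
```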
  • the shape, size, and relative positional relationships of the sub-images to be stitched are already determined, and the position of each sub-image in the stitched image is also determined. The rendering engine can therefore label each sub-image and provide the sub-images in parallel to the model according to their respective labels, and the model can move each sub-image to its corresponding location based on its label to generate the desired stitched image.
  • the three sub-images to be output may be sequentially labeled 101-103, as shown in the left rectangular image. Since the model is deterministic, the relative position of each sub-image in the stitched image is also determined after splicing: as shown in the right rectangular image of FIG. 2, the sub-image numbered 102 is moved to the lower left of the sub-image numbered 101, and the sub-image numbered 103 is moved to the upper left of the sub-image numbered 101. Because the positional relationship of each sub-image is deterministic both before and after splicing, when each sub-image is provided to the established model according to its label, the model can quickly move it to the location corresponding to that label, yielding the desired stitched image.
  • fitting parameters can reflect the relative positional relationship of each sub-image
  • the operation of labeling each sub-image as described above can be performed based on the fitting parameters of the respective sub-images.
  • FIG. 5 is a schematic diagram 500 showing a display image displayed on a display device in accordance with an embodiment of the present disclosure.
  • after the display device receives the stitched image, the process of the disclosed image fitting method (i.e., restoring the received stitched image to the original image to be output) may be executed by a processor configured in the display device (e.g., a driver integrated circuit, Driver IC).
  • the display image shown in FIG. 5 corresponds to the image to be output shown in FIG. 2, and thus, the display image is also a hexagonal image.
  • since the fitting parameters include the relative positional relationship of each sub-image before splicing, the driving circuit of the display device can, based on them, fit (restore) each of the determined sub-images into the original image to be output.
  • the driving circuit may move each sub-image to the position corresponding to its label, based on the label of each sub-image as described above.
  • in the original image to be output (i.e., the left image of FIG. 2 comprising sub-images 101-103), the sub-image labeled 102 is at the upper right of the sub-image labeled 101, and the sub-image labeled 103 is at the lower right of the sub-image labeled 101. In this way, the image received from the channel can also be quickly restored to the original image to be output.
  • the shape of the display screen of the display device matches the shape of the image to be output.
  • the screen shape of the display device shown in FIG. 5 is a hexagon.
  • other shapes may be adopted according to actual conditions, and the disclosure does not limit the same.
  • FIG. 6 shows a schematic diagram 600 of another image stitching process in which the image to be output is an octagonal image, in accordance with an embodiment of the present disclosure.
  • the square image on the left side is the original input image 601, and the size of the input image may be 1000*1000.
  • the input image may be divided into nine sub-images 401-409 according to the area distribution. The sub-images labeled 401-405 shown in FIG. 6 are determined as the image to be output to the display; together they form an octagonal image, while the other four sub-images 406-409, located at the edges of the square image, will not be displayed.
  • the square image on the right side is the stitched image after stitching, and its size is close to 856*856 (with one rectangular area left vacant).
  • the sub-image labeled 401 can be determined as the one with the largest area among the five sub-images to be output; sub-image 401 is then held fixed, and only the other sub-images are moved relative to it.
  • the sub-image labeled 402 is translated to the lower right relative to the sub-image labeled 401, the sub-image labeled 403 is translated to the upper right, the sub-image labeled 404 is translated to the upper left, and the sub-image labeled 405 is translated to the lower left; the moved sub-images 402-405 are then combined with the sub-image labeled 401, yielding the approximately square stitched image 602 on the right side of FIG. 6.
  • the stitched image on the right side of FIG. 6 is only 73.27% of the size of the left square image; that is, transmitting only the right stitched image saves 26.73% of the transmission channel bandwidth compared with transmitting the left square image
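The bandwidth figures quoted here can be checked with a few lines of arithmetic (a sketch; the 856*856 stitched size is taken from the text):

```python
# FIG. 6 example: an ~856*856 stitched image replaces the 1000*1000 input.
original = 1000 * 1000
stitched = 856 * 856
ratio = stitched / original
assert round(ratio * 100, 2) == 73.27          # stitched size vs. original
assert round((1 - ratio) * 100, 2) == 26.73    # channel bandwidth saved
```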
  • not every set of sub-images can necessarily be stitched into a rectangle of the desired size; the desired size value can be determined by the display area of the display screen
  • in that case, the rectangular image closest to the desired size value can be output as the stitched image, and the ratio of the area of the stitched image to the area of the rectangle of the desired size lies in [0, 1]
  • the stitched image can be chosen in a way that maximizes this ratio within the range [0, 1]
  • the sub-images input into the model may not stitch exactly into the desired rectangular image
  • for example, as shown by image 602 on the right of FIG. 6, the image stitched from sub-images 401-405 leaves a rectangular area, shown as area 410, vacant relative to a full rectangle
  • in this case, the rendering engine may fill the vacant area 410 so as to complete the stitched image into a rectangular image of the desired size, for example using a background color or a gradient color; the present disclosure does not limit the manner in which the vacant area 410 is filled
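A minimal sketch of the filling step (NumPy assumed; the vacant-region coordinates below are hypothetical, chosen only for illustration):

```python
import numpy as np

# An 856x856 RGB canvas standing in for the stitched image 602.
canvas = np.zeros((856, 856, 3), dtype=np.uint8)
# Hypothetical location of the vacant rectangular area 410 (rows, cols).
vacant = (slice(600, 856), slice(600, 856))
# Fill the vacancy with a flat background color so the transmitted
# image is a complete rectangle of the desired size.
canvas[vacant] = np.array([128, 128, 128], dtype=np.uint8)
assert canvas[700, 700].tolist() == [128, 128, 128]
assert canvas[0, 0].tolist() == [0, 0, 0]  # stitched content untouched
```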
  • where the fitting operation is performed by the driving circuit of the display device, the vacant area 410 will be discarded during the fitting (restoration) process of the driving circuit, but it still needs to be transmitted in the channel; hence the bandwidth that can be saved is slightly less than in the case where the sub-images stitch exactly into a rectangular image
  • FIG. 7 is a schematic diagram 700 showing a stitched image obtained according to the stitching process shown in FIG. 6.
  • the top and bottom views shown in FIG. 7 are the Scene view 701 and the Game view 702 of the stitched image as viewable in the rendering engine; the Scene view is a 3D image (here, approximately square) that can be viewed from different perspectives by adjusting the angle of the virtual camera, while the Game view is the planar image of that 3D image
  • the stitched image is adjusted as appropriate by inspecting the Game view, the vacant area 410 of the two-dimensional stitched image relative to the desired rectangle is filled (for example with a background color or a gradient color), and the filled rectangular image is finally transmitted in the transmission channel
  • FIG. 8 is a schematic diagram 800 showing an image displayed in accordance with an embodiment of the present disclosure.
  • the display image shown in FIG. 8 is obtained based on the stitched image 602 shown in FIG. 6.
  • the driving circuit in the display device can restore the received stitched image to the original image to be output (i.e., an octagonal image) for display in the display area of the display screen; that is, the display device first extracts each sub-image 401-405 of the image to be output from the stitched image 602, and then fits the sub-images together to obtain the display image
  • the image to be output may be determined based on a shape of a display area.
  • the shape of the display area of the display screen may be acquired first, such as a square or a rectangle, together with the size of that shape
  • when the acquired display area is a rectangle, the input image 100 may be divided into a plurality of sub-images 101-107 as shown in the left image of FIG. 2
  • when the acquired display area is a square, the input image 601 can be divided into a plurality of sub-images 401-409 as shown in the left image of FIG. 6
  • FIG. 15 shows a flowchart of an image display method according to an embodiment of the present disclosure.
  • step S301 the input image is divided into regions to obtain a plurality of sub-images.
  • the division of the sub image may be based on a shape of a display area of the display device.
  • the sub-image may also be divided based on the acquired gaze point data of the user in the display area.
  • step S302 a part of the plurality of sub-images is determined as an image to be output.
  • sub-images not determined as the image to be output are discarded as discarded images; that is, only the sub-images in the image to be output need to be transmitted, without transmitting the discarded images
  • step S303 each sub-image in the image to be output is spliced to obtain a spliced image, and in step S304, the spliced image is transmitted, for example, the spliced image may be transmitted to a display device for display.
  • step S305 a mosaic image is received.
  • the stitched image may be a stitched image obtained in step S303 as described above.
  • the stitched image is obtained by stitching each sub-image in the image to be output.
  • each sub-image in the image to be output is extracted from the mosaic image.
  • the respective sub-images are fitted together and the resulting display image is displayed, for example on the display screen
  • fitting each sub-image means obtaining the display image by an operation opposite to the stitching process by which the sub-images were stitched into the stitched image
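The splice-then-fit round trip described in these steps can be sketched as a pair of inverse translations (the positions and moves below are hypothetical, for illustration only):

```python
def splice(subimages, moves):
    # During stitching, each sub-image i is translated by moves[i] = (dy, dx).
    return [(img, (y + dy, x + dx))
            for (img, (y, x)), (dy, dx) in zip(subimages, moves)]

def fit(spliced, moves):
    # Fitting applies the opposite translation, restoring original positions.
    return [(img, (y - dy, x - dx))
            for (img, (y, x)), (dy, dx) in zip(spliced, moves)]

subs = [("sub101", (0, 0)), ("sub102", (540, 1380))]   # (image, position)
moves = [(0, 0), (-540, -540)]                          # sub101 stays fixed
restored = fit(splice(subs, moves), moves)
assert [pos for _, pos in restored] == [(0, 0), (540, 1380)]
```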
  • FIG. 9A is a schematic diagram showing an image processing apparatus according to an embodiment of the present disclosure.
  • the image processing apparatus 900 illustrated in FIG. 9A may include an area dividing unit 901, a splicing unit 902, and an output unit 903.
  • the area dividing unit 901 is configured to divide the input image into regions to obtain a plurality of sub-images
  • a part of the divided sub-images is taken as the image to be output, so that the sub-images of the input image other than the image to be output need not be transmitted
  • the shape of the image to be output may be non-rectangular, such as a hexagon or an octagon, which is not limited by the present disclosure
  • considering the focusing characteristics of the human eye, the sub-images of the central region of the input image may be taken as the image to be output, while the sub-images of the edge region are not transmitted, i.e., they are discarded during transmission
  • the splicing unit 902 is configured to splicing each sub-image in the image to be output to obtain a spliced image.
  • the image to be output may be a non-rectangular image, so its sub-images need to be moved appropriately to be stitched into a rectangular image suitable for channel transmission
  • the output unit 903 is configured to output the stitched image; since at least one of the plurality of sub-images is discarded after the input image is divided into regions, the size of the stitched image is smaller than the size of the input image
  • the image processing apparatus 900 of the embodiments of the present disclosure may discard sub-images that do not need to be displayed on the display screen for the user, and stitch the sub-images to be output appropriately to obtain a stitched image that is suitable for channel transmission and smaller than the original input image
  • transmitting this smaller stitched image over the channel therefore saves channel resources
  • the stitched image can be a rectangular image; since the image that the image processing apparatus 900 transmits on the channel needs to be rectangular, the image stitched by the splicing unit is generally also required to be rectangular
  • the size of the rectangular image may be set in advance, and preferably a size most convenient for forming a rectangle may be set according to the shapes and sizes of the sub-images in the image to be output
  • the image processing apparatus 900 may further include a filling unit 904 configured to, when the stitched image is a non-rectangular image, fill the vacant area of the stitched image relative to a rectangle of the set size, so as to form a rectangular image of the set size
  • in some cases, no matter how the sub-images of the image to be output are moved, they cannot be stitched exactly into a rectangular image of the selected size, so the stitched image has a vacant area relative to that rectangle
  • the filling unit 904 is then required to fill the vacant area, for example with a background color or a gradient color, to ensure that the image transmitted in the channel is a rectangular image
  • stitching the sub-images of the image to be output may include: determining the sub-image with the largest area among the sub-images, and moving the other sub-images relative to the sub-image with the largest area
  • the sub-image with the largest area can be held fixed, and only the other sub-images moved, which improves the speed of the image stitching procedure
  • for example, the other sub-images can simply be translated, without more complicated movements such as rotation or flipping
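A sketch of the anchor-selection rule just described, using the FIG. 2 sub-image labels with assumed width/height pairs:

```python
# label -> (width, height); sizes assumed from the FIG. 2 example.
subimages = {101: (1380, 1080), 102: (540, 540), 103: (540, 540)}
# Fix the largest-area sub-image; only the others are translated.
anchor = max(subimages, key=lambda k: subimages[k][0] * subimages[k][1])
movable = sorted(k for k in subimages if k != anchor)
assert anchor == 101
assert movable == [102, 103]
```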
  • FIG. 9B shows a schematic diagram of an image pasting apparatus in accordance with an embodiment of the present disclosure.
  • the image fitting apparatus 910 may include a receiving unit 911 and a fitting unit 912
  • the receiving unit 911 is configured to receive a stitched image obtained according to the image processing method described above, such as the stitched image 200 or the stitched image 602
  • the fitting unit 912 is configured to extract each sub-image of the image to be output from the stitched image and fit the sub-images together to obtain a display image, where fitting the sub-images means obtaining the display image by an operation opposite to the stitching process by which the sub-images were stitched into the stitched image
  • FIG. 10 is a schematic diagram showing an image processing system in accordance with an embodiment of the present disclosure.
  • the image processing system 1000 may include: an image processing device 900 as described above; and a display device 1100.
  • the display device 1100 is for receiving an image output from the image processing device 900 via a channel, and for displaying an image.
  • FIG. 11 is a schematic diagram showing a display device 1100 included in an image processing system according to an embodiment of the present disclosure.
  • the display device 1100 may include an image extracting unit 1101 and a bonding unit 1102.
  • the image extracting unit 1101 may be configured to extract each sub image in the image to be output from the mosaic image; and the bonding unit 1102 may be configured to fit each sub image to obtain a display image.
  • the shape of the display area in the display device matches the shape of the image to be output, so that the output image can be placed on the display screen most efficiently; therefore, the stitched image needs to be restored to the original image to be output when it reaches the display device side in order to be displayed optimally on the display screen. Since the stitching process converts the image to be output into the stitched image, and the fitting process restores the stitched image to the image to be output, fitting the sub-images may be the reverse operation of stitching the sub-images of the image to be output.
  • the display device 1100 may further include a computing unit 1103 configured to calculate, for each sub-image in the image to be output, a fitting parameter of the sub-image
  • the fitting parameters can reflect the relative positional relationships of the sub-images in the original image to be output
  • the fitting parameters may include an area-ratio parameter Scale and an offset parameter Offset; if the area in which the original input image is located is referred to as the display area, the area-ratio parameter Scale includes the width ratio and height ratio of the sub-image relative to the display area, and the offset parameter includes the specific position of the sub-image in the display area, for example the offset ratios, in the width and height directions, of the lower-left corner of the sub-image's minimal bounding rectangle relative to the lower-left corner of the display area, determined after padding the sub-image to that minimal rectangle
  • the endpoint at the lower-left corner of the display area (i.e., the original input image) is set as the reference point; it should be understood that points at other locations may also be selected as reference points, and the disclosure does not limit this
  • since the fitting parameters can reflect the relative positional relationships of the sub-images in the original image to be output, the stitched image can be restored to the original image to be output according to the calculated fitting parameters
  • the shape of the display area of the display device 1100 may be non-rectangular.
  • embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon a computer program configured to be executed by a processor to implement one or more steps of the image processing method described in the embodiments of the present disclosure
  • an embodiment of the present disclosure further provides an image processing apparatus 1200 including one or more processors 1201 configured to execute computer instructions to perform one or more steps of the image processing method of any of the above embodiments, or one or more steps of the image fitting method described above
  • the image processing apparatus 1200 further includes a memory 1202 coupled to the processor 1201 and configured to store the computer instructions.
  • the memory 1202 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk
  • the processor 1201 may be a central processing unit (CPU), a field-programmable gate array (FPGA), a microcontroller unit (MCU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or a graphics processing unit (GPU)
  • the one or more processors may be configured as a concurrently computing processor group that executes the image processing method described above, or some processors may be configured to perform some steps of the image processing method while other processors perform the remaining steps, and so on
  • computer instructions comprise one or more processor operations defined by the instruction-set architecture of the processor, and may be logically contained in and represented by one or more computer programs
  • the image processing device 1200 can also be connected to various input devices (such as a user interface or a keyboard), various output devices (such as speakers or network cards), and display devices to enable interaction between the image processing device and other products or users; details are not repeated here
  • connection may be through a network connection, such as a wireless network, a wired network, and/or any combination of a wireless network and a wired network.
  • the network may include a local area network, the Internet, a telecommunications network, an Internet of Things based Internet and/or telecommunications network, and/or any combination of the above networks, and the like.
  • the wired network can communicate by, for example, twisted pair, coaxial cable or optical fiber transmission.
  • the wireless network can adopt a communication method such as a 3G/4G/5G mobile communication network, Bluetooth, Zigbee or Wi-Fi.
  • an embodiment of the present disclosure also discloses a display device 1300.
  • the display device 1300 can include a display screen 1301 and at least one processor 1302.
  • the at least one processor 1302 may be configured to receive a stitched image obtained by the image processing method as described above, and may also be configured to perform an image fitting method as described above to obtain a display image.
  • the display screen 1301 may be configured to display the display image.
  • the display device 1300 may also be connected to the image processing device 1200 via a data transmission device 1303, the image processing device 1200 including at least one processor that may perform the image processing method described above to obtain the stitched image, and may also perform the image fitting method described above to obtain a display image
  • the display device 1300 may receive a stitched image output by the image processing device 1200 via the data transfer device 1303.
  • the at least one processor 1302 in the display device 1300 may extract each sub-image in the image to be output from the received mosaic image, and fit each sub-image to obtain a display image.
  • the data transmission device 1303 is coupled to a driving circuit of the display device 1300, for example via an interface (such as VGA, DVI, HDMI, or DP) connecting the data transmission device to the display screen
  • the data transmission device 1303 can be a display connection cable corresponding to the display screen interface
  • the data transmission device 1303 may be a wirelessly implemented display signal transceiving device, for example, a wireless display transceiving device capable of performing display functions such as Air Play, DLNA, Miracast, WiDi, Chromecast, and the like.
  • display device 1300 can also include one or more sensors configured to track and determine gaze point data of a user within a display area of display device 1300.
  • the at least one processor 1302 of the display device 1300 is further configured to transmit the fixation point data to the image processing device 1200 via the data transmission device 1303.
  • at least one processor 1302 of the display device 1300 is further configured to acquire the shape of the display area of the display device 1300 and transmit the shape to the image processing device 1200 via the data transmission device 1303
  • at least one processor of the image processing device 1200 may be integrated in a driving circuit of the display device 1300 to perform the image fitting method described above
  • the shape of the display area of the display device 1300 includes a non-rectangular shape such as a triangle, a hexagon, an octagon, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

An image processing method, the method comprising: dividing an input image into regions to obtain a plurality of sub-images (S101); determining a part of the plurality of sub-images as an image to be output (S102); stitching the sub-images of the image to be output to obtain a stitched image (S103); and transmitting the stitched image (S104), wherein the size of the stitched image is smaller than the size of the input image. An image fitting method, the method comprising: receiving a stitched image obtained according to the image processing method of the present disclosure (S201); extracting the sub-images of the image to be output from the stitched image (S202); and fitting the sub-images to obtain a display image (S203), wherein fitting the sub-images means obtaining the display image by an operation opposite to the stitching process by which the sub-images were stitched into the stitched image.

Description

Image processing method, device and apparatus, image fitting method and device, display method and apparatus, and computer-readable medium

This application claims priority to Chinese Patent Application No. 201810321434.X, filed on April 11, 2018, the entirety of which is incorporated herein by reference as part of this application.

Technical Field

The present disclosure relates to the field of image processing, and more particularly to an image processing method and apparatus, an image fitting method, a display apparatus, and a medium.

Background

High-definition displays have recently become widespread. As image resolution increases, the viewer's visual experience keeps improving. On the other hand, however, high-resolution images place high demands on processor speed and occupy considerable bandwidth resources during transmission.

For the human eye, because the concentration of cone cells on the retina responsible for perceiving color and detail varies, the eye can only take in detail at the center of the gaze region, which corresponds to a viewing zone of 5 degrees relative to the gazed image; anything beyond the gaze region is perceived by the eye with blurred sharpness. The effective observation area of the human eye is thus approximately circular. That is, for an image (particularly a high-resolution image), only the image within the central circular region is ultimately captured effectively by the eye, while the image in the edge region outside that circle does not fall within the eye's effective observation area.

However, the output of current image processors can only be a rectangular image, and the image transmitted in the channel can likewise only be rectangular. Thus, in the prior art, a rectangular image must still be transmitted in the channel, yet within that transmitted rectangle, as far as the user's effective viewing experience is concerned, only the image in the central circular region is effectively observed (also called the effective image), while the image in the edge region outside the circle is not effectively observed (also called the useless image). Therefore, a portion of the rectangular image transmitted in the channel (i.e., the image of the edge region) wastes channel bandwidth to some extent.
Summary

According to one aspect of the present disclosure, an image processing method is provided, comprising: dividing an input image into regions to obtain a plurality of sub-images; determining a part of the plurality of sub-images as an image to be output; stitching the sub-images of the image to be output to obtain a stitched image; and transmitting the stitched image, wherein the size of the stitched image is smaller than the size of the input image.

According to an embodiment of the present disclosure, the stitched image is a rectangular image.

According to an embodiment of the present disclosure, the stitched image is a non-rectangular image, and the method further comprises: filling a vacant area of the stitched image to form a rectangular image.

According to an embodiment of the present disclosure, stitching the sub-images of the image to be output comprises: determining the sub-image with the largest area among the sub-images, and moving the other sub-images relative to the sub-image with the largest area.

According to an embodiment of the present disclosure, the plurality of sub-images is divided based on the shape of a display area of a display apparatus.

According to another aspect of the present disclosure, an image fitting method is provided, comprising: receiving a stitched image obtained according to the image processing method described above; extracting the sub-images of the image to be output from the stitched image; and fitting the sub-images to obtain a display image, wherein fitting the sub-images means obtaining the display image by an operation opposite to the stitching process by which the sub-images were stitched into the stitched image.

According to an embodiment of the present disclosure, the image fitting method further comprises: calculating, for each sub-image in the image to be output, a fitting parameter of the sub-image; and fitting the sub-images based on their fitting parameters.

According to an embodiment of the present disclosure, the fitting parameters include an area-ratio parameter and an offset parameter, the area-ratio parameter including the width ratio and height ratio of the sub-image relative to the display area of the display apparatus, and the offset parameter including the starting position of the sub-image in the display area.

According to an embodiment of the present disclosure, fitting the sub-images includes fitting them according to the shape of the display area of the display apparatus.

According to yet another aspect of the present disclosure, an image display method is provided, comprising: dividing an input image into regions to obtain a plurality of sub-images; determining a part of the plurality of sub-images as an image to be output; stitching the sub-images of the image to be output to obtain a stitched image; transmitting the stitched image, wherein the stitched image is smaller than the input image; receiving the stitched image; extracting the sub-images of the image to be output from the stitched image; and fitting the sub-images and displaying the resulting display image, wherein fitting the sub-images means obtaining the display image by an operation opposite to the stitching process by which the sub-images were stitched into the stitched image.

According to yet another aspect of the present disclosure, an image processing device is provided, comprising: an area dividing unit configured to divide an input image into regions to obtain a plurality of sub-images; a stitching unit configured to determine a part of the plurality of sub-images as an image to be output; and an output unit configured to stitch the sub-images of the image to be output to obtain a stitched image and transmit the stitched image, wherein the stitched image is smaller than the input image.

According to yet another aspect of the present disclosure, an image fitting device is provided, comprising: a receiving unit configured to receive a stitched image obtained according to the image processing method of claim 1; and a fitting unit configured to extract the sub-images of the image to be output from the stitched image and fit the sub-images to obtain a display image, wherein fitting the sub-images means obtaining the display image by an operation opposite to the stitching process by which the sub-images were stitched into the stitched image.

According to yet another aspect of the present disclosure, an image processing apparatus is provided, comprising one or more processors and one or more memories, the processors being configured to run computer instructions to perform the image processing method described above, or to perform the image fitting method described above.

According to yet another aspect of the present disclosure, a display apparatus is provided, comprising a display screen and at least one processor, the at least one processor being configured to receive a stitched image obtained according to the image processing method of the present disclosure and to perform the image fitting method described above to obtain a display image; the display screen is configured to display the display image.

According to an embodiment of the present disclosure, the display apparatus further comprises one or more sensors configured to track and determine gaze point data of a user within the display area of the display apparatus, and the at least one processor of the display apparatus is further configured to transmit the gaze point data to the image processing apparatus described above.

According to an embodiment of the present disclosure, the at least one processor of the display apparatus is further configured to acquire the shape of the display area of the display apparatus and transmit the shape to the image processing apparatus described above.

According to an embodiment of the present disclosure, the shape of the display area of the display apparatus includes a non-rectangular shape.

According to yet another aspect of the present disclosure, a computer-readable storage medium is provided, configured to store computer instructions that, when run by a processor, perform the image processing method described above, or perform the image fitting method described above.
Brief Description of the Drawings

To explain the technical solutions of the embodiments of the present disclosure more clearly, the drawings of the embodiments are briefly introduced below. Obviously, the drawings described below relate only to some embodiments of the present disclosure, and do not limit the present disclosure.

FIG. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure.

FIG. 2 shows a schematic diagram of the region division and stitching of an image in the image processing method according to an embodiment of the present disclosure.

FIG. 3 shows a schematic diagram of building a model that performs the stitching process shown in FIG. 2 in the image processing method according to an embodiment of the present disclosure.

FIG. 4 shows a schematic diagram of the output stitched image of FIG. 2 according to an embodiment of the present disclosure.

FIG. 5 shows a schematic diagram of an image displayed on a display apparatus according to an embodiment of the present disclosure.

FIG. 6 shows a schematic diagram of an image stitching process according to another embodiment of the present disclosure.

FIG. 7 shows a schematic diagram of a stitched image obtained by the stitching process shown in FIG. 6.

FIG. 8 shows a schematic diagram of an image displayed on a display apparatus according to an embodiment of the present disclosure.

FIG. 9A shows a schematic diagram of an image processing device according to an embodiment of the present disclosure.

FIG. 9B shows a schematic diagram of an image fitting device according to an embodiment of the present disclosure.

FIG. 10 shows a schematic diagram of an image processing system according to an embodiment of the present disclosure.

FIG. 11 shows a schematic diagram of a display device included in the image processing system according to an embodiment of the present disclosure.

FIG. 12 shows a schematic diagram of an image processing apparatus according to an embodiment of the present disclosure.

FIG. 13 shows a schematic diagram of a display apparatus according to an embodiment of the present disclosure.

FIG. 14 shows a flowchart of an image fitting method according to an embodiment of the present disclosure.

FIG. 15 shows a flowchart of an image display method according to an embodiment of the present disclosure.
Detailed Description

Various embodiments of the present disclosure will be described in detail with reference to the drawings. Note that in the drawings, the same reference numerals are given to components having substantially the same or similar structures and functions, and repeated descriptions of them are omitted.

To make the objectives, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some of the embodiments of the present disclosure, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the described embodiments without creative effort fall within the protection scope of the present disclosure.

Unless otherwise defined, technical or scientific terms used herein shall have the ordinary meaning understood by a person of ordinary skill in the field to which this disclosure belongs. Words such as "first", "second", and the like used in this disclosure do not denote any order, quantity, or importance, but are merely used to distinguish different components. Likewise, words such as "comprise" or "include" mean that the element or item preceding the word covers the elements or items listed after the word and their equivalents, without excluding other elements or items. Words such as "connect" or "couple" are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "Up", "down", "left", "right", and the like are only used to indicate relative positional relationships; when the absolute position of the described object changes, the relative positional relationship may change accordingly.

In the embodiments of the present disclosure, a rendering engine is used to create and acquire images and to perform image processing operations. The rendering engine may, for example, be the Unity rendering engine, or another image processing tool; the present disclosure does not limit this.

For example, the display area to be rendered that the rendering engine targets may be the entire scene (Scene) targeted by the renderer (Renderer) performing the rendering.

For example, the rendering engine may construct the scene (space, lighting, objects, and so on) from the data used to build the scene (attribute data Attributes, uniform data Uniforms, texture data Texture, and the like).

For example, the rendering engine may construct the scene by executing relevant operation instructions through a shader of a graphics processor, a central processing unit, or other logic circuits capable of performing rendering operations.

In the rendering engine, multiple virtual cameras are arranged, and various parameters can be set so that each virtual camera obtains the desired image, used to provide a viewable scene view during the rendering process, for example an orthographic projection camera or a perspective projection camera. The virtual camera imports the captured image into the rendering engine, and the parameters and angle of the virtual camera are set and adjusted to capture the image to be processed as the input image.

In the rendering engine, to fulfill the desired image processing purpose, the required algorithms can be designed; these algorithms may run in the processor executing the rendering engine and be implemented as a software product, or be hardwired in hardware circuits that perform the desired image processing. Whether implemented in hardware or software, the process performing the desired image processing can be abstracted as a model performing the corresponding image processing function; for example, dividing an image may be called a division model, stitching an image a stitching model, fitting an image a fitting model, and so on.

As far as the inventors are aware, owing to the physiological structure of the human eye, namely the varying concentration of cone cells on the retina responsible for perceiving color and detail, the eye can usually only take in detail within the gaze region during observation. For display regions beyond the eye's gaze region, sharpness gradually decreases due to blurring, because of the limited distribution of the cone cells that produce visual perception. Therefore, in a displayed image, the image of the central region the eye attends to (which may be the central region of the whole image, or the central region of the local image region the eye gazes at) is the region that can ultimately be effectively attended to by the eye, while the image not effectively attended to, i.e., the image outside the central region, may be called the useless image.
Therefore, to minimize the channel bandwidth occupied by transmitting the useless image (i.e., the edge image), embodiments of the present disclosure provide an image processing method; FIG. 1 shows a flowchart of the image processing method.

In step S101, the input image is divided into regions to obtain a plurality of sub-images. According to an embodiment of the present disclosure, the division of the sub-images may be based on the shape of the display area of the display apparatus. According to another embodiment of the present disclosure, the sub-images may also be divided based on acquired gaze point data of the user in the display area.

Next, in step S102, a part of the plurality of sub-images is determined as the image to be output. Sub-images not determined as the image to be output are discarded as discarded images; that is, only the sub-images in the image to be output need to be transmitted, without transmitting the discarded images.

Next, in step S103, the sub-images of the image to be output are stitched to obtain a stitched image.

In general, since the central image region the human eye attends to is approximately circular, while the image transmitted in a channel is generally rectangular, the approximately circular image region needs to be rectangularized to facilitate transmission. Considering that the edge of a circular image is curved, a rectangular image cannot easily be formed merely by moving the relative positions of the sub-images of the circular image and deflecting their angles. Therefore, in one embodiment of the present disclosure, the shape of the image to be output may be non-rectangular, such as a hexagon or an octagon; compared with a rectangular image to be output, these non-rectangular images more closely approximate a circular shape. Displaying such non-rectangular images is also called "abnormal display". The sub-images of such a non-rectangular image can be moved appropriately and stitched into a rectangular image suitable for channel transmission.

In step S104, the stitched image is transmitted. Since only a part of the sub-images is determined as the image to be output in step S102, and the stitched image is obtained in step S103 based only on the sub-images of the determined image to be output, the transmitted stitched image includes only the sub-images determined as the image to be output, while the other sub-images are discarded and not transmitted; the size of the stitched image is therefore smaller than the size of the input image.

By performing the above processing steps of the image processing method according to the embodiments of the present disclosure, sub-images that do not need to be displayed on the display screen for the user are discarded, and by appropriately stitching the sub-images determined as the image to be output, a stitched image suitable for channel transmission and smaller than the original input image can be obtained. Thus, compared with transmitting the original input image, transmitting the smaller stitched image through the channel saves channel resources.

According to an embodiment of the present disclosure, an image fitting method is also provided; FIG. 14 shows a flowchart of the image fitting method.

First, in step S201, a stitched image is received. The stitched image may be one obtained according to the image processing method described above, stitched from the sub-images of the image to be output.

Next, in step S202, the sub-images of the image to be output are extracted from the stitched image. Then, in step S203, the sub-images are fitted together to obtain a display image. According to an embodiment of the present disclosure, fitting the sub-images means obtaining the display image by an operation opposite to the stitching process by which the sub-images were stitched into the stitched image.

According to an embodiment of the present disclosure, the image fitting method may further include calculating, for each sub-image in the image to be output, a fitting parameter of the sub-image, and then fitting the sub-images based on their fitting parameters.

According to an embodiment of the present disclosure, the fitting parameters may include an area-ratio parameter and an offset parameter; the area-ratio parameter includes the width ratio and height ratio of the sub-image relative to the display area of the display apparatus, and the offset parameter includes the starting position of the sub-image in the display area.

According to an embodiment of the present disclosure, fitting the sub-images may include, for example, fitting them according to the shape of the display area of the display apparatus.

For example, the shape of the display apparatus matches the shape of the image to be output, so that the output image can be projected onto the display screen most effectively. Therefore, the stitched image needs to be restored to the original image to be output when it reaches the display apparatus side in order to be displayed completely on the display screen. Since the stitching process transforms the image to be output into the stitched image, and the fitting process restores the stitched image to the image to be output, fitting the sub-images means obtaining the display image by an operation opposite to the stitching process by which the sub-images were stitched into the stitched image.

For example, the so-called reverse operation may mean that the adjustment of each sub-image's position during fitting is opposite to the adjustment of the corresponding sub-image's position during stitching.

For example, steps S101 to S104 above may be performed in the rendering engine, and steps S201 to S203 may be performed in the display apparatus. Specifically, steps S201 to S203 may be performed by a driver integrated circuit (Driver IC) of the display apparatus.
To illustrate the processes of the image processing method and the image fitting method of the present disclosure, some specific embodiments are provided below. FIG. 2 shows a schematic diagram of dividing sub-image regions and stitching the image to be output into a stitched image in the image processing method of the present disclosure.

For example, in FIG. 2, the rectangular image on the left is the original input image 100, whose size may be 1920*1080. As in step S101 above, the input image may be divided into seven sub-images according to the region distribution, shown as sub-images 101-107 in the left image of FIG. 2. The sub-images labeled 101-103 shown in FIG. 2 are determined as the image to be output and together form a hexagonal image, while the other four sub-images 104-107 are taken as discarded images, i.e., not transmitted to the display apparatus. The rectangular image on the right of FIG. 2 is the stitched image 200 obtained by the stitching process, with size 1380*1080. By comparing the left and right rectangular images, the process of stitching the sub-images 101-103 of the image to be output into the stitched image 200 can be expressed as: translating the sub-image labeled 102 in the left figure to the lower left relative to the sub-image labeled 101, and translating the sub-image labeled 103 in the left figure to the upper right relative to the sub-image labeled 101; combining the moved sub-images 102 and 103 with the unmoved sub-image 101 then yields the rectangular stitched image 200 shown on the right of FIG. 2.

The size of the stitched image on the right of FIG. 2 is only (1380*1080)/(1920*1080) = 71.875% of the size of the left rectangular image; that is, compared with transmitting the left rectangular image, transmitting only the right stitched image saves 28.125% of the transmission channel bandwidth.
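The percentages above can be verified numerically (a sketch of the arithmetic only):

```python
# FIG. 2 example: the 1380*1080 stitched image replaces the 1920*1080 input.
ratio = (1380 * 1080) / (1920 * 1080)
assert ratio == 0.71875        # stitched image is 71.875% of the original size
assert 1 - ratio == 0.28125    # 28.125% of channel bandwidth saved
```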
According to embodiments of the present disclosure, the process of stitching the image to be output (a non-rectangular image) containing sub-images 101-103 into a stitched image may involve many different movement paths for the sub-images, and may include various movement modes such as translation and rotation.

In some embodiments, to improve the execution efficiency of the algorithm and reduce its complexity, the sub-image with the largest area among the sub-images of the image to be output may first be determined, such as sub-image 101 in FIG. 2; the determined largest sub-image can then be held fixed, and the other sub-images, such as sub-images 102 and 103 in FIG. 2, are moved relative to it to obtain the stitched image.

For example, the sub-image with the largest area may include the user's gaze point region.

For example, the stitching process may be implemented by a model. The model may be built in advance as needed, for example using modeling software, and then imported into the rendering engine as a file for stitching images.

FIG. 3 is a schematic diagram 300 of building a model that performs the stitching process shown in FIG. 2 in the image processing method according to an embodiment of the present disclosure.

Since images for display are usually presented on the screen of the display apparatus as a video stream, operations such as capture, stitching, transmission, and display need to be repeated for every frame of the video stream. In some embodiments, a model can be built in advance to perform the sub-image region division and stitching process shown in FIG. 2. In this way, when processing each frame, the model can simply be invoked to stitch the image immediately, eliminating the computation of re-determining the stitching path for every frame.

The modeling process requires providing the modeling software with the sub-images to be stitched, and the size and shape of the desired output stitched image need to be preset. That is, the shapes and sizes of the sub-images and their relative positional relationships are all specific, and the size and shape of the desired stitched image are also specific, so the movement trajectories of the sub-images are likewise determined.

For example, the model built in FIG. 3 is used to generate a stitched image of the selected size based on the three sub-images labeled 101-103 in FIG. 2. In the examples of FIGS. 2 and 3, the stitched image of the selected size may be a rectangular image. It should be understood that the image of the selected size may also have other shapes; the present disclosure does not limit this.

After the model is built, it can be imported into the rendering engine for direct use. The rendering engine provides the sub-images of the image to be output to the model, and acquires the stitched image generated by the model for transmission to the display apparatus.
FIG. 4 is a schematic diagram 400 of the output stitched image of FIG. 2 according to an embodiment of the present disclosure. The top and bottom views in FIG. 4 are the Scene view 411 and Game view 412 of the stitched image as viewable in the rendering engine, respectively. The Scene view is a 3D image, and the stitched image can be viewed from different perspectives by adjusting the angle of the virtual camera; the Game view is the planar image of that 3D image. The stitched image is adjusted as appropriate by inspecting the Game view, so that the two-dimensional stitched image (a rectangular image) can be transmitted in the channel.

As mentioned above, when building the model, the multiple sub-images to be stitched need to be provided, for example by providing the sub-images and their relative positional relationships to the modeling software.

For example, the fitting parameters of each sub-image are determined in advance. The fitting parameters serve two purposes: first, for building the model, to determine the initial relative positions of the sub-images, from which the movement trajectory of each sub-image is computed to form the desired stitched image; second, they can be used on the display apparatus side, for example by the display apparatus, to restore the stitched image received from the channel into a display image with the same shape as the original image to be output.

The determination of the fitting parameters is described in detail below with reference to FIG. 2. Since the fitting parameters reflect the relative positional relationships of the sub-images in the original image to be output, they can be determined from the left rectangular image in FIG. 2. Specifically, the fitting parameters may include an area-ratio parameter Scale and an offset parameter Offset. Calling the area where the original input image is located the display area (which may, for example, be the scene containing the original input image), the area-ratio parameter Scale may include the width ratio and height ratio of the sub-image relative to the display area, and the offset parameter may include the specific position of the sub-image in the display area.

For example, the offset parameter of a sub-image may be determined, after padding the sub-image to its minimal bounding rectangle, as the offset ratios of the lower-left corner of that minimal rectangle relative to the lower-left corner of the display area in the width direction and the height direction, respectively.

To simplify calculation, in the embodiments of the present disclosure, the endpoint at the lower-left corner of the display area (i.e., the original input image) is set as the reference point (0,0). It is easy to understand that choosing any other position as the reference point is also feasible.

Taking the left rectangular image in FIG. 2 as an example: for the sub-image labeled 101, its area-ratio parameter is Scale=Vector2(1380/1920, 1) and its offset parameter is Offset=Vector2(0, 0); for the sub-image labeled 102, Scale=Vector2(540/1920, 0.5) and Offset=Vector2(1380/1920, 0.5); for the sub-image labeled 103, Scale=Vector2(540/1920, 0.5) and Offset=Vector2(1380/1920, 0). Here, Vector2 denotes a two-dimensional vector.
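A sketch that reproduces the Scale/Offset values quoted above (display area 1920*1080; the sub-image sizes and positions are inferred from the quoted parameters):

```python
W, H = 1920, 1080  # display area (the original input image)

def fit_params(w, h, x, y):
    # Scale: (width ratio, height ratio) of the sub-image's minimal rectangle;
    # Offset: its lower-left corner relative to the display area's lower-left corner.
    return (w / W, h / H), (x / W, y / H)

assert fit_params(1380, 1080, 0, 0) == ((1380 / 1920, 1.0), (0.0, 0.0))            # 101
assert fit_params(540, 540, 1380, 540) == ((540 / 1920, 0.5), (1380 / 1920, 0.5))  # 102
assert fit_params(540, 540, 1380, 0) == ((540 / 1920, 0.5), (1380 / 1920, 0.0))    # 103
```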
For a built model, the shapes and sizes of the sub-images to be stitched and their relative positional relationships are already determined, and the positions of the sub-images in the stitched image are also determined. Therefore, in practice, after the model is built and imported into the rendering engine, the engine can label each sub-image and supply the sub-images to the model in parallel according to their labels, and the model can move each sub-image to the corresponding position according to its label to generate the desired stitched image.

Taking the stitching process of FIG. 2 as an example, the three sub-images to be output can be labeled 101-103 in sequence, as shown in the left rectangular image. Since the model is determined, the relative positions of the sub-images in the stitched image are also determined after stitching. Specifically, as shown in the right rectangular image of FIG. 2, in the stitched image the sub-image labeled 102 has been moved to the lower left of the sub-image labeled 101, and the sub-image labeled 103 has been moved to the upper left of the sub-image labeled 101. Since the positional relationships of the sub-images are deterministic both before and after stitching, once the sub-images are supplied to the built model according to their labels, the model can quickly move each sub-image to the position corresponding to its label and obtain the desired stitched image.

It is easy to understand that, since the fitting parameters can reflect the relative positional relationships of the sub-images, the labeling of the sub-images described above can be performed based on the sub-images' fitting parameters.

FIG. 5 is a schematic diagram 500 of a display image displayed on a display apparatus according to an embodiment of the present disclosure. After the rendering engine outputs the stitched image and transmits it through the channel to the display apparatus side, the display apparatus receives the stitched image, and a processor configured in the display apparatus (for example, a Driver IC) can perform the process of the image fitting method of the present disclosure (i.e., restore the received stitched image to the original image to be output) to obtain the display image for display on the display screen. For example, the display image shown in FIG. 5 corresponds to the image to be output shown in FIG. 2 and is therefore also a hexagonal image.
When the driving circuit of the display apparatus performs the image fitting (i.e., image restoration) operation of the present disclosure, since the fitting parameters contain the relative positional relationships of the sub-images before stitching, the driving circuit can fit (restore) the sub-images into the original image to be output based on the fitting parameters of the sub-images determined as described above.

For example, the driving circuit may also move each sub-image to the position corresponding to its label based on the sub-image labels described above. Taking FIG. 2 as an example, in the original image to be output, i.e., the image containing sub-images 101-103 in the left image of FIG. 2, the sub-image labeled 102 is at the upper right of the sub-image labeled 101, and the sub-image labeled 103 is at the lower right of the sub-image labeled 101. In this way, the image received from the channel can also be quickly restored to the original image to be output.

The shape of the display screen of the display apparatus (for example, the shape of the display area of the display panel) matches the shape of the image to be output. For example, the screen shape of the display apparatus shown in FIG. 5 is a hexagon. Of course, other shapes may be adopted according to actual conditions; the present disclosure does not limit this.

FIG. 6 shows a schematic diagram 600 of another image stitching process according to an embodiment of the present disclosure, in which the image to be output is an octagonal image.

As shown in FIG. 6, the square image on the left is the original input image 601, whose size may be 1000*1000. As in step S101 above, the input image may be divided into nine sub-images 401-409 according to the region distribution, where the sub-images labeled 401-405 shown in FIG. 6 are determined as the image to be output to the display and together form an octagonal image, while the other four sub-images at the edges of the square image will not be displayed. The square image on the right is the stitched image, whose size is close to 856*856 (with one rectangular region vacant). By comparing the left and right images, the sub-image labeled 401 can be determined as the one with the largest area among the five sub-images to be output; it can therefore be held fixed while the other sub-images are moved relative to it. For example, the sub-image labeled 402 is translated to the lower right relative to the sub-image labeled 401, the sub-image labeled 403 to the upper right, the sub-image labeled 404 to the lower right, and the sub-image labeled 405 to the lower left; combining the moved sub-images 402-405 with the unmoved sub-image 401 yields the approximately square stitched image 602 shown on the right of FIG. 6.

The approximately square image on the right of FIG. 6 is only 73.27% of the size of the left square image; that is, compared with transmitting the left square image, transmitting only the right stitched image saves 26.73% of the transmission channel bandwidth. For example, not every set of input sub-images can necessarily be stitched into the desired rectangular image of the desired size; the desired size value may be determined by the display area of the display screen. In that case, the rectangular image closest to the desired size value may be output as the stitched image, and the ratio of the area of the stitched image to the area of the rectangle with the desired size value lies in [0, 1]. For example, the stitched image may be chosen in a way that maximizes this ratio within the range [0, 1].
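One way to read the area-ratio criterion is as a selection rule over candidate rectangle sizes (a sketch with hypothetical numbers; the 288*288 vacant region is assumed purely for illustration):

```python
# Choose the candidate rectangle whose area ratio
# (stitched content area / rectangle area) is closest to 1.
content_area = 856 * 856 - 288 * 288   # stitched sub-images minus an assumed vacancy
candidates = [(856, 856), (900, 900), (1000, 1000)]
best = max(candidates, key=lambda wh: content_area / (wh[0] * wh[1]))
assert best == (856, 856)
```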
例如,输入到模型中各子图像可能无法刚好拼接为期望的矩阵图像,例如,如图6中右侧图像602所示出的,由子图像401-405拼接而成的拼接图像相比于矩形,空缺了如区域410所示的矩形区域。在此种情况下,渲染引擎可以对所述空缺区域410进行填充,从而将所述拼接图像填充为具有期望的尺寸值的矩形图像。例如,渲染引擎可以使用背景色或渐变色等来填充该空缺区域410,本公开不对空缺区域410的填充方式做出限制。
在一些实施例中,对于通过显示装置的驱动电路执行贴合操作的情况,容易理解,由于所填充的空缺区域410最终并不作为显示图像显示在显示装置的显示区域,该空缺区域410将在显示装置的驱动电路的贴合过程(还原过程)中被丢弃,但其却需要在信道中传输,因而,相对于可以刚好拼接为矩形图像的情况,可以节省的带宽略少。
图7是示出了根据图6所示的拼接过程得到的拼接图像的示意图700。图7所示的上下两幅视图分别是在渲染引擎中可查看的拼接图像的Scene视图701和Game视图702,其中Scene视图是3D图像,可以通过调整虚拟相机的角度来从不同视角查看拼接后的图像(接近正方形的图像);而Game视图是该3D图像的平面图像。通过查看Game视图来适当调整拼接图像,并将该二维的拼接图像相对于期望的矩形图像的空缺区域410进行填充(例如,填充背景色或渐变色等),最后将填充后的矩形图像在传输信道中传输。
FIG. 8 is a schematic diagram 800 of a display image according to an embodiment of the present disclosure. The display image shown in FIG. 8 is obtained from the stitched image 602 shown in FIG. 6. When the rendering engine outputs the filled stitched image and transmits it over the channel to the display apparatus side, the driver circuit in the display apparatus can restore the received stitched image to the original to-be-output image (i.e., the octagonal image) for presentation in the display area of the display screen. That is, the display apparatus first extracts the sub-images 401-405 of the to-be-output image from the stitched image 602 and then fits the sub-images together to obtain the display image. Of course, as described above, during image restoration on the display apparatus side, the image content of the filled vacant region 410 needs to be discarded.
According to an embodiment of the present disclosure, the to-be-output image may be determined based on the shape of the display area. For example, the shape of the display area of the display screen (e.g., square or rectangular) and its dimensions may first be obtained. When the shape of the display area is obtained as rectangular, the input image 100 may be divided into multiple sub-images 101-107 as shown in the left-hand image of FIG. 2. When the shape of the display area is obtained as square, the input image 601 may be divided into multiple sub-images 401-409 as shown in the left-hand image of FIG. 6.
FIG. 15 is a flowchart of an image display method according to an embodiment of the present disclosure.
First, in step S301, the input image is divided by region into multiple sub-images. According to an embodiment of the present disclosure, the division into sub-images may be based on the shape of the display area of the display apparatus. According to another embodiment of the present disclosure, the sub-images may also be divided based on acquired gaze-point data of the user within the display area.
Next, in step S302, a subset of the multiple sub-images is determined as the to-be-output image. Sub-images not determined to be part of the to-be-output image are discarded; that is, only the sub-images of the to-be-output image need to be transmitted, and the discarded sub-images need not be transmitted.
Next, in step S303, the sub-images of the to-be-output image are stitched to obtain a stitched image, and in step S304 the stitched image is transmitted; for example, the stitched image may be transmitted to a display apparatus for display.
The processing of steps S301-S304 above is similar to the image processing method shown in FIG. 1 and is not repeated here.
Next, as shown in FIG. 15, in step S305, the stitched image is received. The stitched image may be the one obtained in step S303 described above, formed by stitching the sub-images of the to-be-output image.
Next, in step S306, the sub-images of the to-be-output image are extracted from the stitched image. Then, in step S307, the sub-images are fitted together and the resulting display image is displayed, for example, on the display screen. According to an embodiment of the present disclosure, fitting the sub-images means obtaining the display image through an operation inverse to the stitching process by which the sub-images were stitched into the stitched image.
The processing of steps S305-S307 above is similar to the image fitting method shown in FIG. 14 and is not repeated here.
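The end-to-end flow of steps S301-S307 can be sketched as below. The 3x3 grid division, the center-only selection, and the dictionary "transmission" are illustrative assumptions for a round-trip demonstration, not the method's mandated partition or channel format:

```python
# End-to-end sketch of steps S301-S307: divide, select, stitch/transmit
# (represented here by packaging the kept sub-images), extract, and fit back.
def divide(image, rows=3, cols=3):
    """S301: split a row-major list-of-rows image into a grid of sub-images."""
    h, w = len(image) // rows, len(image[0]) // cols
    return {(r, c): [row[c * w:(c + 1) * w] for row in image[r * h:(r + 1) * h]]
            for r in range(rows) for c in range(cols)}

def select(subs, keep):
    """S302: keep only the to-be-output sub-images; discard the rest."""
    return {k: v for k, v in subs.items() if k in keep}

image = [[r * 9 + c for c in range(9)] for r in range(9)]
subs = divide(image)
kept = select(subs, keep={(1, 1)})            # keep the central sub-image only
stitched = {"payload": kept}                  # S303/S304: stitch and transmit
restored = stitched["payload"]                # S305/S306: receive and extract
assert restored[(1, 1)][0][0] == image[3][3]  # S307: content survives the trip
```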
FIG. 9A is a schematic diagram of an image processing device according to an embodiment of the present disclosure. The image processing device 900 shown in FIG. 9A may include: a region division unit 901, a stitching unit 902, and an output unit 903.
Specifically, the region division unit 901 is configured to divide the input image by region into multiple sub-images. A subset of the divided sub-images serves as the to-be-output image, and the sub-images of the input image outside the to-be-output image need not be transmitted. For example, the to-be-output image may have a non-rectangular shape such as a hexagon or an octagon; the present disclosure places no limitation on this. Moreover, in view of the focusing characteristics of the human eye, the sub-images in the central region of the input image may be taken as the image to be output, while the sub-images in the edge regions of the input image need not be transmitted; that is, the edge-region sub-images are discarded during transmission.
The stitching unit 902 is configured to stitch the sub-images of the to-be-output image to obtain a stitched image. As noted above, the to-be-output image may be non-rectangular, so its sub-images need to be moved appropriately in order to stitch them into a rectangular image suited to channel transmission.
The output unit 903 is configured to output the stitched image. Since at least one of the multiple sub-images is discarded after the input image is divided by region, the size of the stitched image is smaller than that of the input image.
With the image processing device 900 of embodiments of the present disclosure, sub-images that do not need to be displayed on the display screen for the user can be discarded, and the to-be-output sub-images can be stitched appropriately to obtain a stitched image that is suited to channel transmission and smaller than the original input image. Therefore, compared with transmitting the original input image, transmitting this smaller stitched image over the channel saves channel resources.
In some embodiments, the stitched image may be a rectangular image. Since the image that the image processing device 900 transmits over the channel needs to be rectangular, the image produced by the stitching unit is usually also required to be rectangular. The size of this rectangular image may be set in advance; preferably, a size most convenient for stitching into a rectangle may be chosen based on the shapes and sizes of the sub-images of the to-be-output image.
In some embodiments, the image processing device 900 may further include a filling unit 904 configured, when the stitched image is non-rectangular, to fill the vacant region of the stitched image relative to a rectangle of a set size so as to form a rectangular image of the set size. In some cases, however the sub-images of the to-be-output image are moved, they cannot be stitched exactly into a rectangular image of the chosen size, leaving a vacant region relative to that rectangle; the filling unit 904 then fills the vacant region, for example with a background color or a gradient color, to ensure that the image transmitted over the channel is rectangular.
In some embodiments, stitching the sub-images of the to-be-output image may include: determining the sub-image with the largest area among the sub-images, and moving the other sub-images relative to that largest sub-image. For example, the largest sub-image may be kept fixed and only the other sub-images moved. Fixing the largest sub-image and moving the others increases the running speed of the image stitching program. Moreover, to further reduce the complexity of the algorithm, the other sub-images may only be translated, without more complex movements such as rotation or flipping. It should be understood that translation is chosen here merely as an example from the standpoint of optimizing the algorithm; in practice, any manner of movement is acceptable, and the present disclosure places no limitation on this.
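The fix-the-largest, translation-only strategy can be sketched as follows. The tuple representation and the assumption that per-label translations are supplied precomputed are simplifications for illustration, not the disclosure's actual packing algorithm:

```python
# Stitch sub-images by keeping the largest-area sub-image fixed and
# translating the others. Each sub-image is (label, width, height, (x, y));
# `translations` supplies a precomputed (dx, dy) per movable label.
def stitch(sub_images, translations):
    largest = max(sub_images, key=lambda s: s[1] * s[2])  # area = w * h
    stitched = []
    for label, w, h, (x, y) in sub_images:
        if label == largest[0]:
            stitched.append((label, w, h, (x, y)))  # largest stays fixed
        else:
            dx, dy = translations[label]            # translation only:
            stitched.append((label, w, h, (x + dx, y + dy)))  # no rotate/flip
    return stitched

subs = [(401, 400, 400, (0, 0)), (402, 200, 100, (0, 400))]
out = stitch(subs, {402: (50, -150)})
assert out[1][3] == (50, 250)  # 402 translated; 401 (largest) left unmoved
```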
FIG. 9B is a schematic diagram of an image fitting device according to an embodiment of the present disclosure. As shown in FIG. 9B, the image fitting device 910 may include a receiving unit 911 and a fitting unit 912.
The image fitting device 910 is configured to receive a stitched image obtained by the image processing method described above, such as stitched image 200 or stitched image 602. The fitting unit 912 is configured to extract the sub-images of the to-be-output image from the stitched image and to fit the sub-images together to obtain a display image, where fitting the sub-images means obtaining the display image through an operation inverse to the stitching process by which the sub-images were stitched into the stitched image.
FIG. 10 is a schematic diagram of an image processing system according to an embodiment of the present disclosure. As shown in FIG. 10, the image processing system 1000 may include the image processing device 900 described above and a display device 1100. The display device 1100 receives, over a channel, the image output from the image processing device 900 and displays the image.
FIG. 11 is a schematic diagram of the display device 1100 included in the image processing system according to an embodiment of the present disclosure. As shown in FIG. 11, the display device 1100 may include an image extraction unit 1101 and a fitting unit 1102. The image extraction unit 1101 may be configured to extract the sub-images of the to-be-output image from the stitched image, and the fitting unit 1102 may be configured to fit the sub-images together to obtain a display image.
Typically, the shape of the display area of the display device matches the shape of the to-be-output image, so that the output image can be projected onto the display screen most effectively. Therefore, the stitched image arriving at the display device needs to be restored to the original to-be-output image for optimal presentation on the display screen. Since stitching transforms the to-be-output image into the stitched image, while fitting restores the stitched image to the to-be-output image, fitting the sub-images can be the inverse operation of stitching the sub-images of the to-be-output image.
In some embodiments, the sub-images may be fitted based on the fitting parameters of each sub-image, and the display device 1100 may further include a computing unit 1103 configured to compute, for each sub-image of the to-be-output image, the fitting parameters of that sub-image.
The fitting parameters can reflect the relative positional relationships of the sub-images in the original to-be-output image. Specifically, the fitting parameters may include an area-proportion parameter Scale and an offset parameter Offset. If the region occupied by the original input image is called the display area, then the Scale parameter comprises the width ratio and height ratio of the sub-image relative to the display area, and the Offset parameter specifies the position of the sub-image within the display area; for example, it may be determined, after padding the sub-image to its minimal bounding rectangle, by the offset ratios of the lower-left corner of that minimal rectangle relative to the lower-left corner of the display area in the width and height directions, respectively. To simplify computation, in embodiments of the present disclosure the point at the lower-left corner of the display area (i.e., of the original input image) is taken as the reference point. It should be understood that a point at another position may also be chosen as the reference point; the present disclosure places no limitation on this.
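The Scale and Offset parameters described above reduce to simple ratios. A minimal sketch, assuming the sub-image is already padded to its minimal bounding rectangle and using the lower-left corner of the display area as the reference point; the example dimensions are illustrative:

```python
# Compute the fitting parameters of a sub-image relative to the display area.
# Scale = (width ratio, height ratio); Offset = offset ratios of the
# sub-image's lower-left corner in the width and height directions,
# measured from the display area's lower-left reference point.
def fitting_params(sub_w, sub_h, sub_x, sub_y, disp_w, disp_h):
    scale = (sub_w / disp_w, sub_h / disp_h)
    offset = (sub_x / disp_w, sub_y / disp_h)
    return scale, offset

# A 400x200 sub-image whose minimal rectangle's lower-left corner sits at
# (100, 300) within a 1000x1000 display area (illustrative numbers).
scale, offset = fitting_params(400, 200, 100, 300, 1000, 1000)
assert scale == (0.4, 0.2)
assert offset == (0.1, 0.3)
```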
As noted above, since the fitting parameters can reflect the relative positional relationships of the sub-images in the original to-be-output image, the stitched image can be restored to the original to-be-output image from the computed fitting parameters.
In some embodiments, the shape of the display area of the display device 1100 may be non-rectangular.
Embodiments of the present disclosure further provide a computer-readable storage medium storing a computer program that, when executed by a processor, can implement one or more steps of the image processing method according to embodiments of the present disclosure.
As shown in FIG. 12, embodiments of the present disclosure further provide an image processing apparatus 1200 including one or more processors 1201 configured to run computer instructions so as to perform one or more steps of the image processing method of any of the embodiments above, or one or more steps of the image fitting method described above.
Optionally, the image processing apparatus 1200 further includes a memory 1202 connected to the processor 1201 and configured to store the computer instructions.
The memory 1202 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
The processor 1201 may be a logic device with data processing capability and/or program execution capability, such as a central processing unit (CPU), a field-programmable gate array (FPGA), a microcontroller unit (MCU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or a graphics processing unit (GPU). One or more processors may be configured to perform the image processing method above concurrently as a parallel-computing processor group, or configured such that some processors perform some steps of the image processing method while other processors perform the remaining steps, and so on.
The computer instructions comprise one or more processor operations defined by the instruction set architecture of the corresponding processor; these computer instructions may be logically contained in and represented by one or more computer programs.
The image processing apparatus 1200 may further be connected to various input devices (e.g., a user interface, a keyboard), various output devices (e.g., a speaker, a network card), display devices, and the like, to enable interaction between the image processing apparatus and other products or users; details are not repeated here.
The connection may be a network connection, for example over a wireless network, a wired network, and/or any combination of the two. The network may include a local area network, the Internet, a telecommunications network, an Internet of Things based on the Internet and/or a telecommunications network, and/or any combination of the above. A wired network may, for example, communicate over twisted pair, coaxial cable, or optical fiber; a wireless network may, for example, use a 3G/4G/5G mobile communication network, Bluetooth, Zigbee, or Wi-Fi.
As shown in FIG. 13, embodiments of the present disclosure further disclose a display apparatus 1300. The display apparatus 1300 may include a display screen 1301 and at least one processor 1302. The at least one processor 1302 may be configured to receive a stitched image obtained by the image processing method described above, and may further be configured to perform the image fitting method described above to obtain a display image. The display screen 1301 may be configured to display the display image.
According to an embodiment of the present disclosure, the display apparatus 1300 may further be connected via a data transmission component 1303 to the image processing apparatus 1200, which includes at least one processor and can perform the image processing method described above to obtain a stitched image, and can also perform the image fitting method described above to obtain a display image. According to one embodiment of the present disclosure, the display apparatus 1300 may receive the stitched image output by the image processing apparatus 1200 via the data transmission component 1303.
The at least one processor 1302 in the display apparatus 1300 can extract the sub-images of the to-be-output image from the received stitched image and fit the sub-images together to obtain the display image.
The data transmission component 1303 is coupled to the driver circuit of the display apparatus 1300; for example, the data transmission component connects to an interface of the display screen (such as VGA, DVI, HDMI, or DP).
For example, the data transmission component 1303 may be a display cable matching the display screen interface.
For example, the data transmission component 1303 may be a wireless display signal transceiver, for example a wireless display transceiver capable of display functions such as AirPlay, DLNA, Miracast, WiDi, or Chromecast.
For example, the display apparatus 1300 may further include one or more sensors configured to track and determine the user's gaze-point data within the display area of the display apparatus 1300. The at least one processor 1302 of the display apparatus 1300 is further configured to transmit the gaze-point data to the image processing apparatus 1200 via the data transmission component 1303.
According to an embodiment of the present disclosure, the at least one processor 1302 of the display apparatus 1300 is further configured to obtain the shape of the display area of the display apparatus 1300 and transmit the shape to the image processing apparatus 1200 via the data transmission component 1303.
According to one embodiment of the present disclosure, the at least one processor of the image processing apparatus 1200 may be integrated into the driver circuit of the display apparatus 1300 to perform the image fitting method described above.
For example, the shape of the display area of the display apparatus 1300 includes non-rectangular shapes, such as a triangle, a hexagon, or an octagon.
Although example embodiments have been described herein with reference to the accompanying drawings, it should be understood that the example embodiments above are merely illustrative and are not intended to limit the scope of the present disclosure thereto. A person of ordinary skill in the art may make various changes and modifications therein without departing from the scope and spirit of the present disclosure. All such changes and modifications are intended to be included within the scope of the present disclosure as claimed in the appended claims.

Claims (18)

  1. An image processing method, comprising:
    dividing an input image by region to obtain a plurality of sub-images;
    determining a subset of the plurality of sub-images as a to-be-output image;
    stitching the sub-images of the to-be-output image to obtain a stitched image; and
    transmitting the stitched image, wherein the stitched image is smaller than the input image.
  2. The method of claim 1, wherein the stitched image is a rectangular image.
  3. The method of claim 1, wherein the stitched image is a non-rectangular image, the method further comprising:
    filling a vacant region of the stitched image to form a rectangular image.
  4. The method of any one of claims 1-3, wherein stitching the sub-images of the to-be-output image comprises:
    determining the sub-image with the largest area among the sub-images, and moving the other sub-images relative to the largest sub-image.
  5. The method of any one of claims 1-4, wherein the plurality of sub-images are divided based on a shape of a display area of a display apparatus.
  6. An image fitting method, comprising:
    receiving a stitched image obtained by the image processing method of claim 1;
    extracting the sub-images of the to-be-output image from the stitched image; and
    fitting the sub-images together to obtain a display image,
    wherein fitting the sub-images means obtaining the display image through an operation inverse to the stitching process by which the sub-images were stitched into the stitched image.
  7. The method of claim 6, further comprising:
    computing, for each sub-image of the to-be-output image, fitting parameters of the sub-image; and
    fitting the sub-images based on the fitting parameters of the sub-images.
  8. The method of claim 6 or 7, wherein the fitting parameters comprise an area-proportion parameter and an offset parameter,
    the area-proportion parameter comprising a width ratio and a height ratio of a sub-image relative to a display area of a display apparatus, and
    the offset parameter comprising a starting position of the sub-image in the display area.
  9. The method of any one of claims 6-8, wherein fitting the sub-images comprises fitting according to a shape of a display area of a display apparatus.
  10. An image display method, comprising:
    dividing an input image by region to obtain a plurality of sub-images;
    determining a subset of the plurality of sub-images as a to-be-output image;
    stitching the sub-images of the to-be-output image to obtain a stitched image;
    transmitting the stitched image, wherein the stitched image is smaller than the input image;
    receiving the stitched image;
    extracting the sub-images of the to-be-output image from the stitched image; and
    fitting the sub-images together and displaying the resulting display image,
    wherein fitting the sub-images means obtaining the display image through an operation inverse to the stitching process by which the sub-images were stitched into the stitched image.
  11. An image processing device, comprising:
    a region division unit configured to divide an input image by region to obtain a plurality of sub-images;
    a stitching unit configured to determine a subset of the plurality of sub-images as a to-be-output image; and
    an output unit configured to stitch the sub-images of the to-be-output image to obtain a stitched image and to transmit the stitched image,
    wherein the stitched image is smaller than the input image.
  12. An image fitting device, comprising:
    a receiving unit configured to receive a stitched image obtained by the image processing method of claim 1; and
    a fitting unit configured to extract the sub-images of the to-be-output image from the stitched image and to fit the sub-images together to obtain a display image,
    wherein fitting the sub-images means obtaining the display image through an operation inverse to the stitching process by which the sub-images were stitched into the stitched image.
  13. An image processing apparatus, comprising
    one or more processors, and
    one or more memories,
    the processors being configured to run computer instructions to perform the image processing method of any one of claims 1-5 or the image fitting method of any one of claims 6-9.
  14. A display apparatus, comprising a display screen and at least one processor,
    the at least one processor being configured to receive a stitched image obtained by the image processing method of claim 1, and
    to perform the image fitting method of any one of claims 6-9 to obtain a display image;
    the display screen being configured to display the display image.
  15. The display apparatus of claim 14, further comprising one or more sensors configured to track and determine gaze-point data of a user within a display area of the display apparatus, the at least one processor of the display apparatus being further configured to transmit the gaze-point data to the image processing apparatus of claim 13.
  16. The display apparatus of claim 14 or 15, wherein the at least one processor of the display apparatus is further configured to obtain a shape of the display area of the display apparatus and transmit the shape to the image processing apparatus of claim 13.
  17. The display apparatus of any one of claims 14-16, wherein the shape of the display area of the display apparatus includes a non-rectangular shape.
  18. A computer-readable storage medium configured to store computer instructions that, when run by a processor, perform the image processing method of any one of claims 1-5 or the image fitting method of any one of claims 6-9.
PCT/CN2019/078015 2018-04-11 2019-03-13 Image processing method, device and apparatus, image fitting method and device, display method and apparatus, and computer readable medium WO2019196589A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/641,537 US11783445B2 (en) 2018-04-11 2019-03-13 Image processing method, device and apparatus, image fitting method and device, display method and apparatus, and computer readable medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810321434.X 2018-04-11
CN201810321434.XA CN110365917B (zh) 2018-04-11 2018-04-11 Image processing method, computer product, display apparatus and computer-readable medium

Publications (1)

Publication Number Publication Date
WO2019196589A1 true WO2019196589A1 (zh) 2019-10-17

Family

ID=68163892

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/078015 WO2019196589A1 (zh) 2018-04-11 2019-03-13 Image processing method, device and apparatus, image fitting method and device, display method and apparatus, and computer readable medium

Country Status (3)

Country Link
US (1) US11783445B2 (zh)
CN (1) CN110365917B (zh)
WO (1) WO2019196589A1 (zh)



Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1980330A (zh) * 2005-12-08 2007-06-13 索尼株式会社 图像处理装置、图像处理方法和计算机程序
JP2008098870A (ja) * 2006-10-10 2008-04-24 Olympus Corp 撮像装置及び画像処理プログラム
CN102957842A (zh) * 2011-08-24 2013-03-06 中国移动通信集团公司 一种视频图像处理方法、装置及系统
CN103312916A (zh) * 2012-03-15 2013-09-18 百度在线网络技术(北京)有限公司 一种用于在移动终端传输图片的方法与装置
CN103581603A (zh) * 2012-07-24 2014-02-12 联想(北京)有限公司 一种多媒体数据的传输方法及电子设备
CN107317987A (zh) * 2017-08-14 2017-11-03 歌尔股份有限公司 虚拟现实的显示数据压缩方法和设备、系统



Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116468611A (zh) * 2023-06-09 2023-07-21 北京五八信息技术有限公司 图像拼接方法、装置、设备和存储介质
CN116468611B (zh) * 2023-06-09 2023-09-05 北京五八信息技术有限公司 图像拼接方法、装置、设备和存储介质

Also Published As

Publication number Publication date
US20210158481A1 (en) 2021-05-27
CN110365917B (zh) 2021-08-03
US11783445B2 (en) 2023-10-10
CN110365917A (zh) 2019-10-22


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19784862

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 21.01.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19784862

Country of ref document: EP

Kind code of ref document: A1