WO2022205924A1 - Image processing method and apparatus, display control method and apparatus, and display device - Google Patents


Info

Publication number
WO2022205924A1
WO2022205924A1 · PCT/CN2021/129899 (CN2021129899W)
Authority
WO
WIPO (PCT)
Prior art keywords: image, sub-pixel, pixels, edge
Prior art date
Application number
PCT/CN2021/129899
Other languages
English (en)
French (fr)
Inventor
耿立华 (Geng Lihua)
马希通 (Ma Xitong)
段然 (Duan Ran)
时晓东 (Shi Xiaodong)
Original Assignee
京东方科技集团股份有限公司 (BOE Technology Group Co., Ltd.)
Priority date (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司 (BOE Technology Group Co., Ltd.)
Priority to US 18/016,430, granted as US12008285B2
Publication of WO2022205924A1

Classifications

    • G06F 3/14 — Digital output to display device; cooperation and interconnection of the display device with other functional units
    • G06F 9/4401 — Bootstrapping
    • G06F 9/4406 — Loading of operating system
    • G06F 9/451 — Execution arrangements for user interfaces
    • G06T 3/40 — Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 7/90 — Determination of colour characteristics
    • G06V 10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/56 — Extraction of image or video features relating to colour
    • G06V 10/764 — Image or video recognition using classification, e.g. of video objects
    • G09G 3/20 — Control arrangements for matrix displays composed of individual elements
    • G09G 5/373 — Details of the operation on graphic patterns for modifying the size of the graphic pattern
    • G06T 2207/10024 — Color image (image acquisition modality)
    • G09G 2330/026 — Arrangements or methods related to booting a display
    • G09G 2340/0442 — Handling or displaying different aspect ratios, or changing the aspect ratio
    • G09G 2340/045 — Zooming at least part of an image, i.e. enlarging it or shrinking it
    • Y02D 10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present disclosure relates to the field of display technology, and in particular, to an image processing method and device, a display control method and device, and a display device.
  • When a display device (such as a monitor) is turned on, it takes a certain preparation time before it can display normally.
  • The display device can therefore be made to display a startup screen during the preparation time after it is turned on and before it can be used normally. In this way, the user is informed that the display device has been turned on and is starting up, which relieves the boredom and anxiety of waiting.
  • the startup screen may generally include information such as a company logo, a product model, and the like.
  • a first aspect provides an image processing method, comprising: using the image data of a first image as the image data of a base area; and, based on the image data of the first image, generating the image data of an expansion area according to an expansion strategy to obtain a second image.
  • the image data of the second image includes the image data of the base area and the image data of the expansion area.
  • the first image has a first resolution
  • the second image has a second resolution
  • the first resolution is lower than the second resolution.
  • generating the image data of the expansion area according to the expansion strategy includes: generating the image data of the expansion area based on the pixel value of at least one pixel in the edge of the first image, or generating the image data of the expansion area according to the pixel values of all pixels in the first image.
  • the first image includes edges of a solid color. Based on the pixel value of at least one pixel in the edge of the first image, generating the image data of the extended area according to the expansion strategy includes: using the pixel value of one pixel in the edge of the first image as the pixel value of each pixel in the extended area, so as to obtain the image data of the extended area.
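The solid-colour strategy above can be sketched as follows. This is an illustrative Python sketch, not the claimed implementation; the function name `expand_solid` and the nested-list image representation are assumptions.

```python
def expand_solid(first_image, second_w, second_h):
    """Use the first image as the base area (top-left) of the second image and
    fill the whole extension area with the value of one solid-coloured edge
    pixel. `first_image` is a list of rows of pixel values (e.g. RGB tuples)."""
    h, w = len(first_image), len(first_image[0])
    fill = first_image[0][0]  # any edge pixel will do; the edges are solid-coloured
    second = []
    for y in range(second_h):
        row = []
        for x in range(second_w):
            # base area: copy the first image; extension area: the edge colour
            row.append(first_image[y][x] if y < h and x < w else fill)
        second.append(row)
    return second
```

Because every extension pixel carries one edge value, the expanded image blends seamlessly into the original edges.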
  • the first image includes edges that are not solid colors.
  • generating image data of the extended area according to the expansion strategy includes: generating image data of the extended area according to the expansion strategy based on pixel values of multiple pixels in the edge of the first image.
  • the first image includes edges whose color is gradient in the column direction but constant in the row direction.
  • the extended area includes a first sub-area and a second sub-area other than the first sub-area. Wherein, the first sub-region and the base region are flush in the row direction.
  • generating the image data of the extended region according to the expansion strategy includes: generating, according to the pixel value of at least one pixel located at the edge in each row of pixels in the first image, the pixel value of each pixel in the corresponding row of pixels in the first sub-region; and, according to the change trend in the column direction of the pixel values of multiple pixels in the edge of the first image, obtaining the pixel values of the multiple rows of pixels in the second sub-region that change in the column direction, where the pixel values within each row of pixels are the same and each pixel value is within the valid range of pixel values.
  • generating the pixel value of each pixel in the corresponding row of pixels in the first sub-region according to the pixel value of at least one pixel located at the edge in each row of pixels in the first image includes: using the pixel value of one of the pixels located at the edge in that row as the pixel value of each pixel in the corresponding row of pixels of the first sub-region.
  • the pixel values of a column of pixels in the second sub-region constitute an arithmetic progression.
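A minimal sketch of this column-gradient strategy, assuming grayscale pixel values in 0..255 and a nested-list image. The name `expand_col_gradient` and the choice of taking the arithmetic progression's common difference from the last two edge rows are illustrative assumptions, not the patent's stated method.

```python
def expand_col_gradient(first_image, second_w, second_h):
    """Edge colour is gradient down the columns but constant along the rows.
    First sub-region (beside the base, flush in the row direction): each row
    repeats its own edge pixel. Second sub-region (the remaining rows): rows
    continue the column-direction arithmetic progression, clamped to 0..255."""
    h, w = len(first_image), len(first_image[0])
    # Base area + first sub-region: extend each row with its right-edge value.
    second = [first_image[y] + [first_image[y][-1]] * (second_w - w)
              for y in range(h)]
    # Common difference of the column-direction progression, taken from the
    # last two rows of the edge (an assumption for this sketch).
    d = first_image[-1][0] - first_image[-2][0]
    v = first_image[-1][0]
    for _ in range(second_h - h):
        v = max(0, min(255, v + d))      # stay within the pixel-value range
        second.append([v] * second_w)    # every pixel in the row is equal
    return second
```

Clamping to the valid range is what the claim's "each pixel value is within the range of pixel values" requires once the extrapolated progression would overshoot 0 or 255.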
  • the first image includes edges whose color is gradient in the row direction but constant in the column direction.
  • the extended area of the second image includes: a third sub-area and a fourth sub-area other than the third sub-area, and the third sub-area is flush with the base area in the column direction.
  • generating the image data of the extended area of the second image according to the expansion strategy includes: generating, according to the pixel value of at least one pixel located at the edge in each column of pixels in the first image, the pixel value of each pixel in the corresponding column of pixels in the third sub-region; and, according to the change trend in the row direction of the pixel values of multiple pixels in the edge of the first image, obtaining the pixel values of the multiple columns of pixels in the fourth sub-region that change in the row direction, where the pixel values within each column of pixels are the same and each pixel value is within the valid range of pixel values.
  • generating the pixel value of each pixel in the corresponding column of pixels in the third sub-region according to the pixel value of at least one pixel located at the edge in each column of pixels in the first image includes: using the pixel value of one of the pixels located at the edge in that column as the pixel value of each pixel in the corresponding column of pixels of the third sub-region.
  • the pixel values of a row of pixels in the fourth sub-region constitute an arithmetic progression.
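The row-direction case is the transpose of the previous one. A hedged grayscale sketch under the same assumptions (`expand_row_gradient` is a hypothetical name; the common difference is taken from the last two edge columns):

```python
def expand_row_gradient(first_image, second_w, second_h):
    """Edge colour is gradient along the rows but constant down the columns.
    Third sub-region (below the base, flush in the column direction): repeats
    each column's bottom-edge pixel. Fourth sub-region (remaining columns):
    continues the row-direction arithmetic progression, clamped to 0..255."""
    h, w = len(first_image), len(first_image[0])
    d = first_image[0][-1] - first_image[0][-2]  # common difference along a row
    second = []
    for y in range(second_h):
        # third sub-region rows reuse the bottom edge (columns are constant)
        src = first_image[y] if y < h else first_image[-1]
        row, v = list(src), src[-1]
        for _ in range(second_w - w):
            v = max(0, min(255, v + d))  # fourth sub-region: extrapolate
            row.append(v)
        second.append(row)
    return second
```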
  • the first image includes edges whose color is gradient in both the column direction and the row direction.
  • the extended area of the second image includes: a fifth sub-area, a sixth sub-area, and a seventh sub-area other than the fifth sub-area and the sixth sub-area, wherein the fifth sub-region is flush with the base region of the second image in the row direction, and the sixth sub-region is flush with the base region of the second image in the column direction.
  • generating image data of the expanded area according to the expansion strategy includes: according to the change trend in the row direction of the pixel values of multiple pixels located in a row in the edge of the first image, obtaining the pixel values of the corresponding row of pixels in the fifth sub-region, each pixel value being within the valid range, so as to obtain the image data of the fifth sub-region; according to the change trend in the column direction of the pixel values of multiple pixels located in a column in the edge of the first image, obtaining the pixel values of the corresponding column of pixels in the sixth sub-region, each pixel value being within the valid range, so as to obtain the image data of the sixth sub-region; and, based on the image data of the fifth sub-region and/or the image data of the sixth sub-region, obtaining the image data of the seventh sub-region.
  • obtaining the image data of the seventh sub-region includes: averaging, for each pixel in the seventh sub-region, the pixel values of its two adjacent pixels in the row direction and in the column direction, to obtain the pixel value of that pixel and thereby the image data of the seventh sub-region.
  • alternatively, obtaining the image data of the seventh sub-region includes: according to the change trend in the row direction of the pixel values of multiple pixels located in a row in the edge of the sixth sub-region adjacent to the seventh sub-region, obtaining the pixel value of the corresponding row of pixels in the seventh sub-region, each pixel value being within the valid range; or, according to the change trend in the column direction of the pixel values of multiple pixels located in a column in the edge of the fifth sub-region adjacent to the seventh sub-region, obtaining the pixel value of the corresponding column of pixels in the seventh sub-region, each pixel value being within the valid range.
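The first option (averaging the two adjacent pixels) can be sketched as follows. Grayscale sketch with a hypothetical `fill_corner` helper; it assumes the base area and the fifth/sixth sub-regions of `second` are already filled, and processes the corner in raster order so both neighbours are always known.

```python
def fill_corner(second, h, w, second_h, second_w):
    """Fill the seventh (corner) sub-region of `second`: each pixel becomes the
    average of its two already-known neighbours, one adjacent in the row
    direction (left) and one in the column direction (above)."""
    for y in range(h, second_h):          # rows below the base area
        for x in range(w, second_w):      # columns right of the base area
            second[y][x] = (second[y][x - 1] + second[y - 1][x]) // 2
    return second
```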
  • the first image includes edges with irregular colors.
  • Generating the image data of the extended area according to the pixel values of the pixels in the first image includes: averaging the pixel values of all pixels in the first image to obtain the pixel value of each pixel in the extended area.
  • the image processing method further includes: identifying the type of the first image according to pixel values of a plurality of pixels in an edge of the first image. Based on the image data of the first image, generating the image data of the expansion area according to the expansion strategy includes: based on the image data of the first image, generating the image data of the expansion area according to the expansion strategy corresponding to the type of the first image.
  • the type of the first image is a first type, a second type, a third type, a fourth type, or a fifth type.
  • the first type is configured to represent that the first image includes edges of a solid color.
  • the second type is configured to represent that the first image includes edges whose color is gradient in the column direction and constant in the row direction.
  • the third type is configured to represent that the first image includes edges whose color is gradient in the row direction and constant in the column direction.
  • the fourth type is configured to represent that the first image includes edges whose color is gradient in both the column direction and the row direction.
  • a fifth type is configured to represent that the first image includes edges with irregular colors.
  • determining the type of the first image includes: determining a first judgment result for each first sub-edge, wherein if the pixel values of the pixels in each row of pixels in the first sub-edge are approximately equal, the first judgment result includes "equal", otherwise "not equal"; and if the pixel values of the pixels in each row of pixels in the first sub-edge are gradient, the first judgment result includes "gradient", otherwise "not gradient".
  • If the first judgment result of each first sub-edge includes "gradient" and the second judgment result of each second sub-edge includes "equal", the first image is of the third type. If the first judgment result of each first sub-edge and the second judgment result of each second sub-edge all include "gradient", the first image is of the fourth type. If at least one of the first judgment results of the first sub-edges and the second judgment results of the second sub-edges includes "not equal" and "not gradient", the first image is of the fifth type.
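A rough sketch of this five-type classification. It assumes grayscale values, a hypothetical sub-edge width `border`, and deliberately simple tests for "approximately equal" and "gradient"; the patent does not specify its judgment criteria at this level of detail.

```python
def classify_edge(first_image, border=2):
    """Classify the edge of `first_image` into the five types described above.
    First sub-edges are taken row-wise, second sub-edges column-wise."""
    def equal(line):                      # approximately equal values
        return max(line) - min(line) <= 1
    def gradient(line):                   # constant non-zero step
        steps = {line[i + 1] - line[i] for i in range(len(line) - 1)}
        return len(steps) == 1 and 0 not in steps
    rows = first_image[:border] + first_image[-border:]   # first sub-edges
    cols_all = [list(c) for c in zip(*first_image)]
    cols = cols_all[:border] + cols_all[-border:]         # second sub-edges
    rows_eq, rows_gr = all(map(equal, rows)), all(map(gradient, rows))
    cols_eq, cols_gr = all(map(equal, cols)), all(map(gradient, cols))
    if rows_eq and cols_eq:
        return 1   # solid-coloured edges
    if rows_eq and cols_gr:
        return 2   # gradient in the column direction, constant in the row direction
    if cols_eq and rows_gr:
        return 3   # gradient in the row direction, constant in the column direction
    if rows_gr and cols_gr:
        return 4   # gradient in both directions
    return 5       # irregular
```

The returned type would then select the matching expansion strategy for the extended area.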
  • a display control method is provided, which is applied to a display control device.
  • the display control method includes: reading image data of a startup screen; executing the image processing method provided in any of the above embodiments to obtain image data of a second image, wherein the first image in the image processing method is the startup screen; and outputting the image data of the second image to control the display panel to display according to the image data of the second image.
  • outputting the image data of the second image includes outputting the image data of the second image in response to the display control apparatus being in a power-on initialization state.
  • the display control method further includes: in response to the end of the power-on initialization state of the display control device, outputting a working picture.
  • an image processing apparatus including: a first processing module and a second processing module.
  • the first processing module is configured to use image data of the first image as image data of the base region, wherein the first image has a first resolution.
  • the second processing module is configured to, based on the image data of the first image, generate image data of the extended area according to the expansion strategy, so as to obtain image data of the second image including the image data of the base area and the image data of the extended area, wherein the second image has a second resolution, and the first resolution is lower than the second resolution.
  • an image processing apparatus is provided, including a memory and a processor; the memory is configured to store computer program instructions, and the processor is configured to execute the computer program instructions, so that the image processing apparatus executes the image processing method provided in any of the above embodiments.
  • a display control device comprising: a reading module, an image processing device, and an output module.
  • the reading module is configured to read image data of the boot screen in response to the display control device being powered on.
  • the image processing apparatus is configured to execute the image processing method provided in any of the foregoing embodiments to obtain image data of the second image; wherein the first image in the image processing method is a startup screen.
  • the output module is configured to output image data of the second image to control the display panel to display according to the image data of the second image.
  • a display control apparatus is provided, including a memory and a processor; the memory is configured to store computer program instructions, and the processor is configured to execute the computer program instructions, so that the display control apparatus executes the display control method provided in any of the above embodiments.
  • a chip system is provided, including at least one chip configured to execute the image processing method provided by any of the foregoing embodiments, or the display control method provided by any of the foregoing embodiments.
  • a display device including: the display control device provided by any of the foregoing embodiments, or the chip system provided by any of the foregoing embodiments.
  • the display device also includes a display panel configured to display images.
  • a computer-readable storage medium stores computer program instructions which, when executed on a computer (e.g., a display device), cause the computer to execute the image processing method or the display control method provided in any of the above embodiments.
  • a computer program product includes computer program instructions which, when executed on a computer (for example, a display device), cause the computer to execute the image processing method provided in any of the above embodiments, or the display control method provided in any of the above embodiments.
  • a computer program is provided.
  • When the computer program is executed on a computer (e.g., a display device), it causes the computer to execute the image processing method provided by any of the above embodiments, or the display control method provided by any of the above embodiments.
  • FIG. 1 is a schematic diagram of a user performing a power-on operation on a display and of the display showing a boot screen;
  • FIG. 2 is a block diagram of a display device according to some embodiments;
  • FIG. 3A is a block diagram of a display control apparatus according to some embodiments;
  • FIG. 3B is a block diagram of a display control apparatus according to some embodiments;
  • FIG. 4 is a schematic diagram of a first image and a second image according to some embodiments;
  • FIG. 5 is a flowchart of a display control method according to some embodiments;
  • FIG. 6A is a schematic diagram of a first image according to some embodiments;
  • FIG. 6B is a schematic diagram of a second image according to some embodiments;
  • FIG. 7 is a flowchart of an image processing method according to some embodiments;
  • FIG. 8 is a schematic diagram of a first image according to some embodiments;
  • FIG. 9A is a schematic diagram of a first image and a second image according to some embodiments;
  • FIG. 9B is a schematic diagram of a first image and a second image according to some embodiments;
  • FIG. 9C is a schematic diagram of a first image and a second image according to some embodiments;
  • FIG. 9D is a schematic diagram of a first image and a second image according to some embodiments;
  • FIG. 9E is a schematic diagram of a first image and a second image according to some embodiments;
  • FIG. 10 is a block diagram of a display control apparatus according to some embodiments;
  • FIG. 11 is a block diagram of an image processing apparatus according to some embodiments;
  • FIG. 12 is a block diagram of a display control apparatus according to some embodiments.
  • The terms "first" and "second" are only used for descriptive purposes and should not be construed as indicating or implying relative importance or implicitly indicating the number of indicated technical features. Thus, a feature defined as "first" or "second" may expressly or implicitly include one or more of that feature.
  • plural means two or more.
  • the expressions “coupled” and “connected” and their derivatives may be used.
  • the term “connected” may be used in describing some embodiments to indicate that two or more components are in direct physical or electrical contact with each other.
  • the term “coupled” may be used in describing some embodiments to indicate that two or more components are in direct physical or electrical contact.
  • the terms “coupled” or “communicatively coupled” may also mean that two or more components are not in direct contact with each other, yet still co-operate or interact with each other.
  • the embodiments disclosed herein are not necessarily limited by the content herein.
  • "At least one of A, B, and C" has the same meaning as "at least one of A, B, or C"; both include the following combinations: A only, B only, C only, A and B, A and C, B and C, and A, B, and C.
  • "A and/or B" includes the following three combinations: A only, B only, and A and B.
  • the term “if” is optionally construed to mean “when” or “at” or “in response to determining” or “in response to detecting,” depending on the context.
  • the phrases "if it is determined that ..." or "if a [stated condition or event] is detected" are optionally interpreted to mean "upon determining ...", "in response to determining ...", "upon detecting the [stated condition or event]", or "in response to detecting the [stated condition or event]".
  • the power-on process of the display device may include: first, the display receives a power-on instruction.
  • For example, when the display is in a power-off state, the user performs a power-on operation on the display.
  • For example, the user can press the power switch of the display; for another example, the user can perform a power-on operation on the display through a power-on gesture, an infrared remote control, and the like.
  • A power-on instruction may also be sent to the display through a host connected to the display. Then, in response to the power-on instruction, the display displays the start-up screen shown in FIG. 1.
  • The start-up image can be stored in the memory of the display by the manufacturer in advance (for example, before the product leaves the factory); when the start-up image needs to be displayed, it can be retrieved from the memory of the display for display. After that, the display can show the splash screen of the operating system running on the host.
  • For a high-resolution display, the displayed picture is an image with a correspondingly high resolution.
  • The image resolution of the startup screen of such a display is therefore also correspondingly high.
  • Storing the higher-resolution startup image in the memory of the display occupies a larger storage space, which not only consumes a large amount of storage resources but also makes the startup image take longer to load before it can be displayed.
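As back-of-the-envelope motivation for expanding a small first image instead of storing a full-resolution one, here is some illustrative arithmetic. It assumes raw 24-bit RGB storage and example resolutions chosen for this sketch; real startup images may be compressed.

```python
def raw_size_bytes(width, height, bytes_per_pixel=3):
    """Raw size of an uncompressed image at 3 bytes (24-bit RGB) per pixel."""
    return width * height * bytes_per_pixel

small = raw_size_bytes(480, 270)    # a low-resolution first image
large = raw_size_bytes(3840, 2160)  # a 4K startup screen stored directly
ratio = large // small              # how much more storage the 4K image needs
```

With these example numbers the directly stored 4K image is 64 times larger, with a correspondingly longer load time.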
  • In view of this, an embodiment of the present disclosure provides a display device, which is a product with an image display function, such as: a display, a TV, a billboard, a digital photo frame, a laser printer with a display function, a telephone, a mobile phone, a personal digital assistant (PDA), a digital camera, a camcorder, a viewfinder, a navigator, a vehicle display, a large-area display wall, a home appliance, information query equipment (such as business query equipment of e-government, banking, hospital, and electric power departments), a monitor, and the like.
  • the display device may include: a display module 200 and a display control device 100 coupled to the display module.
  • the display module 200 is configured to display an image (screen)
  • the display control device 100 is configured to execute a display control method and output image data to the display module 200 to control the display module 200 to display an image corresponding to the image data.
  • the display module 200 may include a timing controller (Timing Controller, abbreviated as TCON), a data drive circuit (i.e., a source drive circuit), a scan drive circuit, and a display panel (Display Panel, abbreviated as DP; also known as a display screen).
  • the display panel may be an OLED (Organic Light Emitting Diode, organic light-emitting diode) panel, a QLED (Quantum Dot Light Emitting Diodes, quantum dot light-emitting diode) panel, an LCD (Liquid Crystal Display, liquid crystal display) panel, micro LED (including: miniLED or microLED) panels, etc.
  • the display panel may include a plurality of sub-pixels.
  • the number and distribution of the plurality of sub-pixels included in the display panel determine the resolution of the display panel, that is, the resolution of the display module 200 or the display device.
  • For example, the display panel includes M × N physical pixels, and each physical pixel includes a red sub-pixel (R sub-pixel), a green sub-pixel (G sub-pixel), and a blue sub-pixel (B sub-pixel).
  • In this case, the resolution of the display panel is M × N.
  • For another example, the display panel includes (M × N)/2 R sub-pixels, M × N G sub-pixels, and (M × N)/2 B sub-pixels, which form M × N virtual pixels and can display an image with a resolution of M × N, where the R sub-pixels and the B sub-pixels can be shared by different virtual pixels.
  • In this case, the resolution of the display panel is also M × N.
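The sub-pixel arithmetic above can be checked with a small sketch (`subpixel_counts` is a hypothetical helper; it assumes M × N is even in the shared layout):

```python
def subpixel_counts(m, n, virtual=False):
    """Sub-pixel counts for an M x N panel. With real RGB pixels, each physical
    pixel has one R, one G, and one B sub-pixel; in the shared-sub-pixel
    (virtual pixel) layout there are (M*N)/2 R, M*N G, and (M*N)/2 B
    sub-pixels, with R and B shared between adjacent virtual pixels."""
    if virtual:
        return m * n // 2, m * n, m * n // 2
    return m * n, m * n, m * n
```

Either way the panel resolution is M × N, but the virtual layout needs only two thirds as many sub-pixels in total.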
  • Resolution is generally expressed in multiplicative form.
  • the resolution of the display panel may be 1920 × 1080, 4096 × 2160, or 8192 × 4320, etc., indicating that the display panel contains 1920 × 1080, 4096 × 2160, or 8192 × 4320 physical or virtual pixels. The higher the resolution, the larger the number of pixels.
  • the TCON is used to convert the received data signals (eg, image data output from the display control device 100 ) and control signals into data signals and control signals suitable for the data driving circuit and the scanning driving circuit, so as to realize the image display of the display panel.
  • the input interface of the TCON may be at least one of a TTL interface, an LVDS interface, an eDP interface, and a V-by-One interface.
  • the output interface of the display control apparatus 100 may be at least one of a TTL interface, an LVDS interface, an eDP interface, a V-by-One interface, and the like.
  • the TCON may be integrated into the display control device 100 .
  • the data driving circuit may be a source driving chip, such as a driving IC.
  • the data driving circuit is configured to provide a driving signal (also called a data driving signal, which may include a voltage or current corresponding to the digital signal) to each sub-pixel in the display panel in response to the data signal (i.e., the digital signal) and the control signal sent by the TCON.
  • the data driving circuit may be integrated in the display control apparatus 100 .
  • the scan driver circuit can be a scan driver chip, such as a driver IC, bonded to the display panel; it can also be arranged in the display panel, in which case it may be called a GOA (Gate Driver on Array) circuit, i.e., a scan driver circuit arranged on the array substrate.
  • the scan driving circuit is configured to provide a scan signal to each row of sub-pixels in the display panel in response to the control signal sent by the TCON.
  • the display control apparatus 100 may be a system-on-chip, which may include at least one chip, configured to execute the display control method.
  • the chip may be a programmable logic device, for example, a field programmable gate array (Field Programmable Gate Array, FPGA for short), a complex programmable logic device (CPLD), and the like.
  • the chip may also be a SoC (System-on-a-Chip, system on a chip) chip.
  • the display control apparatus 100 is a chip system including a SoC chip and an FPGA chip, and is configured to execute the display control method.
  • the chip system may include: a SoC board including a SoC chip and an FPGA board including an FPGA chip.
  • the chip system may include: a board including a SoC chip and an FPGA chip.
  • the display control apparatus 100 is a chip system, including an FPGA chip, configured to execute the display control method.
  • the chip system may be an FPGA board including an FPGA chip.
  • the display control apparatus 100 may include at least one processor 101 and at least one memory 102 .
  • at least one memory 102 stores a computer program
  • at least one processor 101 is configured to run the computer program in the at least one memory 102, so that the display control apparatus 100 executes the display control method.
  • the memory 102 may include high-speed random access memory and may also include non-volatile memory, such as magnetic disk storage devices and flash memory devices; it may also be read-only memory (ROM) or another type of static storage device that can store static information and instructions, random access memory (RAM) or another type of dynamic storage device that can store information and instructions, one-time programmable memory (One Time Programmable, OTP), electrically erasable programmable read-only memory (EEPROM), a magnetic disk storage medium, FLASH (flash memory) or another magnetic storage device, or any other medium that can carry or store program code in the form of instructions or data structures and can be accessed by a computer, without limitation.
  • the memory 102 may exist independently and be connected to the processor 101 through a communication line.
  • the memory 102 may also be integrated with the processor 101 .
  • the processor 101 is used to implement image processing, and may be one or more general-purpose central processing units (CPUs), microcontroller units (MCUs), logic devices (Logic), application-specific integrated circuits (ASICs), graphics processing units (GPUs), or integrated circuits for controlling program execution in some embodiments of the present disclosure; the CPU may be a single-core processor (single-CPU) or a multi-core processor (multi-CPU).
  • a processor herein may refer to one or more devices, circuits, or processing cores for processing data (eg, computer program instructions, etc.).
  • an embodiment of the present disclosure provides a display control method.
  • the display control method provided by the embodiment of the present disclosure may utilize image data of a lower-resolution startup screen (eg, the first image 510) stored in the display device, so that the display device displays a higher-resolution startup screen (eg, the second image 520).
  • for a display device with a higher resolution, the image data of a boot screen with a lower resolution can be stored in the display device, which solves the problems that the image data of a higher-resolution boot screen occupies a large storage space and takes a long time to load.
  • FIG. 5 is a flowchart of a display control method provided by an embodiment of the present disclosure. Referring to FIGS. 4 and 5, the display control method includes:
  • the image data of the startup screen is read.
  • in response to the display control device being powered on, the image data of the startup screen is read. For example, when the display device in the off state is triggered to turn on, the display control device in the display device is powered on and executes the reading step.
  • alternatively, the image data of the startup screen may be read in response to a read command.
  • the read command may be issued by the host in the display device to the display control device.
  • the startup screen includes at least one frame of image; when the startup screen includes one frame of image, the startup screen is static, and when the startup screen includes multiple frames of images, the startup screen is dynamic.
  • Each frame of image can include display content and background.
  • the displayed content of the boot screen may include patterns (eg, logo), text (eg, copyright information), etc., and is generally concentrated in the center of the boot screen or at other positions.
  • the background of the startup screen may be a solid color or a non-solid color, for example, a color gradient or an irregular color distribution.
  • the startup screen may include a first image 510 having a first resolution including display content "XXX" and a background.
  • the background is a solid black background.
  • FIG. 6A shows a schematic diagram of the first image 510 .
  • the first image 510 includes n rows and m columns of pixels, i.e., m × n pixels.
  • the pixel in the x-th row and the y-th column may be recorded as pixel xy, where x ranges from 1 to n and y ranges from 1 to m.
  • the first image 510 may include pixels 11, 12, 13, ..., 1m, 21, ..., 2m, ..., n1, ..., nm.
  • the image data may include RGB image data, and the image data may also include YUV image data.
  • the RGB image data may include a pixel value of at least one pixel, and the pixel value may include pixel data (eg, grayscale data) of each sub-pixel in the pixel.
  • the pixel value has a certain range.
  • each pixel value obtained or generated is within the value range of the pixel value.
  • for the pixel value, if the obtained or generated result exceeds the value range of the pixel value, then, when the result is greater than the range, the pixel value takes the larger boundary value of the range; when the result is less than the range, the pixel value takes the smaller boundary value of the range.
  • the value range of the pixel value is 0 to 255.
  • when the calculated or generated result is greater than 255, the pixel value is 255; when the calculated or generated result is less than 0, the pixel value is 0.
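  • as an illustrative sketch (not part of the embodiments; the function name `clamp` is an assumption), the value-range rule above can be expressed in Python:

```python
def clamp(value, lo=0, hi=255):
    """Clamp a computed pixel component into the valid value range.

    When a calculated or generated result exceeds the range, the nearer
    boundary value is taken, as described above for 8-bit components.
    """
    return max(lo, min(hi, value))

print(clamp(300))   # results greater than 255 become 255
print(clamp(-12))   # results less than 0 become 0
print(clamp(128))   # in-range results are unchanged
```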
  • S102 Execute an image processing method to obtain image data of the second image.
  • image data of the second image 520 may be obtained based on the image data of the first image 510 .
  • the second image 520 has a second resolution, and the second resolution is greater than the first resolution of the first image 510 .
  • the second image 520 has a base area B and an extended area E.
  • the display control device needs image data corresponding to the second image 520 , and the image data may include pixel values of each pixel in the second image 520 .
  • the image data of the second image 520 may include the image data of the base area B and the image data of the extension area E, wherein the image data of the base area B may include the pixel values of each pixel in the base area B of the second image 520,
  • the image data of the extended area E may include pixel values of respective pixels in the extended area E of the second image 520 .
  • FIG. 6B shows a structure diagram of the second image 520 .
  • the second image 520 has a second resolution, denoted as p × q (p ≥ 1, q ≥ 1).
  • the second image 520 contains p × q pixels.
  • the pixel in the x-th row and the y-th column may be recorded as pixel xy, where x ranges from 1 to q and y ranges from 1 to p.
  • the second image 520 may include pixels 11, 12, 13, ..., 1p, 21, ..., 2p, 31, ..., 3p, ..., q1, ..., qp.
  • the extension area E is an annular area, and the extension area E surrounds the base area B.
  • the base area B may be in the middle of the second image 520 .
  • the base area B and the second image 520 are both rectangles, and the center point of the base area B and the center point of the second image 520 are coincident.
  • the image data of the low-resolution boot screen can be used to fill a part of the display screen of the high-resolution display device with the high-resolution boot screen.
  • the other parts of the screen displayed by the high-resolution display device are filled with a corresponding background, so that the screen displayed by the high-resolution display device is an overall picture with uniform color changes.
  • the base area B in the second image 520 may be filled with an image corresponding to the low-resolution first image 510, and the expanded region E of the second image 520 is filled with a corresponding solid black background.
  • the high-resolution boot screen displayed by the high-resolution display device includes all the information of the low-resolution boot screen (for example, the display content "XXX"), and the color changes uniformly, which can provide the user with a good visual effect.
  • the image data of the second image is output.
  • the image data of the second image is output to TCON to control the display panel to display according to the image data of the second image, that is, display the second image.
  • the display control device when the display control device receives the power-on state ending signal, it outputs the image data of the working picture to the TCON, so as to control the display panel to display the working picture.
  • FIG. 7 shows a flowchart of steps of an image processing method provided by an embodiment of the present disclosure. Referring to Figure 7, the image processing method includes the following steps:
  • S201 Use the image data of the first image as the image data of the base area.
  • the image data of the first image 510 is used as the image data of the base region B of the second image 520 .
  • the part of the second image 520 located in the base region B includes m × n pixels, and the pixel values of these m × n pixels are correspondingly equal to the pixel values of the m × n pixels of the first image 510, so that the portion of the second image 520 located in the base region B is the first image 510.
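  • the filling of the base region described in S201 can be sketched as follows (an illustrative Python sketch; the list-of-lists image representation and all names are assumptions, not from the embodiments):

```python
def fill_base_region(first_image, q, p):
    """Place an n-row by m-column first image at the center of a q-row by
    p-column second image, so that the base region B equals the first image.
    Pixels outside the base region are initialised to None here; they are
    filled later by the expansion strategy."""
    n, m = len(first_image), len(first_image[0])
    row0 = (q - n) // 2   # top-left corner of the centered base region
    col0 = (p - m) // 2
    second_image = [[None] * p for _ in range(q)]
    for x in range(n):
        for y in range(m):
            second_image[row0 + x][col0 + y] = first_image[x][y]
    return second_image

# Example: a 2 x 4 first image centered in a 6 x 8 second image.
second = fill_base_region([[(10, 20, 30)] * 4] * 2, q=6, p=8)
```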
  • S202 (optional): Identify the type of the first image according to pixel values of multiple pixels in the edge of the first image.
  • the expanded area of the second image can be filled with a background similar to the background of the first image, so that when the expanded area and the base area of the second image are stitched together, the color change of the second image is relatively uniform.
  • the first image can be divided into different types, and correspondingly different expansion strategies are used to generate the image data of the expanded area of the second image, so that the expanded area of the second image is the corresponding background, so that the second The color variation of the image is uniform.
  • since the expansion area of the second image is directly spliced with the edge of the first image, the first image can be divided into different types according to the edge of the first image, and then a corresponding expansion strategy can be selected, so that when the expansion area and the base area of the second image are spliced together, the color change is more uniform.
  • the edge F of the first image 510 includes two first sub-edges parallel to the row direction, such as the first sub-edge Fx1 (i.e., the O1O2X3X1 portion) and the first sub-edge Fx2 (i.e., the O3O4X4X2 portion).
  • the edge F of the first image 510 also includes two second sub-edges parallel to the column direction, such as the second sub-edge Fy1 (i.e., the O1O3Y3Y1 portion) and the second sub-edge Fy2 (i.e., the O2O4Y4Y2 portion).
  • the type of the first image may be determined according to the change trend of pixel values of the plurality of pixels in each first sub-edge in the row direction and the change trend of the pixel values of the plurality of pixels in each second sub-edge in the column direction.
  • the judgment result of the first sub-edge is determined according to the change trend of the pixel values of the plurality of pixels in each of the first sub-edges in the row direction.
  • the judgment result of the first sub-edge is recorded as the first judgment result.
  • the judgment result of the first sub-edge Fx 1 is denoted as the first judgment result 1
  • the judgment result of the first sub-edge Fx 2 is denoted as the first judgment result 2.
  • if the pixel values of the pixels in each row of the first sub-edge are equal (or approximately equal), the first judgment result includes equal; otherwise, the first judgment result includes unequal.
  • if, in a row of pixels, the difference between any two pixel values is less than a set value (for example, 1 or 2), that is, the difference between any two of Rx1~Rxn is less than the set value, the difference between any two of Gx1~Gxn is less than the set value, and the difference between any two of Bx1~Bxn is less than the set value, the pixel values of the pixels in the x-th row are approximately equal. If the pixel values of each row of pixels in the first sub-edge Fx1 are approximately equal, the first judgment result 1 includes equal; otherwise, the first judgment result 1 includes unequal.
  • if the pixel values of the pixels in each row of the first sub-edge change gradually, the first judgment result includes gradient; otherwise, the first judgment result includes no gradient.
  • the pixel values of the pixels in the x-th row of the first sub-edge Fx1 include (Rx1, Rx2, Rx3, ..., Rxn), (Gx1, Gx2, Gx3, ..., Gxn), and (Bx1, Bx2, Bx3, ..., Bxn). The differences between adjacent pixel values are:
  • ΔRx2 = Rx2 − Rx1, ΔRx3 = Rx3 − Rx2, ..., ΔRxn = Rxn − Rx(n−1);
  • ΔGx2 = Gx2 − Gx1, ΔGx3 = Gx3 − Gx2, ..., ΔGxn = Gxn − Gx(n−1);
  • ΔBx2 = Bx2 − Bx1, ΔBx3 = Bx3 − Bx2, ..., ΔBxn = Bxn − Bx(n−1).
  • if the differences between any two of ΔRx2~ΔRxn, between any two of ΔGx2~ΔGxn, and between any two of ΔBx2~ΔBxn are each less than a set value (for example, 1 or 2), this also indicates that the pixel values of the pixels in the x-th row change gradually.
  • if the first judgment result includes unequal, and at least one of the three groups ΔRx2~ΔRxn, ΔGx2~ΔGxn, and ΔBx2~ΔBxn gradually increases or decreases while the remaining groups are approximately equal, the pixel values of the pixels in the x-th row change gradually.
  • if the first judgment result includes unequal, and ΔRx2~ΔRxn, ΔGx2~ΔGxn, and ΔBx2~ΔBxn all gradually increase, or all gradually decrease, the pixel values of the pixels in the x-th row change gradually.
  • if the first judgment result includes unequal, and ΔRx2~ΔRxn gradually increase or decrease while ΔGx2~ΔGxn are approximately equal and ΔBx2~ΔBxn are approximately equal, the pixel values of the pixels in the x-th row also change gradually.
  • if the pixel values of each row of pixels in the first sub-edge Fx1 change gradually, the first judgment result 1 includes gradient; otherwise, the first judgment result 1 includes no gradient.
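  • one possible reading of the per-row judgment above (approximately equal, gradient, or neither) is sketched below; the tolerance value and the simplified monotonicity test are assumptions, not the embodiments' exact criterion:

```python
def judge_sequence(values, tol=2):
    """Classify one color channel of a row (or column) of edge pixels.

    'equal'    : any two values differ by less than tol;
    'gradient' : adjacent differences are all one-signed and roughly equal
                 (an approximately arithmetic sequence);
    'neither'  : anything else (irregular colors).
    """
    if max(values) - min(values) < tol:
        return "equal"
    deltas = [b - a for a, b in zip(values, values[1:])]
    if all(d > 0 for d in deltas) or all(d < 0 for d in deltas):
        if max(deltas) - min(deltas) < tol:
            return "gradient"
    return "neither"
```

In practice the judgment for a sub-edge would combine the three channels (R, G, B) of every row, as the text describes; this sketch shows only the single-channel test.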
  • the judgment result of the second sub-edge is determined according to the change trend of the pixel values of the plurality of pixels in each second sub-edge in the column direction.
  • the judgment result of the second sub-edge is recorded as the second judgment result.
  • the judgment result of the second sub-edge Fy 1 is denoted as the second judgment result 1
  • the judgment result of the second sub-edge Fy 2 is denoted as the second judgment result 2.
  • if the pixel values of the pixels in each column of the second sub-edge are equal, the second judgment result includes equal; otherwise, the second judgment result includes unequal. If the pixel values of the pixels in each column of the second sub-edge change gradually, the second judgment result includes gradient; otherwise, the second judgment result includes no gradient.
  • the first image is of the first type, and the first type may be configured to indicate that the first image includes a solid-color edge.
  • if the first judgment result 1 of the first sub-edge Fx1, the first judgment result 2 of the first sub-edge Fx2, the second judgment result 1 of the second sub-edge Fy1, and the second judgment result 2 of the second sub-edge Fy2 all include equal, the first image 510 is of the first type, and the first image 510 may include a solid-color edge.
  • the first image is of the second type, and the second type may be configured to indicate that the first image includes an edge whose color changes gradually in the column direction and does not change in the row direction.
  • if the first judgment result 1 of the first sub-edge Fx1 and the first judgment result 2 of the first sub-edge Fx2 both include equal, and the second judgment result 1 of the second sub-edge Fy1 and the second judgment result 2 of the second sub-edge Fy2 both include gradient, the first image 510 is of the second type, and the first image 510 may include an edge whose color changes gradually in the column direction but does not change in the row direction.
  • the first image is of the third type, and the third type may be configured to indicate that the first image includes an edge whose color changes gradually in the row direction and does not change in the column direction.
  • if the first judgment result 1 of the first sub-edge Fx1 and the first judgment result 2 of the first sub-edge Fx2 both include gradient, and the second judgment result 1 of the second sub-edge Fy1 and the second judgment result 2 of the second sub-edge Fy2 both include equal, the first image 510 is of the third type, and the first image 510 may include an edge whose color changes gradually in the row direction and does not change in the column direction.
  • the first image is of the fifth type, and the fifth type may be configured to indicate that the first image includes an edge with irregular colors.
  • if at least one of the first judgment result 1 of the first sub-edge Fx1, the first judgment result 2 of the first sub-edge Fx2, the second judgment result 1 of the second sub-edge Fy1, and the second judgment result 2 of the second sub-edge Fy2 includes unequal and no gradient, the first image 510 is of the fifth type, and the first image 510 may include an edge with irregular colors.
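  • the mapping from the four judgment results to the image type described above can be sketched as follows (the type numbering follows the text; the fourth type is not detailed in this passage, so it appears only as a fallback):

```python
def classify_first_image(fx1, fx2, fy1, fy2):
    """Map the judgment results of the two first sub-edges (fx1, fx2) and
    the two second sub-edges (fy1, fy2) to an image type.
    Each argument is one of 'equal', 'gradient', or 'neither'."""
    results = (fx1, fx2, fy1, fy2)
    if all(r == "equal" for r in results):
        return 1   # first type: solid-color edge
    if fx1 == fx2 == "equal" and fy1 == fy2 == "gradient":
        return 2   # second type: gradient in the column direction only
    if fx1 == fx2 == "gradient" and fy1 == fy2 == "equal":
        return 3   # third type: gradient in the row direction only
    if "neither" in results:
        return 5   # fifth type: irregular edge colors
    return 4       # remaining combinations (not detailed in this passage)
```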
  • S203 Based on the image data of the first image, generate image data of the expansion area according to the expansion strategy.
  • the first image can be divided into five types according to the edge of the first image.
  • different expansion strategies may be used to generate image data of the expanded region of the second image by using the image data of the first image.
  • the image data of the extended area E may be generated according to the expansion strategy.
  • the first image 510 has an edge F.
  • the edge F includes 4 sub-edges, such as the sub-edges Fx1, Fx2, Fy1, Fy2.
  • the width of each sub-edge is greater than 1 pixel, and the width of each sub-edge may or may not be equal.
  • each sub-edge has the same width, which is n pixels.
  • both the sub-edge Fx1 and the sub-edge Fx2 include n rows of pixels, and both the sub-edge Fy1 and the sub-edge Fy2 include n columns of pixels.
  • the first image 510 includes a solid color edge F1.
  • the edge F1 of a solid color means that the color of the edge F1 is a single color.
  • all the pixels in the edge F1 of the first image 510 may have the same pixel value.
  • the pixel value of one pixel in the edge F1 of the first image 510 may be used as the pixel value of each pixel in the extended area E.
  • since the edge F1 of the first image 510 is a solid color, the pixel values of the pixels in the edge F1 are equal; the pixel value of any pixel in the solid-color edge F1 is used as the pixel value of each pixel in the extended area E, so that the color of the extended area E is the same as the color of the edge F1 of the first image 510.
  • the base area B of the second image 520 is filled with the first image 510, so the base area B of the second image 520 also has a solid-color edge; that is, at this time, the color of the extension area E is the same as the color of the edge of the base area B.
  • the second image 520 has a background of solid color, and the color changes uniformly.
  • for example, the pixel value of pixel 11 in the edge F1 of the first image 510 is used as the pixel value of each pixel in the extended area E.
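  • for the first type, the expansion strategy amounts to flooding the extension area E with the edge color; an illustrative sketch (continuing the assumed list-of-lists representation, with None marking pixels outside the base region):

```python
def fill_extension_solid(second_image, edge_pixel):
    """Fill every not-yet-assigned pixel of the extension area E with the
    pixel value of one pixel from the solid-color edge F1 (e.g. pixel 11)."""
    for row in second_image:
        for y, value in enumerate(row):
            if value is None:          # outside the base region B
                row[y] = edge_pixel
    return second_image

# Example: a 3 x 3 second image whose base region holds a single black
# pixel of the first image; the extension is filled with the same black.
img = [[None, None, None],
       [None, (0, 0, 0), None],
       [None, None, None]]
fill_extension_solid(img, (0, 0, 0))
```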
  • the first image includes edges that are not solid colors.
  • the edge of non-solid color means that the edge has multiple colors.
  • each pixel in the edge of the first image may have different pixel values.
  • edges that are not solid colors may include edges whose color gradients in the row direction and/or the column direction.
  • the color gradient may mean that the color gradually changes along a certain direction, for example, the color gradually becomes darker or lighter along a certain direction.
  • the variation may be uniform or non-uniform.
  • image data of the expanded area may be generated according to an expansion strategy.
  • the first image 510 includes an edge F2 whose color is gradually changed in the column direction and unchanged in the row direction.
  • the color of the edge F2 gradually becomes darker in the column direction, and accordingly, the pixel value of each pixel in the edge F2 may show a trend of gradually increasing in the column direction.
  • the change in the column direction of the pixel value of each pixel in the edge F2 may be a uniform change, for example, these pixel values are in an arithmetic progression, or may be a non-uniform change.
  • the color of the edge F2 does not change in the row direction, and accordingly, the pixel values of each pixel in the edge F2 are approximately equal in the row direction.
  • the extended area E of the second image 520 shown in FIG. 9B includes the first sub-area D11 (i.e., the V1V2S8S5 portion) and the first sub-area D12 (i.e., the V3V4S6S7 portion), and, in addition to the first sub-areas, also includes a second sub-area D21 (i.e., the S4S3V3V2 portion) and a second sub-area D22 (i.e., the S1S2V4V1 portion).
  • the first sub-area D11 and the first sub-area D12 are both flush with the base area B in the row direction; at this time, each row of pixels in the first sub-area D11 and the first sub-area D12 is flush with the corresponding row of pixels in the base area B.
  • the pixel value of each pixel in the corresponding row of pixels in the first sub-region may be generated according to the pixel value of at least one pixel located at the edge F2 in each row of pixels of the first image 510 .
  • the pixel values of the plurality of rows of pixels in the second sub-region that change in the column direction are obtained, wherein the pixel values within each row of pixels are the same, and each pixel value is within the value range of pixel values.
  • the image data of the first sub-region D12 can be generated using a method similar to the method of generating the image data of the first sub-region D11. This will not be repeated here.
  • the pixel value of each pixel in the row of pixels in the first sub-area D11 may be generated according to the pixel value of at least one pixel located at the edge F2 in the corresponding row of pixels in the first image 510.
  • for example, for the first row of pixels of the first sub-region D11, the corresponding row is the first row of the first image 510 filled in the base area B, and the pixels located at the edge F2 in that row are the first row of pixels of the first image 510.
  • the pixel value of each pixel in the first row of pixels in the first sub-region D11 may be generated according to the pixel value of at least one of the pixels.
  • the pixel value of one pixel located at the edge F2 in each row of pixels of the first image 510 is used as the pixel value of each pixel in the corresponding row of pixels of the first sub-region D11.
  • for example, the pixel value of a pixel located at the edge F2 in the first row of pixels of the first image 510 filled in the base area B (for example, pixel 11) is used as the pixel value of each pixel in the first row of pixels of the first sub-region D11.
  • the pixel values of the respective pixels in each row of the edge F2 of the first image 510 are approximately equal.
  • the pixel value of any pixel located at the edge F2 in each row of pixels of the first image 510 is used as the pixel value of each pixel in the corresponding row of pixels in the first sub-area D11, so that each row of pixels in the first sub-area D11 displays approximately the same color as the corresponding row of pixels in the edge F2 of the first image 510; thus, in the second image 520, the color change from the first image 510 filled in the base area B to the corresponding background filled in the first sub-area D11 is relatively uniform.
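  • the row-wise filling of the first sub-regions can be sketched as follows (illustrative; the choice of the first and last pixel of each row as the edge pixels, and all names, are assumptions):

```python
def fill_first_subregion(first_image, extra_cols):
    """For an edge whose color is constant along the row direction, extend
    each row of the first image sideways by repeating the pixel value of an
    edge pixel of that row (here its first and last pixels) extra_cols
    times on each side, producing the D11-side and D12-side padding."""
    extended = []
    for row in first_image:
        left = [row[0]] * extra_cols    # D11-side padding for this row
        right = [row[-1]] * extra_cols  # D12-side padding for this row
        extended.append(left + list(row) + right)
    return extended
```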
  • the second sub-region D21 is taken as an example for description.
  • the second sub-region D21 includes i+1 rows of pixels, i ≥ 0; these rows of pixels can be recorded as the (f+x)-th rows of pixels, where x ranges from 0 to i, for example, the f-th, f+1-th, ..., f+i-th rows of pixels. Since the color of the edge F2 of the first image 510 gradually changes in the column direction, the pixel values of the plurality of pixels in the edge F2 of the first image 510 have a certain change trend in the column direction. For example, the pixel value of each pixel in the edge F2 of the first image 510 gradually increases or decreases in the column direction.
  • the image data of the second sub-region D21 can be generated according to the change trend, in the column direction, of the pixel values of the plurality of pixels in the edge F2, so that the pixel values of the pixels in the second sub-region D21 have the same change trend in the column direction as the pixel values of the plurality of pixels in the edge F2 of the first image 510.
  • the pixel value of any one of the pixels is used to obtain the pixel values of the pixels in the plurality of rows of pixels in the second sub-region D21 that vary in the column direction.
  • the pixel values of each pixel in each row of pixels may be equal.
  • the pixel values of one column of pixels can form an arithmetic sequence, so that the color of the second sub-region D21 changes uniformly in the column direction, which can provide users with a good look and feel.
  • the method for obtaining the image data of the second sub-area D21 is, for example, that the pixel value of the pixels in the (f+x)-th row in the second sub-area D21 is:
  • R(f+x) = R11 + x × (R11 − R21);
  • G(f+x) = G11 + x × (G11 − G21);
  • B(f+x) = B11 + x × (B11 − B21).
  • the second sub-region D22 may include k+1 rows of pixels, k ≥ 0; these rows of pixels may be denoted as the (g+x)-th rows of pixels, where x ranges from 0 to k, for example, the g-th, g+1-th, ..., g+k-th rows of pixels shown in FIG. 9B.
  • the method of obtaining the image data of the second sub-region D22 is similar to that of the second sub-region D21.
  • for example, according to the pixel value of any pixel in the n-th row of pixels, such as the pixel value of pixel n1 (including Rn1, Gn1, Bn1), the pixel values of the plurality of rows of pixels in the second sub-area D22 that change in the column direction are obtained.
  • the method for obtaining is, for example, that the pixel value of the pixels in the (g+x)-th row in the second sub-region D22 is:
  • R(g+x) = Rn1 − x × (R11 − R21);
  • G(g+x) = Gn1 − x × (G11 − G21);
  • B(g+x) = Bn1 − x × (B11 − B21).
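  • the arithmetic extrapolation used for the second sub-regions D21 and D22 can be sketched as follows (illustrative names; each component is clamped into 0~255 per the value-range rule stated earlier):

```python
def extrapolate_rows(anchor, step, count):
    """Generate `count` pixel values continuing an arithmetic sequence.

    anchor : (R, G, B) of the reference edge pixel (e.g. pixel 11 for D21,
             pixel n1 for D22);
    step   : per-row difference, e.g. (R11 - R21, G11 - G21, B11 - B21)
             for D21, or its negation for D22;
    each component is clamped into the valid range 0..255.
    """
    clamp = lambda v: max(0, min(255, v))
    return [tuple(clamp(a + x * s) for a, s in zip(anchor, step))
            for x in range(count)]

# D21-style extrapolation from an anchor pixel with a decreasing trend;
# components that would go below 0 are clamped to 0.
rows_d21 = extrapolate_rows((10, 20, 30), (-5, -5, -5), 4)
```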
  • the first image 510 includes an edge F3 whose color is gradually changed in the row direction and unchanged in the column direction.
  • the color of the edge F3 gradually becomes darker in the row direction, and correspondingly, the pixel value of each pixel in the edge F3 may show a trend of gradually increasing in the row direction.
  • the variation of the pixel values of each pixel in the edge F3 in the row direction may be uniform variation, for example, these pixel values are in an arithmetic progression, or may be non-uniform variation.
  • the extended region E of the second image 520 shown in FIG. 9C includes a third subregion and a fourth subregion other than the third subregion, wherein the third subregion and the base region B are flush in the column direction.
  • the second image 520 includes a third sub-region D31 (i.e., the H4H3S7S8 portion), a third sub-region D32 (i.e., the H1H2S6S5 portion), a fourth sub-region D41 (i.e., the S4S1H1H4 portion), and a fourth sub-region D42 (i.e., the S3S2H2H3 portion).
  • the third sub-area D31 and the third sub-area D32 are both flush with the base area B in the column direction; at this time, each column of pixels in the third sub-area D31 and the third sub-area D32 is flush with the corresponding column of pixels in the base area B.
  • the pixel value of each pixel in the corresponding column of pixels in the third sub-region may be generated according to the pixel value of at least one pixel located at the edge F3 in each column of pixels of the first image 510 .
  • the pixel values of the plurality of columns of pixels in the fourth sub-region that change in the row direction are obtained, wherein the pixel values within each column of pixels are the same, and each pixel value is within the value range of pixel values.
  • the third sub-area D31 is taken as an example for illustration; the image data of the third sub-area D32 can be generated using a method similar to that for the third sub-area D31, and will not be repeated here.
  • the pixel value of each pixel in a column of pixels in the third sub-region D31 may be generated according to the pixel value of at least one pixel located at the edge F3 in the corresponding column of pixels in the first image 510.
  • for example, for the first column of pixels of the third sub-region D31, the corresponding column is the first column of the first image 510 filled in the base region B, and the pixels located at the edge F3 in that column are the first column of pixels of the first image 510.
  • the pixel value of each pixel in the first column of pixels of the third sub-region D31 may be generated according to the pixel value of at least one of the pixels.
  • the pixel value of one pixel located at the edge F3 in each column of pixels of the first image 510 is used as the pixel value of each pixel in the corresponding column of pixels of the third sub-region D31.
  • for example, the pixel value of a pixel located at the edge F3 in the first column of pixels of the first image 510 filled in the base area B (for example, pixel 11) is used as the pixel value of each pixel in the first column of pixels of the third sub-region D31.
  • the pixel values of the respective pixels in each column of the edge F3 of the first image 510 are approximately equal.
  • the pixel value of any pixel located at the edge F3 in each column of pixels in the first image 510 is used as the pixel value of each pixel in the corresponding column of pixels in the third sub-region D31, so that each column of pixels in the third sub-region D31 displays substantially the same color as the corresponding column of pixels in the edge F3 of the first image 510; thus the color change from the first image 510 to the third sub-region D31 of the second image 520 is relatively uniform.
  • the fourth sub-region D41 includes i+1 columns of pixels, i≥0; these columns may be denoted as the (f+x)-th columns of pixels, where x ranges from 0 to i, for example the f-th, (f+1)-th, ..., (f+i)-th columns of pixels shown in FIG. 9C. Since the color of the edge F3 of the first image 510 gradually changes in the row direction, the pixel values of the plurality of pixels in the edge F3 of the first image 510 have a certain change trend in the row direction.
  • the pixel value of each pixel in the edge F3 of the first image 510 gradually increases or decreases in the row direction.
  • the image data of the fourth sub-region D41 can therefore be generated according to the change trend in the row direction of the pixel values of the plurality of pixels in the edge F3, so that the pixel values of the pixels in the fourth sub-region D41 have the same change trend in the row direction as the pixel values of the plurality of pixels in the edge F3 of the first image 510.
  • for example, the pixel value of any one pixel in the first column of the first image 510 is used to obtain the pixel values of the multiple columns of pixels that vary in the row direction in the fourth sub-region D41; within each such column, the pixel values of the pixels may be equal.
  • in addition, the pixel values of a row of pixels in the fourth sub-region D41 can form an arithmetic sequence, so that the color of the fourth sub-region D41 changes more uniformly in the row direction, providing the user with a good visual perception.
  • for example, the pixel values of the pixels in a row are all different and constitute an arithmetic sequence; as another example, in the obtained image data of the fourth sub-region D41, the pixel values of multiple adjacent pixels in a row may be equal, and the non-repeating values of the pixel values of all pixels in the row constitute an arithmetic sequence.
  • the method for obtaining the image data of the fourth sub-region D41 is, for example, that the pixel value of the pixels in the (f+x)-th column of the fourth sub-region D41 is:
  • R(f+x) = R11 + x×(R11 − R12);
  • G(f+x) = G11 + x×(G11 − G12);
  • B(f+x) = B11 + x×(B11 − B12).
  • the fourth sub-region D42 may include k+1 columns of pixels, k≥0; these columns may be denoted as the (g+x)-th columns of pixels, where x ranges from 0 to k, for example the g-th, (g+1)-th, ..., (g+k)-th columns of pixels shown in FIG. 9C.
  • the method of obtaining the image data of the fourth sub-region D42 is similar to that of the fourth sub-region D41.
  • for example, the pixel value of any pixel in the m-th column of pixels, such as the pixel value of pixel 1m (including R1m, G1m, and B1m), is used to obtain the pixel values of the multiple columns of pixels that vary in the row direction in the fourth sub-region D42.
  • the calculation method is, for example, that the pixel value of the pixels in the (g+x)-th column of the fourth sub-region D42 is:
  • R(g+x) = R1m − x×(R11 − R12);
  • G(g+x) = G1m − x×(G11 − G12);
  • B(g+x) = B1m − x×(B11 − B12).
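Both fourth-sub-region rules above extrapolate whole columns by an arithmetic step taken from the first two columns of the first image. The sketch below is a hedged NumPy illustration, not the patent's implementation: the function interface and the clipping bound of 255 are assumptions (the patent only requires each value to stay within the pixel-value range).

```python
import numpy as np

def extend_row_direction(first_image, cols_left, cols_right, vmax=255):
    """Extrapolate columns on both sides of the image in the row direction.
    The per-channel step is (col0 - col1), as in R(f+x) = R11 + x*(R11 - R12)
    on one side and R(g+x) = R1m - x*(R11 - R12) on the other; values are
    clipped to the valid pixel range [0, vmax]."""
    img = first_image.astype(np.int64)
    step = img[0, 0] - img[0, 1]          # per-channel step from the edge
    left = [np.clip(img[:, 0] + x * step, 0, vmax)[:, None]
            for x in range(cols_left, 0, -1)]
    right = [np.clip(img[:, -1] - x * step, 0, vmax)[:, None]
             for x in range(1, cols_right + 1)]
    return np.hstack(left + [img] + right)
```

For a one-row image whose values fall by 10 per column, one extrapolated column on each side continues the arithmetic sequence (40, 30, 20, 10, 0 for a 30/20/10 row), with clipping keeping the result in range.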
  • the first image 510 includes an edge F4 whose color changes in both the column direction and the row direction.
  • the color of the edge F4 gradually becomes darker in the row direction and/or the column direction, and accordingly, the pixel value of each pixel in the edge F4 may show a gradually increasing trend in the row direction and/or the column direction.
  • the changes in the row direction and/or the column direction of the pixel values of the respective pixels in the edge F4 may be uniform (for example, the pixel values form an arithmetic sequence) or non-uniform.
  • the extended area E of the second image 520 shown in FIG. 9D includes a fifth sub-region, a sixth sub-region, and a seventh sub-region other than the fifth and sixth sub-regions, wherein the fifth sub-region is flush with the base region B in the row direction, and the sixth sub-region is flush with the base region B in the column direction.
  • for example, the second image 520 includes fifth sub-regions D51 and D52, sixth sub-regions D61 and D62, and seventh sub-regions D71, D72, D73, and D74.
  • the fifth sub-regions D51 and D52 are both flush with the base region B in the row direction; in this case, each row of pixels in the fifth sub-regions D51 and D52 is flush with the corresponding row of pixels in the base region B. The sixth sub-regions D61 and D62 are both flush with the base region B in the column direction; each column of pixels in them is flush with the corresponding column of pixels in the base region B.
  • the pixel values of the pixels in the corresponding row of the fifth sub-region can be obtained according to the change trend in the row direction of the pixel values of a plurality of pixels located in one row of the edge F4 of the first image 510, each pixel value being within the pixel value range, to obtain the image data of the fifth sub-region. Similarly, the pixel values of the pixels in the corresponding column of the sixth sub-region are obtained according to the change trend in the column direction of the pixel values of a plurality of pixels located in one column of the edge F4, each pixel value being within the pixel value range, to obtain the image data of the sixth sub-region. The image data of the seventh sub-region is then obtained based on the image data of the fifth sub-region and/or the sixth sub-region.
  • the fifth sub-region D51 includes pixels in x rows and y columns, where x ranges from 1 to n and y ranges from f to f+i; pixel xy denotes the pixel in the x-th row and y-th column. Since the color of the edge F4 of the first image 510 gradually changes in both the row direction and the column direction, the pixel values of a plurality of pixels located in one row of the edge F4 of the first image 510 have a certain change trend in the row direction.
  • the pixel values of the corresponding row of pixels in the fifth sub-region D51 can be obtained according to this change trend, so that the change trend in the row direction of the pixel values of the pixels in the corresponding row of the fifth sub-region D51 is the same as that of the pixel values of that row of pixels in the edge F4 of the first image 510.
  • for example, according to the change trend in the row direction of the pixel value of pixel x1 (including Rx1, Gx1, and Bx1) and the pixel value of pixel x2 (including Rx2, Gx2, and Bx2), the pixel values of the pixels in the x-th row of the fifth sub-region D51 are obtained.
  • the pixel values of the pixels in rows 1 to n of the fifth sub-region D51 are obtained according to the above method, so as to obtain the image data of the fifth sub-region D51.
  • in addition, the pixel values of a row of pixels in the fifth sub-region D51 can form an arithmetic sequence, so that the color of the fifth sub-region D51 changes more uniformly in the row direction, providing the user with a good visual perception.
  • the method for obtaining the image data of the fifth sub-region D51 is, for example, that the pixel value of the pixel in the x-th row and y-th column of the fifth sub-region D51 is:
  • Rxy = Rx1 + (y − f + 1)×(Rx1 − Rx2);
  • Gxy = Gx1 + (y − f + 1)×(Gx1 − Gx2);
  • Bxy = Bx1 + (y − f + 1)×(Bx1 − Bx2).
  • the fifth sub-region D52 includes pixels in x rows and y columns, where x ranges from 1 to n and y ranges from g to g+k; pixel xy denotes the pixel in the x-th row and y-th column.
  • for example, according to the change trend in the row direction of the pixel value of pixel x(m−1) (including Rx(m−1), Gx(m−1), and Bx(m−1)) and the pixel value of pixel xm (including Rxm, Gxm, and Bxm) in the x-th row of pixels of the first image 510, the pixel values of the pixels in the x-th row of the fifth sub-region D52 are obtained.
  • the image data of the fifth sub-region D52 is obtained by obtaining the pixel values of the pixels in rows 1 to n of the fifth sub-region D52 according to the above method.
  • the method for obtaining the image data of the fifth sub-region D52 is, for example, that the pixel value of the pixel in the x-th row and y-th column of the fifth sub-region D52 is:
  • Rxy = Rxm + (y − g + 1)×(Rxm − Rx(m−1));
  • Gxy = Gxm + (y − g + 1)×(Gxm − Gx(m−1));
  • Bxy = Bxm + (y − g + 1)×(Bxm − Bx(m−1)).
  • the sixth sub-region D61 includes pixels in x rows and y columns, where x ranges from h to h+a and y ranges from 1 to m; pixel xy denotes the pixel in the x-th row and y-th column. Since the color of the edge F4 of the first image 510 gradually changes in both the row direction and the column direction, the pixel values of a plurality of pixels located in one column of the edge F4 have a certain change trend in the column direction. The pixel values of the corresponding column of pixels in the sixth sub-region D61 can be obtained according to this change trend, so that the change trend in the column direction of the pixel values of the pixels in the corresponding column of the sixth sub-region D61 is the same as that of the pixel values of that column of pixels in the edge F4 of the first image 510.
  • for example, according to the change trend in the column direction of the pixel value of pixel 1y (including R1y, G1y, and B1y) and the pixel value of pixel 2y (including R2y, G2y, and B2y) in the y-th column of the edge F4 of the first image 510, the pixel values of the pixels in the y-th column of the sixth sub-region D61 are obtained.
  • the image data of the sixth sub-region D61 is obtained by deriving, in the above way, the pixel values of the pixels in columns 1 to m of the sixth sub-region D61.
  • in addition, the pixel values of a column of pixels in the sixth sub-region D61 can form an arithmetic sequence, so that the color of the sixth sub-region D61 changes more uniformly in the column direction, providing the user with a good visual perception.
  • the method for obtaining the image data of the sixth sub-region D61 is, for example, that the pixel value of the pixel in the x-th row and y-th column of the sixth sub-region D61 is:
  • Rxy = R1y + (x − h + 1)×(R1y − R2y);
  • Gxy = G1y + (x − h + 1)×(G1y − G2y);
  • Bxy = B1y + (x − h + 1)×(B1y − B2y).
  • the sixth sub-region D62 includes pixels in x rows and y columns, where x ranges from j to j+z and y ranges from 1 to m; pixel xy denotes the pixel in the x-th row and y-th column.
  • for example, according to the change trend in the column direction of the pixel values of the pixels in the y-th column of the edge F4 of the first image 510, the pixel values of the pixels in the y-th column of the sixth sub-region D62 are obtained.
  • the image data of the sixth sub-region D62 is obtained by deriving, in the above way, the pixel values of the pixels in columns 1 to m of the sixth sub-region D62.
  • the method for obtaining the image data of the sixth sub-region D62 is, for example, that the pixel value of the pixel in the x-th row and y-th column of the sixth sub-region D62 is:
  • Rxy = Rny + (x − j + 1)×(Rny − R(n−1)y);
  • Gxy = Gny + (x − j + 1)×(Gny − G(n−1)y);
  • Bxy = Bny + (x − j + 1)×(Bny − B(n−1)y).
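The D61/D62 rules are the column-direction analogue of the fifth-sub-region rules: each column is extended by its own per-column step taken from the two rows nearest the edge. A minimal NumPy sketch, with the interface and the clipping bound as assumptions (the patent only requires each value to stay within the pixel-value range):

```python
import numpy as np

def extend_column_direction(first_image, rows_above, vmax=255):
    """Extend the image upward, continuing each column's own trend:
    the step is (row0 - row1) per column and channel, matching
    Rxy = R1y + (x - h + 1) * (R1y - R2y), clipped to [0, vmax]."""
    img = first_image.astype(np.int64)
    step = img[0] - img[1]                # shape (columns, channels)
    rows = [np.clip(img[0] + x * step, 0, vmax)[None]
            for x in range(rows_above, 0, -1)]
    return np.vstack(rows + [img])
```

Each added row continues its column's arithmetic sequence independently, so columns with different gradients (as in edge F4) are each extrapolated with their own step.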
  • for example, obtaining the image data of the seventh sub-region may include: averaging the pixel values of the two pixels adjacent to a pixel of the seventh sub-region in the row direction and the column direction, to obtain the pixel value of that pixel of the seventh sub-region.
  • the method of generating the image data of the seventh sub-region is described below by taking the seventh sub-region D71 as an example; the image data of the seventh sub-regions D72, D73, and D74 can be generated by a similar method, and will not be repeated here.
  • the seventh sub-region D71 includes pixels in x rows and y columns, where x ranges from h to h+a and y ranges from f to f+i; pixel xy denotes the pixel in the x-th row and y-th column.
  • the image data of the seventh sub-region D71 is generated, for example, by proceeding from the position of the pixel in the first row and first column of the first image 510 filled in the base region B of the second image 520 toward the position of the pixel in the (h+a)-th row and (f+i)-th column of the seventh sub-region D71, calculating the pixel value of each unknown pixel from the pixel values of its two adjacent pixels in the row direction and the column direction. For example, the pixel value of the pixel in the h-th row and f-th column of the seventh sub-region D71 is calculated using the pixel value of the pixel in the first row and f-th column of the fifth sub-region D51 and the pixel value of the pixel in the h-th row and first column of the sixth sub-region D61; the calculation may be averaging.
  • the method for obtaining the pixel value of pixel xy in the seventh sub-region D71 may be:
  • Rxy = (Rx(y−1) + R(x−1)y)/2;
  • Gxy = (Gx(y−1) + G(x−1)y)/2;
  • Bxy = (Bx(y−1) + B(x−1)y)/2.
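The corner rule above (each unknown pixel is the mean of its two neighbors toward the already-filled regions) can be sketched as a simple sweep. This illustration simplifies the geometry: a known seed row and seed column border the corner block and the sweep proceeds away from them; the function name, orientation, and single-channel grid are assumptions for the example.

```python
import numpy as np

def fill_corner(seed_row, seed_col):
    """Fill a corner block by averaging: grid[i][j] = (grid[i-1][j] +
    grid[i][j-1]) / 2, with row 0 and column 0 pre-filled from the
    adjacent fifth/sixth sub-regions (seed_row[0] == seed_col[0])."""
    h, w = len(seed_col), len(seed_row)
    grid = np.zeros((h, w))
    grid[0, :] = seed_row
    grid[:, 0] = seed_col
    for i in range(1, h):
        for j in range(1, w):
            grid[i, j] = (grid[i - 1, j] + grid[i, j - 1]) / 2.0
    return grid
```

Each pass of the inner loop only reads values computed earlier in the sweep, which mirrors the "deduce outward from the base region" order described above.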
  • another example of the method for generating the image data of the seventh sub-region D71 is to use, based on the image data of the fifth sub-region D51 adjacent to the seventh sub-region D71, a method similar to that used to generate the image data of the sixth sub-region D61 based on the image data of the first image 510; that is, the pixel values of the corresponding column of the seventh sub-region D71 are obtained according to the change trend in the column direction of the pixel values of a plurality of pixels located in one column of the edge of the fifth sub-region D51, thereby obtaining the image data of the seventh sub-region D71.
  • still another example is to use, based on the image data of the sixth sub-region D61 adjacent to the seventh sub-region D71, a method similar to that used to generate the image data of the fifth sub-region D51 based on the image data of the first image 510; that is, the pixel values of the corresponding row of the seventh sub-region D71 are obtained according to the change trend in the row direction of the pixel values of a plurality of pixels located in one row of the edge of the sixth sub-region D61.
  • the first image 510 includes an edge F5 of irregular color.
  • in this case, the pixel values of all the pixels in the first image 510 may be averaged to obtain the pixel value of each pixel in the extended area.
  • the pixel value of each pixel in the extended area E is:
  • Rxy = (R11 + R12 + ... + R1m + R21 + R22 + ... + R2m + ... + Rn1 + Rn2 + ... + Rnm)/(n×m);
  • Gxy = (G11 + G12 + ... + G1m + G21 + G22 + ... + G2m + ... + Gn1 + Gn2 + ... + Gnm)/(n×m);
  • Bxy = (B11 + B12 + ... + B1m + B21 + B22 + ... + B2m + ... + Bn1 + Bn2 + ... + Bnm)/(n×m).
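For the irregular-edge case, the three formulas above reduce to one per-channel mean over all n×m pixels of the first image; every extended-area pixel takes that value. A one-line NumPy sketch (function interface assumed):

```python
import numpy as np

def extended_area_fill(first_image):
    """Per-channel mean of all n*m pixels of the first image; every pixel
    of the extended area takes this value (the R/G/B formulas above)."""
    return first_image.reshape(-1, first_image.shape[-1]).mean(axis=0)

# A 1x2 RGB image: the fill value is the channel-wise mean of its two pixels.
fill = extended_area_fill(np.array([[[0, 10, 20], [2, 30, 40]]]))
```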
  • step S201 and step S203 may be performed simultaneously or not simultaneously; there is no fixed order between the two. In some embodiments, the image processing method includes step S202; in this case, step S202 may be performed before step S203.
  • FIG. 10 is a structural diagram of a display control apparatus 300 provided by an embodiment of the present disclosure.
  • the display control device 300 includes: a reading module 310 , an image processing device 320 and an output module 330 .
  • the reading module 310 is configured to read the image data of the boot screen in response to the power-on of the display control device 300 .
  • the reading module 310 may perform step S101 of the display control method provided by any of the above embodiments.
  • the image processing device 320 is configured to execute the image processing method provided in any of the above embodiments to obtain image data of the second image.
  • the first image in the image processing method may be the startup screen read by the reading module.
  • the output module 330 is configured to output the image data of the second image, so as to control the display panel to display according to the image data of the second image, that is, to display the second image with a larger resolution.
  • the output module 330 may execute steps S103 and/or S104 in the display control method described in any of the above embodiments.
  • Some embodiments of the present disclosure also provide an image processing apparatus.
  • the image processing apparatus may be used as the image processing apparatus 320 in the display control apparatus shown in FIG. 10 .
  • the image processing apparatus 320 may include: a first processing module 321 and a second processing module 322 .
  • the first processing module 321 is configured to use the image data of the first image as the image data of the base area. Exemplarily, the first processing module 321 may execute step S201 of the image processing method provided in any of the foregoing embodiments.
  • the second processing module 322 is configured to generate image data of the extended area based on the image data of the first image according to the expansion strategy, so as to obtain image data of the second image including the image data of the basic area and the image data of the extended area.
  • the second processing module 322 may execute S202 and/or S203 in the image processing method described in any of the foregoing embodiments.
  • FIG. 10 and FIG. 11 are only schematic.
  • the division of the above modules is only a logical function division.
  • Modules or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented.
  • Each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist physically alone, or two or more modules may be integrated into one module.
  • Each of the above modules can be implemented in the form of hardware, or can be implemented in the form of software functional units.
  • the above modules may be implemented by software function modules generated after at least one processor 101 in FIG. 3B reads the program code stored in the memory 102 .
  • the above modules may also be implemented by different hardware in the display device.
  • the above functional modules can also be implemented by a combination of software and hardware.
  • the reading module 310 and the image processing device 320 in FIG. 10 may be implemented by the processor 101 running program codes.
  • the output module 330 may be an output interface, such as a high-definition display protocol interface such as eDP.
  • the display control apparatus may be a system-on-chip, including a SoC board and an FPGA board.
  • the SoC board is configured to store and/or load the boot screen, including: a boot screen storage module 601, a storage controller 602, a sending module 603, and a processor 604.
  • the startup image storage module 601 is configured to store the startup image.
  • the startup screen storage module 601 may be, for example, a memory, and the memory may refer to the description of the memory 102 in FIG. 3B , for example.
  • the storage controller 602 is configured to, in response to the power-on of the display control device 300 , read out the startup image from the startup image storage module 601 and transmit it to the sending module 603 .
  • the storage controller 602 may be a DMA (Direct Memory Access, direct memory access) controller.
  • the sending module 603 is configured to transmit the boot screen to the FPGA chip.
  • the sending module 603 includes a sending interface, which may be an LVDS interface, and is configured to transmit the boot screen to the FPGA chip through the LVDS protocol.
  • the processor 604 is configured to control the storage controller 602 and the sending module 603 to implement their respective functions.
  • the processor 604 can refer to, for example, the description of the processor 101 in FIG. 3B .
  • the FPGA board is configured to recognize the type of the boot screen and/or generate image data of a higher resolution image (eg, the second image) based on image data of the boot screen (eg, the first image). It includes: a receiving module 605 , a storage module 606 , a pixel sampling module 607 , a type determination module 608 , an image expansion module 609 , a selector 610 , and a display output module 611 .
  • the pixel sampling module 607, the type determination module 608, the image expansion module 609, and the selector 610 may be included in the FPGA chip.
  • the receiving module 605 is configured to receive the image data of the boot screen sent by the SoC board. For example, the receiving module 605 includes a receiving interface, which may be an LVDS interface, configured to receive the image data of the boot screen sent by the SoC board through the LVDS protocol.
  • the storage module 606 is configured to buffer the received image data of the boot screen by frame to achieve synchronization with the post-level system.
  • the storage module 606 may be a DDR SDRAM (Double Data Rate SDRAM, double-rate synchronous dynamic random access memory).
  • the pixel sampling module 607 may be configured to perform the step of sampling the pixel values of a plurality of pixels in the edge of the first image in the image processing method provided in any of the above embodiments.
  • the type determination module 608 may be configured to perform the step of identifying the type of the first image in the image processing method provided in any of the above embodiments, for example, step S202 in FIG. 7 .
  • the image expansion module 609 can be configured to perform the step of generating the image data of the extended area according to the expansion strategy based on the image data of the first image in the image processing method provided by any of the above embodiments, for example, step S203 in FIG. 7.
  • the selector 610 is configured to select whether to output the data of the startup screen or the data of the normal working screen. Exemplarily, it may be configured to perform step S103 and/or step S104 in the display control method shown in FIG. 5 .
  • the initial state of the selector 610 may be set to display the boot screen; that is, in response to the power-on of the display control device, the display device is in the power-on initialization state and the selector 610 selects to display the boot screen, so that the image data of the second image is output while the display control device is in the power-on initialization state.
  • after the power-on initialization state ends, the SoC chip can transmit a signal to the selector 610, so that the selector 610 selects the working screen (i.e., the normal display screen shown after the boot screen).
  • the display output module 611 is configured to output the selected picture to the back-end display screen.
  • the display output module 611 may be an output interface, for example, may be at least one of an eDP interface and a V-by-One interface.
  • the FPGA board may further include a front-end processing system for processing the received image data of the working screen, and outputting the processed image data to the selector.
  • the processing may include, for example, hue adjustment, brightness adjustment, contrast adjustment, chromaticity calibration, and the like.
  • since the functions implemented by the above display device, display control apparatus, and image processing apparatus are similar to the steps in the above display control method and image processing method, their specific implementation may refer to the relevant descriptions of the corresponding steps in the above method embodiments, and will not be repeated here.
  • the display device, the display control apparatus, and the image processing apparatus provided by the embodiments of the present disclosure can all achieve the effect of causing the display device to display a higher-resolution image using the image data of a lower-resolution image.
  • the above-mentioned embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented using a software program, they may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes computer program instructions that, when executed on a computer (e.g., a display device), cause the computer to execute the image processing method provided in any of the above embodiments, or the display control method provided in any of the above embodiments.
  • the computer program instructions may be stored in or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer program instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means.
  • Embodiments of the present disclosure also provide a computer program.
  • the computer program When the computer program is executed on a computer (eg, a display device), the computer program causes the computer to execute the image processing method provided by any of the above embodiments, or the display control method provided by any of the above embodiments.
  • Embodiments of the present disclosure also provide a computer-readable storage medium.
  • the computer-readable storage medium stores computer program instructions, and when the computer program instructions are run on a computer (for example, a display device), causes the computer to execute the image processing method provided by any of the foregoing embodiments, or the display provided by any of the foregoing embodiments. Control Method.
  • the computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server, data center, etc. that includes one or more available media integrated.
  • the available media may be magnetic media (e.g., floppy disks, magnetic disks, tapes), optical media (e.g., DVD (digital video disc)), or semiconductor media (e.g., SSD (solid state drives)).


Abstract

An image processing method includes using the image data of a first image as the image data of a base area, where the first image has a first resolution; and generating the image data of an extended area based on the image data of the first image according to an expansion strategy, to obtain the image data of a second image, the image data of the second image including the image data of the base area and the image data of the extended area, where the second image has a second resolution and the first resolution is smaller than the second resolution.

Description

Image processing method and apparatus, display control method and apparatus, and display device

This application claims priority to Chinese Patent Application No. 202110349875.2, filed on March 31, 2021, the entire contents of which are incorporated herein by reference.

Technical Field

The present disclosure relates to the field of display technology, and in particular to an image processing method and apparatus, a display control method and apparatus, and a display device.

Background

When a display device (e.g., a monitor) is powered on, a certain preparation time is required before normal display is possible. During this boot preparation period, after the display device has been turned on but before it can be used normally, the display device may show a boot screen. In this way, the user knows that the display device has been turned on and is starting up, which relieves the anxiety and boredom of waiting. The boot screen may generally include information such as a company logo and a product model.
Summary

In a first aspect, an image processing method is provided, including: using the image data of a first image as the image data of a base area; and generating the image data of an extended area based on the image data of the first image according to an expansion strategy, to obtain the image data of a second image, the image data of the second image including the image data of the base area and the image data of the extended area. The first image has a first resolution, the second image has a second resolution, and the first resolution is smaller than the second resolution.
In some embodiments, generating the image data of the extended area based on the image data of the first image according to the expansion strategy includes: generating the image data of the extended area according to the expansion strategy based on the pixel value of at least one pixel in an edge of the first image; or generating the image data of the extended area according to the pixel values of the respective pixels in the first image.

In some embodiments, the first image includes a solid-color edge. Generating the image data of the extended area according to the expansion strategy based on the pixel value of at least one pixel in the edge of the first image includes: using the pixel value of one pixel in the edge of the first image as the pixel value of each pixel in the extended area, to obtain the image data of the extended area.

In some embodiments, the first image includes a non-solid-color edge. Generating the image data of the extended area according to the expansion strategy based on the pixel value of at least one pixel in the edge of the first image includes: generating the image data of the extended area according to the expansion strategy based on the pixel values of a plurality of pixels in the edge of the first image.
In some embodiments, the first image includes an edge whose color gradually changes in the column direction and is constant in the row direction. The extended area includes a first sub-area and a second sub-area other than the first sub-area, where the first sub-area is flush with the base area in the row direction. Generating the image data of the extended area according to the expansion strategy based on the pixel values of a plurality of pixels in the edge of the first image includes: generating the pixel value of each pixel in a corresponding row of pixels of the first sub-area according to the pixel value of at least one pixel located at the edge in each row of pixels of the first image; and obtaining, according to the change trend in the column direction of the pixel values of the plurality of pixels in the edge of the first image, the pixel values of multiple rows of pixels varying in the column direction in the second sub-area, where the pixel values within each row of pixels are the same and each pixel value is within the pixel value range.

In some embodiments, generating the pixel value of each pixel in the corresponding row of pixels of the first sub-area according to the pixel value of at least one pixel located at the edge in each row of pixels of the first image includes: using the pixel value of one pixel located at the edge in each row of pixels of the first image as the pixel value of each pixel in the corresponding row of pixels of the first sub-area.

In some embodiments, the pixel values of a column of pixels in the second sub-area form an arithmetic sequence.
In some embodiments, the first image includes an edge whose color gradually changes in the row direction and is constant in the column direction. The extended area of the second image includes a third sub-area and a fourth sub-area other than the third sub-area, the third sub-area being flush with the base area in the column direction. Generating the image data of the extended area of the second image according to the expansion strategy based on the pixel values of a plurality of pixels in the edge of the first image includes: generating the pixel value of each pixel in a corresponding column of pixels of the third sub-area according to the pixel value of at least one pixel located at the edge in each column of pixels of the first image; and obtaining, according to the change trend in the row direction of the pixel values of the plurality of pixels in the edge of the first image, the pixel values of multiple columns of pixels varying in the row direction in the fourth sub-area, where the pixel values within each column of pixels are the same and each pixel value is within the pixel value range.

In some embodiments, generating the pixel value of each pixel in the corresponding column of pixels of the third sub-area according to the pixel value of at least one pixel located at the edge in each column of pixels of the first image includes: using the pixel value of one pixel located at the edge in each column of pixels of the first image as the pixel value of each pixel in the corresponding column of pixels of the third sub-area.

In some embodiments, the pixel values of a row of pixels in the fourth sub-area form an arithmetic sequence.
In some embodiments, the first image includes an edge whose color gradually changes in both the column direction and the row direction. The extended area of the second image includes a fifth sub-area, a sixth sub-area, and a seventh sub-area other than the fifth and sixth sub-areas, where the fifth sub-area is flush with the base area of the second image in the row direction, and the sixth sub-area is flush with the base area of the second image in the column direction. Generating the image data of the extended area according to the expansion strategy based on the pixel values of a plurality of pixels in the edge of the first image includes:

obtaining the pixel values of a corresponding row of pixels in the fifth sub-area according to the change trend in the row direction of the pixel values of a plurality of pixels located in one row of the edge of the first image, each pixel value being within the pixel value range, to obtain the image data of the fifth sub-area; obtaining the pixel values of a corresponding column of pixels in the sixth sub-area according to the change trend in the column direction of the pixel values of a plurality of pixels located in one column of the edge of the first image, each pixel value being within the pixel value range, to obtain the image data of the sixth sub-area; and obtaining the image data of the seventh sub-area based on the image data of the fifth sub-area and/or the image data of the sixth sub-area.

In some embodiments, obtaining the image data of the seventh sub-area based on the image data of the fifth sub-area and/or the sixth sub-area includes: averaging the pixel values of the two pixels adjacent to one pixel of the seventh sub-area in the row direction and the column direction, to obtain the pixel value of that pixel of the seventh sub-area.

In some embodiments, obtaining the image data of the seventh sub-area based on the image data of the fifth sub-area and/or the sixth sub-area includes: obtaining the pixel values of a corresponding row of pixels in the seventh sub-area according to the change trend in the row direction of the pixel values of a plurality of pixels located in one row of the edge of the sixth sub-area adjacent to the seventh sub-area, each pixel value being within the pixel value range; or obtaining the pixel values of a corresponding column of pixels in the seventh sub-area according to the change trend in the column direction of the pixel values of a plurality of pixels located in one column of the edge of the fifth sub-area adjacent to the seventh sub-area, each pixel value being within the pixel value range.

In some embodiments, the first image includes an edge of irregular color. Generating the image data of the extended area according to the pixel values of the respective pixels in the first image includes: averaging the pixel values of all the pixels in the first image to obtain the pixel value of each pixel in the extended area.
In some embodiments, the image processing method further includes: identifying the type of the first image according to the pixel values of a plurality of pixels in the edge of the first image. Generating the image data of the extended area based on the image data of the first image according to the expansion strategy includes: generating the image data of the extended area based on the image data of the first image according to the expansion strategy corresponding to the type of the first image.

In some embodiments, the edge of the first image includes two first sub-edges parallel to the row direction and two second sub-edges parallel to the column direction. Identifying the type of the first image according to the pixel values of a plurality of pixels in the edge of the first image includes: determining the type of the first image according to the change trend in the row direction of the pixel values of a plurality of pixels in each first sub-edge and the change trend in the column direction of the pixel values of a plurality of pixels in each second sub-edge.

In some embodiments, the type of the first image is a first type, a second type, a third type, a fourth type, or a fifth type. The first type indicates that the first image includes a solid-color edge. The second type indicates that the first image includes an edge whose color gradually changes in the column direction and is constant in the row direction. The third type indicates that the first image includes an edge whose color gradually changes in the row direction and is constant in the column direction. The fourth type indicates that the first image includes an edge whose color gradually changes in both the column direction and the row direction. The fifth type indicates that the first image includes an edge of irregular color.

In some embodiments, determining the type of the first image according to the change trend in the row direction of the pixel values of the pixels in each first sub-edge and the change trend in the column direction of the pixel values of the pixels in each second sub-edge includes: determining a first judgment result for each first sub-edge, where the first judgment result includes "equal" if the pixel values of the respective pixels in each row of pixels of the first sub-edge are approximately equal, and "not equal" otherwise, and includes "gradient" if the pixel values of the respective pixels in each row of pixels of the first sub-edge change gradually, and "not gradient" otherwise; and determining a second judgment result for each second sub-edge in the same way for each column of pixels. If the first judgment result of each first sub-edge and the second judgment result of each second sub-edge all include "equal", the first image is of the first type. If the first judgment result of each first sub-edge includes "equal" and the second judgment result of each second sub-edge includes "gradient", the first image is of the second type. If the first judgment result of each first sub-edge includes "gradient" and the second judgment result of each second sub-edge includes "equal", the first image is of the third type. If the first judgment results and the second judgment results all include "gradient", the first image is of the fourth type. If at least one of the first judgment results and the second judgment results includes "not equal" and "not gradient", the first image is of the fifth type.
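The decision table above can be written compactly. The following sketch is an illustration only: the judgment labels "equal"/"gradient"/"neither" and the function interface are assumptions, and the embodiment allows a judgment result to contain several findings, which this simplification collapses to one label per sub-edge.

```python
def classify_edge_type(first_results, second_results):
    """Map the per-sub-edge judgments to the five image types.
    first_results: labels for the two sub-edges parallel to the row direction;
    second_results: labels for the two sub-edges parallel to the column direction."""
    if any(r not in ("equal", "gradient") for r in first_results + second_results):
        return 5                                   # irregular edge
    first_eq = all(r == "equal" for r in first_results)
    second_eq = all(r == "equal" for r in second_results)
    first_gr = all(r == "gradient" for r in first_results)
    second_gr = all(r == "gradient" for r in second_results)
    if first_eq and second_eq:
        return 1                                   # solid-color edge
    if first_eq and second_gr:
        return 2                                   # gradient in the column direction only
    if first_gr and second_eq:
        return 3                                   # gradient in the row direction only
    if first_gr and second_gr:
        return 4                                   # gradient in both directions
    return 5                                       # mixed results: treated as irregular here
```

The final fallback (mixed sub-edge results) is not spelled out in the embodiment; treating it as the irregular type is one conservative choice, since the average-fill strategy is valid for any edge.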
In a second aspect, a display control method applied to a display control apparatus is provided, including: reading the image data of a boot screen; executing the image processing method provided by any of the above embodiments to obtain the image data of the second image, where the first image in the image processing method is the boot screen; and outputting the image data of the second image, so as to control a display panel to display according to the image data of the second image.

In some embodiments, outputting the image data of the second image includes: outputting the image data of the second image in response to the display control apparatus being in a power-on initialization state. The display control method further includes: outputting a working screen in response to the end of the power-on initialization state of the display control apparatus.
In a third aspect, an image processing apparatus is provided, including a first processing module and a second processing module. The first processing module is configured to use the image data of a first image as the image data of a base area, where the first image has a first resolution. The second processing module is configured to generate the image data of an extended area based on the image data of the first image according to an expansion strategy, to obtain the image data of a second image including the image data of the base area and the image data of the extended area, where the second image has a second resolution and the first resolution is smaller than the second resolution.

In a fourth aspect, an image processing apparatus is provided, including a memory and a processor. The memory is configured to store computer program instructions, and the processor is configured to execute the computer program instructions, so that the image processing apparatus performs the image processing method provided by any of the above embodiments.

In a fifth aspect, a display control apparatus is provided, including a reading module, an image processing apparatus, and an output module. The reading module is configured to read the image data of a boot screen in response to the display control apparatus being powered on. The image processing apparatus is configured to perform the image processing method provided by any of the above embodiments to obtain the image data of the second image, where the first image in the image processing method is the boot screen. The output module is configured to output the image data of the second image, so as to control a display panel to display according to the image data of the second image.

In a sixth aspect, a display control apparatus is provided, including a memory and a processor. The memory is configured to store computer program instructions, and the processor is configured to execute the computer program instructions, so that the display control apparatus performs the display control method provided by any of the above embodiments.
In a seventh aspect, a chip system is provided, including at least one chip configured to perform the image processing method provided by any of the above embodiments, or the display control method provided by any of the above embodiments.

In an eighth aspect, a display device is provided, including the display control apparatus provided by any of the above embodiments, or the chip system provided by any of the above embodiments. The display device further includes a display panel configured to display images.

In a ninth aspect, a computer-readable storage medium is provided. The computer-readable storage medium stores computer program instructions that, when run on a computer (e.g., a display device), cause the computer to perform the image processing method provided by any of the above embodiments, or the display control method provided by any of the above embodiments.

In a tenth aspect, a computer program product is provided. The computer program product includes computer program instructions that, when executed on a computer (e.g., a display device), cause the computer to perform the image processing method provided by any of the above embodiments, or the display control method provided by any of the above embodiments.

In an eleventh aspect, a computer program is provided. When executed on a computer (e.g., a display device), the computer program causes the computer to perform the image processing method provided by any of the above embodiments, or the display control method provided by any of the above embodiments.
Brief Description of the Drawings

To describe the technical solutions in the present disclosure more clearly, the accompanying drawings needed in some embodiments of the present disclosure are briefly introduced below. Obviously, the drawings in the following description are only drawings of some embodiments of the present disclosure, and those of ordinary skill in the art can obtain other drawings based on these drawings. In addition, the drawings in the following description may be regarded as schematic diagrams, and do not limit the actual size of the products, the actual flow of the methods, the actual timing of the signals, and the like involved in the embodiments of the present disclosure.

FIG. 1 is a schematic diagram of a user performing a power-on operation on a monitor and the monitor displaying a boot screen;

FIG. 2 is a structural diagram of a display device according to some embodiments;

FIG. 3A is a structural diagram of a display control apparatus according to some embodiments;

FIG. 3B is a structural diagram of a display control apparatus according to some embodiments;

FIG. 4 is a schematic diagram of a first image and a second image according to some embodiments;

FIG. 5 is a flowchart of a display control method according to some embodiments;

FIG. 6A is a schematic diagram of a first image according to some embodiments;

FIG. 6B is a schematic diagram of a second image according to some embodiments;

FIG. 7 is a flowchart of an image processing method according to some embodiments;

FIG. 8 is a schematic diagram of a first image according to some embodiments;

FIG. 9A is a schematic diagram of a first image and a second image according to some embodiments;

FIG. 9B is a schematic diagram of a first image and a second image according to some embodiments;

FIG. 9C is a schematic diagram of a first image and a second image according to some embodiments;

FIG. 9D is a schematic diagram of a first image and a second image according to some embodiments;

FIG. 9E is a schematic diagram of a first image and a second image according to some embodiments;

FIG. 10 is a structural diagram of a display control apparatus according to some embodiments;

FIG. 11 is a structural diagram of an image processing apparatus according to some embodiments;

FIG. 12 is a structural diagram of a display control apparatus according to some embodiments.
Detailed Description

The technical solutions in some embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present disclosure, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments provided by the present disclosure fall within the protection scope of the present disclosure.

Throughout the specification and claims, unless the context requires otherwise, the term "comprise" and its other forms, such as the third-person singular "comprises" and the present participle "comprising", are to be construed in an open, inclusive sense, i.e., "including, but not limited to". In the description of the specification, the terms "one embodiment", "some embodiments", "exemplary embodiments", "example", "specific example", or "some examples" are intended to indicate that a particular feature, structure, material, or characteristic related to the embodiment or example is included in at least one embodiment or example of the present disclosure. The schematic representations of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.

Hereinafter, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined by "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present disclosure, unless otherwise stated, "a plurality of" means two or more.

In describing some embodiments, the expressions "coupled" and "connected" and their derivatives may be used. For example, the term "connected" may be used to indicate that two or more components are in direct physical or electrical contact with each other. As another example, the term "coupled" may be used to indicate that two or more components are in direct physical or electrical contact. However, the term "coupled" or "communicatively coupled" may also mean that two or more components are not in direct contact with each other but still cooperate or interact with each other. The embodiments disclosed herein are not necessarily limited to the contents herein.

"At least one of A, B and C" has the same meaning as "at least one of A, B or C", both including the following combinations of A, B and C: A only, B only, C only, a combination of A and B, a combination of A and C, a combination of B and C, and a combination of A, B and C.

"A and/or B" includes the following three combinations: A only, B only, and a combination of A and B.

As used herein, the term "if" is optionally construed to mean "when" or "upon" or "in response to determining" or "in response to detecting", depending on the context. Similarly, the phrase "if it is determined that ..." or "if [a stated condition or event] is detected" is optionally construed to mean "upon determining ..." or "in response to determining ..." or "upon detecting [the stated condition or event]" or "in response to detecting [the stated condition or event]", depending on the context.

The use of "adapted to" or "configured to" herein implies open and inclusive language that does not exclude devices adapted to or configured to perform additional tasks or steps.

In addition, the use of "based on" or "according to" implies openness and inclusiveness, in that a process, step, calculation, or other action "based on" or "according to" one or more stated conditions or values may, in practice, be based on additional conditions or values beyond those stated.

As used herein, "about", "substantially", or "approximately" includes the stated value as well as an average value within an acceptable range of deviation of the particular value, as determined by a person of ordinary skill in the art in view of the measurement in question and the errors associated with measurement of the particular quantity (i.e., the limitations of the measurement system).
显示装置(包括:显示器)的开机过程可以包括:首先,显示器接收开机指示。示例性地,显示器在关机状态下,用户对显示器进行开机操作,例如,参见图1,用户可以按压显示器的电源开关;又如,用户还可以通过开机手势、红外遥控器等对显示器进行开机操作。又示例性地,显示器在待机(或休眠)状态下,可以通过与显示器连接的主机向显示器发出开机指示。随后,显示器响应于开机指示,显示图1示出的显示器的开机画面,以便让用户知道显示器已经开启并正在启动,消除用户等待的焦急感和枯燥感。这种开机画面可以由显示器的制造商预先(例如,在产品出厂前)保存在显示器的存储器中,在需要显示开机画面时,再从显示器的存储器中调取该开机画面进行显示。之后,显示器可以显示主机中操作系统的启动画面。
对于分辨率为4k(即分辨率为4096×2160)、8k(即分辨率为8192×4320)等的高分辨率显示器,其显示的画面是具有相应高分辨率的图像。相应地,这种显示器的开机画面的图像分辨率也为相应的高分辨率。在相关技术中,这种较高分辨率开机画面存储在显示器的存储器中,这样这种较高分辨率开机画面占用的存储空间较大,不仅占用了大量的存储资源,而且在显示开机画面之前,加载开机画面时间也会较长。
为了解决这个问题,本公开实施例提供了一种显示装置,显示装置为具有图像显示功能的产品,例如可以是:显示器,电视,广告牌,数码相框,具有显示功能的激光打印机,电话,手机,个人数字助理(Personal Digital Assistant,PDA),数码相机,便携式摄录机,取景器,导航仪,车辆,大面积墙壁,家电,信息查询设备(如电子政务、银行、医院、电力等部门的业务查询设备),监视器等。
如图2所示,显示装置可以包括:显示模组200和与显示模组耦接的显示控制装置100。显示模组200被配置为显示图像(画面),显示控制装置100被配置为执行显示控制方法,向显示模组200输出图像数据,以控制显示模组200显示图像数据相应的图像。
示例性地,参见图2,显示模组200可以包括时序控制器(Timing Controller,简称为TCON)、数据驱动电路(即源驱动电路)、扫描驱动电路和显示面板(Display Panel,简称为DP,也可以称为显示屏)。
其中,显示面板可以是OLED(Organic Light Emitting Diode,有机发光二极管)面板、QLED(Quantum Dot Light Emitting Diodes,量子点发光二极管)面板、LCD(Liquid Crystal Display,液晶显示器)面板、微LED(包括:miniLED或microLED)面板等。
示例性地,显示面板可以包括多个亚像素。显示面板中包含的亚像素的个数以及分布决定了显示面板的分辨率,也即显示模组200或显示装置的分辨率。例如,显示面板包括M×N个物理像素,每个物理像素包括一个红色亚像素(R亚像素),一个绿色亚像素(G亚像素)和一个蓝色亚像素(B亚像素),此时,该显示面板的分辨率为M×N。又例如,显示面板包括(M×N)/2个R亚像素、M×N个G亚像素和(M×N)/2个B亚像素,这些亚像素构成的M×N个虚拟像素,能够显示分辨率为M×N的图像,其中,R亚像素和B亚像素可以被不同的虚拟像素共用,此时,该显示面板的分辨率也为M×N。分辨率一般以乘法形式表示。例如,显示面板的分辨率可以是1920×1080、4096×2160、或者8192×4320等,表示显示面板含有1920×1080、4096×2160、或者8192×4320个物理像素或虚拟像素。分辨率越高,像素的数目越多。
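示例性地,上述两种亚像素排布方式下亚像素总数的计算可以用如下Python代码示意(仅为说明目的的示意实现,函数名与参数均为自拟,并非本公开实施例的必要实现):

```python
def subpixel_count(m, n, shared_rb=False):
    """计算分辨率为 M×N 的显示面板所含亚像素总数(示意)。
    标准 RGB 排布:每个物理像素含 R、G、B 共 3 个亚像素;
    R/B 亚像素被不同虚拟像素共用的排布:
    共 (M×N)/2 + M×N + (M×N)/2 = 2×M×N 个亚像素。"""
    if shared_rb:
        return (m * n) // 2 + m * n + (m * n) // 2
    return 3 * m * n

print(subpixel_count(1920, 1080))        # 标准 RGB 排布
print(subpixel_count(1920, 1080, True))  # 共用 R/B 亚像素的排布
```

由此可见,在分辨率相同的情况下,共用R亚像素和B亚像素的排布方式所需的亚像素个数更少。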
TCON用于将接收的数据信号(例如显示控制装置100输出的图像数据)和控制信号转换成适合于数据驱动电路和扫描驱动电路的数据信号和控制信号,从而实现显示面板的图像显示。TCON的输入接口可以有TTL接口、LVDS接口、eDP接口和V-by-One接口等中的至少一者。相应地,显示控制装置100的输出接口可以是TTL接口、LVDS接口、eDP接口和V-by-One接口等中的至少一者。在一些实现方式中,TCON可以集成在显示控制装置100中。
数据驱动电路可以为源驱动芯片,例如驱动IC。数据驱动电路被配置为响应于TCON发送的数据信号(即数字信号)和控制信号,向显示面板中的各个亚像素提供驱动信号(也称为数据驱动信号,可以包括数字信号对应的电压或电流)。在一些实现方式中,数据驱动电路可以集成在显示控制装置100中。
扫描驱动电路可以为扫描驱动芯片,例如驱动IC,与显示面板绑定;还可以设置于显示面板中,此时可以称为GOA(Gate Driver on Array,设置在阵列基板上的扫描驱动电路)。扫描驱动电路被配置为响应于TCON发送的控制信号,向显示面板中的每行亚像素提供扫描信号。
在一些实施例中,显示控制装置100可以为芯片系统,可以包括至少一个芯片,被配置为执行显示控制方法。芯片可以是可编程逻辑器件,例如,可以是现场可编程门阵列(Field Programmable Gate Array,简称为FPGA)和复杂可编程逻辑器件(Complex Programmable Logic Device,简称为CPLD)等。芯片还可以是SoC(System-on-a-Chip,片上系统)芯片。
示例性地,参见图3A,显示控制装置100是芯片系统,包括SoC芯片和FPGA芯片,被配置为执行显示控制方法。例如,芯片系统可以包括:包含SoC芯片的SoC板卡和包含FPGA芯片的FPGA板卡。又如,芯片系统可以包括:包含SoC芯片和FPGA芯片的一块板卡。
又示例性地,参见图3A,显示控制装置100是芯片系统,包括FPGA芯片,被配置为执行显示控制方法。例如,芯片系统可以为包含FPGA芯片的FPGA板卡。
在另一些实施例中,参见图3B,显示控制装置100可以包括至少一个处理器101和至少一个存储器102。其中,至少一个存储器102中存储有计算机程序,至少一个处理器101被配置为运行至少一个存储器102中的计算机程序,以使得显示控制装置100执行显示控制方法。
存储器102可以包括高速随机存取存储器,还可以包括非易失存储器,例如磁盘存储器件、闪存器件等,还可以是只读存储器(read-only memory,ROM)或可存储静态信息和指令的其他类型的静态存储设备,随机存取存储器(random access memory,RAM)或者可存储信息和指令的其他类型的动态存储设备,也可以是一次可编程存储器(One Time Programable,OTP)、电可擦可编程只读存储器(electrically erasable programmable read-only memory,EEPROM)、磁盘存储介质、FLASH(闪存)或者其他磁存储设备、或者能够用于携带或存储具有指令或数据结构形式的程序代码并能够由计算机存取的任何其他介质,但不限于此。存储器102可以是独立存在,通过通信线路与处理器101相连接。存储器102也可以和处理器101集成在一起。
处理器101用于实现图像处理,其可以是一个或多个通用中央处理器(central processing unit,简称为CPU)、微处理器(Microcontroller Unit,简称为MCU)、逻辑器件(Logic)、特定应用集成电路(application-specific integrated circuit,简称为ASIC)、图形处理器(Graphics Processing Unit,简称为GPU)或者用于控制本公开一些实施例的程序执行的集成电路;其中,CPU可以是单核处理器(single-CPU),也可以是多核处理器(multi-CPU)。这里的一个处理器可以指一个或多个设备、电路或用于处理数据(例如计算机程序指令等)的处理核。
基于上述介绍的显示装置的结构,本公开实施例提供了一种显示控制方法。参见图4,本公开实施例提供的显示控制方法可以利用显示装置中存储的较低分辨率开机画面(例如第一图像510)的图像数据,使得显示装置显示较高分辨率开机画面(例如第二图像520)。这样,对于具有较高分辨率的显示装置,可以在显示装置中存储较低分辨率开机画面的图像数据,解决了较高分辨率开机画面的图像数据占用存储空间大、以及较高分辨率开机画面的图像数据加载时间长的问题。
图5是本公开实施例提供的显示控制方法的流程图。参见图4和图5,该显示控制方法包括:
S101、读取开机画面的图像数据。
示例性地,响应于显示控制装置上电,读取开机画面的图像数据。例如,显示装置处于关机状态时被触发开机,显示装置中的显示控制装置上电,便执行读取的步骤。
又示例性地,响应于接收到的读取指令,读取开机画面的图像数据。例如,读取指令可以由显示装置中的主机发给显示控制装置。
开机画面包括至少一帧图像。在开机画面包含一帧图像的情况下,该开机画面是静态的;在开机画面包含多帧图像的情况下,该开机画面是动态的。每帧图像可以包括显示内容和背景。开机画面的显示内容可以包括图案(例如logo)、文字(例如版权信息)等,一般集中位于开机画面的中心或其他位置。开机画面的背景可以是纯色的,也可以是非纯色的,例如可以是颜色渐变的或者是颜色分布不规则的。在一些实施例中,开机画面可以包括第一图像510,第一图像510具有第一分辨率,其包括显示内容“XXX”以及背景。示例性地,背景为纯黑色背景。
示例性地,图6A示出了第一图像510的示意图。参见图6A,第一图像510包含n行、m列像素,即m×n个像素。其中,第x行y列像素可以记为像素xy,x的取值范围为1~n,y的取值范围为1~m。例如,第一图像510可以包括像素11、12、13、……、1m、21、……、2m、……、n1、……、nm。
图像数据可以包括RGB图像数据,图像数据还可以包括YUV图像数据。其中,RGB图像数据可以包括至少一个像素的像素值,像素值可以包括该像素中各个亚像素的像素数据(例如灰阶数据)。示例性地,像素包括红色亚像素(R亚像素)、绿色亚像素(G亚像素)、以及蓝色亚像素(B亚像素),该像素的像素值可以包括R亚像素、G亚像素以 及B亚像素的灰阶数据,例如R=255,G=255,B=255。当两个像素的像素值相等时,这两个像素可以显示相同的颜色。像素值具有一定的范围,例如,当显示装置为8bit的显示装置,则其图像数据中像素值的取值范围为0~255,即R=0~255、G=0~255、B=0~255。在本公开实施例提供的显示控制方法中,求取或生成的每个像素值均在像素值的取值范围内。示例性地,对于某一像素值,如果求取或生成的结果超过了像素值的取值范围,则,当结果大于该范围时,像素值取该范围的较大边界值;当结果小于该范围时,像素值取该范围的较小边界值。例如,像素值的取值范围为0~255,当求取或生成的结果大于255,则像素值为255;当求取或生成的结果小于0,则像素值为0。
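示例性地,上述“超出取值范围则取相应边界值”的截断规则可以用如下Python代码示意(仅为单个颜色分量的最小示意,函数名为说明目的自拟):

```python
def clamp(value, lo=0, hi=255):
    """将求取或生成的像素分量值限制在取值范围内:
    大于较大边界值时取较大边界值,小于较小边界值时取较小边界值。"""
    if value > hi:
        return hi
    if value < lo:
        return lo
    return value

print(clamp(300))  # 超过上界,取 255
print(clamp(-5))   # 低于下界,取 0
```

后文各扩展策略中求取的每个像素值,均可以按照该规则截断到取值范围内。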
S102、执行图像处理方法,得到第二图像的图像数据。
参见图4和图5,在一些实施例中,通过步骤S102,可以基于第一图像510的图像数据,得到第二图像520的图像数据。其中,第二图像520具有第二分辨率,并且第二分辨率大于第一图像510的第一分辨率。
参见图4,第二图像520具有基础区域B和扩展区域E。显示控制装置为了控制显示面板显示第二图像520,需要第二图像520对应的图像数据,该图像数据可以包括第二图像520中各个像素的像素值。相应地,第二图像520的图像数据可以包括基础区域B的图像数据和扩展区域E的图像数据,其中基础区域B的图像数据可以包括第二图像520的基础区域B中各个像素的像素值,扩展区域E的图像数据可以包括第二图像520的扩展区域E中各个像素的像素值。
示例性地,图6B示出了第二图像520的示意图。参见图6B,第二图像520具有第二分辨率,记为p×q(p≥1,q≥1)。第二图像520包含p×q个像素。其中,第x行y列像素可以记为像素xy,x的取值范围为1~q,y的取值范围为1~p。例如,第二图像520可以包括像素11、12、13、……、1p、21、……、2p、31、……、3p、……、q1、……、qp。
示例性地,扩展区域E为环形区域,且扩展区域E包围基础区域B。基础区域B可以在第二图像520的中部。例如,基础区域B和第二图像520均为矩形,且基础区域B的中心点和第二图像520的中心点重合。
为了利用低分辨率开机画面的图像数据来完成在高分辨率显示装置上显示高分辨率开机画面,可以利用该低分辨率开机画面的图像数据,在高分辨率显示装置显示画面的一部分填充该低分辨率开机画面对应的图像,并且,在高分辨率显示装置显示画面的其他部分填充相应的背景, 使得高分辨率显示装置显示的画面为颜色变化均匀的整体画面。
基于此,在一些实施例中,参见图4和图5,在步骤S102中,可以使第二图像520中的基础区域B填充有低分辨率的第一图像510对应的图像,并且使第二图像520的扩展区域E填充有相应的纯黑色背景。这样,高分辨率显示装置显示的高分辨率开机画面既包含了低分辨率开机画面的所有信息(例如显示内容“XXX”),并且颜色变化均匀,可以给用户提供良好的视觉效果。
S103、输出第二图像的图像数据。
示例性地,响应于显示控制装置处于开机初始化状态,输出所述第二图像的图像数据。例如,在显示控制装置一上电或者接收到开机指示信号时,向TCON输出所述第二图像的图像数据,以控制显示面板根据第二图像的图像数据进行显示,即显示第二图像。
S104(可选的)、响应于显示控制装置的开机初始化状态结束,输出工作画面的图像数据。
例如,在显示控制装置接收到开机状态结束信号时,向TCON输出所述工作画面的图像数据,以控制显示面板显示工作画面。
本公开的一些实施例还提供了一种图像处理方法。该图像处理方法可以作为上述S102的一种具体实现方式,被配置为生成第二图像的图像数据。图7示出了本公开实施例提供的一种图像处理方法的步骤流程图。参见图7,该图像处理方法包括以下步骤:
S201:将第一图像的图像数据用作基础区域的图像数据。
在一些实施例中,参见图4,将第一图像510的图像数据用作第二图像520的基础区域B的图像数据。示例性地,参见图6A和图6B,第二图像520中位于基础区域B的部分包括m×n个像素,该m×n个像素的像素值与第一图像510的m×n个像素的像素值对应地相等,使得第二图像520中位于基础区域B的部分为第一图像510。
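示例性地,步骤S201中将第一图像的图像数据填充到第二图像中部的基础区域的过程可以用如下Python代码示意(假设像素值以(R,G,B)元组表示,基础区域居中,扩展区域暂以纯黑色填充;函数名与参数均为说明目的自拟):

```python
def fill_base_region(first_image, p, q, bg=(0, 0, 0)):
    """将 n 行 m 列的第一图像填充到 q 行 p 列的第二图像中部的基础区域。
    first_image:按行排列的像素值二维列表,每个像素为 (R, G, B) 元组。
    返回第二图像的像素值二维列表;扩展区域暂以 bg 填充(示意)。"""
    n = len(first_image)       # 第一图像的行数
    m = len(first_image[0])    # 第一图像的列数
    top = (q - n) // 2         # 基础区域左上角所在行(居中)
    left = (p - m) // 2        # 基础区域左上角所在列(居中)
    second = [[bg for _ in range(p)] for _ in range(q)]
    for x in range(n):
        for y in range(m):
            second[top + x][left + y] = first_image[x][y]
    return second
```

这样,第二图像中位于基础区域的各个像素的像素值与第一图像的各个像素的像素值对应地相等,扩展区域的像素值则由后续的扩展策略给出。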
S202(可选的):根据第一图像的边缘中多个像素的像素值,识别第一图像的类型。
基于上文的阐述,为了实现利用第一图像的图像数据使得显示装置显示第二图像,可以使第二图像的扩展区域填充与第一图像的背景相似的背景,这样,当第二图像的扩展区域与基础区域拼接在一起时,第二图像的颜色变化较为均匀。
可以根据第一图像的不同背景,将第一图像分为不同类型,相应地使用不同的扩展策略生成第二图像扩展区域的图像数据,使得第二图像的扩展区域是相应的背景,使得第二图像的颜色变化均匀。
由于第二图像的扩展区域直接与第一图像的边缘拼接,所以,可以根据第一图像的边缘将第一图像分为不同类型,进而选择相应的扩展策略,即可实现当第二图像的扩展区域与基础区域拼接在一起时,颜色变化较为均匀的目的。
在一些实施例中,参见图8,第一图像510的边缘F包括平行于行方向的两个第一子边缘,例如第一子边缘Fx 1(即O 1O 2X 3X 1部分)、第一子边缘Fx 2(即O 3O 4X 4X 2部分)。第一图像510的边缘F还包括平行于列方向的两个第二子边缘,例如第二子边缘Fy 1(即O 1O 3Y 3Y 1)、第二子边缘Fy 2(即O 2O 4Y 4Y 2部分)。可以根据每个第一子边缘中多个像素的像素值在行方向上的变化趋势、每个第二子边缘中多个像素的像素值在列方向上的变化趋势,确定第一图像的类型。
示例性地,根据每个第一子边缘中多个像素的像素值在行方向上的变化趋势,确定第一子边缘的判断结果。第一子边缘的判断结果记为第一判断结果。例如,第一子边缘Fx 1的判断结果记为第一判断结果1,第一子边缘Fx 2的判断结果记为第一判断结果2。
其中,若第一子边缘中每一行像素中各个像素的像素值大致相等,则第一判断结果包括相等,否则第一判断结果包括不相等。例如,第一子边缘Fx 1的某一行(例如第x行)像素的所有像素的像素值包括(R x1,R x2,R x3…R xn)(B x1,B x2,B x3…B xn)(G x1,G x2,G x3…G xn),如果存在R x1=R x2=R x3…=R xn,且B x1=B x2=B x3…=B xn,且G x1=G x2=G x3…=G xn说明第x行像素的像素值相等。若(R x1,R x2,R x3…R xn)中、(B x1,B x2,B x3…B xn)中、(G x1,G x2,G x3…G xn)中任两个之间相差小于设定值(例如1或2),即,R x1~R xn中的任两个相差小于设定值,G x1~G xn中的任两个相差小于设定值,B x1~B xn中的任两个相差小于设定值,则第x行像素的像素值大致相等。如果第一子边缘Fx 1中每一行像素的像素值都大致相等,则第一判断结果1包括相等,否则第一判断结果1包括不相等。
若第一子边缘中每一行像素中各个像素的像素值渐变,则第一判断结果包括渐变,否则第一判断结果包括不渐变。例如,第一子边缘Fx 1的第x行像素的所有像素值包括(R x1,R x2,R x3…R xn)(B x1,B x2,B x3…B xn)(G x1,G x2,G x3…G xn),其中,第x行每相邻两个像素的像素值之间的差值记为ΔR xy=R xy-R x(y-1)、ΔG xy=G xy-G x(y-1)、ΔB xy=B xy-B x(y-1)。例如,ΔR x2=R x2-R x1,ΔR x3=R x3-R x2,……,ΔR xn=R xn-R x(n-1);ΔG x2=G x2-G x1,ΔG x3=G x3-G x2,……,ΔG xn=G xn-G x(n-1);ΔB x2=B x2-B x1,ΔB x3=B x3-B x2,……,ΔB xn=B xn-B x(n-1)。
在一些实现方式中,第一判断结果包括不相等,并且,如果存在ΔR x2=ΔR x3=……=ΔR xn,且ΔG x2=ΔG x3=……=ΔG xn,且ΔB x2=ΔB x3=……=ΔB xn,则说明第x行像素的像素值渐变。又如,ΔR x2~ΔR xn中、ΔG x2~ΔG xn中、ΔB x2~ΔB xn中的任两个相差小于设定值(例如1或2),也说明第x行像素的像素值渐变。
在另一些实现方式中,第一判断结果包括不相等,并且,ΔR x2~ΔR xn、ΔG x2~ΔG xn、ΔB x2~ΔB xn这三组数据中,至少一组逐渐增大/减小,其余组大致相等,则说明第x行像素的像素值渐变。
例如,第一判断结果包括不相等,并且,如果ΔR x2~ΔR xn逐渐增大,且ΔG x2~ΔG xn逐渐增大,且ΔB x2~ΔB xn逐渐增大;或者,第一判断结果包括不相等,并且,如果ΔR x2~ΔR xn逐渐减小,且ΔG x2~ΔG xn逐渐减小,且ΔB x2~ΔB xn逐渐减小,则说明第x行像素的像素值渐变。
又例如,第一判断结果包括不相等,并且,如果ΔR x2~ΔR xn逐渐增大/减小,且ΔG x2~ΔG xn大致相等,且ΔB x2~ΔB xn大致相等,则说明第x行像素的像素值渐变。
如果第一子边缘Fx 1中每一行像素的像素值都渐变,则第一判断结果1包括渐变,否则第一判断结果1包括不渐变。
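示例性地,上述对一行像素“大致相等”与“渐变”的判断可以用如下Python代码示意(仅示意单一颜色分量、严格等差渐变的情形;设定值eps为假设参数,函数名为说明目的自拟):

```python
def row_approx_equal(values, eps=2):
    """若序列中任两个分量值相差小于设定值 eps,则认为该行像素值大致相等。"""
    return max(values) - min(values) < eps

def row_strict_gradient(values):
    """若相邻两个分量值的差值彼此相等且不为零(严格等差),
    则认为该行像素值渐变(仅为文中多种渐变判据之一的示意)。"""
    diffs = [values[i] - values[i - 1] for i in range(1, len(values))]
    return len(set(diffs)) == 1 and diffs[0] != 0
```

实际判断时,可以对R、G、B三个分量分别进行上述判断,再按文中的规则综合得到该行像素的判断结果。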
示例性地,根据每个第二子边缘中多个像素的像素值在列方向上的变化趋势,确定第二子边缘的判断结果。第二子边缘的判断结果记为第二判断结果。例如,第二子边缘Fy 1的判断结果记为第二判断结果1,第二子边缘Fy 2的判断结果记为第二判断结果2。
与第一子边缘的第一判断结果类似,若第二子边缘中每一列像素中各个像素的像素值大致相等,则第二判断结果包括相等,否则第二判断结果包括不相等。若第二子边缘中每一列像素中各个像素的像素值渐变,则第二判断结果包括渐变,否则第二判断结果包括不渐变。
在一些实施例中,如果每个第一子边缘的第一判断结果、每个第二子边缘的第二判断结果均包括相等,则第一图像为第一类型,第一类型可以被配置为表示第一图像包括纯色的边缘。示例性地,第一子边缘Fx 1的第一判断结果1、第一子边缘Fx 2的第一判断结果2、以及第二子边缘Fy 1的第二判断结果1、第二子边缘Fy 2的第二判断结果2均包括相等,则第一图像510为第一类型,第一图像510可以包括纯色的边缘。
如果每个第一子边缘的第一判断结果均包括相等,每个第二子边缘的第二判断结果均包括渐变,则第一图像为第二类型,第二类型可以被配置为表示第一图像包括颜色在列方向上渐变在行方向上不变的边缘。示例性地,第一子边缘Fx 1的第一判断结果1和第一子边缘Fx 2的第一判断结果2均包括相等,并且,第二子边缘Fy 1的第二判断结果1和第二子边缘Fy 2的第二判断结果2均包括渐变,则第一图像510为第二类型,第一图像510可以包括颜色在列方向上渐变在行方向上不变的边缘。
如果每个第一子边缘的第一判断结果均包括渐变,每个第二子边缘的第二判断结果均包括相等,则第一图像为第三类型,第三类型可以被配置为表示第一图像包括颜色在行方向上渐变在列方向不变的边缘。示例性地,第一子边缘Fx 1的第一判断结果1和第一子边缘Fx 2的第一判断结果2均包括渐变,并且,第二子边缘Fy 1的第二判断结果1和第二子边缘Fy 2的第二判断结果2均包括相等,则第一图像510为第三类型,第一图像510可以包括颜色在行方向上渐变在列方向上不变的边缘。
如果每个第一子边缘的第一判断结果、每个第二子边缘的第二判断结果均包括渐变,则第一图像为第四类型,第四类型可以被配置为表示第一图像包括颜色在列方向和行方向上均渐变的边缘。如果每个第一子边缘的第一判断结果、每个第二子边缘的第二判断结果中的至少一个包括不相等和不渐变,则第一图像为第五类型,第五类型可以被配置为表示第一图像包括颜色无规律的边缘。示例性地,第一子边缘Fx 1的第一判断结果1、第一子边缘Fx 2的第一判断结果2、以及第二子边缘Fy 1的第二判断结果1、第二子边缘Fy 2的第二判断结果2中的至少一个包括不相等和不渐变,则第一图像510为第五类型,第一图像510可以包括颜色无规律的边缘。
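示例性地,上述根据各第一判断结果和各第二判断结果确定第一图像类型的逻辑可以用如下Python代码示意(以集合表示判断结果,'equal'表示包括相等,'gradient'表示包括渐变;该表示方式为说明目的自拟):

```python
def image_type(row_results, col_results):
    """根据各第一子边缘的第一判断结果 row_results 和各第二子边缘的
    第二判断结果 col_results,确定第一图像的类型(1~5)。"""
    rows_equal = all('equal' in r for r in row_results)
    rows_grad = all('gradient' in r for r in row_results)
    cols_equal = all('equal' in c for c in col_results)
    cols_grad = all('gradient' in c for c in col_results)
    if rows_equal and cols_equal:
        return 1  # 纯色的边缘
    if rows_equal and cols_grad:
        return 2  # 颜色在列方向上渐变、在行方向上不变的边缘
    if rows_grad and cols_equal:
        return 3  # 颜色在行方向上渐变、在列方向上不变的边缘
    if rows_grad and cols_grad:
        return 4  # 颜色在列方向和行方向上均渐变的边缘
    return 5      # 颜色无规律的边缘
```

对于不同类型的第一图像,后文步骤S203将采用与该类型对应的扩展策略。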
S203:基于第一图像的图像数据,按照扩展策略生成扩展区域的图像数据。
经过步骤S202后,可以根据第一图像的边缘,将第一图像分为五种类型。对于不同类型的第一图像,可以使用不同的扩展策略,利用第一图像的图像数据生成第二图像扩展区域的图像数据。
在一些实施例中,参见图8,可以基于第一图像510的边缘F中至少一个像素的像素值,按照扩展策略生成扩展区域E的图像数据。
示例性地,参见图8,第一图像510具有边缘F。边缘F包括4个子边缘,例如子边缘Fx 1、Fx 2、Fy 1、Fy 2。每个子边缘的宽度均不小于1个像素,且各个子边缘的宽度可以相等也可以不相等。例如,各个子边缘的宽度相等,均为w个像素(为与第一图像的行数n相区分,此处将子边缘的宽度记为w)。此时,子边缘Fx 1以及子边缘Fx 2均包括w行像素,子边缘Fy 1以及子边缘Fy 2均包括w列像素。示例性地,1≤w≤10。例如,w=5,此时,子边缘Fx 1以及子边缘Fx 2均包括5行像素,子边缘Fy 1以及子边缘Fy 2均包括5列像素。又示例性地,w=10,此时,子边缘Fx 1以及子边缘Fx 2均包括10行像素,子边缘Fy 1以及子边缘Fy 2均包括10列像素。
在一些实施例中,参见图9A,第一图像510包括纯色的边缘F1。其中,纯色的边缘F1是指边缘F1的颜色是单一颜色。此时,第一图像510的边缘F1中所有像素可以具有相同的像素值。
此时,可以将第一图像510的边缘F1中一个像素的像素值用作扩展区域E中每个像素的像素值。
由于第一图像510的边缘F1是纯色的,边缘F1中的各个像素的像素值均相等,所以,将该纯色边缘F1中的任意一个像素的像素值用作扩展区域E中每个像素的像素值,可以实现扩展区域E的颜色与第一图像510的边缘F1的颜色相同。又因为,经过步骤S201,第二图像520的基础区域B可以填充第一图像510,第二图像520的基础区域B也具有纯色边缘。即此时,扩展区域E的颜色与基础区域B边缘的颜色相同。这样,当扩展区域E与基础区域B连接在一起组成第二图像520时,第二图像520具有纯色的背景,颜色变化均匀。示例性地,参见图9A,将第一图像510的边缘F1中的11像素的像素值用作扩展区域E中每个像素的像素值。
在一些实施例中,第一图像包括非纯色的边缘。其中,非纯色的边缘是指边缘具有多种颜色。此时,第一图像的边缘中各个像素可以具有不同的像素值。示例性地,非纯色的边缘可以包括颜色在行方向上和/或列方向上渐变的边缘。其中,颜色渐变可以指颜色沿某一方向逐渐变化,例如,颜色沿某一方向逐渐变深或变浅。该变化可以是均匀变化,也可以是非均匀变化。当图像的颜色渐变时,图像中各个像素显示的颜色是渐变的,相应地,图像中各个像素的像素值也可以是逐渐变化的。
此时,可以基于第一图像的边缘中多个(例如z个,z≥2)像素的像素值,按照扩展策略生成扩展区域的图像数据。
在本实施例的一种实现方式中,参见图9B,第一图像510包括颜色在列方向上渐变在行方向上不变的边缘F2。例如,边缘F2的颜色在列方向上逐渐变深,相应地,边缘F2中各个像素的像素值在列方向上可以呈逐渐变大的趋势。边缘F2中各个像素的像素值在列方向上的变化可以是均匀变化,例如这些像素值呈等差数列,也可以是不均匀变化。并且,边缘F2的颜色在行方向上不变,相应地,边缘F2中各个像素的像素值 在行方向上大致相等。
图9B示出的第二图像520的扩展区域E包括第一子区域D11(即V 1V 2S 8S 5部分)、第一子区域D12(即V 3V 4S 6S 7部分),还包括除了第一子区域以外的第二子区域D21(即S 4S 3V 3V 2部分)和第二子区域D22(即S 1S 2V 4V 1部分)。其中,第一子区域D11和第一子区域D12均与基础区域B在行方向上齐平,此时,第一子区域D11和第一子区域D12中各行像素均可以与基础区域B的相应行像素齐平。
此时,可以根据第一图像510的每行像素中位于边缘F2的至少一个像素的像素值,生成第一子区域的相应行像素中每个像素的像素值。按照第一图像510的边缘F2中多个像素的像素值在列方向上的变化趋势,求取第二子区域中在列方向上变化的多行像素的像素值,其中每行像素的像素值相同,每个像素值在像素值取值范围内。
下面,将分别对于第一子区域和第二子区域的图像数据生成方法进行说明。
对于第一子区域,参见图9B,以第一子区域D11为例加以说明,第一子区域D12的图像数据可以使用与生成第一子区域D11的图像数据的方法类似的方法来生成,在此不再赘述。对于第一子区域D11的每一行像素,可以根据第一图像510的相应行像素中位于边缘F2的至少一个像素的像素值生成第一子区域D11中该行像素中每个像素的像素值。例如,对于第一子区域D11的第一行像素,基础区域B填充的第一图像510的相应行,即第一图像510的第一行像素中位于边缘F2的像素可以是第一图像510的第一行像素11、像素12、……、像素1m。可以根据这些像素中的至少一个像素的像素值,生成第一子区域D11的第一行像素中每个像素的像素值。
示例性地,将第一图像510的每行像素中位于边缘F2的一个像素的像素值用作第一子区域D11的相应行像素中的每个像素的像素值。例如,对于第一子区域D11的第一行像素,可以将基础区域B中填充的第一图像510的相应行,即第一图像510的第一行像素中位于边缘F2的一个像素(例如像素11)的像素值用作第一子区域D11的第一行像素中每个像素的像素值。由于第一图像510的边缘F2的颜色在行方向上不变,所以第一图像510的边缘F2的每一行中各个像素的像素值大致相等。此时,将第一图像510的每行像素中位于边缘F2的任一个像素的像素值用作第一子区域D11的相应行 像素中每个像素的像素值,可以使得第一子区域D11中每行像素与第一图像510的边缘F2中相应行像素显示大致相同的颜色,使得在第二图像520中,从基础区域B填充的第一图像510到第一子区域D11填充的相应背景的颜色变化较为均匀。
对于第二子区域,参见图9B,以第二子区域D21为例加以说明。第二子区域D21包括i+1行像素,i≥0,这些行像素可以记为第f+x行像素,x的取值范围为0~i,例如图9B中所示的f行、f+1行、……、f+i行像素。由于第一图像510的边缘F2的颜色在列方向上渐变,所以第一图像510的边缘F2中多个像素的像素值在列方向上具有某一变化趋势。例如,第一图像510的边缘F2中各个像素的像素值在列方向上逐渐增大或减小。可以根据边缘F2中多个像素的像素值在列方向上的变化趋势生成第二子区域D21的图像数据,使得第二子区域D21中各个像素的像素值在列方向上的变化趋势与第一图像510边缘F2中多个像素的像素值在列方向上的变化趋势相同。例如,可以根据像素11的像素值(包括R 11、G 11、B 11)和像素21的像素值(包括R 21、G 21、B 21)的变化趋势,基于第一图像510的第1行像素中任一像素的像素值,例如像素11的像素值,求取第二子区域D21中列方向上变化的多行像素的像素值。示例性地,所求取的第二子区域D21的图像数据中,每一行像素中各个像素的像素值可以相等。示例性地,所求取的第二子区域D21的图像数据中,一列像素的像素值可以构成等差数列,使得第二子区域D21的颜色在列方向上的变化较为均匀,可以给用户提供良好的观感。例如,所求取的第二子区域D21的图像数据中,一列像素中各个像素的像素值均不相同,且各个像素值构成等差数列,例如,第y列中,从第f行~第f+i行,R fy~R (f+i)y=10,15,20,……;G fy~G (f+i)y=15,20,25,……;B fy~B (f+i)y=3,6,9,……;又例如,所求取的第二子区域D21的图像数据中,一列像素中相邻多个像素的像素值之间可以相等,并且一列所有像素的像素值中不重复的取值可以构成等差数列,例如,第y列中,从第f行~第f+i行,R fy~R (f+i)y=10,10,20,20,30,30,……;G fy~G (f+i)y=15,15,20,20,25,25,……;B fy~B (f+i)y=3,3,6,6,9,9,……。示例性地,第二子区域D21的图像数据的求取方法为,第二子区域D21中的第f+x行像素的像素值为:
R (f+x)=R 11+x×(R 11-R 21);
G (f+x)=G 11+x×(G 11-G 21);
B (f+x)=B 11+x×(B 11-B 21)。
类似地,第二子区域D22可以包括k+1行像素,k≥0,这些行像素可以记为第g+x行像素,x的取值范围为0~k,例如图9B中所示的g行、g+1行、……、g+k行像素。第二子区域D22的图像数据的求取方法与第二子区域D21的类似。例如,根据第一图像510的像素11的像素值(包括R 11、G 11、B 11)和像素21的像素值(包括R 21、G 21、B 21)的变化趋势,基于第一图像510的第n行像素中任一像素的像素值,例如像素n1的像素值(包括R n1、G n1、B n1),求取第二子区域D22中列方向上变化的多行像素的像素值。求取的方法例如为,第二子区域D22中的第g+x行像素的像素值为:
R (g+x)=R n1-x×(R 11-R 21);
G (g+x)=G n1-x×(G 11-G 21);
B (g+x)=B n1-x×(B 11-B 21)。
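示例性地,上述按列方向变化趋势向外推演多行像素值的计算(对应D21、D22的公式)可以用如下Python代码示意(仅为单一颜色分量的示意,含取值范围截断;函数名与参数均为说明目的自拟):

```python
def extend_rows(base_value, step, count, lo=0, hi=255):
    """按像素值在列方向上的变化趋势,从 base_value 以 step 为差值
    向外推演 count 行的某一分量值,并将每个值截断到取值范围内。
    对应文中 D21:base_value 取 R 11,step 取 R 11-R 21;
    对应文中 D22:base_value 取 R n1,step 取 -(R 11-R 21)。"""
    values = []
    for x in range(count):
        v = base_value + x * step
        values.append(max(lo, min(hi, v)))
    return values
```

由于推演出的值会随行数增大而持续变化,截断到取值范围可以避免像素值越界。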
在本实施例的一种实现方式中,参见图9C,第一图像510包括颜色在行方向上渐变在列方向上不变的边缘F3。例如,边缘F3的颜色在行方向上逐渐变深,相应地,边缘F3中各个像素的像素值在行方向上可以呈逐渐变大的趋势。边缘F3中各个像素的像素值在行方向上的变化可以是均匀变化,例如这些像素值呈等差数列,也可以是不均匀变化。
图9C示出的第二图像520的扩展区域E包括第三子区域和第三子区域以外的第四子区域,其中第三子区域和基础区域B在列方向上齐平。例如,第二图像520包括第三子区域D31(即H 4H 3S 7S 8部分)、第三子区域D32(即H 1H 2S 6S 5部分)、以及第四子区域D41(即S 4S 1H 1H 4部分)、第四子区域D42(即S 3S 2H 2H 3部分)。其中,第三子区域D31和第三子区域D32均与基础区域B在列方向上齐平,此时,第三子区域D31和第三子区域D32中各列像素均可以与基础区域B的相应列像素齐平。
此时,可以根据第一图像510的每列像素中位于边缘F3的至少一个像素的像素值,生成第三子区域的相应列像素中每个像素的像素值。按照第一图像510的边缘F3中多个像素的像素值在行方向上的变化趋势,求取第四子区域中行方向上变化的多列像素的像素值,其中每列像素的像素值相同,每个像素值在像素值取值范围内。
下面,将分别对于第三子区域和第四子区域的图像数据生成方法进行说明。
对于第三子区域,示例性地,参见图9C,以第三子区域D31为例加以说明,第三子区域D32的图像数据可以使用与生成第三子区域D31的图像数据的方法类似的方法来生成,在此不再赘述。对于第三子区域D31的每一列像素,可以根据第一图像510的相应列像素中位于边缘F3的至少一个像素的像素值生成第三子区域D31中该列像素中每个像素的像素值。对于第三子区域D31的第一列像素,基础区域B填充的第一图像510的相应列,即第一图像510的第一列像素中位于边缘F3的像素可以是第一图像510的第一列像素,包括像素11、像素21、……、像素n1。可以根据这些像素中的至少一个像素的像素值,生成第三子区域D31的第一列像素中每个像素的像素值。
例如,将第一图像510的每列像素中位于边缘F3的一个像素的像素值用作第三子区域D31的相应列像素中的每个像素的像素值。例如,对于第三子区域D31的第一列像素,可以将基础区域B中填充的第一图像510的相应列,即第一图像510的第一列像素中位于边缘F3的一个像素(例如像素11)的像素值用作第三子区域D31的第一列像素中每个像素的像素值。由于第一图像510的边缘F3的颜色在列方向上不变,所以第一图像510的边缘F3的每一列中各个像素的像素值大致相等。此时,将第一图像510的每列像素中位于边缘F3的任一个像素的像素值用作第三子区域D31的相应列像素中每个像素的像素值,可以使得第三子区域D31中每列像素与第一图像510的边缘F3中相应列像素显示大致相同的颜色,使得从第一图像510到第二图像520的第三子区域D31的颜色变化较为均匀。
对于第四子区域,参见图9C,第四子区域D41包括i+1列像素,i≥0,这些列像素可以记为第f+x列像素,x的取值范围为0~i,例如图9C中所示的f列、f+1列、……、f+i列像素。由于第一图像510边缘F3的颜色在行方向上渐变,所以第一图像510的边缘F3中多个像素的像素值在行方向上具有某一变化趋势。例如,第一图像510的边缘F3中各个像素的像素值在行方向上逐渐增大或减小。可以根据边缘F3中多个像素的像素值在行方向上的变化趋势生成第四子区域D41的图像数据,使得第四子区域D41中各个像素的像素值在行方向上的变化趋势与第一图像510边缘F3中多个像素的像素值在行方向上的变化趋势相同。例如,可以根据像素11的像素值(包括R 11、G 11、B 11)和像素12的像素值(包括R 12、G 12、B 12)在行方向上的变化趋势,基于第一图像510的第1列像素中任一像素的像素值,例如像素11的像素值,求取第四子区域D41中行方向上变化的多列像素的像素值。示例性地,所求取的第四子区域D41的图像数据中,每一列像素中各个像素的像素值可以相等。示例性地,所求取的第四子区域D41的图像数据中,一行像素的像素值可以构成等差数列,使得第四子区域D41的颜色在行方向上的变化较为均匀,可以给用户提供良好的观感。例如,所求取的第四子区域D41的图像数据中,一行像素中各个像素的像素值不相同,且各个像素值构成等差数列;又例如,所求取的第四子区域D41的图像数据中,一行像素中相邻多个像素的像素值之间可以相等,并且一行所有像素的像素值中不重复的取值可以构成等差数列。示例性地,第四子区域D41的图像数据的求取方法为,第四子区域D41中的第f+x列像素的像素值为:
R (f+x)=R 11+x×(R 11-R 12);
G (f+x)=G 11+x×(G 11-G 12);
B (f+x)=B 11+x×(B 11-B 12)。
类似地,第四子区域D42可以包括k+1列像素,k≥0,这些列像素可以记为第g+x列像素,x的取值范围为0~k,例如图9C中所示的g列、g+1列、……、g+k列像素。第四子区域D42的图像数据的求取方法与第四子区域D41的类似。例如,根据第一图像510的像素11的像素值(包括R 11、G 11、B 11)和像素12的像素值(包括R 12、G 12、B 12)的变化趋势,基于第一图像510的第m列像素中任一像素的像素值,例如像素1m的像素值(包括R 1m、G 1m、B 1m)求取第四子区域D42中行方向上变化的多列像素的像素值。求取的方法例如为,第四子区域D42中的第g+x列像素的像素值为:
R (g+x)=R 1m-x×(R 11-R 12);
G (g+x)=G 1m-x×(G 11-G 12);
B (g+x)=B 1m-x×(B 11-B 12)。
在本实施例的一种实现方式中,参见图9D,第一图像510包括颜色在列方向和行方向上均渐变的边缘F4。示例性地,边缘F4的颜色在行方向上和/或列方向上逐渐变深,相应地,边缘F4中各个像素的像素值在行方向上和/或列方向上可以呈逐渐变大的趋势。此外,边缘F4中各个像素的像素值在行方向上和/或列方向上的变化可以是均匀变化,例如这些像素值呈等差数列,也可以是不均匀变化。
图9D示出的第二图像520的扩展区域E包括第五子区域、第六子区域、以及第五子区域和第六子区域以外的第七子区域,其中,第五子区域和基础区域B在行方向上齐平,第六子区域与基础区域B列方向上齐平。例如,第二图像520包括第五子区域D51、第五子区域D52、第六子区域D61、第六子区域D62、以及第七子区域D71、第七子区域D72、第七子区域D73、第七子区域D74。其中,第五子区域D51和第五子区 域D52均与基础区域B在行方向上齐平,此时,第五子区域D51和第五子区域D52中各行像素均可以与基础区域B的相应行像素齐平;第六子区域D61和第六子区域D62均与基础区域B在列方向上齐平,此时,第六子区域D61和第六子区域D62中各列像素均可以与基础区域B的相应列像素齐平。
此时,可以按照第一图像510的边缘F4中位于一行的多个像素的像素值在行方向上的变化趋势,求取第五子区域中相应行像素的像素值,每个像素值在像素值取值范围内,得到第五子区域的图像数据。
按照第一图像510的边缘F4中位于一列的多个像素的像素值在列方向上的变化趋势,求取第六子区域中相应列像素的像素值,每个像素值在像素值取值范围内,得到第六子区域的图像数据。
基于第五子区域的图像数据和/或第六子区域的图像数据,求取第七子区域的图像数据。
下面,将分别对于第五子区域、第六子区域、第七子区域的图像数据生成方法进行说明。
对于第五子区域,参见图9D,第五子区域D51包括x行y列像素,x的取值范围为1~n,y的取值范围为f~f+i,第五子区域D51可以包括像素xy,表示第x行y列像素。由于第一图像510边缘F4的颜色在行方向上和列方向上均渐变,所以第一图像510的边缘F4中位于一行的多个像素的像素值在行方向上具有某一变化趋势。可以根据该变化趋势求取第五子区域D51中相应行像素的像素值,使得第五子区域D51的相应行像素中各个像素的像素值在行方向上的变化趋势与第一图像510边缘F4的该行像素的像素值在行方向上的变化趋势相同。例如,对于第五子区域D51的第x行像素,可以根据第一图像510中的像素x1的像素值(包括R x1、G x1、B x1)和像素x2的像素值(包括R x2、G x2、B x2)在行方向上的变化趋势,求取第五子区域D51中第x行像素的像素值。根据上述方法求取第五子区域D51的1~n行像素的像素值,便得到第五子区域D51的图像数据。示例性地,所求取的第五子区域D51的图像数据中,一行像素的像素值可以构成等差数列,使得第五子区域D51的颜色在行方向上的变化较为均匀,可以给用户提供良好的观感。示例性地,第五子区域D51的图像数据的求取方法为,第五子区域D51中的第x行第y列像素的像素值为:
R xy=R x1+(y-f+1)×(R x1-R x2);
G xy=G x1+(y-f+1)×(G x1-G x2);
B xy=B x1+(y-f+1)×(B x1-B x2)。
类似地,第五子区域D52包括x行y列像素,x的取值范围为1~n,y的取值范围为g~g+k,第五子区域D52可以包括像素xy,表示第x行y列像素。对于第五子区域D52的第x行像素,可以根据第一图像510中第x行像素中像素x(m-1)的像素值(包括R x(m-1)、G x(m-1)、B x(m-1))和像素xm的像素值(包括R xm、G xm、B xm)在行方向上的变化趋势,求取第五子区域D52中第x行像素的像素值。根据上述方法求取第五子区域D52的1~n行像素的像素值,便得到第五子区域D52的图像数据。示例性地,第五子区域D52的图像数据的求取方法为,第五子区域D52中的第x行第y列像素的像素值为:
R xy=R xm+(y-g+1)×(R xm-R x(m-1));
G xy=G xm+(y-g+1)×(G xm-G x(m-1));
B xy=B xm+(y-g+1)×(B xm-B x(m-1))。
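示例性地,第五子区域中一行像素按上述公式的求取过程可以用如下Python代码示意(仅为单一颜色分量的示意,含取值范围截断;函数名与参数均为说明目的自拟):

```python
def fifth_region_row(r_x1, r_x2, f, i, lo=0, hi=255):
    """按文中公式 R xy=R x1+(y-f+1)×(R x1-R x2),求第五子区域 D51
    第 x 行中第 f~f+i 列各像素的某一分量值(单分量示意)。
    r_x1、r_x2:第一图像第 x 行第 1、2 列像素的该分量值。"""
    return {y: max(lo, min(hi, r_x1 + (y - f + 1) * (r_x1 - r_x2)))
            for y in range(f, f + i + 1)}
```

对第五子区域D51的每一行分别执行上述求取,即可得到该子区域的图像数据;第五子区域D52的求取方式与之类似,仅基准像素与差值方向不同。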
对于第六子区域,参见图9D,第六子区域D61包括x行y列像素,x的取值范围为h~h+a,y的取值范围为1~m,第六子区域D61可以包括像素xy,表示第x行y列像素。由于第一图像510边缘F4的颜色在行方向上和列方向上均渐变,所以第一图像510的边缘F4中位于一列的多个像素的像素值在列方向上具有某一变化趋势。可以根据该变化趋势求取第六子区域D61中相应列像素的像素值,使得第六子区域D61的相应列像素中各个像素的像素值在列方向上的变化趋势与第一图像510边缘F4的该列像素的像素值在列方向上的变化趋势相同。例如,对于第六子区域D61的第y列像素,可以根据第一图像510的边缘F4中像素1y的像素值(包括R 1y、G 1y、B 1y)和像素2y的像素值(包括R 2y、G 2y、B 2y)在列方向上的变化趋势,求取第六子区域D61中第y列像素的像素值。根据上述方法求取第六子区域D61的1~m列像素的像素值,便得到第六子区域D61的图像数据。示例性地,所求取的第六子区域D61的图像数据中,一列像素的像素值可以构成等差数列,使得第六子区域D61的颜色在列方向上的变化较为均匀,可以给用户提供良好的观感。示例性地,第六子区域D61的图像数据的求取方法为,第六子区域D61中的第x行第y列像素的像素值为:
R xy=R 1y+(x-h+1)×(R 1y-R 2y);
G xy=G 1y+(x-h+1)×(G 1y-G 2y);
B xy=B 1y+(x-h+1)×(B 1y-B 2y)。
类似地,第六子区域D62包括x行y列像素,x的取值范围为j~j+z,y 的取值范围为1~m,第六子区域D62可以包括像素xy,表示第x行y列像素。对于第六子区域D62的第y列像素,可以根据第一图像510的边缘F4中第y列像素中像素(n-1)y的像素值(包括R (n-1)y、G (n-1)y、B (n-1)y)和像素ny的像素值(包括R ny、G ny、B ny)在列方向上的变化趋势,求取第六子区域D62中第y列像素的像素值。根据上述方法求取第六子区域D62的1~m列像素的像素值,便得到第六子区域D62的图像数据。示例性地,第六子区域D62的图像数据的求取方法为,第六子区域D62中的第x行第y列像素的像素值为:
R xy=R ny+(x-j+1)×(R ny-R (n-1)y);
G xy=G ny+(x-j+1)×(G ny-G (n-1)y);
B xy=B ny+(x-j+1)×(B ny-B (n-1)y)。
对于第七子区域,示例性地,基于第五子区域的图像数据和/或第六子区域的图像数据,求取第七子区域的图像数据可以包括:
将与第七子区域中的一个像素在行方向和列方向上相邻的两个像素的像素值求平均,得到第七子区域中的该像素的像素值。
下面将以第七子区域D71为例说明第七子区域的图像数据生成方法,第七子区域D72、D73、以及D74的图像数据的生成方法与生成第七子区域D71的图像数据的方法类似,在此不再赘述。
参见图9D,第七子区域D71包括x行y列像素,x的取值范围为h~h+a,y的取值范围为f~f+i,第七子区域D71可以包括像素xy,表示第x行第y列像素。第七子区域D71的图像数据的生成方法例如为:可以从第二图像520中基础区域B填充的第一图像510的第一行第一列像素的位置向第七子区域D71的h+a行f+i列像素的位置推演,由行方向和列方向上相邻的两个像素的像素值计算出未知像素的像素值。例如,使用第五子区域D51的第1行第f列像素的像素值、以及第六子区域D61的第h行第1列像素的像素值计算出第七子区域D71的第h行f列像素的像素值。计算方法可以是求平均值。例如,第七子区域D71中像素xy的像素值的求取方法可以为:
R xy=(R x(y-1)+R (x-1)y)/2;
G xy=(G x(y-1)+G (x-1)y)/2;
B xy=(B x(y-1)+B (x-1)y)/2。
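示例性地,上述由行方向和列方向上相邻的两个像素求平均、逐像素推演第七子区域像素值的过程可以用如下Python代码示意(仅为单一颜色分量的示意,假设角部区域外侧的一行和一列像素值已由相邻的第五、第六子区域求得;边界像素的选取方式仅为示意,并采用整数除法):

```python
def fill_corner(top_row, left_col, lo=0, hi=255):
    """由已知的边界行 top_row 与边界列 left_col,向角部第七子区域推演:
    每个未知像素取其行方向与列方向相邻两像素分量值的平均值,
    对应文中 R xy=(R x(y-1)+R (x-1)y)/2(单分量示意,含取值范围截断)。"""
    rows, cols = len(left_col), len(top_row)
    grid = [[0] * cols for _ in range(rows)]
    for x in range(rows):
        for y in range(cols):
            left = grid[x][y - 1] if y > 0 else left_col[x]
            top = grid[x - 1][y] if x > 0 else top_row[y]
            grid[x][y] = max(lo, min(hi, (left + top) // 2))
    return grid
```

这样,角部区域中每个像素的分量值都由其相邻的已知像素平均得到,使角部的颜色过渡较为平缓。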
第七子区域D71的图像数据的生成方法又例如为,基于与第七子区域D71相邻的第五子区域D51的图像数据,使用与基于第一图像510的图像数据生 成第六子区域D61的图像数据的方法类似的方法,即按照第五子区域D51的边缘中位于一列的多个像素的像素值在列方向上的变化趋势,求取第七子区域D71中相应列的像素值,进而得到第七子区域D71的图像数据。
第七子区域D71的图像数据的生成方法还例如为,基于与第七子区域D71相邻的第六子区域D61的图像数据,使用与基于第一图像510的图像数据生成第五子区域D51的图像数据的方法类似的方法,即按照第六子区域D61的边缘中位于一行的多个像素的像素值在行方向上的变化趋势,求取第七子区域D71中相应行像素的像素值,进而得到第七子区域D71的图像数据。
在一些实施例中,参见图9E,第一图像510包括颜色无规律的边缘F5。
此时,可以将第一图像510中各个像素的像素值求平均,得到扩展区域中每个像素的像素值。
示例性地,扩展区域E中每个像素的像素值为:
R xy=(R 11+R 12+…+R 1m+R 21+R 22+…+R 2m+…+R n1+R n2+…+R nm)/(n×m);
G xy=(G 11+G 12+…+G 1m+G 21+G 22+…+G 2m+…+G n1+G n2+…+G nm)/(n×m);
B xy=(B 11+B 12+…+B 1m+B 21+B 22+…+B 2m+…+B n1+B n2+…+B nm)/(n×m)。
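示例性地,上述对第一图像中各个像素的像素值求平均的计算可以用如下Python代码示意(假设像素值以(R,G,B)元组表示,采用整数除法;函数名为说明目的自拟):

```python
def average_color(pixels):
    """将第一图像中各个像素的 R、G、B 分量分别求平均,
    得到扩展区域中每个像素的像素值(对应第五类型的扩展策略)。"""
    n = len(pixels)
    r = sum(p[0] for p in pixels) // n
    g = sum(p[1] for p in pixels) // n
    b = sum(p[2] for p in pixels) // n
    return (r, g, b)
```

这样,扩展区域显示第一图像的平均颜色,与颜色无规律的边缘拼接时整体观感较为协调。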
在本公开实施例提供的图像处理方法中,步骤S201和步骤S203可以同时进行;步骤S201和步骤S203也可以不同时进行,并且二者没有先后顺序。在一些实施例中,图像处理方法包括步骤S202,此时,可以在步骤S203之前进行步骤S202。
本公开的一些实施例还提供了一种显示控制装置,图10是本公开实施例提供的显示控制装置300的结构图。该显示控制装置300包括:读取模块310、图像处理装置320和输出模块330。
其中,读取模块310,被配置为响应于显示控制装置300上电,读取开机画面的图像数据。在一些实施例中,读取模块310可以执行上述任一实施例提供的显示控制方法的步骤S101。
图像处理装置320,被配置为执行上述任一实施例提供的图像处理方法,得到第二图像的图像数据。其中,该图像处理方法中的第一图像可以是读取模块所读取的开机画面。
输出模块330,被配置为输出第二图像的图像数据,以控制显示面板根据第二图像的图像数据进行显示,即显示具有较大分辨率的第二图像。在一些实施例中,输出模块330可以执行上述任一实施例所述的显示控制方法中的步骤S103和/或S104。
本公开的一些实施例还提供了一种图像处理装置。在一些实现方式中,图像处理装置可以用作图10示出的显示控制装置中的图像处理装置320。
参见图11,图像处理装置320可以包括:第一处理模块321和第二处理模块322。
第一处理模块321,被配置为将第一图像的图像数据用作基础区域的图像数据。示例性地,第一处理模块321可以执行上述任一实施例提供的图像处理方法的步骤S201。
第二处理模块322,被配置为基于第一图像的图像数据,按照扩展策略生成扩展区域的图像数据,以得到包括基础区域的图像数据和扩展区域的图像数据的第二图像的图像数据。示例性地,第二处理模块322可以执行上述任一实施例所述的图像处理方法中的S202和/或S203。
图10、图11描述的图像处理装置以及显示控制装置实施例仅仅是示意性的,例如,上述模块的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个模块或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。在本申请各个实施例中的各功能模块可以集成在一个处理模块中,也可以是各个模块单独物理存在,也可以两个或两个以上模块集成在一个模块中。
上述各个模块既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。例如,采用软件实现时,上述模块可以由图3B中的至少一个处理器101读取存储器102中存储的程序代码后,生成的软件功能模块来实现。上述各个模块也可以由显示装置中的不同硬件分别实现。显然,上述功能模块也可以采用软件硬件相结合的方式来实现。例如,图10中的读取模块310和图像处理装置320可以由处理器101运行程序代码来实现。输出模块330可以是输出接口,例如eDP等高清显示协议接口。
参见图12,本公开提供的显示控制装置可以是芯片系统,包括SoC板卡以及FPGA板卡。
其中,SoC板卡被配置为存储和/或加载开机画面,包括:开机画面存储模块601、存储控制器602、发送模块603、处理器604。
开机画面存储模块601,被配置为存储开机画面。开机画面存储模块601例如可以是存储器,该存储器例如可以参考图3B中存储器102的描述。
存储控制器602,被配置为响应于显示控制装置300上电,将开机画面从开机画面存储模块601中读出,并传递给发送模块603。例如,存储控制器602可以是DMA(Direct Memory Access,直接存储器访问)控制器。
发送模块603,被配置为将开机画面传递给FPGA芯片。例如发送模块603包括发送接口,例如可以是LVDS接口,被配置为将开机画面通过LVDS协议传递给FPGA芯片。
处理器604,被配置为控制存储控制器602和发送模块603实现各自的功能。处理器604例如可以参考图3B中处理器101的描述。
FPGA板卡被配置为识别开机画面的类型和/或基于开机画面(例如第一图像)的图像数据生成更大分辨率图像(例如第二图像)的图像数据,包括:接收模块605、存储模块606、像素取样模块607、类型判定模块608、图像扩展模块609、选择器610、显示输出模块611。其中,像素取样模块607、类型判定模块608、图像扩展模块609、选择器610可以包含在FPGA芯片中。
接收模块605,被配置为接收开机画面的图像数据。例如,接收模块605包括接收接口,例如可以是LVDS接口,被配置为通过LVDS协议接收SoC板卡发送过来的开机画面的图像数据。
存储模块606,被配置为将接收到的开机画面的图像数据按帧缓存,达到和后级系统的同步。例如,存储模块606可以是DDR SDRAM(Double Data Rate SDRAM,双倍速率同步动态随机存储器)。
像素取样模块607,可以被配置为对第一图像的边缘中的多个像素的像素值进行取样,以用于上述任一实施例提供的图像处理方法中的识别第一图像的类型的步骤,例如图7中步骤S202。
类型判定模块608,可以被配置为执行上述任一实施例提供的图像处理方法中的识别第一图像的类型的步骤,例如图7中步骤S202。
图像扩展模块609,可以被配置为执行上述任一实施例提供的图像处理方法中的基于第一图像的图像数据,按照扩展策略生成扩展区域的图像数据的步骤,例如图7中的步骤S203。
选择器610,被配置为选择输出开机画面的数据还是正常工作画面的数据。示例性地,可以被配置为执行图5所示显示控制方法中的步骤S103和/或步骤S104。例如,可以将选择器610的初始状态设置为显示开机画面,即响应于显示控制装置上电,显示装置处于开机初始化状态,选择器610选择显示开机画面,以便响应于显示控制装置处于开机初始化状态,输出第二图像的图像数据。当开机初始化状态结束,SoC芯片可以向选择器610传递信号,使得选择器610选择工作画面(即开机画面之后需显示的正常显示画面)。
显示输出模块611,被配置为将选择后的画面输出给后端的显示屏。显示输出模块611可以是输出接口,例如可以是eDP接口和V-by-One接口等中的至少一者。
在一些实施例中,FPGA板卡还可以包括前端处理系统,用于处理接收到的工作画面的图像数据,并将处理后得到图像数据输出至选择器。所述处理例如可以包括色调调整、亮度调整、对比度调整、色度校准等。
在本公开实施例中,由于上文中显示装置、显示控制装置、图像处理装置所实现的功能与上述显示控制方法以及图像处理方法中的步骤类似,因此其具体实现可以参考上述方法实施例中相应步骤的相关描述,此处不再赘述。相应地,使用本公开实施例提供的显示装置、显示控制装置、图像处理装置,均可实现利用较低分辨率图像的图像数据使得显示装置显示较高分辨率图像的效果。
本说明书中的各个实施例均采用递进的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件程序实现时,可以全部或部分地以计算机程序产品的形式实现。计算机程序产品包括计算机程序指令,在计算机(例如,显示装置)上执行所述计算机程序指令时,所述计算机程序指令使计算机执行如上述任一实施例提供的图像处理方法,或者,上述任一实施例提供的显示控制方法。该计算机程序指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,该计算机程序指令可以从一个网站站点、计算机、服务器或数据中 心通过有线(例如同轴电缆、光纤、数字用户线(digital subscriber line,DSL))方式或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心传输。
本公开实施例还提供了一种计算机程序。当所述计算机程序在计算机(例如,显示装置)上执行时,所述计算机程序使计算机执行如上述任一实施例提供的图像处理方法,或者,上述任一实施例提供的显示控制方法。
本公开实施例还提供了一种计算机可读存储介质。计算机可读存储介质存储有计算机程序指令,计算机程序指令在计算机(例如,显示装置)上运行时,使得计算机执行上述任一实施例提供的图像处理方法,或者,上述任一实施例提供的显示控制方法。
该计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包括一个或多个可用介质集成的服务器、数据中心等数据存储设备。该可用介质可以是磁性介质(例如,软盘、磁盘、磁带)、光介质
(例如,数字视频光盘(digital video disc,DVD))、或者半导体介质(例如固态硬盘(solid state drives,SSD))等。
以上所述,仅为本公开的具体实施方式,但本公开的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本公开揭露的技术范围内,想到变化或替换,都应涵盖在本公开的保护范围之内。因此,本公开的保护范围应以所述权利要求的保护范围为准。

Claims (22)

  1. 一种图像处理方法,包括:
    将第一图像的图像数据用作基础区域的图像数据,其中所述第一图像具有第一分辨率;
    基于所述第一图像的图像数据,按照扩展策略生成扩展区域的图像数据,以得到包括所述基础区域的图像数据和所述扩展区域的图像数据的第二图像的图像数据,其中所述第二图像具有第二分辨率,所述第一分辨率小于所述第二分辨率。
  2. 根据权利要求1所述的图像处理方法,其中,
    基于所述第一图像的图像数据,按照扩展策略生成扩展区域的图像数据包括:
    基于所述第一图像的边缘中至少一个像素的像素值,按照扩展策略生成所述扩展区域的图像数据;
    或者,
    根据所述第一图像中各个像素的像素值,生成所述扩展区域的图像数据。
  3. 根据权利要求2所述的图像处理方法,其中,
    所述第一图像包括纯色的边缘;
    基于所述第一图像的边缘中至少一个像素的像素值,按照扩展策略生成所述扩展区域的图像数据包括:
    将所述第一图像的边缘中一个像素的像素值用作所述扩展区域中每个像素的像素值,以得到所述扩展区域的图像数据。
  4. 根据权利要求2所述的图像处理方法,其中,
    所述第一图像包括非纯色的边缘;
    基于所述第一图像的边缘中至少一个像素的像素值,按照扩展策略生成所述扩展区域的图像数据包括:
    基于所述第一图像的边缘中多个像素的像素值,按照扩展策略生成所述扩展区域的图像数据。
  5. 根据权利要求4所述的图像处理方法,其中,
    所述第一图像包括颜色在列方向上渐变在行方向上不变的边缘;
    所述扩展区域包括:第一子区域和所述第一子区域以外的第二子区域,所述第一子区域和所述基础区域在行方向上齐平;
    基于所述第一图像的边缘中多个像素的像素值,按照扩展策略生成所述扩展区域的图像数据包括:
    根据所述第一图像的每行像素中位于所述边缘的至少一个像素的像素值,生成所述第一子区域的相应行像素中每个像素的像素值;
    按照所述第一图像的边缘中多个像素的像素值在列方向上的变化趋势,求取所述第二子区域中列方向上变化的多行像素的像素值,其中每行像素的像素值相同,每个像素值在像素值取值范围内。
  6. 根据权利要求5所述的图像处理方法,其中,
    根据所述第一图像的每行像素中位于所述边缘的至少一个像素的像素值,生成所述第一子区域的相应行像素中每个像素的像素值包括:
    将所述第一图像的每行像素中位于所述边缘的一个像素的像素值用作所述第一子区域的相应行像素中的每个像素的像素值。
  7. 根据权利要求4所述的图像处理方法,其中,
    所述第一图像包括颜色在行方向上渐变在列方向上不变的边缘;
    所述扩展区域包括:第三子区域和所述第三子区域以外的第四子区域,所述第三子区域与所述基础区域在列方向上齐平;
    基于所述第一图像的边缘中多个像素的像素值,按照扩展策略生成所述扩展区域的图像数据包括:
    根据所述第一图像的每列像素中位于所述边缘的至少一个像素的像素值,生成所述第三子区域的相应列像素中每个像素的像素值;
    按照所述第一图像的边缘中多个像素的像素值在行方向上的变化趋势,求取所述第四子区域中行方向上变化的多列像素的像素值,其中每列像素的像素值相同,每个像素值在像素值取值范围内。
  8. 根据权利要求7所述的图像处理方法,其中,
    根据所述第一图像的每列像素中位于所述边缘的至少一个像素的像素值,生成所述第三子区域的相应列像素中每个像素的像素值包括:
    将所述第一图像的每列像素中位于所述边缘的一个像素的像素值用作所述第三子区域的相应列像素中的每个像素的像素值。
  9. 根据权利要求4所述的图像处理方法,其中,
    所述第一图像包括颜色在列方向和行方向上均渐变的边缘;
    所述扩展区域包括:第五子区域、第六子区域、以及所述第五子区域和所述第六子区域以外的第七子区域,其中,所述第五子区域和所述基础区域在行方向上齐平,所述第六子区域与所述基础区域在列方向上齐平;
    基于所述第一图像的边缘中多个像素的像素值,按照扩展策略生成所述扩展区域的图像数据包括:
    按照所述第一图像的边缘中位于一行的多个像素的像素值在行方向上的变化趋势,求取所述第五子区域中相应行像素的像素值,每个像素值在像素值取值范围内,得到所述第五子区域的图像数据;
    按照所述第一图像的边缘中位于一列的多个像素的像素值在列方向上的变化趋势,求取所述第六子区域中相应列像素的像素值,每个像素值在像素值取值范围内,得到所述第六子区域的图像数据;
    基于所述第五子区域的图像数据和/或所述第六子区域的图像数据,求取所述第七子区域的图像数据。
  10. 根据权利要求9所述的图像处理方法,其中,
    基于所述第五子区域的图像数据和/或所述第六子区域的图像数据,求取所述第七子区域的图像数据包括:
    将与所述第七子区域中的一个像素在行方向和列方向上相邻的两个像素的像素值求平均,得到所述第七子区域中的所述像素的像素值。
  11. 根据权利要求9所述的图像处理方法,其中,
    基于所述第五子区域的图像数据和/或所述第六子区域的图像数据,求取所述第七子区域的图像数据包括:
    按照与所述第七子区域相邻的所述第六子区域的边缘中位于一行的多个像素的像素值在行方向上的变化趋势,求取所述第七子区域中相应行像素的像素值,每个像素值在像素值的取值范围内;
    或者,
    按照与所述第七子区域相邻的所述第五子区域的边缘中位于一列的多个像素的像素值在列方向上的变化趋势,求取所述第七子区域中相应列像素的像素值,每个像素值在像素值的取值范围内。
  12. 根据权利要求2所述的图像处理方法,其中,
    所述第一图像包括颜色无规律的边缘;
    根据所述第一图像中各个像素的像素值,生成所述扩展区域的图像数据包括:
    将所述第一图像中各个像素的像素值求平均,得到所述扩展区域中每个像素的像素值。
  13. 根据权利要求1~12中任一项所述的图像处理方法,
    还包括:根据所述第一图像的边缘中多个像素的像素值,识别所述第一图像的类型;
    基于所述第一图像的图像数据,按照扩展策略生成所述扩展区域的图像数据包括:
    基于所述第一图像的图像数据,按照与所述第一图像的类型对应的扩展策略生成所述扩展区域的图像数据。
  14. 根据权利要求13所述的图像处理方法,其中,
    所述第一图像的边缘包括平行于行方向的两个第一子边缘,平行于列方向的两个第二子边缘;
    根据所述第一图像的边缘中多个像素的像素值,识别所述第一图像的类型包括:
    根据每个第一子边缘中多个像素的像素值在行方向上的变化趋势、每个第二子边缘中多个像素的像素值在列方向上的变化趋势,确定所述第一图像的类型。
  15. 根据权利要求13~14中任一项所述的图像处理方法,其中,
    所述第一图像的类型为第一类型、第二类型、第三类型、第四类型、或第五类型;
    所述第一类型被配置为表示所述第一图像包括纯色的边缘;
    所述第二类型被配置为表示所述第一图像包括颜色在列方向上渐变在行 方向上不变的边缘;
    所述第三类型被配置为表示所述第一图像包括颜色在行方向上渐变在列方向上不变的边缘;
    所述第四类型被配置为表示所述第一图像包括颜色在列方向和行方向上均渐变的边缘;
    所述第五类型被配置为表示所述第一图像包括颜色无规律的边缘。
  16. 根据权利要求15所述的图像处理方法,其中,
    根据每个第一子边缘中多个像素的像素值在行方向上的变化趋势、每个第二子边缘中多个像素的像素值在列方向上的变化趋势,确定所述第一图像的类型包括:
    确定每个第一子边缘的第一判断结果;其中,若所述第一子边缘中每一行像素中各个像素的像素值大致相等,则所述第一判断结果包括相等,否则不相等;若所述第一子边缘中每一行像素中各个像素的像素值渐变,则所述第一判断结果包括渐变,否则不渐变;
    确定每个第二子边缘的第二判断结果;其中,若所述第二子边缘中每一列像素中各个像素的像素值大致相等,则所述第二判断结果包括相等,否则不相等;若所述第二子边缘中每一列像素中各个像素的像素值渐变,则所述第二判断结果包括渐变,否则不渐变;
    如果每个第一子边缘的第一判断结果、每个第二子边缘的第二判断结果均包括相等,则所述第一图像为第一类型;
    如果每个第一子边缘的第一判断结果均包括相等、每个第二子边缘的第二判断结果均包括渐变,则所述第一图像为第二类型;
    如果每个第一子边缘的第一判断结果均包括渐变,每个第二子边缘的第二判断结果均包括相等,则所述第一图像为第三类型;
    如果每个第一子边缘的第一判断结果、每个第二子边缘的第二判断结果均包括渐变,则所述第一图像为第四类型;
    如果每个第一子边缘的第一判断结果、每个第二子边缘的第二判断结果中的至少一个包括不相等和不渐变,则所述第一图像为第五类型。
  17. 一种显示控制方法,应用于显示控制装置,所述显示控制方法包括:
    响应于显示控制装置上电,读取开机画面的图像数据;
    执行权利要求1~16任一项所述的图像处理方法,得到第二图像的图像数据;其中,所述图像处理方法中的第一图像为所述开机画面;
    输出所述第二图像的图像数据,以控制显示面板根据所述第二图像的图像数据进行显示。
  18. 根据权利要求17所述的显示控制方法,其中,
    输出所述第二图像的图像数据包括:
    响应于显示控制装置处于开机初始化状态,输出所述第二图像的图像数据;
    所述显示控制方法还包括:
    响应于显示控制装置的开机初始化状态结束,输出工作画面的图像数据。
  19. 一种图像处理装置,包括:
    第一处理模块,被配置为将第一图像的图像数据用作基础区域的图像数据,其中所述第一图像具有第一分辨率;
    第二处理模块,被配置为基于所述第一图像的图像数据,按照扩展策略生成扩展区域的图像数据,以得到包括所述基础区域的图像数据和所述扩展区域的图像数据的第二图像的图像数据,其中所述第二图像具有第二分辨率,所述第一分辨率小于所述第二分辨率。
  20. 一种显示控制装置,包括:
    读取模块,被配置为响应于显示控制装置上电,读取开机画面的图像数据;
    图像处理装置,被配置为执行权利要求1~16任一项所述的图像处理方法,得到第二图像的图像数据;其中,所述图像处理方法中的第一图像为所述开机画面;
    输出模块,被配置为输出所述第二图像的图像数据,以控制显示面板根据所述第二图像的图像数据进行显示。
  21. 一种显示装置,包括:
    权利要求20所述的显示控制装置;
    显示面板,被配置为显示图像。
  22. 一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序指令,所述计算机程序指令在计算机上运行时,使得所述计算机执行权利要求1~16任一项所述的图像处理方法,或者,权利要求17~18中任一项所述的显示控制方法。
PCT/CN2021/129899 2021-03-31 2021-11-10 图像处理方法及装置、显示控制方法及装置、显示装置 WO2022205924A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/016,430 US12008285B2 (en) 2021-03-31 2021-11-10 Image processing method and display control method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110349875.2 2021-03-31
CN202110349875.2A CN113050999B (zh) 2021-03-31 2021-03-31 图像处理方法及装置、显示控制方法及装置、显示装置


Family

ID=76516731



Also Published As

Publication number Publication date
CN113050999A (zh) 2021-06-29
CN113050999B (zh) 2024-04-23
US20230273760A1 (en) 2023-08-31
US12008285B2 (en) 2024-06-11

Similar Documents

Publication Publication Date Title
US9489165B2 (en) System and method for virtual displays
CN105955687B (zh) Method, apparatus and system for image processing
WO2018126594A1 (zh) Switching display method for a terminal, and terminal
US20140204007A1 (en) Method and system for liquid crystal display color optimization with sub-pixel openings
WO2022205924A1 (zh) Image processing method and apparatus, display control method and apparatus, and display device
US10650750B2 (en) Sub-pixel compensation
JP2006527396A (ja) Selective window display
US9613558B2 (en) Pixel driving method and associated display device
US10714034B2 (en) Display device, control circuit of display panel, and display method
US11636830B2 (en) Driving method and apparatus of display panel
TWI718913B (zh) 顯示方法
JP2016031468A (ja) Display control device, display device, and display system
CN114283762A (zh) Dynamic refresh display driving method, server, and storage medium
JP2009251273A (ja) Multi-display computer system
KR20210032857A (ko) Electronic apparatus and control method thereof
US20080055286A1 (en) Method And Apparatus For Displaying Bitmap Images
US6967661B2 (en) Computer system which scans lines in tiled blocks of a display area
US11651716B2 (en) Display device and driving method thereof
CN111681555B (zh) Display method
US20230237947A1 (en) Display device and method of driving the same
US11361698B2 (en) Electronic device and method of burn-in prevention for electronic device
WO2023184113A1 (zh) Image display method, display apparatus, and vehicle
US20230410710A1 (en) Contrast enhancement device, and display device including the same
JP2008122688A (ja) Information display device and driving method
US9576540B2 (en) Operating an electrophoretic display with first and second operating formats

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 21934560

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN EP: public notification in the EP bulletin as the address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 190124)