WO2021217428A1 - Image processing method and device, camera apparatus and storage medium - Google Patents

Image processing method and device, camera apparatus and storage medium

Info

Publication number
WO2021217428A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
image block
data
target
component
Prior art date
Application number
PCT/CN2020/087531
Other languages
English (en)
French (fr)
Inventor
刘召军
张胡梦圆
莫炜静
Original Assignee
深圳市思坦科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市思坦科技有限公司 filed Critical 深圳市思坦科技有限公司
Priority to US17/595,374 priority Critical patent/US12080030B2/en
Priority to PCT/CN2020/087531 priority patent/WO2021217428A1/zh
Publication of WO2021217428A1 publication Critical patent/WO2021217428A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/84Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/85Camera processing pipelines; Components thereof for processing colour signals for matrixing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20216Image averaging
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46Colour picture communication systems
    • H04N1/56Processing of colour picture signals
    • H04N1/60Colour correction or control
    • H04N1/603Colour correction or control controlled by characteristics of the picture signal generator or the picture reproducer
    • H04N1/6033Colour correction or control controlled by characteristics of the picture signal generator or the picture reproducer using test pattern analysis
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46Colour picture communication systems
    • H04N1/64Systems for the transmission or the storage of the colour picture signal; Details therefor, e.g. coding or decoding means therefor
    • H04N1/646Transmitting or storing colour television type signals, e.g. PAL, Lab; Their conversion into additive or subtractive colour signals or vice versa therefor

Definitions

  • The embodiments of the present application relate to the field of image processing technology, for example to an image processing method and device, a camera device, and a storage medium.
  • A high-definition video is displayed by splitting the video into individual frames; for each frame, the RGB data is converted into YUV data and then encoded and compressed, and after decoding it is transmitted to a display device, such as a TV, for playback.
  • Commonly used display devices perform display in a passive light-emitting mode, so the encoding method is also designed for display devices in the passive light-emitting mode. Some display devices in an active light-emitting mode still follow the encoding method of the passive light-emitting mode.
  • However, for a display device in the active light-emitting mode, continuing to use the encoding method of the passive light-emitting mode to encode each frame of a video results in a very large amount of image data obtained by encoding.
  • The embodiments of the present application provide an image processing method and device, a camera device, and a storage medium, so as to reduce the amount of image data obtained by encoding.
  • An embodiment of the present application provides an image processing device, including:
  • an image acquisition module configured to acquire an initial image, wherein the initial image corresponds to an RGB data format;
  • a division module configured to divide the initial image into a plurality of image blocks;
  • a data conversion module configured to convert each image block of the plurality of image blocks from the RGB data format into a YUV data format, where Y is a brightness value and U and V are chrominance values;
  • the data conversion module is further configured to invert the brightness value Y of a target image block into a darkness value D, the target image block being at least one of the plurality of image blocks;
  • an encoding module configured to encode the DUV data of the target image block and the YUV data of the uninverted image blocks to obtain target image data of the initial image.
  • An embodiment of the present application provides a camera device, including:
  • one or more processors;
  • a storage device configured to store one or more programs,
  • wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image processing method according to any embodiment of the present application.
  • An embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the image processing method according to any embodiment of the present application is implemented.
  • FIG. 1 is a schematic flowchart of an image processing method provided in Embodiment 1 of the present application;
  • FIG. 2 is a schematic flowchart of an image processing method provided in Embodiment 2 of the present application;
  • FIG. 3 is a schematic structural diagram of an image processing device provided in Embodiment 3 of the present application;
  • FIG. 4 is a schematic structural diagram of a camera device provided in Embodiment 4 of the present application.
  • The terms "first", "second", and the like may be used herein to describe various directions, actions, steps, or elements, but these directions, actions, steps, or elements are not limited by these terms. These terms are only used to distinguish one direction, action, step, or element from another.
  • For example, without departing from the scope of the present application, the first target pixel may be referred to as the second target pixel, and similarly, the second target pixel may be referred to as the first target pixel. Both the first target pixel and the second target pixel are target pixels, but they are not the same target pixel.
  • FIG. 1 is a schematic flowchart of an image processing method provided by Embodiment 1 of the present application, which is applicable to scenarios in which images are processed.
  • The method may be executed by an image processing device, which may be implemented in software and/or hardware and may be integrated on a camera device.
  • The image processing method provided by Embodiment 1 of the present application includes S110 to S150.
  • S110 Collect an initial image, where the initial image corresponds to an RGB data format.
  • the initial image refers to an image that needs to be encoded.
  • The initial image may be each frame of the multiple frames of images that compose a video.
  • For example, each frame of a high-definition video may be used as the initial image of this embodiment; the initial image may also be a single picture, such as a landscape photo, which is not limited here.
  • the initial image corresponds to the RGB data format.
  • the RGB data format refers to the data format in the RGB color mode.
  • In an embodiment, any colored light F can be obtained by additively mixing different amounts of the three primaries: F = r[R] + g[G] + b[B], where R, G, and B are the coefficients with which the three primary colors participate in the mixture, and r, g, and b denote red light, green light, and blue light respectively.
  • the initial image can be acquired by a camera or other imaging device with an image shooting function, and after color separation correction, the RGB data format of each pixel in the initial image, that is, the RGB value of each pixel, can be obtained.
  • the images between the multiple image blocks do not overlap each other.
  • Multiple image blocks refer to two or more image blocks, and the number of multiple image blocks is not limited.
  • the image size between multiple image blocks can be the same or different, and there is no restriction here.
  • the image size among multiple image blocks is the same.
  • For example, the initial image is of size 160*160 and is divided into 20*20 blocks of 8*8 pixels; each pixel block contains 64 pixels in total, and each pixel is represented in RGB mode.
  • each image block includes one or more pixels.
  • the number of pixels is determined according to the method of dividing image blocks, which is not limited in this embodiment.
  • In an optional implementation, the image block includes one or more pixels, the RGB data format includes an R component, a G component, and a B component, and the YUV data format includes a Y component, a U component, and a V component, and converting each image block of the plurality of image blocks from the RGB data format into the YUV data format includes: determining the one or more pixels corresponding to each image block; determining the Y component of each pixel according to its R, G, and B components; determining a first target pixel among the one or more pixels; and determining the U component and/or V component of the first target pixel according to the R, G, and B components of the first target pixel.
  • the R component, G component, and B component of each pixel can be obtained by performing color separation correction on the initial image.
  • the Y component of each pixel is determined according to the R component, G component and B component of each pixel.
  • the first target pixel refers to the pixel for which the U component and/or V component need to be calculated. In an embodiment, for different YUV data formats, the first target pixel points are also different.
  • Taking the YUV420 format as an example, every pixel retains a Y (luminance) component, while in the horizontal direction the U and V components are not both taken for every row: one row takes only the U component, and the next row takes only the V component, and so on (i.e., 4:2:0, 4:0:2, 4:2:0, 4:0:2, ...).
  • S140 Invert the brightness value Y of the target image block into a darkness value D, where the target image block is one or more of the plurality of image blocks.
  • the target image block refers to one or more image blocks in which the brightness value Y needs to be inverted to the darkness value D among multiple image blocks.
  • Optionally, all of the plurality of image blocks may be used as the target image blocks of this embodiment, so that their brightness values Y are inverted into darkness values D.
  • Alternatively, only part of the image blocks, for example the image blocks whose average brightness value is greater than 128, may be used as the target image blocks of this embodiment, which is not limited here.
  • Optionally, the brightness value Y of the target image block can be inverted into the darkness value D by a first preset formula: D = 255 - Y, where Y = (0.299R + 0.587G + 0.114B), and R, G, and B are the coefficients with which the three primary colors participate in the mixture.
  • In an optional implementation, inverting the brightness value Y of the target image block into the darkness value D includes: obtaining the average brightness value of each image block; determining the image blocks whose average brightness value is greater than a first preset brightness threshold as the target image blocks; and inverting the brightness value Y of each pixel of the target image blocks into the darkness value D.
  • In this implementation, the average brightness value of an image block is obtained by summing the brightness values Y of the pixels of that image block and dividing by the number of pixels.
  • the first preset brightness threshold is 128.
  • the brightness value Y of each pixel of the target image block is inverted to the darkness value D.
  • In an embodiment, inverting the brightness values Y of the pixels of all image blocks into darkness values D can reduce the amount of data, but for some image blocks the inversion would instead increase the amount of data.
  • Therefore, only the target image blocks whose average brightness value is greater than the first preset brightness threshold are inverted, while the non-target image blocks are not inverted, which reduces the amount of encoded data.
  • the inverting the brightness value Y of the target image block to the darkness value D includes:
  • the second target pixel refers to a pixel with a brightness value Y greater than a second preset brightness threshold.
  • the second preset brightness threshold is 128.
  • In this implementation, the brightness value Y of the second target pixels is inverted into the darkness value D, while the brightness values Y of the other, non-second-target pixels do not need to be inverted, which reduces the amount of encoded data.
  • the target image data refers to data obtained by encoding the DUV data of the target image block and the YUV data of the uninverted image block in the initial image.
  • Encoding refers to the operation of compiling DUV data and YUV data into binary characters.
  • the uninverted image block is an image block other than the target image block among multiple image blocks.
  • In this embodiment, most high-definition display devices display video pictures in an active light-emitting mode (a dark background plus high-brightness patterns, i.e., white characters on a black background), so a large area of the display is dark.
  • In the passive light-emitting mode, dark display is coded as 1 and bright display is coded as 0, whereas in the active light-emitting mode, dark display is coded as 0 and bright display is coded as 1.
  • If the YUV data were simply encoded in accordance with the requirements of the active light-emitting mode, the amount of encoded data would inevitably be very large.
  • the method further includes:
  • the target image data is transmitted to a display device for the display device to decode and play the target image data of the initial image.
  • the display device may be in an active lighting mode, or may be a device compatible with the active lighting mode and the passive lighting mode.
  • In an embodiment, when all pixels of the initial image have been inverted from the brightness value Y to the darkness value D, the display device only needs to be in the active light-emitting mode, in which dark display is coded as 0 and bright display is coded as 1.
  • When only part of the pixels have been inverted, the inverted pixels are displayed in the active light-emitting mode, i.e., dark display is coded as 0 and bright display is coded as 1;
  • the remaining uninverted pixels are displayed in the passive light-emitting mode, i.e., dark display is coded as 1 and bright display is coded as 0.
  • Decoding and playing refers to the operation of decoding the target image data and then playing and displaying it.
  • the display device may decode and play the initial image.
  • When the initial image is each frame of the multiple frames of images that compose a video, each frame of the initial image is decoded according to its timestamp, so that a complete video is played.
  • In the technical solution of the embodiments of the present application, an initial image corresponding to the RGB data format is acquired; the initial image is divided into a plurality of image blocks; each image block of the plurality of image blocks is converted from the RGB data format into the YUV data format, where Y is a brightness value and U and V are chrominance values; the brightness value Y of a target image block is inverted into a darkness value D, the target image block being one or more of the plurality of image blocks; and the DUV data of the target image block and the YUV data of the uninverted image blocks are encoded to obtain target image data of the initial image. By inverting the brightness value Y into the darkness value D, the encoding is adapted to display devices in the active light-emitting mode, achieving the technical effect of reducing the amount of image data obtained by encoding.
  • FIG. 2 is a schematic flowchart of an image processing method provided in Embodiment 2 of the present application. This embodiment is based on the above-mentioned technical solutions and is suitable for scenes in which images are processed.
  • the method can be executed by an image processing device, which can be implemented in software and/or hardware, and can be integrated on a camera device.
  • the image processing method provided in the second embodiment of the present application includes S210 to S260.
  • S210 Collect an initial image, where the initial image corresponds to an RGB data format.
  • the initial image refers to the image that needs to be encoded.
  • Multiple image blocks refer to two or more image blocks, and the number of multiple image blocks is not limited.
  • S230 Convert each image block of the plurality of image blocks from an RGB data format to a YUV data format, where Y is a luminance value, and U and V are chrominance values.
  • each image block includes one or more pixels.
  • the number of pixels is determined according to the method of dividing image blocks, which is not limited in this embodiment.
  • the target image block refers to one or more image blocks in which the brightness value Y needs to be inverted to the darkness value D among multiple image blocks.
  • S250 Perform discrete cosine transformation on the DUV data of the target image block and the YUV data of the uninverted image block to obtain DUV data after discrete cosine transformation and YUV data after discrete cosine transformation.
  • performing discrete cosine transformation on the target image block and the uninverted image block refers to converting the image block from the spatial domain to the frequency domain, which is to calculate which two-dimensional cosine waves the image is composed of.
  • the discrete cosine transform discards high-frequency coefficients (Alternating Current (AC) coefficients), and retains low-frequency information (Direct Current (DC) coefficients).
  • the high-frequency coefficient generally saves the boundary and texture information of the image, and the low-frequency information is mainly the information of the flat area in the saved image.
  • S260 Encode the DUV data after the discrete cosine transform and the YUV data after the discrete cosine transform to obtain target image data of the initial image.
  • the DUV data after the discrete cosine transformation and the YUV data after the discrete cosine transformation are encoded, so as to obtain the target image data of the initial image.
  • In the technical solution of the embodiments of the present application, an initial image corresponding to the RGB data format is acquired; the initial image is divided into a plurality of image blocks; each image block of the plurality of image blocks is converted from the RGB data format into the YUV data format, where Y is a brightness value and U and V are chrominance values; the brightness value Y of a target image block is inverted into a darkness value D, the target image block being one or more of the plurality of image blocks; and the DUV data of the target image block and the YUV data of the uninverted image blocks are encoded to obtain target image data of the initial image. By inverting the brightness value Y into the darkness value D, the encoding is adapted to display devices in the active light-emitting mode, achieving the technical effect of reducing the amount of image data obtained by encoding.
  • FIG. 3 is a schematic structural diagram of an image processing device provided by Embodiment 3 of the present application. This embodiment is applicable to scenarios where images are processed.
  • The device may be implemented in software and/or hardware and may be integrated on a camera device.
  • the image processing apparatus may include an image acquisition module 310, a division module 320, a data conversion module 330, and an encoding module 340.
  • the image acquisition module 310 is configured to acquire an initial image, and the initial image corresponds to an RGB data format;
  • a dividing module 320 configured to divide the initial image into a plurality of image blocks
  • the data conversion module 330 is configured to convert each image block of the plurality of image blocks from an RGB data format into a YUV data format, where Y is a luminance value, and U and V are chrominance values;
  • the data conversion module 330 is further configured to invert the brightness value Y of the target image block to the darkness value D, and the target image block is one or more of the plurality of image blocks;
  • the encoding module 340 is configured to encode the DUV data of the target image block and the YUV data of the uninverted image block to obtain the target image data of the initial image.
  • Optionally, the image block includes one or more pixels, the RGB data format includes an R component, a G component, and a B component, and the YUV data format includes a Y component, a U component, and a V component, and the data conversion module 330 includes:
  • the pixel point determining unit is configured to determine one or more pixel points corresponding to each image block
  • the Y component determining unit is configured to determine the Y component of each pixel according to the R component, G component, and B component of each pixel;
  • a first target pixel point determining unit configured to determine a first target pixel point in the one or more pixels
  • the UV component determining unit is configured to determine the U component and/or V component of the first target pixel according to the R component, G component, and B component of the first target pixel.
  • Optionally, the data conversion module 330 is configured to invert the brightness value Y of the target image block into the darkness value D through a first preset formula, the first preset formula being: D = 255 - Y, where Y = (0.299R + 0.587G + 0.114B), and R, G, and B are the coefficients with which the three primary colors participate in the mixture.
  • the data conversion module 330 further includes:
  • the average brightness value obtaining unit is set to obtain the average brightness value of each image block
  • a target image block determining unit configured to determine an image block whose average brightness value is greater than a first preset brightness threshold as the target image block
  • the first darkness value inversion unit is configured to invert the brightness value Y of each pixel corresponding to the target image block into a darkness value D.
  • the data conversion module 330 further includes:
  • the brightness value Y obtaining unit is configured to obtain the brightness value Y of each pixel corresponding to each image block;
  • the second target pixel determining unit is configured to determine a pixel with a brightness value Y greater than a second preset brightness threshold as the second target pixel;
  • a second darkness value inversion unit configured to invert the brightness value Y of the second target pixel into the darkness value D.
  • the encoding module 340 includes:
  • a transform unit configured to perform a discrete cosine transform on the DUV data of the target image block and the YUV data of the uninverted image blocks to obtain DCT-transformed DUV data and DCT-transformed YUV data;
  • the encoding unit is configured to encode the DUV data after the discrete cosine transformation and the YUV data after the discrete cosine transformation to obtain the target image data of the initial image.
  • the device further includes:
  • the transmission module is configured to transmit the target image data to a display device, so that the display device can decode and play the target image data of the initial image.
  • The image processing device provided in the embodiments of the present application can execute the image processing method provided in any embodiment of the present application, and has the corresponding functional modules and beneficial effects for executing the method.
  • FIG. 4 is a schematic structural diagram of an imaging device provided in Embodiment 4 of the present application.
  • FIG. 4 shows a block diagram of an exemplary imaging device 612 suitable for implementing the embodiments of the present application.
  • the imaging device 612 shown in FIG. 4 is only an example, and should not bring any limitation to the function and scope of use of the embodiments of the present application.
  • the imaging device 612 is represented in the form of a general-purpose imaging device.
  • the components of the imaging device 612 may include: one or more processors 616, a storage device 628, and a bus 618 connecting different system components (including the storage device 628 and the processor 616).
  • the bus 618 represents one or more of several types of bus structures, including a storage device bus or a storage device controller, a peripheral bus, a graphics acceleration port, a processor, or a local bus using any bus structure among multiple bus structures.
  • By way of example, these architectures include the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
  • the imaging device 612 includes a variety of computer system readable media. These media may be any available media that can be accessed by the imaging device 612, including volatile and non-volatile media, removable and non-removable media.
  • the storage device 628 may include a computer system readable medium in the form of a volatile memory, such as a random access memory (RAM) 630 and/or a cache 632.
  • the terminal 612 may include other removable/non-removable, volatile/nonvolatile computer system storage media.
  • the storage system 634 may be used to read and write non-removable, non-volatile magnetic media (not shown in FIG. 4, referred to as a "hard drive").
  • Although not shown in FIG. 4, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (such as a "floppy disk"), and an optical disk drive for reading from and writing to a removable non-volatile optical disk (such as a Compact Disc Read-Only Memory (CD-ROM), a Digital Video Disc-Read Only Memory (DVD-ROM), or other optical media), may be provided.
  • each drive can be connected to the bus 618 through one or more data media interfaces.
  • the storage device 628 may include at least one program product, and the program product has a set (for example, at least one) program modules, and these program modules are configured to perform the functions of multiple embodiments of the present application.
  • a program/utility tool 640 having a set of (at least one) program module 642 may be stored in, for example, the storage device 628.
  • Such program modules 642 include an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.
  • the program module 642 usually executes the functions and/or methods in the embodiments described in this application.
  • The camera device 612 may also communicate with one or more external devices 614 (such as a keyboard, a pointing terminal, a display 624, etc.), with one or more terminals that enable a user to interact with the camera device 612, and/or with any terminal (such as a network card, a modem, etc.) that enables the camera device 612 to communicate with one or more other computing terminals. Such communication can be performed through an input/output (I/O) interface 622.
  • The camera device 612 may also communicate with one or more networks (for example, a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through the network adapter 620. As shown in FIG. 4, the network adapter 620 communicates with the other modules of the camera device 612 through the bus 618.
  • Although not shown in the figure, other hardware and/or software modules may be used in conjunction with the camera device 612, including microcode, terminal drivers, redundant processors, external disk drive arrays, Redundant Arrays of Independent Disks (RAID) systems, tape drives, and data backup storage systems.
  • the processor 616 executes a variety of functional applications and data processing by running programs stored in the storage device 628, for example, to implement an image processing method provided in any embodiment of the present application.
  • the method may include:
  • acquiring an initial image corresponding to the RGB data format; dividing the initial image into a plurality of image blocks; converting each of the image blocks from the RGB data format into the YUV data format, where Y is a brightness value and U and V are chrominance values; inverting the brightness value Y of a target image block into a darkness value D, the target image block being one or more of the plurality of image blocks; and encoding the DUV data of the target image block and the YUV data of the uninverted image blocks to obtain target image data of the initial image. By inverting the brightness value Y into the darkness value D, the encoding is adapted to display devices in the active light-emitting mode, achieving the technical effect of reducing the amount of image data obtained by encoding.
  • the fifth embodiment of the present application also provides a computer-readable storage medium.
  • the computer-readable storage medium stores a computer program.
  • the computer program When executed by a processor, it implements an image processing method as provided in any embodiment of the present application.
  • the method can include:
  • the computer-readable storage medium of the embodiment of the present application may adopt any combination of one or more computer-readable media.
  • the computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
  • the computer-readable storage medium may be, for example, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the above.
  • Computer-readable storage media include: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM) or flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • the computer-readable storage medium can be any tangible medium that contains or stores a program, and the program can be used by or in combination with an instruction execution system, apparatus, or device.
  • the computer-readable signal medium may include a data signal propagated in baseband or as a part of a carrier wave, and computer-readable program code is carried therein. This propagated data signal can take many forms, including electromagnetic signals, optical signals, or any suitable combination of the above.
  • the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium, and the computer-readable medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device .
  • the program code contained on the storage medium can be transmitted by any suitable medium, including wireless, wire, optical cable, radio frequency (RF), etc., or any suitable combination of the above.
  • the computer program code used to perform the operations of this application can be written in one or more programming languages or a combination thereof.
  • The programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, and also conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code can be executed entirely on the user's computer, partly on the user's computer, executed as an independent software package, partly on the user's computer and partly executed on a remote computer, or entirely executed on the remote computer or terminal.
  • the remote computer can be connected to the user's computer through any kind of network-including Local Area Network (LAN) or Wide Area Network (WAN)-or it can be connected to an external computer ( For example, use an Internet service provider to connect via the Internet).
  • In the technical solution of the embodiments of the present application, an initial image corresponding to the RGB data format is acquired; the initial image is divided into a plurality of image blocks; each image block of the plurality of image blocks is converted from the RGB data format into the YUV data format, where Y is a brightness value and U and V are chrominance values; the brightness value Y of a target image block is inverted into a darkness value D, the target image block being one or more of the plurality of image blocks; and the DUV data of the target image block and the YUV data of the uninverted image blocks are encoded to obtain target image data of the initial image. By inverting the brightness value Y into the darkness value D, the encoding is adapted to display devices in the active light-emitting mode, achieving the technical effect of reducing the amount of image data obtained by encoding.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Processing (AREA)

Abstract

An image processing method and device, a camera apparatus, and a storage medium. The image processing method includes: acquiring an initial image, the initial image corresponding to an RGB data format (S110); dividing the initial image into a plurality of image blocks (S120); converting each image block of the plurality of image blocks from the RGB data format into a YUV data format, where Y is a brightness value and U and V are chrominance values (S130); inverting the brightness value Y of a target image block into a darkness value D, the target image block being at least one of the plurality of image blocks (S140); and encoding the DUV data of the target image block and the YUV data of the uninverted image blocks to obtain target image data of the initial image (S150).

Description

Image processing method and device, camera apparatus, and storage medium. Technical Field
The embodiments of the present application relate to the technical field of image processing, for example to an image processing method and device, a camera device, and a storage medium.
Background
With the progress of fifth-generation mobile communication (5th Generation, 5G) technology and high-definition video technology, the demand for high-definition display keeps growing.
A high-definition video is displayed by splitting the video into individual frames; for each frame, the RGB data is converted into YUV data and then encoded and compressed, and after decoding it is transmitted to a display device, such as a TV, for playback. Commonly used display devices perform display in a passive light-emitting mode, so the encoding method is also designed for display devices in the passive light-emitting mode. Some display devices in an active light-emitting mode still follow the encoding method of the passive light-emitting mode.
However, for a display device in the active light-emitting mode, continuing to use the encoding method of the passive light-emitting mode to encode each frame of a video results in a very large amount of image data obtained by encoding.
Summary
The embodiments of the present application provide an image processing method and device, a camera device, and a storage medium, so as to reduce the amount of image data obtained by encoding.
An embodiment of the present application provides an image processing method, including:
acquiring an initial image, wherein the initial image corresponds to an RGB data format;
dividing the initial image into a plurality of image blocks;
converting each image block of the plurality of image blocks from the RGB data format into a YUV data format, where Y is a brightness value and U and V are chrominance values;
inverting the brightness value Y of a target image block into a darkness value D, the target image block being at least one of the plurality of image blocks;
encoding the DUV data of the target image block and the YUV data of the uninverted image blocks to obtain target image data of the initial image.
An embodiment of the present application provides an image processing device, including:
an image acquisition module configured to acquire an initial image, wherein the initial image corresponds to an RGB data format;
a division module configured to divide the initial image into a plurality of image blocks;
a data conversion module configured to convert each image block of the plurality of image blocks from the RGB data format into a YUV data format, where Y is a brightness value and U and V are chrominance values;
the data conversion module is further configured to invert the brightness value Y of a target image block into a darkness value D, the target image block being at least one of the plurality of image blocks;
an encoding module configured to encode the DUV data of the target image block and the YUV data of the uninverted image blocks to obtain target image data of the initial image.
An embodiment of the present application provides a camera device, including:
one or more processors;
a storage device configured to store one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image processing method according to any embodiment of the present application.
An embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the image processing method according to any embodiment of the present application is implemented.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of an image processing method provided in Embodiment 1 of the present application;
FIG. 2 is a schematic flowchart of an image processing method provided in Embodiment 2 of the present application;
FIG. 3 is a schematic structural diagram of an image processing device provided in Embodiment 3 of the present application;
FIG. 4 is a schematic structural diagram of a camera device provided in Embodiment 4 of the present application.
Detailed Description
The present application is described below with reference to the drawings and embodiments. The drawings show only the parts related to the present application rather than the entire structure.
Some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart describes multiple steps as sequential processing, many of the steps may be performed in parallel, concurrently, or simultaneously. In addition, the order of the steps may be rearranged. The processing may be terminated when its operations are completed, but may also have additional steps not included in the drawings. The processing may correspond to a method, a function, a procedure, a subroutine, a subprogram, and the like.
In addition, the terms "first", "second", and the like may be used herein to describe various directions, actions, steps, elements, and the like, but these directions, actions, steps, or elements are not limited by these terms. These terms are only used to distinguish one direction, action, step, or element from another. For example, without departing from the scope of the present application, a first target pixel may be referred to as a second target pixel, and similarly, a second target pixel may be referred to as a first target pixel. Both the first target pixel and the second target pixel are target pixels, but they are not the same target pixel. The terms "first", "second", and the like are not to be understood as indicating or implying relative importance or implicitly indicating the number of the technical features indicated. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "a plurality of" means at least two, for example two or three, unless otherwise defined.
Embodiment 1
FIG. 1 is a schematic flowchart of an image processing method provided in Embodiment 1 of the present application, which is applicable to scenarios in which images are processed. The method may be executed by an image processing device, which may be implemented in software and/or hardware and may be integrated on a camera device.
As shown in FIG. 1, the image processing method provided in Embodiment 1 of the present application includes S110 to S150.
S110: Acquire an initial image, the initial image corresponding to an RGB data format.
In an embodiment, the initial image refers to an image that needs to be encoded. In an embodiment, the initial image may be each frame of the multiple frames of images that compose a video; for example, each frame of a high-definition video may be used as the initial image of this embodiment. The initial image may also be a single picture, such as a landscape photo, which is not limited here. In this step, the initial image corresponds to the RGB data format. The RGB data format refers to the data format in the RGB color mode. In an embodiment, any colored light F can be obtained by additively mixing different amounts of the three primaries: F = r[R] + g[G] + b[B], where R, G, and B are the coefficients with which the three primary colors participate in the mixture, and r, g, and b denote red light, green light, and blue light respectively. Optionally, the initial image may be captured by a camera or another camera device with an image shooting function, and after color separation correction, the RGB data format of each pixel in the initial image, that is, the RGB value of each pixel, is obtained.
S120: Divide the initial image into a plurality of image blocks.
In this step, the images of the plurality of image blocks do not overlap one another. A plurality of image blocks means two or more image blocks, and the number of image blocks is not limited. Optionally, the image sizes of the plurality of image blocks may be the same or different, which is not limited here. Optionally, the image sizes of the plurality of image blocks are the same. For example, an initial image of size 160*160 is divided into 20*20 blocks of 8*8 pixels; each pixel block contains 64 pixels in total, and each pixel is represented in RGB mode.
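The block division described in the preceding paragraph lends itself to a brief illustrative sketch. The following Python/NumPy snippet is only an example under the stated 160*160 image size and 8*8 block size; the function name is hypothetical and not taken from the application.

```python
import numpy as np

def split_into_blocks(image, block_size=8):
    """Split an H x W x 3 RGB image into non-overlapping block_size x block_size blocks.

    Assumes H and W are multiples of block_size, as in the 160*160 example above.
    Returns an array of shape (H//block_size, W//block_size, block_size, block_size, 3).
    """
    h, w, c = image.shape
    blocks = image.reshape(h // block_size, block_size, w // block_size, block_size, c)
    return blocks.transpose(0, 2, 1, 3, 4)

# Example: a 160*160 RGB image yields 20*20 blocks of 8*8 pixels (64 pixels each).
rgb = np.random.randint(0, 256, size=(160, 160, 3), dtype=np.uint8)
blocks = split_into_blocks(rgb)
print(blocks.shape)  # (20, 20, 8, 8, 3)
```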
S130: Convert each image block of the plurality of image blocks from the RGB data format into a YUV data format, where Y is a brightness value and U and V are chrominance values.
In this step, each image block includes one or more pixels. The number of pixels is determined by the way the image blocks are divided, which is not limited in this embodiment.
In an optional implementation, the image block includes one or more pixels, the RGB data format includes an R component, a G component, and a B component, and the YUV data format includes a Y component, a U component, and a V component, and converting each image block of the plurality of image blocks from the RGB data format into the YUV data format includes:
determining the one or more pixels corresponding to each image block; determining the Y component of each pixel according to the R component, G component, and B component of that pixel; determining a first target pixel among the one or more pixels; and determining the U component and/or V component of the first target pixel according to the R component, G component, and B component of the first target pixel.
In this implementation, the R component, G component, and B component of each pixel can be obtained by performing color separation correction on the initial image. The Y component of each pixel is determined from its R, G, and B components. Optionally, the Y component of each pixel can be calculated as Y = (0.299R + 0.587G + 0.114B). The first target pixel refers to a pixel for which the U component and/or V component needs to be calculated. In an embodiment, the first target pixels differ for different YUV data formats. Taking the YUV420 format as an example, every pixel retains a Y (luminance) component, while in the horizontal direction the U and V components are not both taken for every row: one row takes only the U component, and the next row takes only the V component, and so on (i.e., 4:2:0, 4:0:2, 4:2:0, 4:0:2, ...). Optionally, the U component of the first target pixel can be calculated as U = -0.147R - 0.289G + 0.436B, and the V component of the first target pixel can be calculated as V = 0.615R - 0.515G - 0.100B.
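A minimal sketch of this per-pixel conversion, using the Y, U, and V formulas given above; the alternating-row U/V subsampling is one possible reading of the 4:2:0 / 4:0:2 pattern described here and is an assumption rather than a prescribed layout.

```python
import numpy as np

def rgb_block_to_yuv(block):
    """Convert an 8x8x3 RGB block to Y, U, V planes using the formulas in the description.

    Every pixel keeps a Y component; following the 4:2:0 / 4:0:2 pattern described above,
    even rows keep only U and odd rows keep only V (an illustrative assumption).
    """
    r = block[..., 0].astype(np.float32)
    g = block[..., 1].astype(np.float32)
    b = block[..., 2].astype(np.float32)

    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.147 * r - 0.289 * g + 0.436 * b
    v = 0.615 * r - 0.515 * g - 0.100 * b

    u_sub = u[0::2, :]  # rows that keep only the U component
    v_sub = v[1::2, :]  # rows that keep only the V component
    return y, u_sub, v_sub
```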
S140: Invert the brightness value Y of a target image block into a darkness value D, the target image block being one or more of the plurality of image blocks.
The target image block refers to the one or more image blocks, among the plurality of image blocks, whose brightness value Y needs to be inverted into a darkness value D. Optionally, all of the plurality of image blocks may be used as the target image blocks of this embodiment, so that their brightness values Y are inverted into darkness values D; alternatively, only part of the image blocks, for example the image blocks whose average brightness value is greater than 128, may be used as the target image blocks of this embodiment, which is not limited here.
Optionally, the brightness value Y of the target image block can be inverted into the darkness value D by a first preset formula, the first preset formula being: D = 255 - Y, where Y = (0.299R + 0.587G + 0.114B), and R, G, and B are the coefficients with which the three primary colors participate in the mixture.
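As a brief worked instance of the first preset formula, take a hypothetical bright pixel with R = G = B = 200: Y = 0.299 x 200 + 0.587 x 200 + 0.114 x 200 = 200, so D = 255 - 200 = 55. A bright pixel is therefore represented by a small darkness value after inversion.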
In an optional implementation, inverting the brightness value Y of the target image block into the darkness value D includes:
obtaining the average brightness value of each image block; determining an image block whose average brightness value is greater than a first preset brightness threshold as the target image block; and inverting the brightness value Y of each pixel corresponding to the target image block into the darkness value D.
In this implementation, the average brightness value of an image block is obtained by summing the brightness values Y of the pixels corresponding to that image block and dividing by the number of pixels. Optionally, the first preset brightness threshold is 128. In this implementation, the brightness value Y of every pixel of the target image block is inverted into the darkness value D.
In an embodiment, inverting the brightness values Y of the pixels of all image blocks into darkness values D can reduce the amount of data, but for some image blocks the inversion would instead increase the amount of data. Therefore, only the target image blocks whose average brightness value is greater than the first preset brightness threshold are inverted, while the non-target image blocks are not inverted, which reduces the amount of data obtained by encoding.
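A minimal sketch of this block-level strategy, assuming the 128 threshold mentioned above and a per-block Y plane produced by the conversion step; array shapes and names are illustrative.

```python
import numpy as np

FIRST_PRESET_THRESHOLD = 128  # first preset brightness threshold from the description

def invert_bright_blocks(y_blocks):
    """Invert Y into D = 255 - Y for blocks whose average brightness exceeds the threshold.

    y_blocks: array of shape (num_blocks_h, num_blocks_w, 8, 8) holding the Y plane per block.
    Returns the (possibly inverted) blocks and a boolean map of which blocks were inverted.
    """
    block_means = y_blocks.mean(axis=(2, 3))
    inverted_map = block_means > FIRST_PRESET_THRESHOLD  # these become the target image blocks
    out = y_blocks.copy()
    out[inverted_map] = 255 - out[inverted_map]          # D = 255 - Y for target blocks only
    return out, inverted_map
```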
In another optional implementation, inverting the brightness value Y of the target image block into the darkness value D includes:
obtaining the brightness value Y of each pixel corresponding to each image block; determining a pixel whose brightness value Y is greater than a second preset brightness threshold as a second target pixel; and inverting the brightness value Y of the second target pixel into the darkness value D.
In this implementation, optionally, all image blocks are taken as target image blocks. A second target pixel refers to a pixel whose brightness value Y is greater than the second preset brightness threshold. Optionally, the second preset brightness threshold is 128. In this implementation, the brightness value Y of the second target pixels is inverted into the darkness value D, while the brightness values Y of the other, non-second-target pixels do not need to be inverted, which reduces the amount of data obtained by encoding.
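A corresponding per-pixel sketch, again assuming the 128 threshold; names are illustrative.

```python
import numpy as np

SECOND_PRESET_THRESHOLD = 128  # second preset brightness threshold from the description

def invert_bright_pixels(y_plane):
    """Invert only the second target pixels, i.e. pixels with Y above the threshold."""
    y = y_plane.astype(np.int16)
    second_target = y > SECOND_PRESET_THRESHOLD
    y[second_target] = 255 - y[second_target]  # D = 255 - Y for second target pixels
    return y.astype(np.uint8), second_target
```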
S150: Encode the DUV data of the target image block and the YUV data of the uninverted image blocks to obtain target image data of the initial image.
The target image data refers to the data obtained by encoding the DUV data of the target image blocks and the YUV data of the uninverted image blocks of the initial image. Encoding refers to the operation of compiling the DUV data and the YUV data into binary characters. The uninverted image blocks are the image blocks, among the plurality of image blocks, other than the target image blocks.
In this embodiment, most high-definition display devices display video pictures in an active light-emitting mode (a dark background plus high-brightness patterns, i.e., white characters on a black background), so a large area of the display is dark. In the passive light-emitting mode, dark display is coded as 1 and bright display is coded as 0, whereas in the active light-emitting mode, dark display is coded as 0 and bright display is coded as 1. In this case, if the YUV data of all image blocks were simply encoded according to the requirements of the active light-emitting mode, the amount of encoded data would inevitably be very large. In this embodiment, by inverting the brightness value Y of the target image blocks into the darkness value D, the encoding is adapted to display devices in the active light-emitting mode, which solves the problem that the amount of image data obtained by encoding for a display device in the active light-emitting mode is very large. In addition, reducing the amount of image data also improves the efficiency of transmission to the display device for playback.
In an optional implementation, after encoding the DUV data of the target image block and the YUV data of the uninverted image blocks to obtain the target image data of the initial image, the method further includes:
transmitting the target image data to a display device, so that the display device decodes and plays the target image data of the initial image.
In this implementation, the display device may be a device in the active light-emitting mode, or a device compatible with both the active light-emitting mode and the passive light-emitting mode. In an embodiment, when all pixels of the initial image have been inverted from the brightness value Y to the darkness value D, the display device only needs to be a device in the active light-emitting mode, in which dark display is coded as 0 and bright display is coded as 1. When only part of the pixels of the initial image have been inverted from the brightness value Y to the darkness value D, the inverted pixels are displayed in the active light-emitting mode, i.e., dark display is coded as 0 and bright display is coded as 1, while the remaining uninverted pixels are displayed in the passive light-emitting mode, i.e., dark display is coded as 1 and bright display is coded as 0. Decoding and playing refers to the operation of decoding the target image data and then playing and displaying it.
In an embodiment, when the initial image is a single picture, the display device may decode and play the initial image. When the initial image is each frame of the multiple frames of images that compose a video, each frame of the initial image is decoded according to its timestamp, so that a complete video is played.
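On the display side, recovering the brightness value is the same arithmetic in reverse. The application does not specify how the decoder learns which blocks or pixels were inverted, so the boolean map used below is an assumption for illustration only.

```python
def recover_brightness(plane, inverted_map):
    """Undo the inversion before display: Y = 255 - D wherever inversion was applied (NumPy arrays)."""
    out = plane.copy()
    out[inverted_map] = 255 - out[inverted_map]
    return out
```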
In the technical solution of the embodiments of the present application, an initial image corresponding to the RGB data format is acquired; the initial image is divided into a plurality of image blocks; each image block of the plurality of image blocks is converted from the RGB data format into the YUV data format, where Y is a brightness value and U and V are chrominance values; the brightness value Y of a target image block is inverted into a darkness value D, the target image block being one or more of the plurality of image blocks; and the DUV data of the target image block and the YUV data of the uninverted image blocks are encoded to obtain target image data of the initial image. By inverting the brightness value Y into the darkness value D, the encoding is adapted to display devices in the active light-emitting mode, achieving the technical effect of reducing the amount of image data obtained by encoding.
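Putting S110 to S150 together, one possible high-level flow is sketched below, reusing the helper functions sketched earlier; the final encoding stage (for example the discrete cosine transform plus entropy coding of Embodiment 2) is deliberately left out because the application does not fix a particular codec.

```python
def encode_initial_image(rgb_image, block_size=8, threshold=128):
    """Illustrative end-to-end flow for S110-S140; the S150 encoder consumes the returned lists."""
    blocks = split_into_blocks(rgb_image, block_size)      # S120, helper sketched above
    duv_blocks, yuv_blocks = [], []
    for row in blocks:
        for block in row:
            y, u, v = rgb_block_to_yuv(block)              # S130, helper sketched above
            if y.mean() > threshold:                       # target image block
                duv_blocks.append((255 - y, u, v))         # S140: D = 255 - Y
            else:
                yuv_blocks.append((y, u, v))               # uninverted image block
    return duv_blocks, yuv_blocks                          # input to the S150 encoding step
```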
Embodiment 2
FIG. 2 is a schematic flowchart of an image processing method provided in Embodiment 2 of the present application. This embodiment is based on the above technical solution and is applicable to scenarios in which images are processed. The method may be executed by an image processing device, which may be implemented in software and/or hardware and may be integrated on a camera device.
As shown in FIG. 2, the image processing method provided in Embodiment 2 of the present application includes S210 to S260.
S210: Acquire an initial image, the initial image corresponding to an RGB data format.
The initial image refers to an image that needs to be encoded.
S220: Divide the initial image into a plurality of image blocks.
In this step, the images of the plurality of image blocks do not overlap one another. A plurality of image blocks means two or more image blocks, and the number of image blocks is not limited.
S230: Convert each image block of the plurality of image blocks from the RGB data format into a YUV data format, where Y is a brightness value and U and V are chrominance values.
In this step, each image block includes one or more pixels. The number of pixels is determined by the way the image blocks are divided, which is not limited in this embodiment.
S240: Invert the brightness value Y of a target image block into a darkness value D, the target image block being one or more of the plurality of image blocks.
The target image block refers to the one or more image blocks, among the plurality of image blocks, whose brightness value Y needs to be inverted into a darkness value D.
S250: Perform a discrete cosine transform on the DUV data of the target image block and the YUV data of the uninverted image blocks to obtain DCT-transformed DUV data and DCT-transformed YUV data.
In this step, performing a discrete cosine transform on the target image blocks and the uninverted image blocks means converting the image blocks from the spatial domain to the frequency domain, that is, computing which two-dimensional cosine waves the image is composed of. In an embodiment, the discrete cosine transform discards the high-frequency coefficients (alternating current (AC) coefficients) and retains the low-frequency information (direct current (DC) coefficients). The high-frequency coefficients generally preserve the boundary and texture information of the image, while the low-frequency information mainly preserves the information of flat regions in the image.
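A sketch of this transform step using SciPy's type-II DCT on an 8*8 plane. Keeping only a low-frequency corner of coefficients is one simple way to retain the low-frequency information and discard the high-frequency coefficients; the 4*4 cut-off is an arbitrary illustrative choice, not a value taken from the application.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_truncate(block8x8, keep=4):
    """2-D DCT of an 8x8 plane (D/Y, U, or V), keeping only the low-frequency keep x keep corner."""
    coeffs = dctn(block8x8.astype(np.float32), norm="ortho")
    mask = np.zeros_like(coeffs)
    mask[:keep, :keep] = 1.0          # DC coefficient plus the lowest-frequency AC coefficients
    return coeffs * mask

def inverse_dct(coeffs):
    """Inverse transform back to the spatial domain (used on the decoding side)."""
    return idctn(coeffs, norm="ortho")
```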
S260: Encode the DCT-transformed DUV data and the DCT-transformed YUV data to obtain target image data of the initial image.
In this step, the DCT-transformed DUV data and the DCT-transformed YUV data are encoded, thereby obtaining the target image data of the initial image.
In the technical solution of the embodiments of the present application, an initial image corresponding to the RGB data format is acquired; the initial image is divided into a plurality of image blocks; each image block of the plurality of image blocks is converted from the RGB data format into the YUV data format, where Y is a brightness value and U and V are chrominance values; the brightness value Y of a target image block is inverted into a darkness value D, the target image block being one or more of the plurality of image blocks; and the DUV data of the target image block and the YUV data of the uninverted image blocks are encoded to obtain target image data of the initial image. By inverting the brightness value Y into the darkness value D, the encoding is adapted to display devices in the active light-emitting mode, achieving the technical effect of reducing the amount of image data obtained by encoding.
Embodiment 3
FIG. 3 is a schematic structural diagram of an image processing device provided in Embodiment 3 of the present application. This embodiment is applicable to scenarios in which images are processed. The device may be implemented in software and/or hardware and may be integrated on a camera device.
As shown in FIG. 3, the image processing device provided in this embodiment may include an image acquisition module 310, a division module 320, a data conversion module 330, and an encoding module 340.
The image acquisition module 310 is configured to acquire an initial image, the initial image corresponding to an RGB data format;
the division module 320 is configured to divide the initial image into a plurality of image blocks;
the data conversion module 330 is configured to convert each image block of the plurality of image blocks from the RGB data format into a YUV data format, where Y is a brightness value and U and V are chrominance values;
the data conversion module 330 is further configured to invert the brightness value Y of a target image block into a darkness value D, the target image block being one or more of the plurality of image blocks;
the encoding module 340 is configured to encode the DUV data of the target image block and the YUV data of the uninverted image blocks to obtain target image data of the initial image.
Optionally, the image block includes one or more pixels, the RGB data format includes an R component, a G component, and a B component, and the YUV data format includes a Y component, a U component, and a V component, and the data conversion module 330 includes:
a pixel determining unit configured to determine the one or more pixels corresponding to each image block;
a Y-component determining unit configured to determine the Y component of each pixel according to the R component, G component, and B component of that pixel;
a first target pixel determining unit configured to determine a first target pixel among the one or more pixels;
a UV-component determining unit configured to determine the U component and/or V component of the first target pixel according to the R component, G component, and B component of the first target pixel.
Optionally, the data conversion module 330 is configured to invert the brightness value Y of the target image block into the darkness value D through a first preset formula, the first preset formula being:
D = 255 - Y, where Y = (0.299R + 0.587G + 0.114B), and R, G, and B are the coefficients with which the three primary colors participate in the mixture.
Optionally, the data conversion module 330 further includes:
an average brightness value obtaining unit configured to obtain the average brightness value of each image block;
a target image block determining unit configured to determine an image block whose average brightness value is greater than a first preset brightness threshold as the target image block;
a first darkness value inversion unit configured to invert the brightness value Y of each pixel corresponding to the target image block into the darkness value D.
Optionally, the data conversion module 330 further includes:
a brightness value Y obtaining unit configured to obtain the brightness value Y of each pixel corresponding to each image block;
a second target pixel determining unit configured to determine a pixel whose brightness value Y is greater than a second preset brightness threshold as a second target pixel;
a second darkness value inversion unit configured to invert the brightness value Y of the second target pixel into the darkness value D.
Optionally, the encoding module 340 includes:
a transform unit configured to perform a discrete cosine transform on the DUV data of the target image block and the YUV data of the uninverted image blocks to obtain DCT-transformed DUV data and DCT-transformed YUV data;
an encoding unit configured to encode the DCT-transformed DUV data and the DCT-transformed YUV data to obtain the target image data of the initial image.
Optionally, the device further includes:
a transmission module configured to transmit the target image data to a display device, so that the display device decodes and plays the target image data of the initial image.
The image processing device provided in the embodiments of the present application can execute the image processing method provided in any embodiment of the present application, and has the corresponding functional modules and beneficial effects for executing the method. For content not described in this embodiment, reference may be made to the description in any method embodiment of the present application.
Embodiment 4
FIG. 4 is a schematic structural diagram of a camera device provided in Embodiment 4 of the present application. FIG. 4 shows a block diagram of an exemplary camera device 612 suitable for implementing the embodiments of the present application. The camera device 612 shown in FIG. 4 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in FIG. 4, the camera device 612 takes the form of a general-purpose camera device. The components of the camera device 612 may include one or more processors 616, a storage device 628, and a bus 618 that connects the different system components (including the storage device 628 and the processors 616).
The bus 618 represents one or more of several types of bus structures, including a storage device bus or storage device controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus structures. By way of example, these architectures include the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
The camera device 612 includes a variety of computer-system-readable media. These media may be any available media that can be accessed by the camera device 612, including volatile and non-volatile media and removable and non-removable media.
The storage device 628 may include computer-system-readable media in the form of volatile memory, such as a random access memory (RAM) 630 and/or a cache 632. The terminal 612 may include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, the storage system 634 may be used to read from and write to non-removable, non-volatile magnetic media (not shown in FIG. 4, commonly called a "hard drive"). Although not shown in FIG. 4, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (such as a "floppy disk"), and an optical disk drive for reading from and writing to a removable non-volatile optical disk (such as a Compact Disc Read-Only Memory (CD-ROM), a Digital Video Disc-Read Only Memory (DVD-ROM), or other optical media), may be provided. In these cases, each drive may be connected to the bus 618 through one or more data media interfaces. The storage device 628 may include at least one program product having a set of (e.g., at least one) program modules configured to perform the functions of the embodiments of the present application.
A program/utility 640 having a set of (at least one) program modules 642 may be stored, for example, in the storage device 628. Such program modules 642 include an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment. The program modules 642 generally perform the functions and/or methods in the embodiments described in the present application.
The camera device 612 may also communicate with one or more external devices 614 (such as a keyboard, a pointing terminal, a display 624, etc.), with one or more terminals that enable a user to interact with the camera device 612, and/or with any terminal (such as a network card, a modem, etc.) that enables the camera device 612 to communicate with one or more other computing terminals. Such communication may take place through an input/output (I/O) interface 622. Moreover, the camera device 612 may also communicate with one or more networks (for example, a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 620. As shown in FIG. 4, the network adapter 620 communicates with the other modules of the camera device 612 through the bus 618. Although not shown in the figure, other hardware and/or software modules may be used in conjunction with the camera device 612, including microcode, terminal drivers, redundant processors, external disk drive arrays, Redundant Arrays of Independent Disks (RAID) systems, tape drives, data backup storage systems, and the like.
The processor 616 executes various functional applications and data processing by running the programs stored in the storage device 628, for example implementing an image processing method provided in any embodiment of the present application, and the method may include:
acquiring an initial image, the initial image corresponding to an RGB data format;
dividing the initial image into a plurality of image blocks;
converting each image block of the plurality of image blocks from the RGB data format into a YUV data format, where Y is a brightness value and U and V are chrominance values;
inverting the brightness value Y of a target image block into a darkness value D, the target image block being one or more of the plurality of image blocks;
encoding the DUV data of the target image block and the YUV data of the uninverted image blocks to obtain target image data of the initial image.
In the technical solution of the embodiments of the present application, an initial image corresponding to the RGB data format is acquired; the initial image is divided into a plurality of image blocks; each image block of the plurality of image blocks is converted from the RGB data format into the YUV data format, where Y is a brightness value and U and V are chrominance values; the brightness value Y of a target image block is inverted into a darkness value D, the target image block being one or more of the plurality of image blocks; and the DUV data of the target image block and the YUV data of the uninverted image blocks are encoded to obtain target image data of the initial image. By inverting the brightness value Y into the darkness value D, the encoding is adapted to display devices in the active light-emitting mode, achieving the technical effect of reducing the amount of image data obtained by encoding.
Embodiment 5
Embodiment 5 of the present application further provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, an image processing method provided in any embodiment of the present application is implemented, and the method may include:
acquiring an initial image, the initial image corresponding to an RGB data format;
dividing the initial image into a plurality of image blocks;
converting each image block of the plurality of image blocks from the RGB data format into a YUV data format, where Y is a brightness value and U and V are chrominance values;
inverting the brightness value Y of a target image block into a darkness value D, the target image block being one or more of the plurality of image blocks;
encoding the DUV data of the target image block and the YUV data of the uninverted image blocks to obtain target image data of the initial image.
The computer-readable storage medium of the embodiments of the present application may adopt any combination of one or more computer-readable media. A computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. Computer-readable storage media include: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM) or flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this document, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take a variety of forms, including an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable medium can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
The program code contained on the storage medium may be transmitted by any appropriate medium, including wireless, wireline, optical cable, radio frequency (RF), and the like, or any suitable combination of the above.
Computer program code for carrying out the operations of the present application may be written in one or more programming languages or a combination thereof. The programming languages include object-oriented programming languages such as Java, Smalltalk, and C++, and also conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or terminal. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
In the technical solution of the embodiments of the present application, an initial image corresponding to the RGB data format is acquired; the initial image is divided into a plurality of image blocks; each image block of the plurality of image blocks is converted from the RGB data format into the YUV data format, where Y is a brightness value and U and V are chrominance values; the brightness value Y of a target image block is inverted into a darkness value D, the target image block being one or more of the plurality of image blocks; and the DUV data of the target image block and the YUV data of the uninverted image blocks are encoded to obtain target image data of the initial image. By inverting the brightness value Y into the darkness value D, the encoding is adapted to display devices in the active light-emitting mode, achieving the technical effect of reducing the amount of image data obtained by encoding.

Claims (10)

  1. An image processing method, comprising:
    acquiring an initial image, wherein the initial image corresponds to an RGB data format;
    dividing the initial image into a plurality of image blocks;
    converting each image block of the plurality of image blocks from the RGB data format into a YUV data format, wherein Y is a brightness value and U and V are chrominance values;
    inverting the brightness value Y of a target image block into a darkness value D, the target image block being at least one of the plurality of image blocks;
    encoding DUV data of the target image block and YUV data of uninverted image blocks to obtain target image data of the initial image.
  2. The method according to claim 1, wherein the image block comprises at least one pixel, the RGB data format comprises an R component, a G component, and a B component, the YUV data format comprises a Y component, a U component, and a V component, and converting each image block of the plurality of image blocks from the RGB data format into the YUV data format comprises:
    determining the at least one pixel corresponding to each image block;
    determining the Y component of each pixel according to the R component, G component, and B component of that pixel;
    determining a first target pixel among the at least one pixel;
    determining, according to the R component, G component, and B component of the first target pixel, at least one of the following of the first target pixel: the U component, the V component.
  3. The method according to claim 1, wherein inverting the brightness value Y of the target image block into the darkness value D comprises:
    inverting the brightness value Y of the target image block into the darkness value D through a first preset formula, the first preset formula being:
    D = 255 - Y, where Y = (0.299R + 0.587G + 0.114B), and R, G, and B are the coefficients with which the three primary colors participate in the mixture.
  4. The method according to claim 1, wherein inverting the brightness value Y of the target image block into the darkness value D comprises:
    obtaining an average brightness value of each image block;
    determining an image block whose average brightness value is greater than a first preset brightness threshold as the target image block;
    inverting the brightness value Y of each pixel corresponding to the target image block into the darkness value D.
  5. The method according to claim 1, wherein inverting the brightness value Y of the target image block into the darkness value D comprises:
    obtaining the brightness value Y of each pixel corresponding to each image block;
    determining a pixel whose brightness value Y is greater than a second preset brightness threshold as a second target pixel;
    inverting the brightness value Y of the second target pixel into the darkness value D.
  6. The method according to claim 1, wherein encoding the DUV data of the target image block and the YUV data of the uninverted image blocks to obtain the target image data of the initial image comprises:
    performing a discrete cosine transform on the DUV data of the target image block and the YUV data of the uninverted image blocks to obtain DCT-transformed DUV data and DCT-transformed YUV data;
    encoding the DCT-transformed DUV data and the DCT-transformed YUV data to obtain the target image data of the initial image.
  7. The method according to claim 1, wherein after encoding the DUV data of the target image block and the YUV data of the uninverted image blocks to obtain the target image data of the initial image, the method further comprises:
    transmitting the target image data to a display device, so that the display device decodes and plays the target image data of the initial image.
  8. An image processing device, comprising:
    an image acquisition module configured to acquire an initial image, wherein the initial image corresponds to an RGB data format;
    a division module configured to divide the initial image into a plurality of image blocks;
    a data conversion module configured to convert each image block of the plurality of image blocks from the RGB data format into a YUV data format, wherein Y is a brightness value and U and V are chrominance values;
    wherein the data conversion module is further configured to invert the brightness value Y of a target image block into a darkness value D, the target image block being at least one of the plurality of image blocks;
    an encoding module configured to encode DUV data of the target image block and YUV data of uninverted image blocks to obtain target image data of the initial image.
  9. A camera device, comprising:
    at least one processor;
    a storage device configured to store at least one program;
    wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the image processing method according to any one of claims 1-7.
  10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the image processing method according to any one of claims 1-7.
PCT/CN2020/087531 2020-04-28 2020-04-28 图像处理方法、装置、摄像设备和存储介质 WO2021217428A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US17/595,374 US12080030B2 (en) 2020-04-28 2020-04-28 Image processing method and device, camera apparatus and storage medium
PCT/CN2020/087531 WO2021217428A1 (zh) 2020-04-28 2020-04-28 图像处理方法、装置、摄像设备和存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/087531 WO2021217428A1 (zh) 2020-04-28 2020-04-28 图像处理方法、装置、摄像设备和存储介质

Publications (1)

Publication Number Publication Date
WO2021217428A1 true WO2021217428A1 (zh) 2021-11-04

Family

ID=78373257

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/087531 WO2021217428A1 (zh) 2020-04-28 2020-04-28 图像处理方法、装置、摄像设备和存储介质

Country Status (2)

Country Link
US (1) US12080030B2 (zh)
WO (1) WO2021217428A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114040246A (zh) * 2021-11-08 2022-02-11 网易(杭州)网络有限公司 图形处理器的图像格式转换方法、装置、设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040208362A1 (en) * 2003-04-15 2004-10-21 Nokia Corporation Encoding and decoding data to render 2D or 3D images
US20090034872A1 (en) * 2007-08-03 2009-02-05 Hon Hai Precision Industry Co., Ltd. Method and apparatus for increasing brightness of image captured in low light
CN107360429A (zh) * 2016-05-09 2017-11-17 杨家辉 景深包装及解包装的rgb格式与yuv格式的转换与反转换的方法及电路
CN110136183A (zh) * 2018-02-09 2019-08-16 华为技术有限公司 一种图像处理的方法以及相关设备
CN110910333A (zh) * 2019-12-12 2020-03-24 腾讯科技(深圳)有限公司 图像处理方法和图像处理设备

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100630888B1 (ko) 2004-11-23 2006-10-04 삼성전자주식회사 이미지 암부인식률 개선을 위한 장치 및 방법
JP4156631B2 (ja) 2006-04-26 2008-09-24 シャープ株式会社 画像処理方法および画像処理装置
JP4998287B2 (ja) * 2008-01-25 2012-08-15 ソニー株式会社 画像処理装置および方法、並びにプログラム
US20110135198A1 (en) 2009-12-08 2011-06-09 Xerox Corporation Chrominance encoding and decoding of a digital image
WO2012127836A1 (ja) 2011-03-18 2012-09-27 パナソニック株式会社 生成装置、表示装置、再生装置、眼鏡
JP5762250B2 (ja) * 2011-11-07 2015-08-12 三菱電機株式会社 画像信号処理装置および画像信号処理方法
US20130162765A1 (en) * 2011-12-22 2013-06-27 2Dinto3D LLC Modifying luminance of images in a source video stream in a first output type format to affect generation of supplemental video stream used to produce an output video stream in a second output type format
EP2940889B1 (en) 2012-12-27 2019-07-31 Panasonic Intellectual Property Corporation of America Visible-light-communication-signal display method and display device
JP2015176252A (ja) 2014-03-13 2015-10-05 オムロン株式会社 画像処理装置および画像処理方法
JP6516851B2 (ja) * 2015-02-13 2019-05-22 テレフオンアクチーボラゲット エルエム エリクソン(パブル) 画素の前処理および符号化
US20170155905A1 (en) * 2015-11-30 2017-06-01 Intel Corporation Efficient intra video/image coding using wavelets and variable size transform coding
US10163029B2 (en) * 2016-05-20 2018-12-25 Gopro, Inc. On-camera image processing based on image luminance data
TWI756365B (zh) 2017-02-15 2022-03-01 美商脫其泰有限責任公司 圖像分析系統及相關方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040208362A1 (en) * 2003-04-15 2004-10-21 Nokia Corporation Encoding and decoding data to render 2D or 3D images
US20090034872A1 (en) * 2007-08-03 2009-02-05 Hon Hai Precision Industry Co., Ltd. Method and apparatus for increasing brightness of image captured in low light
CN107360429A (zh) * 2016-05-09 2017-11-17 杨家辉 景深包装及解包装的rgb格式与yuv格式的转换与反转换的方法及电路
CN110136183A (zh) * 2018-02-09 2019-08-16 华为技术有限公司 一种图像处理的方法以及相关设备
CN110910333A (zh) * 2019-12-12 2020-03-24 腾讯科技(深圳)有限公司 图像处理方法和图像处理设备

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114040246A (zh) * 2021-11-08 2022-02-11 网易(杭州)网络有限公司 图形处理器的图像格式转换方法、装置、设备及存储介质

Also Published As

Publication number Publication date
US12080030B2 (en) 2024-09-03
US20220245862A1 (en) 2022-08-04

Similar Documents

Publication Publication Date Title
KR102367205B1 (ko) 컬러 맵핑 함수들을 이용하여 hdr 픽처 및 상기 hdr 픽처로부터 획득된 sdr 픽처의 양자를 인코딩하기 위한 방법 및 디바이스
AU2016212243B2 (en) A method and apparatus of encoding and decoding a color picture
KR102617258B1 (ko) 이미지 프로세싱 방법 및 장치
US11317108B2 (en) Method and device for decoding a color picture
CN112087648B (zh) 图像处理方法、装置、电子设备及存储介质
WO2024027287A1 (zh) 图像处理系统及方法、计算机可读介质和电子设备
US20180005358A1 (en) A method and apparatus for inverse-tone mapping a picture
US20180249166A1 (en) Coding and decoding method and corresponding devices
WO2021073304A1 (zh) 一种图像处理的方法及装置
US20190130546A1 (en) Method and device for obtaining a second image from a first image when the dynamic range of the luminance of said first image is greater than the dynamic range of the luminance of said second image
WO2021217428A1 (zh) 图像处理方法、装置、摄像设备和存储介质
WO2022141515A1 (zh) 视频的编解码方法与装置
TWI825410B (zh) 影像處理方法、裝置、攝影設備和儲存介質
US8891894B2 (en) Psychovisual image compression
JP2018507618A (ja) カラー・ピクチャを符号化および復号する方法および装置
WO2023185706A1 (zh) 影像处理方法、影像处理装置、存储介质
CN115063327A (zh) 图像处理方法和装置、以及视频处理方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20933835

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20933835

Country of ref document: EP

Kind code of ref document: A1