WO2022057746A1 - Image processing method and apparatus, device, and computer-readable storage medium - Google Patents

Image processing method and apparatus, device, and computer-readable storage medium

Info

Publication number
WO2022057746A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
difference information
compressed
decompressed
target difference
Prior art date
Application number
PCT/CN2021/117864
Other languages
English (en)
French (fr)
Inventor
周凯
刘红波
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to KR1020237011638A (publication KR20230062862A)
Priority to EP21868571.7A (publication EP4203474A4)
Priority to JP2023517665A (publication JP2023547587A)
Publication of WO2022057746A1
Priority to US18/184,940 (publication US20230222696A1)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46 Embedding additional information in the video signal during the compression process
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00 Image coding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136 Incoming video signal characteristics or properties
    • H04N19/137 Motion inside a coding unit, e.g. average field, frame or block difference
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H04N19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117 Filters, e.g. for pre-processing or post-processing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146 Data rate or code amount at the encoder output
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146 Data rate or code amount at the encoder output
    • H04N19/147 Data rate or code amount at the encoder output according to rate distortion criteria
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/154 Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/172 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a picture, frame or field
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression

Definitions

  • the present application relates to the field of computer vision, and in particular, to an image processing method, apparatus, device, and computer-readable storage medium.
  • Image compression is one of the basic technologies in the field of computer vision. Its purpose is to remove redundant data and reduce the amount of data required to represent an image, thereby saving storage space. With the development of mobile communication technology, how to realize image compression in a mobile environment to save transmission bandwidth is one of the hot topics in current computer vision research.
  • Images are collected by industrial cameras, compressed, and then transmitted to the server side.
  • The server side decompresses and analyzes the images, and triggers corresponding commands to control and monitor workpiece processing, assembly, and inspection during production.
  • However, the precision loss between the decompressed image and the original image is relatively large, which affects recognition on the server side and in turn affects the analysis results.
  • the present application provides an image processing method, apparatus, device, and computer-readable storage medium, which are beneficial to reduce the loss of precision between the image recovered by the second image device and the original image.
  • the present application provides an image processing method, which is applied to a first image device.
  • the method may be performed by the first image device, or may be performed by a device (eg, a processor or a chip, etc.) in the first image device.
  • the method may include: obtaining a compressed image of an original image; decompressing the compressed image to obtain a first decompressed image; determining target difference information, where the target difference information is obtained based on difference information between the original image and the first decompressed image; compressing the target difference information to obtain compressed target difference information; and sending the compressed image and the compressed target difference information to a second image device.
  • In everyday use of compressed images, the compressed image is sent to the second image device, and the second image device directly decompresses the compressed image.
  • The second image device can then directly identify and analyze the decompressed image. However, if the second image device decompresses the received compressed image directly, the image obtained by decompression has a large accuracy error with respect to the original image, which affects subsequent identification and analysis. Therefore, in the embodiments of the present application, on the basis of the original compressed image, target difference information between the first decompressed image (obtained by decompressing the compressed image) and the original image is added.
  • In this way, the accuracy loss between the original image and the first decompressed image can be analyzed, so that the second image device can restore the decompressed image to obtain a decompressed image with higher accuracy, reducing the accuracy loss relative to the original image and improving the accuracy of the decompressed image.
  • the target difference information is compressed and sent to the second image device to reduce the amount of data to be transmitted.
  • In some embodiments, both the difference information and the target difference information include image matrices, and each image matrix includes a plurality of image element values. Determining the target difference information includes: modifying the image element values within a preset threshold range in the image matrix included in the difference information to a preset value, to obtain the image matrix included in the target difference information.
  • In some embodiments, the first image device may perform matrix subtraction between the image matrix included in the first decompressed image and the image matrix included in the original image to obtain the difference information between the first decompressed image and the original image.
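The subtraction and thresholding steps described above can be sketched as follows. This is an illustrative sketch using NumPy, not the patent's reference implementation; the function name, the threshold of 2, and the preset value of 0 are assumptions chosen for demonstration.

```python
import numpy as np

def target_difference(original: np.ndarray, decompressed: np.ndarray,
                      threshold: int = 2, preset: int = 0) -> np.ndarray:
    """Subtract the decompressed image matrix from the original image
    matrix, then replace small differences that fall within the preset
    threshold range with the preset value (illustrative values)."""
    # Widen the dtype so negative differences are representable.
    diff = original.astype(np.int16) - decompressed.astype(np.int16)
    diff[np.abs(diff) <= threshold] = preset
    return diff
```

Zeroing the near-zero element values makes the target difference information highly compressible, which is consistent with the goal of reducing the amount of transmitted data.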
  • In some embodiments, obtaining the compressed image of the original image includes: receiving the original image from a photographing device; obtaining a reference image corresponding to the original image; and compressing the original image according to the reference image to obtain the compressed image.
  • the original image captured by the photographing device and a reference image corresponding to the original image may be obtained.
  • Compressing the original image based on its corresponding reference image, that is, compressing the difference information between the original image and the reference image, can improve the compression efficiency of the original image and is also widely applicable to scenarios that require real-time image compression.
  • In some embodiments, obtaining the compressed image of the original image includes: shooting to obtain the original image; obtaining a reference image corresponding to the original image; and compressing the original image according to the reference image to obtain the compressed image.
  • the first image device may include a photographing device, the photographed image may be used as an original image, and a corresponding reference image may be obtained for compression processing.
  • By deploying the photographing device in the first image device, the number of redundant devices can be reduced.
  • Compressing the original image based on its corresponding reference image, that is, compressing the difference information between the original image and the reference image, can improve the compression efficiency of the original image and is also widely applicable to scenarios that require real-time image compression.
  • the method further includes: sending the reference image to a second image device.
  • The reference image corresponding to the original image can be sent to the second image device, so that the second image device can use the reference image to decompress the compressed image of the original image, and then restore the decompressed image according to the target difference information to obtain an image with higher accuracy, thereby reducing the loss of accuracy between the image restored by the second image device and the original image.
  • an embodiment of the present application provides an image processing method, and the method is applied to a second image device.
  • the method may be performed by the second image device, or may be performed by a device (eg, a processor or a chip, etc.) in the second image device.
  • the method may include: receiving, from the first image device, a compressed image of the original image and compressed target difference information, where the target difference information is obtained based on difference information between the original image and a first decompressed image; decompressing the compressed image to obtain the first decompressed image; and performing image processing on the first decompressed image according to the compressed target difference information to obtain a second decompressed image.
  • the compressed image of the original image and the compressed target difference information can be received at the second image device, and the compressed image can be decompressed.
  • Performing decompression on the compressed image of the original image may be the inverse operation of the compression performed on the original image by the first image device, thereby obtaining the first decompressed image.
  • The compressed target difference information is decompressed to obtain the difference information between the first decompressed image of the original image and the original image, so that the first decompressed image can be compensated to obtain a second decompressed image with higher accuracy, thereby reducing the loss of accuracy between the second decompressed image and the original image.
  • the method further includes: receiving a reference image corresponding to the original image from the first image device.
  • The first image device compresses the original image by using the reference image corresponding to the original image; after receiving the reference image sent by the first image device, the second image device can use the reference image to perform inverse-operation decompression on the compressed image to obtain the first decompressed image.
  • a second decompressed image with higher precision is obtained, the loss of accuracy between the second decompressed image and the original image is reduced, and the accuracy of the second decompressed image is improved.
  • decompressing the compressed image to obtain the first decompressed image includes: decompressing the compressed image according to the reference image to obtain the first decompressed image image.
  • The first image device compresses the original image by using the reference image corresponding to the original image to obtain the compressed image; the second image device can then decompress the compressed image by using that same reference image to obtain the first decompressed image.
  • Each compressed image can use the previous frame as its reference image; the second image device decompresses each compressed image using its reference image to obtain the decompressed image, and then restores it using the target difference information to obtain a second decompressed image with higher accuracy, thereby reducing the loss of accuracy between the second decompressed image and the original image.
  • In some embodiments, both the target difference information and the first decompressed image include image matrices, and each image matrix includes a plurality of image element values. Performing image processing on the first decompressed image to obtain the second decompressed image includes: decompressing the compressed target difference information to obtain the target difference information; adding the image matrix included in the first decompressed image and the image matrix included in the target difference information to obtain the image matrix included in the second decompressed image; and determining the second decompressed image according to the image matrix included in the second decompressed image.
  • the second image device may perform a matrix addition operation on the image matrix included in the target difference information and the image matrix included in the first decompressed image, and add the difference information between the first decompressed image and the original image to the first decompressed image.
  • a second decompressed image with higher precision is obtained, thereby reducing the difference in accuracy loss between the restored second decompressed image and the original image.
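The restoration step at the second image device, adding the target difference matrix back onto the first decompressed image, might look like the following sketch. The function name is illustrative, and the clipping to the 8-bit gray-value range is an implementation assumption rather than a step stated by the application.

```python
import numpy as np

def restore_image(first_decompressed: np.ndarray,
                  target_diff: np.ndarray) -> np.ndarray:
    """Add the target difference matrix onto the first decompressed image
    to compensate the accuracy loss, then clip back to valid gray values."""
    restored = first_decompressed.astype(np.int16) + target_diff
    return np.clip(restored, 0, 255).astype(np.uint8)
```

Because element values whose difference fell within the preset threshold were set to the preset value on the sending side, those pixels are left unchanged here, while larger errors are corrected exactly.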
  • an embodiment of the present application provides an image processing apparatus, and the image processing apparatus has part or all of the functions of the first image device described in the first aspect above.
  • the functions of the apparatus may have the functions of some or all of the embodiments of the first image device in the present application, and may also have the functions of independently implementing any one of the embodiments of the present application.
  • the functions can be implemented by hardware, or can be implemented by hardware executing corresponding software.
  • the hardware or software includes one or more units or modules corresponding to the above functions.
  • the image processing apparatus may include: an acquisition unit, configured to acquire a compressed image of an original image; a decompression unit, configured to decompress the compressed image to obtain a first decompressed image; a determination unit, configured to determine target difference information, where the target difference information is obtained based on difference information between the original image and the first decompressed image; a compression unit, configured to compress the target difference information to obtain compressed target difference information; and a sending unit, configured to send the compressed image and the compressed target difference information to a second image device.
  • In some embodiments, both the difference information and the target difference information include image matrices, and each image matrix includes a plurality of image element values; the determining unit is specifically configured to modify the image element values within a preset threshold range in the image matrix included in the difference information to a preset value, to obtain the image matrix included in the target difference information.
  • the acquiring unit is specifically configured to: receive an original image from a photographing device; acquire a reference image corresponding to the original image; and compress the original image according to the reference image to obtain the compressed image.
  • the obtaining unit is specifically configured to: capture an original image; obtain a reference image corresponding to the original image; and compress the original image according to the reference image to obtain a compressed image.
  • the sending unit is further configured to: send the reference image to a second image device.
  • an embodiment of the present application provides an image processing apparatus, and the image processing apparatus has some or all of the functions of the second image device described in the second aspect above.
  • the function of the apparatus may have the function of some or all of the embodiments of the second image device in the present application, or may have the function of independently implementing any one of the embodiments of the present application.
  • the functions can be implemented by hardware, or can be implemented by hardware executing corresponding software.
  • the hardware or software includes one or more units or modules corresponding to the above functions.
  • the image processing apparatus may include: a receiving unit, configured to receive, from a first image device, a compressed image of an original image and compressed target difference information, where the target difference information is obtained based on difference information between the original image and a first decompressed image; a decompression unit, configured to decompress the compressed image to obtain the first decompressed image; and an image processing unit, configured to perform image processing on the first decompressed image according to the compressed target difference information to obtain a second decompressed image.
  • the receiving unit is further configured to: receive a reference image corresponding to the original image from the first image device.
  • the decompression unit is specifically configured to: perform decompression processing on the compressed image according to the reference image to obtain the first decompressed image.
  • In some embodiments, both the target difference information and the first decompressed image include image matrices, and each image matrix includes a plurality of image element values; the decompression unit is specifically configured to: decompress the compressed target difference information to obtain the target difference information; add the image matrix included in the first decompressed image and the image matrix included in the target difference information to obtain the image matrix included in the second decompressed image; and determine the second decompressed image according to the image matrix included in the second decompressed image.
  • an embodiment of the present application provides a computer device, where the computer device includes a processor, and the processor is configured to support the computer device to implement corresponding functions in the image processing method provided in the first aspect.
  • the computer device may also include a memory, coupled to the processor, which holds program instructions and data necessary for the computer device.
  • the computer device may also include a communication interface for the computer device to communicate with other devices or a communication network.
  • an embodiment of the present application provides a computer device, the computer device includes a processor, and the processor is configured to support the computer device to implement corresponding functions in the image processing method provided in the second aspect.
  • the computer device may also include a memory, coupled to the processor, which holds program instructions and data necessary for the computer device.
  • the computer device may also include a communication interface for the computer device to communicate with other devices or a communication network.
  • an embodiment of the present application provides a computer-readable storage medium for storing computer software instructions used by the first image device provided in the first aspect, including a program for executing the method designed in the first aspect.
  • an embodiment of the present application provides a computer-readable storage medium for storing computer software instructions used by the second image device provided in the second aspect, including a program for executing the method designed in the second aspect.
  • an embodiment of the present application provides a computer program, where the computer program includes instructions that, when executed by a computer, cause the computer to execute the process performed by the first image device in the first aspect.
  • an embodiment of the present application provides a computer program, where the computer program includes instructions that, when executed by a computer, cause the computer to execute the process performed by the second image device in the second aspect.
  • the present application provides a chip system
  • the chip system includes a processor for supporting a computer device to implement the functions involved in the first aspect, for example, generating or processing the information involved in the image processing method of the first aspect.
  • the chip system further includes a memory for storing necessary program instructions and data of the first image device.
  • the chip system may be composed of chips, or may include chips and other discrete devices.
  • the present application provides a chip system
  • the chip system includes a processor for supporting a computer device to implement the functions involved in the second aspect, for example, generating or processing the information involved in the image processing method of the second aspect.
  • the chip system further includes a memory for storing necessary program instructions and data of the second image device.
  • the chip system may be composed of chips, or may include chips and other discrete devices.
  • FIG. 1 is a schematic diagram of the architecture of an image processing system provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a system architecture for image processing in the field of industrial vision provided by an embodiment of the present application
  • FIG. 3 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
  • FIG. 4 is an application scenario diagram of an image processing method provided by an embodiment of the present application applied to industrial vision
  • FIG. 5a is a schematic diagram of the accuracy loss of an image before and after compression provided by an embodiment of the present application;
  • FIG. 5b is a schematic diagram of the accuracy loss of an image before and after compression provided by an embodiment of the present application;
  • FIG. 6 is a schematic time sequence diagram of an image processing method provided by an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of another image processing apparatus provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a computer device provided by an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of another computer device provided by an embodiment of the present application.
  • An image matrix, which can also be described as a matrix or the matrix of an image, is the matrix representation of an image. The rows of the image matrix correspond to the height of the image, and the columns of the image matrix correspond to the width of the image.
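The row/column correspondence above can be shown in one line. This assumes NumPy as the matrix representation, which the application does not mandate:

```python
import numpy as np

# A grayscale image 3 pixels high and 4 pixels wide is represented by a
# 3 x 4 image matrix: rows correspond to image height, columns to width.
image = np.zeros((3, 4), dtype=np.uint8)
height, width = image.shape
```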
  • the first image device may perform a subtraction operation of two matrices on an image matrix included in the original image and an image matrix included in the first decompressed image to obtain difference information.
  • the second image device may perform an addition operation of two matrices on the image matrix included in the first decompressed image and the image matrix included in the target difference information to obtain the image matrix included in the second decompressed image, so that the second decompressed image can be obtained according to the image matrix included in the second decompressed image.
  • An image element value can also be described as an element value, an element value of an image, an element value of an image matrix, or an element value of a matrix. The number of image element values is the same as the number of pixels, and each image element value corresponds to one pixel in the image: the position of the image element value in the image matrix corresponds to the position of the pixel in the image, and the image element value is the gray value of that pixel.
  • When the first image device performs the subtraction of two matrices on the image matrix contained in the original image and the image matrix contained in the first decompressed image, the obtained difference information specifically refers to: for each image element value in the image matrix contained in the original image, a difference operation is performed with the image element value at the corresponding position in the image matrix contained in the first decompressed image, to obtain the image matrix included in the difference information.
  • Video coding can also be described as video compression.
  • Video is composed of continuous image frames. Due to the visual persistence effect of the human eye, when the image frame sequence is played at a certain rate, a continuous video can be seen. In order to store and transmit the video more conveniently, the video can be encoded. Since the similarity between consecutive image frames in the video is extremely high, the video encoding usually removes the spatial redundancy and temporal redundancy in the video.
  • commonly used video coding standards include advanced video coding (advanced video coding, H.264/AVC), high efficiency video coding (high efficiency video coding, HEVC/H.265) and so on.
  • the first image device may acquire the original image captured by the photographing device, and perform compression processing on the original image in a video coding manner to obtain a compressed image of the original image.
  • this application takes the H.264 compression technology as an example for explanation.
  • H.264 compression technology mainly uses intra-frame prediction to reduce spatial data redundancy, inter-frame prediction (motion estimation and compensation) to reduce temporal data redundancy, transformation and quantization to compress residual data, and entropy coding to reduce redundancy in the transmission and signaling of residuals and motion vectors.
  • the H.264 compression technology mainly includes the following parts for image compression: block division, frame grouping, inter-frame prediction, intra-frame prediction, discrete cosine transform, and coding compression.
  • dividing a block may include dividing macroblocks and dividing sub-blocks; an image is divided into macroblocks by regions with a size of 16×16 or 8×8.
  • Dividing sub-blocks may be dividing a region that has been divided into macroblocks into smaller sub-blocks. Dividing macroblocks and dividing subblocks can be used for subsequent prediction and compression.
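The macroblock division described above can be sketched as follows (an illustration of the idea only, not the H.264 implementation; names are ours):

```python
def split_into_blocks(image, block=16):
    """Split an image matrix into block x block macroblocks.

    Assumes the image dimensions are multiples of the block size,
    as in the 16x16 / 8x8 division described above.
    """
    h, w = len(image), len(image[0])
    blocks = []
    for top in range(0, h, block):
        for left in range(0, w, block):
            blocks.append([row[left:left + block]
                           for row in image[top:top + block]])
    return blocks

# A 4x4 image split into 2x2 "macroblocks" yields 4 blocks:
img = [[r * 4 + c for c in range(4)] for r in range(4)]
blocks = split_into_blocks(img, block=2)
# len(blocks) == 4; blocks[0] == [[0, 1], [4, 5]]
```

Each block can then be predicted and compressed independently, as in the subsequent prediction steps.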
  • Frame grouping is to divide closely related image frames into a group of pictures (GOP) (an image sequence).
  • The frame types are: I frame (intra-frame coded frame), P frame (forward predictive coded frame), and B frame (bi-directional interpolated prediction frame).
  • the I frame can retain a complete picture during compression, and does not need to be generated by referring to other image frames.
  • an I frame contains a large amount of image information, and can be used as a reference image frame for subsequent P frames and B frames.
  • a P frame is an encoded image that compresses the amount of transmitted data by exploiting the temporal redundancy of previously encoded image frames in the image sequence.
  • the P frame represents the difference between the image frame and the previous I frame (or P frame).
  • the difference defined by this frame needs to be superimposed on the previously buffered image (the I frame) to generate the final image.
  • the P frame does not have complete picture data, but only data that is different from the picture of the previous frame.
  • the previous I frame (or P frame) must also be referenced when decoding, and cannot be decoded separately.
  • P frames transmit only differences, so the compression ratio of P frames is relatively high.
  • a B frame is an encoded image that compresses the amount of transmitted data by considering the temporal redundancy of both the encoded image frames before it and the encoded image frames after it in the image sequence. It can be understood that decoding a B frame requires not only the buffered decoded image frame before it but also the decoded image frame after it; the final decoded image is obtained by superimposing the data of the current frame onto the preceding and following images.
  • Inter-frame prediction: in an image sequence, the difference between the pictures of two consecutive image frames (for example, the difference between a P frame and the preceding I frame) is computed, the identical part is removed, and only the differing part is stored as the compressed data.
  • Intra-frame prediction is based on the compression of the current image frame and has nothing to do with the adjacent image frames before and after.
  • the intra-frame prediction result is obtained by predicting the current pixel from adjacent already-coded pixels in the same frame, exploiting the fact that the chrominance values of adjacent pixels in an image do not change abruptly.
  • the H.264 compression technology includes 9 intra-frame prediction modes; the prediction mode and the difference obtained by subtracting the intra-frame prediction image from the original image can be saved, so that the image can be restored during decoding.
  • in the discrete cosine transform (DCT) step, an integer DCT can be performed on the obtained difference to remove the correlation of the data and compress it further.
  • Coding compression, which compresses data losslessly, generally comes at the end of video compression. It can be lossless entropy coding based on the principle of information entropy.
  • Entropy coding converts a series of element symbols used to represent the video sequence into a compressed code stream for transmission or storage. The input symbols may include quantized transform coefficients, motion vector information, prediction mode information, etc. Entropy coding can effectively remove the statistical redundancy of these video element symbols, and is one of the important tools for ensuring the compression efficiency of video coding.
  • coding and compression can be performed using context-based adaptive binary arithmetic coding (CABAC).
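The amount an entropy coder can remove is bounded by the information entropy of the symbol stream. As a hedged illustration (our own toy calculation, not part of the H.264 standard), the Shannon entropy of a skewed symbol distribution can be computed as:

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Lower bound (bits per symbol) that any entropy coder can
    approach for an i.i.d. source with this empirical distribution."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A highly skewed symbol stream (e.g. mostly-zero quantized
# coefficients) needs far fewer than 8 bits per symbol:
stream = [0] * 90 + [1] * 8 + [5] * 2
h = shannon_entropy(stream)
# h is roughly 0.54 bits per symbol
```

This is why the mostly-zero residual data produced by prediction and quantization compresses so well under CABAC or other entropy coders.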
  • FIG. 1 is a schematic structural diagram of an image processing system provided by an embodiment of the present application.
  • the image processing system architecture in this application may include the first image device 101 and the second image device 102 in FIG. 1 , and the first image device 101 and the second image device 102 may communicate through a network.
  • the first image device 101 and the second image device 102 may be respectively deployed in any computer device involved in image processing.
  • the first image device 101 and the second image device 102 may be deployed on one or more computing devices (e.g., a central server) in a cloud environment, or on one or more computing devices (edge computing devices) in an edge environment, respectively; an edge computing device can be a server.
  • the cloud environment refers to a cluster of central computing equipment owned by the cloud service provider and used to provide computing, storage, and communication resources.
  • the cloud environment has more storage resources and computing resources.
  • the edge environment refers to the cluster of edge computing devices that are geographically close to the original data collection device and used to provide computing, storage, and communication resources.
  • the target tracking system in this application can also be deployed on one or more terminal devices.
  • the target tracking system can be deployed on a terminal device with certain computing, storage and communication resources, which can be a computer, vehicle-mounted terminal, mobile phone, tablet computer, notebook computer, PDA, mobile internet device (MID), gateway, etc.
  • the first image device 101 can decompress the compressed image of the original image, obtain target difference information between the original image and the decompressed first decompressed image, and compress the target difference information.
  • the compressed image of the original image and the compressed target difference information are sent to the second image device 102.
  • the second image device 102 decompresses the received compressed image and the compressed target difference information to obtain the first decompressed image and the target difference information, and uses the decompressed target difference information to compensate the first decompressed image, thereby obtaining a second decompressed image with higher precision.
  • the image processing system may further include a photographing apparatus 103, and the photographing apparatus 103 and the first image device 101 may communicate through a network.
  • the first image device may receive the original image captured by the photographing apparatus 103, and perform compression processing on the original image to obtain a compressed image of the original image.
  • the photographing apparatus 103 may be deployed in the first image device 101, or may be deployed outside the first image device 101.
  • the photographing device 103 may include, but is not limited to, a video camera, an infrared camera, a lidar, and the like.
  • the architecture of the image processing system in FIG. 1 is only an exemplary implementation in the embodiment of the present application, and the architecture of the image processing system in the embodiment of the present application includes but is not limited to the above image processing system architecture.
  • the image processing method provided by the embodiment of the present application can reduce the precision-loss error of the image before and after compression.
  • the image processing method described in this application can be applied to many fields such as unmanned driving, augmented reality (AR), industrial vision, medical image processing, etc., to achieve specific functions.
  • the image processing system in this application can be applied to the field of industrial vision.
  • the field of industrial vision refers to adding a vision system to an industrial automatic production line, simulating human visual functions by capturing images, extracting information, and processing it for detection, measurement and control of workpieces on an industrial automatic production line.
  • the image of a workpiece on the conveyor belt of an assembly line in a factory can be acquired, the workpiece in the image can be identified and analyzed, and unqualified workpieces on the conveyor belt can be detected, so as to generate control instructions; a programmable logic controller (PLC) then controls a robotic arm to move the unqualified workpieces off the conveyor belt.
  • In the JPEG (Joint Photographic Experts Group) compression algorithm, the quantization table is designed according to the perception of the human eye. Since the human eye is not as sensitive to the high-frequency part as it is to the low-frequency part, the image obtained after compression may suffer an unexpected loss of precision relative to the original image. And because the high-frequency part includes, to a certain extent, features such as corners, edges, and lines that need to be extracted for subsequent image recognition and analysis, this loss leads to poor recognition accuracy and thus affects the analysis results.
  • FIG. 2 is a schematic diagram of the architecture of an image processing system in the field of industrial vision provided by an embodiment of the present application. As shown in FIG. 2,
  • the image processing system architecture in this application may include a first image device 201 and a second image device 202, wherein a photographing device 201a and an industrial gateway 201b may be deployed in the first image device 201; the photographing device 201a can be an industrial camera, and the second image device 202 can be a mobile edge computing (MEC) server.
  • the first image device 201 may establish a wired or wireless communication connection with the second image device 202.
  • the photographing device 201a and the industrial gateway 201b may also be connected by wired or wireless communication.
  • the manner of the above-mentioned communication connection may include, but is not limited to, wireless fidelity (Wi-Fi), Bluetooth, near field communication (near field communication, NFC), and the like.
  • the image processing system provided in this embodiment of the present application may further include a network device.
  • the device may be an access network device, a base station, a wireless access point (wireless access point, AP), and the like.
  • the photographing device 201a and the industrial gateway 201b can use the GigE Vision protocol to transmit images at high speed over a Gigabit Ethernet interface.
  • the photographing device 201a may capture an original image of a target object (eg, a workpiece on a conveyor belt on an assembly line), and send the original image to the industrial gateway 201b.
  • the industrial gateway 201b can be used to: compress the acquired original image to obtain a compressed image; decompress the compressed image to obtain a first decompressed image; obtain target difference information according to the difference information between the original image and the first decompressed image;
  • compress the target difference information to obtain compressed target difference information; and send the compressed image and the compressed target difference information to the MEC server (i.e., the second image device 202).
  • when the industrial gateway 201b wirelessly transmits the compressed image and the compressed target difference information, the data can be forwarded to the user plane function (UPF) through the base station, and then forwarded by the UPF to the edge computing server (the MEC server).
  • alternatively, the photographing device 201a in the image processing system can collect the original image of the target object and directly compress the original image to obtain the compressed image; then decompress the compressed image to obtain the first decompressed image; then obtain target difference information according to the difference information between the original image and the first decompressed image; perform compression processing on the target difference information to obtain the compressed target difference information; and forward the compressed image and the compressed target difference information to the UPF through the base station, which then sends them to the MEC server.
  • the edge computing server receives the compressed image and the compressed target difference information, decompresses them to obtain the first decompressed image and the target difference information, and then performs image processing on the first decompressed image according to the target difference information to obtain a second decompressed image. Further, the second decompressed image can be identified and analyzed to obtain a result, and a control instruction can be triggered by the result.
  • the control instruction generated by the edge computing server may be an instruction to identify unqualified workpieces; it may also be an instruction to identify unqualified workpieces and report them to a management application, or the like.
  • FIG. 2 is only an exemplary implementation in the embodiments of the present application, and the image processing system architectures in the embodiments of the present application include but are not limited to the above image processing system architectures.
  • FIG. 3 is a schematic flowchart of an image processing method provided by an embodiment of the present application.
  • the method can be applied to the above-mentioned image processing system architecture in FIG. 1 or FIG. 2, wherein
  • the first image device in FIG. 1 or FIG. 2 can be used to support and execute steps S301-S305 of the method flow shown in FIG. 3, and
  • the second image device in FIG. 1 or FIG. 2 can be used to support and execute steps S306-S307 of the method flow shown in FIG. 3.
  • the method may include the following steps S301-S307.
  • Step S301 The first image device acquires a compressed image of the original image.
  • the first image device may acquire the original image sent by the photographing device, and compress the original image to obtain a compressed image of the original image. If the first image device includes a photographing device, the image of the target object can also be photographed to obtain the original image. Then, the original image is compressed to obtain a compressed image of the original image.
  • the compression method may use an image compression method.
  • the first image device may have a built-in hardware encoder, and may use compression technologies such as H.264, H.265, and JPEG to compress the original image.
  • the compression method in this embodiment of the present application can also use a video compression scheme for compression.
  • Advanced video coding (AVC/H.264) can be used to compress the original image in this application,
  • and high efficiency video coding (HEVC/H.265) can also be used to compress the original image in this application, which is not limited here.
  • the multi-frame images are divided into groups of pictures (GOPs), and a group of pictures may include I frames, P frames and B frames.
  • one GOP may include one I frame and multiple P frames.
  • a preset reference image may be obtained, and the preset reference image is used as an I frame, and the original image of the first frame is used as a P frame for compression processing.
  • the subsequent multiple frames of images can be sequentially compressed by using the previous frame of the image as a reference image to obtain compressed images of multiple frames of original images.
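The reference-frame scheme just described (each frame compressed against the previous frame, the first against a preset reference) can be sketched as a toy lossless model; this is our own simplification, not the actual H.264 pipeline:

```python
def encode_sequence(frames, reference):
    """Store each frame as its difference from a reference frame:
    the first frame is differenced against a preset reference image,
    and every later frame against the previous original frame,
    mirroring the I/P structure described above (a toy model)."""
    encoded, prev = [], reference
    for frame in frames:
        encoded.append([a - b for a, b in zip(frame, prev)])
        prev = frame
    return encoded

def decode_sequence(encoded, reference):
    """Invert encode_sequence by accumulating the stored differences."""
    frames, prev = [], reference
    for diff in encoded:
        frame = [d + p for d, p in zip(diff, prev)]
        frames.append(frame)
        prev = frame
    return frames

# Three 1-D "frames" round-trip exactly in this lossless toy model:
ref = [0, 0, 0, 0]
frames = [[10, 10, 10, 10], [10, 11, 10, 10], [10, 11, 12, 10]]
assert decode_sequence(encode_sequence(frames, ref), ref) == frames
```

In real video coding the differences are additionally transformed and quantized, which is exactly what makes the process lossy and motivates the target difference information of this application.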
  • FIG. 4 is an application scene diagram of an image processing method provided in this embodiment applied to industrial vision.
  • the workpieces are conveyed on a conveyor belt, as shown in Figure 4 for 4 workpieces.
  • the first image device may include a photographing device and an industrial gateway.
  • the photographing device can photograph the workpiece on the conveyor belt, obtain multiple frames of original images, and compress the original images to obtain compressed images of the original images.
  • the original image captured by the photographing device may also be sent to the industrial gateway in the first image device, and the industrial gateway compresses the original image to obtain a compressed image of the original image.
  • the first frame of original image captured by the photographing device is P1, and P1 is compressed in combination with a locally stored reference image obtained from the cloud, to obtain the compressed image of the first frame of original image P1.
  • the first frame of original image P1 can then be used as the reference image for the second frame of original image P2, to obtain the compressed image of P2; the second frame of original image P2 can be used as the reference image for the third frame of original image P3,
  • to obtain the compressed image of P3; and so on, until the compressed image of each frame of original image is obtained.
  • Step S302 The first image device performs decompression processing on the above-mentioned compressed image to obtain a first decompressed image.
  • the first image device may perform decompression processing on the compressed image according to the inverse operation of the compression processing to obtain the first decompressed image.
  • the entropy coding can be decoded, and the intra-frame prediction and inter-frame prediction can be reversed, to obtain the first decompressed image of the original image.
  • the decoding process is performed using a decoding method corresponding to entropy coding.
  • decompression processing is performed on the inter-frame compression and the intra-frame compression.
  • the intra-frame compression can be decompressed through the stored prediction mode information, and the inter-frame compression can be decompressed through the reference image corresponding to each frame, to obtain the first decompressed image. The first decompressed image is saved for subsequent processing.
  • Step S303 The first image device determines target difference information, and the target difference information is obtained according to the difference information between the original image and the first decompressed image.
  • the above-mentioned compression of the original image in the present application through H.264 is lossy compression.
  • the present application determines target difference information from the original image and the decompressed first decompressed image, compresses the determined target difference information, and sends it to the second image device, so that after decompressing the compressed image, the second image device can perform image processing on the decompressed image according to the target difference information, so as to obtain a decompressed image with higher precision.
  • both the original image and the first decompressed image include an image matrix. It can be understood that the image sizes of the original image and the first decompressed image are the same, the width and height of the two image matrices are also the same, and each image element value in the image matrix included in the original image corresponds to an image element value in the image matrix included in the first decompressed image.
  • the image matrix contained in the first decompressed image is subtracted from the image matrix contained in the original image; that is, each image element value in the matrix contained in the original image and the image element value at the corresponding position in the matrix contained in the first decompressed image are subtracted,
  • and the image matrix including the difference information is obtained.
  • the operation of performing the difference operation may be performed by the first image device calling a software program.
  • the difference information is then filtered to obtain the target difference information.
  • the value of each image element in the image matrix included in the difference information is compared with a preset threshold range, and when it is determined that an image element value is within the preset threshold range, that image element value is modified to a preset value, to obtain the image matrix included in the target difference information.
  • the preset threshold range may be set manually, or may be preset by the first image device, and may also be adjusted according to different application scenarios. It can be understood that different preset threshold ranges can affect the compression precision of the image processing method used in the present application, so that the compression precision can be controlled. In this embodiment of the present application, the preset value may be 0, 1, etc., which is not limited here.
  • the preset value may be determined as 0, so as to compress the target difference information.
  • alternatively, the number of occurrences of each image element value in the image matrix included in the difference information can be counted, and the image element value that occurs most frequently can be used as the preset value to obtain the target difference information.
  • in this way, the image element values with a large difference between the image matrices of the original image and the first decompressed image are retained, and the image element values with a small difference are eliminated.
  • on the one hand, the decompressed image can be compensated through the target difference information, adding back the parts of the original image's image matrix with a large difference to the image obtained in the decompression process, which improves the accuracy of the decompressed image; on the other hand, discarding the parts with a small difference from the original image reduces the amount of transmitted data.
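The thresholding step above can be sketched as follows (an illustration only; the threshold range and preset value are example choices, not values fixed by this application):

```python
def target_difference(diff, low=-2, high=2, preset=0):
    """Replace small difference values (within [low, high]) with a
    preset value, keeping only the large differences, as described
    above. The range and preset value here are illustrative."""
    return [[preset if low <= v <= high else v for v in row]
            for row in diff]

# Small differences (noise from lossy compression) are zeroed out,
# large differences (features worth restoring) are kept:
diff = [[0, 1, -7],
        [12, -1, 2]]
target = target_difference(diff)
# target == [[0, 0, -7], [12, 0, 0]]
```

A mostly-zero matrix like this compresses very well under the entropy coding of the next step, which is the point of the preset value being 0.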
  • Step S304 The first image device performs compression processing on the above target difference information to obtain compressed target difference information.
  • the first image device may compress the target difference information.
  • for example, the target difference information may be entropy encoded to obtain the compressed target difference information.
  • the method of entropy coding may be Shannon coding, Huffman coding, arithmetic coding, etc.,
  • and is not limited here. It can be understood that the target difference information loses no information before and after encoding.
  • the second image device decompresses the compressed target difference information to obtain lossless target difference information.
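To illustrate that this stage is lossless, here is a sketch using `zlib` (DEFLATE, which internally uses Huffman coding) as a stand-in entropy coder; the patent does not prescribe this particular coder:

```python
import json
import zlib

# The target difference information survives compression and
# decompression bit-for-bit: the round trip is exactly lossless.
target = [[0, 0, -7], [12, 0, 0]]
compressed = zlib.compress(json.dumps(target).encode())
restored = json.loads(zlib.decompress(compressed).decode())
assert restored == target
```

Because the coder is lossless, the second image device recovers exactly the target difference information the first image device computed, regardless of which entropy coding method is chosen.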
  • Step S305 The first image device sends the compressed image and the compressed target difference information to the second image device.
  • the first image device sends the compressed image and the compressed target difference information to the second image device, so that the second image device can decompress the compressed image and the compressed target difference information, and then use the target difference information to process the decompressed image to improve its accuracy.
  • the first image device (such as the photographing device and the industrial gateway in Fig. 4) sends the compressed image and the compressed target difference information to the MEC server.
  • after receiving the compressed image and the compressed target difference information, the MEC server decompresses the compressed image and the compressed target difference information.
  • Step S306 The second image device decompresses the compressed image to obtain the first decompressed image.
  • the second image device includes a hardware decoder, which can be used to decompress the received compressed image to obtain the first decompressed image.
  • the reverse operation may be performed according to the sequence of video coding to obtain the first decompressed image of the original image.
  • the second image device saves the first decompressed image for subsequent processing.
  • Step S307 The second image device performs image processing on the first decompressed image according to the compressed target difference information to obtain a second decompressed image.
  • the second image device may call a software program to perform a matrix addition using the image matrix included in the target difference information and the image matrix of the first decompressed image, to obtain the image matrix included in the second decompressed image. That is, each image element value in the image matrix of the target difference information and the image element value at the corresponding position in the image matrix of the first decompressed image are added to obtain the image element values in the image matrix included in the second decompressed image. Further, the pixel value of each pixel in the second decompressed image can be obtained from this image matrix, and thus the second decompressed image is obtained. As shown in FIG. 4, the MEC server can decompress and restore the received image to obtain the second decompressed image. Further, the second decompressed image is identified and analyzed, so as to generate a control instruction.
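The compensation step in S307 is the inverse of the subtraction in S303 and can be sketched as (toy illustration with plain lists; names are ours):

```python
def compensate(first_decompressed, target_diff):
    """Add the target difference matrix back onto the first
    decompressed image, element by element, to obtain the image
    matrix of the second decompressed image."""
    return [[p + d for p, d in zip(row_p, row_d)]
            for row_p, row_d in zip(first_decompressed, target_diff)]

first = [[120, 120, 119],
         [198, 201, 200]]
target = [[0, 0, 0],
          [2, 0, -1]]
second = compensate(first, target)
# second == [[120, 120, 119], [200, 201, 199]]
```

Where the target difference kept a nonzero entry, the corresponding pixel of the second decompressed image is restored toward the original gray value; where the entry was zeroed by the threshold, the first decompressed image's value is kept as-is.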
  • FIG. 5 a and FIG. 5 b are schematic diagrams of accuracy loss of an image before and after compression according to an embodiment of the present application.
  • FIG. 5a is a schematic diagram of the result obtained, without using the image processing method provided by the embodiment of the present application, by performing a subtraction operation on the image matrix included in the original image and the image matrix included in the first decompressed image.
  • FIG. 5b is a schematic diagram of a result obtained by using the image processing method provided by the embodiment of the present application, that is, using the image matrix included in the original image and the image matrix included in the second decompressed image to perform a subtraction operation.
  • 0, 1, and 255 in the black box indicate that the difference between the gray value of the processed image and the gray value of the pixel at the same position in the original image is 0, 1, and 255, respectively:
  • 0 indicates that the gray value of the image obtained after processing has no difference from the original image,
  • while 1 and 255 indicate that the gray value of the second decompressed image obtained after processing differs from the gray value of the original image by plus or minus 1, and so on.
  • the numbers after the colon following 0, 1, and 255 represent the number of pixels with that difference.
  • the first line "0:7568827" in Fig. 5a indicates that the number of pixels in which the gray value of the first decompressed image has no difference with the original image is 7568827.
  • the first row "0:12900307" in Fig. 5b also indicates that the number of pixels in which the gray value of the second decompressed image has no difference with the original image is 12900307.
  • implementing the image processing method provided by the present application can increase the number of pixels whose gray-value difference from the pixel at the same position in the original image is 0, that is, it can increase
  • the number of pixels of the second decompressed image whose gray values remain unchanged relative to the original image. Therefore, the precision of the compressed image after decompression can be improved, and the precision-loss error between the image restored by the second image device and the original image can be reduced.
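The statistic shown in FIG. 5a and FIG. 5b can be reproduced with a small sketch (our own illustration; differences are taken modulo 256 as in an 8-bit grayscale image, which is why -1 appears as 255):

```python
from collections import Counter

def difference_histogram(original, restored):
    """Count, for each gray-value difference (mod 256, as in an
    8-bit image), how many pixels have that difference - the
    per-difference pixel counts shown in FIG. 5a / FIG. 5b."""
    counts = Counter()
    for row_o, row_r in zip(original, restored):
        for o, r in zip(row_o, row_r):
            counts[(r - o) % 256] += 1
    return dict(counts)

original = [[10, 10, 10], [20, 20, 20]]
restored = [[10, 11, 10], [19, 20, 20]]
hist = difference_histogram(original, restored)
# hist == {0: 4, 1: 1, 255: 1}
```

A larger count at key 0, as in FIG. 5b relative to FIG. 5a, means more pixels survived compression and restoration with their gray values intact.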
  • in the embodiment of the present application, the compressed image is decompressed, and the target difference information is determined according to the difference information between the original image and the decompressed first decompressed image.
  • the target difference information can determine the difference between the original image and the image obtained after compression processing and decompression processing, that is, the loss of precision caused by compression, so that the image can be compensated after the second image device decompresses it.
  • the target difference information is compressed and sent to the second image device, so that the second image device can perform image processing on the decompressed image (that is, the first decompressed image) according to the target difference information, and thereby obtain a decompressed image with higher precision (that is, the second decompressed image).
  • FIG. 6 is a schematic time sequence diagram of an image processing method provided by an embodiment of the present application.
  • P1 The first image device acquires the original image and the reference image.
  • P2 The first image device performs compression processing on the original image through the built-in hardware encoder to obtain a compressed image of the original image.
  • P3 The first image device may send the compressed image of the original image to the second image device.
  • P4 The hardware decoder in the first image device can decompress the compressed image of the original image to obtain the first decompressed image.
  • P5 The first image device invokes a software program to perform subtraction processing on the image matrix included in the original image and the image matrix included in the first decompressed image to obtain difference information.
  • P6 The first image device modifies the image element values within the preset threshold range in the image matrix included in the difference information to the preset values to obtain target difference information.
  • P7 The first image device encodes the target difference information to obtain compressed target difference information.
  • P8 The first image device sends the compressed target difference information to the second image device.
  • the second image device receives the compressed image of the original image and the compressed target difference information.
  • P9 The second image device may use a built-in hardware decoder to decompress the compressed image of the original image to obtain the first decompressed image.
  • P10 The second image device decompresses the compressed target difference information to obtain target difference information.
  • P11 The second image device performs image processing on the first decompressed image according to the target difference information to obtain a second decompressed image.
  • P3 may be executed before P8, after P8, or at the same time as P8; this is not limited here.
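The P1-P11 flow above can be sketched end to end in a few lines. This is an illustrative sketch only: the hardware encoder/decoder pair is stood in for by coarse gray-value quantization, Python lists play the role of image matrices, and all function names (`lossy_compress`, `make_target_diff`, `restore`) are hypothetical, not part of the embodiments.

```python
# Illustrative end-to-end sketch of the P1-P11 flow using Python lists as
# image matrices. The hardware encoder/decoder is stood in for by coarse
# gray-value quantization (a lossy step); all names are hypothetical.

def lossy_compress(img, step=8):
    # Stand-in for the hardware encoder: quantize gray values.
    return [[v // step for v in row] for row in img]

def decompress(comp, step=8):
    # Stand-in for the hardware decoder; quantization precision is lost.
    return [[v * step for v in row] for row in comp]

def make_target_diff(orig, first, low=-2, high=2, preset=0):
    # P5/P6: subtract the matrices, then replace small differences
    # (inside the preset threshold range) with the preset value.
    diff = [[o - f for o, f in zip(ro, rf)] for ro, rf in zip(orig, first)]
    return [[preset if low <= d <= high else d for d in row] for row in diff]

def restore(first, target):
    # P11: add the target difference back onto the first decompressed image.
    return [[f + d for f, d in zip(rf, rd)] for rf, rd in zip(first, target)]

# Sender side (first image device)
original = [[52, 57, 61], [79, 58, 76]]
compressed = lossy_compress(original)            # P2
first_dec = decompress(compressed)               # P4
target = make_target_diff(original, first_dec)   # P5, P6

# Receiver side (second image device)
receiver_first = decompress(compressed)          # P9
second_dec = restore(receiver_first, target)     # P11
# Every pixel of second_dec is now within the threshold range (here +/-2)
# of the original, while small differences cost no extra data.
```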
  • the compressed image is decompressed, and the target difference information is determined according to the difference information between the original image and the first decompressed image obtained by decompression.
  • the target difference information can determine the difference between the original image and the image obtained after compression and decompression, that is, the precision lost in compression, so that the second image device can restore the image after decompressing it.
  • the target difference information is compressed and sent to the second image device, so that the second image device can restore the image. Thereby, the precision-loss difference between the restored image and the original image can be reduced, and the precision of the restored image can be improved.
  • FIG. 7 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present application.
  • the image processing apparatus 700 may include an acquisition unit 701, a decompression unit 702, a determination unit 703, a compression unit 704, and a transmission unit 705.
  • the detailed description of each unit is as follows.
  • the above image processing apparatus has the function of implementing the first image device described in the embodiments of the present application.
  • the above image processing apparatus includes the modules, units, or means corresponding to the steps that the first image device described in the embodiments of the present application performs.
  • the above functions, units, or means may be implemented by software, by hardware, by hardware executing corresponding software, or by a combination of software and hardware. For details, further reference may be made to the corresponding descriptions in the foregoing method embodiments.
  • the above-mentioned difference information and the above-mentioned target difference information both include image matrices, and each image matrix includes a plurality of image element values; the above-mentioned determining unit 703 is specifically configured to: modify the image element values within the preset threshold range in the image matrix included in the difference information to preset values, to obtain the image matrix included in the target difference information.
  • the obtaining unit 701 is specifically configured to: receive an original image from a photographing device; obtain a reference image corresponding to the original image; and compress the original image according to the reference image to obtain a compressed image.
  • the obtaining unit 701 is specifically configured to: capture an original image; obtain a reference image corresponding to the original image; and compress the original image according to the reference image to obtain a compressed image.
  • the above-mentioned sending unit 705 is further configured to: send the above-mentioned reference image to the second image device.
  • FIG. 8 is a schematic structural diagram of another image processing apparatus provided by an embodiment of the present application.
  • the image processing apparatus 800 may include a receiving unit 801, a decompression unit 802, and an image processing unit 803; the detailed description of each unit is as follows.
  • the above image processing apparatus has the function of implementing the second image device described in the embodiments of the present application.
  • the above image processing apparatus includes the modules, units, or means corresponding to the steps that the second image device described in the embodiments of the present application performs; the above functions, units, or means may be implemented by software, by hardware, by hardware executing corresponding software, or by a combination of software and hardware.
  • the receiving unit 801 is configured to receive, from the first image device, the compressed image of the original image and the compressed target difference information, where the target difference information is obtained according to the difference information between the original image and the first decompressed image; the decompression unit 802 is configured to decompress the compressed image to obtain the first decompressed image; the image processing unit 803 is configured to perform image processing on the first decompressed image according to the compressed target difference information to obtain the second decompressed image.
  • the aforementioned receiving unit 801 is further configured to: receive a reference image corresponding to the aforementioned original image from the aforementioned first image device.
  • the decompression unit 802 is specifically configured to: perform decompression processing on the compressed image according to the reference image to obtain the first decompressed image.
  • the above-mentioned target difference information and the above-mentioned first decompressed image both include image matrices, and each image matrix includes a plurality of image element values;
  • the above-mentioned decompression unit 802 is specifically configured to: decompress the compressed target difference information to obtain the target difference information; add the image matrix included in the first decompressed image and the image matrix included in the target difference information to obtain the image matrix included in the second decompressed image; and determine the second decompressed image according to the image matrix included in the second decompressed image.
  • FIG. 9 is a schematic structural diagram of a computer device provided by an embodiment of the present application.
  • the computer device 900 has the function of implementing the first image device described in the embodiment of the present application.
  • the computer device 900 includes at least one processor 901 , at least one memory 902 , and at least one communication interface 903 .
  • the device may also include general components such as an antenna, which will not be described in detail here.
  • the processor 901 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of programs in the above solutions.
  • the communication interface 903 is used to communicate with other devices or communication networks, such as Ethernet, radio access network (RAN), core network, wireless local area networks (wireless local area networks, WLAN) and the like.
  • Memory 902 may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, or a random access memory (RAM) or another type of dynamic storage device that can store information and instructions; it may also be an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, and the like), a magnetic disk storage medium or other magnetic storage device, or any other medium that can carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, without limitation.
  • the memory can exist independently and be connected to the processor through a bus.
  • the memory can also be integrated with the processor.
  • the above-mentioned memory 902 is used for storing the application program code for executing the above solution, and the execution is controlled by the processor 901 .
  • the above-mentioned processor 901 is used for executing the application program codes stored in the above-mentioned memory 902 .
  • the code stored in the memory 902 can execute the image processing method provided in FIG. 3 above, for example: obtaining a compressed image of the original image; decompressing the compressed image to obtain a first decompressed image; determining target difference information, where the target difference information is obtained according to the difference information between the original image and the first decompressed image; compressing the target difference information to obtain compressed target difference information; and sending the compressed image and the compressed target difference information to the second image device.
  • FIG. 10 is a schematic structural diagram of another computer device provided by an embodiment of the present application.
  • the device 1000 has the function of implementing the second image device described in the embodiment of the present application.
  • the computer device 1000 includes at least one processor 1001 , at least one memory 1002 , and at least one communication interface 1003 .
  • the device may also include general components such as an antenna, which will not be described in detail here.
  • the processor 1001 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits used to control the execution of the above programs.
  • CPU central processing unit
  • ASIC application-specific integrated circuit
  • the communication interface 1003 is used to communicate with other devices or communication networks, such as Ethernet, radio access network (RAN), core network, wireless local area networks (WLAN) and the like.
  • Memory 1002 may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, or a random access memory (RAM) or another type of dynamic storage device that can store information and instructions; it may also be an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, and the like), a magnetic disk storage medium or other magnetic storage device, or any other medium that can carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, without limitation.
  • the memory can exist independently and be connected to the processor through a bus.
  • the memory can also be integrated with the processor.
  • the above-mentioned memory 1002 is used for storing the application program code for executing the above solution, and the execution is controlled by the processor 1001 .
  • the above-mentioned processor 1001 is used for executing the application program codes stored in the above-mentioned memory 1002 .
  • the code stored in the memory 1002 can execute the image processing method provided in FIG. 3 above, for example: receiving, from the first image device, a compressed image of the original image and compressed target difference information, where the target difference information is obtained according to the difference information between the original image and the first decompressed image; decompressing the compressed image to obtain the first decompressed image; and performing image processing on the first decompressed image according to the compressed target difference information to obtain the second decompressed image.
  • the disclosed apparatus may be implemented in other manners.
  • the device embodiments described above are only illustrative.
  • the division of the above-mentioned units is only a logical function division; in actual implementation there may be other division manners.
  • multiple units or components may be combined or integrated into another system, or some features can be ignored or not implemented.
  • the mutual coupling, direct coupling, or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be in electrical or other forms.
  • the units described above as separate components may or may not be physically separated, and components shown as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the integrated units are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium.
  • the technical solutions of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device, and specifically a processor in the computer device) to execute all or part of the steps of the above methods in various embodiments of the present application.
  • the aforementioned storage medium may include various media that can store program code, such as a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM).
  • a component may be, but is not limited to, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer.
  • an application running on a computing device and the computing device may be components.
  • One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between 2 or more computers.
  • these components can execute from various computer readable media having various data structures stored thereon.
  • a component may communicate through local and/or remote processes, for example based on a signal having one or more data packets (e.g., data from two components interacting with another component in a local system, in a distributed system, and/or across a network such as the Internet interacting with other systems via signals).


Abstract

This application provides an image processing method, apparatus, device, and computer-readable storage medium, applied to the field of computer vision. The method may include: a first image device obtains a compressed image of an original image and decompresses the compressed image to obtain a first decompressed image; determines target difference information according to the difference information between the original image and the first decompressed image; compresses the target difference information to obtain compressed target difference information; and finally sends the compressed image and the compressed target difference information to a second image device, so that the second image device restores the image according to the received information. Using this application helps reduce the precision loss between the image restored by the second image device and the original image.

Description

Image processing method, apparatus, device, and computer-readable storage medium
This application claims priority to Chinese Patent Application No. 202010981884.9, filed with the China National Intellectual Property Administration on September 17, 2020 and entitled "Image processing method, apparatus, device, and computer-readable storage medium", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of computer vision, and in particular, to an image processing method, apparatus, device, and computer-readable storage medium.
Background
Image compression is one of the basic technologies in the field of computer vision. Its purpose is to remove redundant data and reduce the amount of data needed to represent an image, thereby saving storage space. With the development of mobile communication technology, how to implement image compression in a mobile environment to save transmission bandwidth is one of the hot topics in current computer vision research. Especially in the industrial field, images are captured by an industrial camera, compressed, and transmitted to a server; the server decompresses and analyzes them and triggers corresponding instructions, so as to control and monitor workpiece machining, assembly, and inspection in the production process. In actual applications, however, the precision-loss error between the decompressed image and the original image is large, which affects recognition on the server side and hence the analysis results.
Therefore, how to reduce the precision-loss error between the decompressed image and the original image is a technical problem to be solved urgently.
Summary
This application provides an image processing method, apparatus, device, and computer-readable storage medium, which help reduce the precision loss between the image restored by the second image device and the original image.
According to a first aspect, this application provides an image processing method applied to a first image device. The method may be executed by the first image device, or by an apparatus (for example, a processor or a chip) in the first image device. Taking the first image device as an example, the method may include: obtaining a compressed image of an original image; decompressing the compressed image to obtain a first decompressed image; determining target difference information, where the target difference information is obtained according to the difference information between the original image and the first decompressed image; compressing the target difference information to obtain compressed target difference information; and sending the compressed image and the compressed target difference information to a second image device.
With the method provided in the first aspect, everyday experience with compressed images shows that, after the original image is compressed, the compressed image is sent to the second image device, which directly decompresses it and then recognizes and analyzes the decompressed image. If the image obtained by the second image device by decompressing the received compressed image has a large precision error relative to the original image, subsequent recognition and analysis will be affected. Therefore, in the embodiments of this application, in addition to the original compressed image, target difference information between the original image and the first decompressed image obtained by decompressing the compressed image is added. Through the target difference information, the precision loss between the original image and the first decompressed image can be analyzed, so that the second image device can restore the decompressed image to obtain a decompressed image with higher precision, reduce the precision loss relative to the original image, and improve the precision of the decompressed image. In addition, the target difference information is compressed before being sent to the second image device, reducing the amount of data to be transmitted.
In a possible implementation, the difference information and the target difference information each include an image matrix, and each image matrix includes a plurality of image element values; determining the target difference information includes: modifying the image element values within a preset threshold range in the image matrix included in the difference information to preset values, to obtain the image matrix included in the target difference information. In this embodiment, after obtaining the first decompressed image, the first image device may subtract the image matrix included in the first decompressed image from the image matrix included in the original image to obtain the difference information between them. To reduce the amount of data for storing and transmitting the difference information while keeping the parts where the two images differ greatly, the image element values within the preset threshold range in the matrix of the difference information are modified to preset values, obtaining the image matrix included in the target difference information. It can be understood that in the modified image matrix the number of identical image element values increases, which reduces the amount of transmitted data. Through the target difference information, the precision loss between the image restored by the second image device and the original image can be reduced.
In a possible implementation, obtaining the compressed image of the original image includes: receiving the original image from a photographing apparatus; obtaining a reference image corresponding to the original image; and compressing the original image according to the reference image to obtain the compressed image. In this embodiment, when compressing the original image, the first image device may obtain the original image captured by the photographing apparatus and the reference image corresponding to the original image. Compressing the original image based on its corresponding reference image, that is, compressing by the difference information between the original image and the reference image, can improve the compression efficiency of the original image and can also be widely applied in scenarios where images need to be compressed in real time.
In a possible implementation, obtaining the compressed image of the original image includes: capturing the original image; obtaining a reference image corresponding to the original image; and compressing the original image according to the reference image to obtain the compressed image. In this embodiment, the first image device may include a photographing apparatus, take the captured image as the original image, and obtain the corresponding reference image for compression; deploying the photographing apparatus in the first image device reduces device redundancy. Meanwhile, compressing the image based on the relationship between the original image and its corresponding reference image, that is, compressing by their difference information, can improve the compression efficiency and can also be widely applied in scenarios where images need to be compressed in real time.
In a possible implementation, the method further includes: sending the reference image to the second image device. In this embodiment, the reference image corresponding to the original image may be sent to the second image device, so that the second image device can use the reference image to decompress the compressed image of the original image and can thus restore the decompressed image of the original image according to the target difference information, obtaining an image with higher precision and reducing the precision loss between the image restored by the second image device and the original image.
According to a second aspect, an embodiment of this application provides an image processing method applied to a second image device. The method may be executed by the second image device, or by an apparatus (for example, a processor or a chip) in the second image device. Taking the second image device as an example, the method may include: receiving, from a first image device, a compressed image of an original image and compressed target difference information, where the target difference information is obtained according to the difference information between the original image and a first decompressed image; decompressing the compressed image to obtain the first decompressed image; and performing image processing on the first decompressed image according to the compressed target difference information to obtain a second decompressed image.
With the method provided in the second aspect, the second image device may receive the compressed image of the original image and the compressed target difference information and decompress the compressed image. Decompressing the compressed image of the original image may be the inverse operation of the compression performed by the first device, yielding the first decompressed image. Decompressing the compressed target difference information yields the difference information, contained in the target difference information, between the decompressed image of the original image (that is, the first decompressed image) and the original image, so that the first decompressed image can be compensated to obtain a second decompressed image with higher precision, thereby reducing the precision loss between the second decompressed image and the original image.
In a possible implementation, the method further includes: receiving, from the first image device, the reference image corresponding to the original image. In this embodiment, the first image device compresses the original image using its corresponding reference image; after receiving the reference image sent by the first image device, the second image device can decompress the first compressed image by the inverse operation using the reference image to obtain the first decompressed image, so that image processing can be performed according to the target difference information and the first decompressed image to obtain a second decompressed image with higher precision, reducing the precision loss between the second decompressed image and the original image and improving the precision of the second decompressed image.
In a possible implementation, decompressing the compressed image to obtain the first decompressed image includes: decompressing the compressed image according to the reference image to obtain the first decompressed image. In this embodiment, the first image device compresses the original image using the reference image corresponding to the original image to obtain the compressed image; the second image device may decompress the compressed image using the reference image corresponding to the original image of the compressed image to obtain the first decompressed image. It can be understood that each compressed image may take the image of the previous frame as its reference image; by decompressing each compressed image according to its reference image, the decompressed image can be obtained, and the decompressed image can then be restored through the target difference information to obtain a second decompressed image with higher precision, reducing the precision loss between the second decompressed image and the original image.
In a possible implementation, the target difference information and the first decompressed image each include an image matrix, and each image matrix includes a plurality of image element values; performing image processing on the first decompressed image according to the compressed target difference information to obtain the second decompressed image includes: decompressing the compressed target difference information to obtain the target difference information; adding the image matrix included in the first decompressed image and the image matrix included in the target difference information to obtain the image matrix included in the second decompressed image; and determining the second decompressed image according to the image matrix included in the second decompressed image. In this embodiment, the second image device may add the image matrix included in the target difference information and the image matrix included in the first decompressed image, adding the difference information between the first decompressed image and the original image onto the first decompressed image to obtain a second decompressed image with higher precision, thereby reducing the precision-loss difference between the restored second decompressed image and the original image.
According to a third aspect, an embodiment of this application provides an image processing apparatus that has some or all of the functions of the first image device described in the first aspect. For example, the apparatus may have the functions of some or all of the embodiments of the first image device in this application, or the function of implementing any one embodiment of this application separately. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more units or modules corresponding to the above functions. The image processing apparatus may include: an obtaining unit, configured to obtain a compressed image of an original image; a decompression unit, configured to decompress the compressed image to obtain a first decompressed image; a determining unit, configured to determine target difference information, where the target difference information is obtained according to the difference information between the original image and the first decompressed image; a compression unit, configured to compress the target difference information to obtain compressed target difference information; and a sending unit, configured to send the compressed image and the compressed target difference information to a second image device.
In a possible implementation, the difference information and the target difference information each include an image matrix, and each image matrix includes a plurality of image element values; the determining unit is specifically configured to modify the image element values within a preset threshold range in the image matrix included in the difference information to preset values, to obtain the image matrix included in the target difference information.
In a possible implementation, the obtaining unit is specifically configured to: receive the original image from a photographing apparatus; obtain a reference image corresponding to the original image; and compress the original image according to the reference image to obtain the compressed image.
In a possible implementation, the obtaining unit is specifically configured to: capture the original image; obtain a reference image corresponding to the original image; and compress the original image according to the reference image to obtain the compressed image.
In a possible implementation, the sending unit is further configured to send the reference image to the second image device.
According to a fourth aspect, an embodiment of this application provides an image processing apparatus that has some or all of the functions of the second image device described in the second aspect. For example, the apparatus may have the functions of some or all of the embodiments of the second image device in this application, or the function of implementing any one embodiment of this application separately. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more units or modules corresponding to the above functions. The image processing apparatus may include: a receiving unit, configured to receive, from a first image device, a compressed image of an original image and compressed target difference information, where the target difference information is obtained according to the difference information between the original image and a first decompressed image; a decompression unit, configured to decompress the compressed image to obtain the first decompressed image; and an image processing unit, configured to perform image processing on the first decompressed image according to the compressed target difference information to obtain a second decompressed image.
In a possible implementation, the receiving unit is further configured to receive, from the first image device, the reference image corresponding to the original image.
In a possible implementation, the decompression unit is specifically configured to decompress the compressed image according to the reference image to obtain the first decompressed image.
In a possible implementation, the target difference information and the first decompressed image each include an image matrix, and each image matrix includes a plurality of image element values; the decompression unit is specifically configured to: decompress the compressed target difference information to obtain the target difference information; add the image matrix included in the first decompressed image and the image matrix included in the target difference information to obtain the image matrix included in the second decompressed image; and determine the second decompressed image according to the image matrix included in the second decompressed image.
According to a fifth aspect, an embodiment of this application provides a computer device including a processor configured to support the computer device in implementing the corresponding functions of the image processing method provided in the first aspect. The computer device may further include a memory coupled to the processor, which stores the program instructions and data necessary for the computer device. The computer device may further include a communication interface for communication between the computer device and other devices or communication networks.
According to a sixth aspect, an embodiment of this application provides a computer device including a processor configured to support the computer device in implementing the corresponding functions of the image processing method provided in the second aspect. The computer device may further include a memory coupled to the processor, which stores the program instructions and data necessary for the computer device. The computer device may further include a communication interface for communication between the computer device and other devices or communication networks.
According to a seventh aspect, an embodiment of this application provides a computer-readable storage medium for storing computer software instructions used by the first image device provided in the first aspect, including a program designed for executing the first aspect.
According to an eighth aspect, an embodiment of this application provides a computer-readable storage medium for storing computer software instructions used by the second image device provided in the second aspect, including a program designed for executing the second aspect.
According to a ninth aspect, an embodiment of this application provides a computer program including instructions that, when executed by a computer, enable the computer to execute the procedure executed by the first image device in the first aspect.
According to a tenth aspect, an embodiment of this application provides a computer program including instructions that, when executed by a computer, enable the computer to execute the procedure executed by the second image device in the second aspect.
According to an eleventh aspect, this application provides a chip system including a processor configured to support a computer device in implementing the functions involved in the first aspect, for example, generating or processing the information involved in the image processing method of the first aspect. In a possible design, the chip system further includes a memory configured to store the program instructions and data necessary for the first image device. The chip system may consist of a chip, or may include a chip and other discrete components.
According to a twelfth aspect, this application provides a chip system including a processor configured to support a computer device in implementing the functions involved in the second aspect, for example, generating or processing the information involved in the image processing method of the second aspect. In a possible design, the chip system further includes a memory configured to store the program instructions and data necessary for the second image device. The chip system may consist of a chip, or may include a chip and other discrete components.
Brief Description of the Drawings
FIG. 1 is a schematic architectural diagram of an image processing system according to an embodiment of this application;
FIG. 2 is a schematic architectural diagram of an image processing system in the field of industrial vision according to an embodiment of this application;
FIG. 3 is a schematic flowchart of an image processing method according to an embodiment of this application;
FIG. 4 is a diagram of an application scenario in which an image processing method according to an embodiment of this application is applied to industrial vision;
FIG. 5a is a schematic diagram of the precision loss of an image before and after compression according to an embodiment of this application;
FIG. 5b is a schematic diagram of the precision loss of an image before and after compression according to an embodiment of this application;
FIG. 6 is a schematic time sequence diagram of an image processing method according to an embodiment of this application;
FIG. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of this application;
FIG. 8 is a schematic structural diagram of another image processing apparatus according to an embodiment of this application;
FIG. 9 is a schematic structural diagram of a computer device according to an embodiment of this application;
FIG. 10 is a schematic structural diagram of another computer device according to an embodiment of this application.
Detailed Description
First, some terms in this application are explained to facilitate understanding by those skilled in the art.
1. Image matrix
An image matrix may also be described as a matrix or the matrix of an image. It is the matrix representation of an image. The rows of the image matrix correspond to the height of the image, and the columns of the image matrix correspond to the width of the image.
As applied in this application, the first image device may subtract the image matrix included in the first decompressed image from the image matrix included in the original image to obtain the difference information. The second image device may add the image matrix included in the first decompressed image and the image matrix included in the target difference information to obtain the image matrix included in the second decompressed image, and thus obtain the second decompressed image from that image matrix.
2. Image element value
An image element value may also be described as an element value, an element value of an image, an element value of an image matrix, an element value of a matrix, an image element value of an image matrix, or the value of an image element. It can be understood that the number of image element values equals the number of pixels, and each image element value corresponds to one pixel in the image. The position of an image element value in the image matrix corresponds to the position of the pixel in the image, and the image element value is the gray value of that pixel.
As applied in this application, the first image device subtracts the image matrix included in the first decompressed image from the image matrix included in the original image; the resulting difference information specifically means: each image element value in the image matrix included in the original image minus the image element value at the corresponding position in the image matrix included in the first decompressed image, yielding the image matrix included in the difference information.
3. Video coding
Video coding may also be described as video compression. A video consists of consecutive image frames; because of the persistence of vision of the human eye, when a sequence of image frames is played at a certain rate, a video with continuous motion is seen. To store and transmit video more conveniently, the video can be encoded. Since consecutive image frames in a video are extremely similar, video coding usually removes the spatial redundancy and temporal redundancy in the video. Commonly used video coding standards include advanced video coding (AVC/H.264), high efficiency video coding (HEVC/H.265), and so on.
As applied in this application, the first image device may obtain the original image captured by the photographing apparatus and compress the original image by means of video coding to obtain the compressed image of the original image. For ease of description, this application takes the H.264 compression technique as an example.
The H.264 compression technique mainly uses intra-frame prediction to reduce spatial data redundancy, uses inter-frame prediction (motion estimation and compensation) to reduce temporal data redundancy, uses transform and quantization to compress residual data, and uses coding to reduce redundancy in residual and motion-vector transmission and signaling. Compressing an image with H.264 mainly includes the following parts: block partitioning, frame grouping, inter-frame prediction, intra-frame prediction, discrete cosine transform, and coding compression.
Specifically, block partitioning may include partitioning macroblocks and partitioning sub-blocks, taking 16×16 or 8×8 regions of the image as macroblocks. Partitioning sub-blocks may divide a region already partitioned into macroblocks into smaller sub-blocks. The macroblocks and sub-blocks can be used for subsequent prediction and compression.
Frame grouping divides closely related image frames into a group of pictures (GOP) (an image sequence). An image sequence includes three kinds of image frames: intra-coded frames (I-frames), forward predictive coded frames (P-frames), and bi-directional interpolated prediction frames (B-frames).
An I-frame retains the complete picture during compression and does not need to be generated with reference to other image frames. It can be understood that an I-frame contains a large amount of image information and can serve as the reference image frame for subsequent P-frames and B-frames. A P-frame is a coded image that compresses the amount of transmitted data using the temporal redundancy of image frames already coded in the image sequence. A P-frame represents the difference between this frame and a preceding I-frame (or P-frame); during decoding, the previously cached picture (I-frame) is superimposed with the difference defined by this frame to generate the final picture. It can be understood that a P-frame has no complete picture data, only the data of the difference from the previous frame's picture; during decoding it must also refer to the preceding I-frame (or P-frame) and cannot be decoded on its own. Since a P-frame transmits differences, its compression ratio is relatively high. A B-frame is a coded image that compresses the amount of transmitted data considering the temporal redundancy of both the already coded image frames before it in the image sequence and the already coded image frames after it. It can be understood that to decode a B-frame, not only the cached pictures of the image frames before it but also the pictures of the image frames after it must be obtained; the final decoded image is obtained by superimposing the preceding and following pictures with the data of this frame.
Inter-frame prediction: within an image sequence, the differences between the pictures of two consecutive image frames are compared (for example, a P-frame is compared with the preceding I-frame); the identical parts are removed, and only the differing parts are stored as the data for compression. Intra-frame prediction compresses based on the current image frame itself, independent of the adjacent preceding and following frames. Using the property that the chroma values of adjacent pixels in an image do not change abruptly, the current pixel is predicted from neighboring already-coded pixels within the same frame, yielding the intra-frame prediction result. H.264 includes 9 intra-frame prediction modes; the prediction mode and the difference obtained by subtracting the intra-predicted image from the original image can be saved so that the image can be restored during decoding.
Discrete cosine transform (DCT): an integer DCT can be applied to the obtained difference to remove data correlation and compress further.
Coding compression can losslessly compress the data and generally sits at the end of video compression. It may be lossless entropy coding based on the principle of information entropy. Entropy coding converts a series of element symbols representing the video sequence into a compressed bitstream for transmission or storage; the input symbols may include quantized transform coefficients, motion vector information, prediction mode information, and so on. Entropy coding effectively removes the statistical redundancy of these video element symbols and is one of the important tools for ensuring the compression efficiency of video coding. In H.264, context-based adaptive binary arithmetic coding (CABAC) may be used for coding compression.
The embodiments of this application are described below with reference to the accompanying drawings.
To facilitate understanding of the embodiments of this application, one of the image processing system architectures on which the embodiments are based is described first. Refer to FIG. 1, which is a schematic architectural diagram of an image processing system according to an embodiment of this application. The image processing system architecture in this application may include the first image device 101 and the second image device 102 in FIG. 1, which may communicate over a network. The first image device 101 and the second image device 102 may each be deployed in any computer device involved in image processing. For example, they may each be deployed on one or more computing devices in a cloud environment (for example, a central server), or on one or more computing devices in an edge environment (edge computing devices), where an edge computing device may be a server. A cloud environment refers to a central computing device cluster owned by a cloud service provider for providing computing, storage, and communication resources; a cloud environment has abundant storage and computing resources. An edge environment refers to an edge computing device cluster geographically close to the raw-data collection devices, used to provide computing, storage, and communication resources. The image processing system in this application may also be deployed on one or more terminal devices; for example, it may be deployed on a terminal device that has certain computing, storage, and communication resources, which may be a computer, a vehicle-mounted terminal, a mobile phone, a tablet, a laptop, a handheld computer, a mobile internet device (MID), a gateway, or the like.
Specifically, the first image device 101 may decompress the compressed image of the original image, obtain the target difference information between the original image and the first decompressed image obtained by decompression, compress the target difference information, and send the compressed image and the compressed target difference information to the second image device 102. The second image device 102 decompresses the received compressed image and compressed target difference information to obtain the target difference information and the first decompressed image, and compensates the first decompressed image with the decompressed target difference information, thereby obtaining a second decompressed image with higher precision.
As shown in FIG. 1, the image processing system may further include a photographing apparatus 103, which may communicate with the first image device 101 over a network. The first image device may receive the original image captured by the photographing apparatus 103 and compress the original image to obtain the compressed image of the original image. It should be noted that the photographing apparatus 103 may be deployed inside the first image device 101 or outside the first image device 101. The photographing apparatus 103 may include, but is not limited to, a video camera, an infrared camera, a lidar, and the like.
It can be understood that the architecture of the image processing system in FIG. 1 is only an exemplary implementation in the embodiments of this application; the image processing system architectures in the embodiments of this application include, but are not limited to, the above image processing system architecture.
The image processing method provided in the embodiments of this application can reduce the error loss of an image before and after compression. The image processing method described in this application can be applied to many fields such as autonomous driving, augmented reality (AR), industrial vision, and medical image processing, so as to implement specific functions.
Exemplarily, the image processing system in this application can be applied to the field of industrial vision.
The field of industrial vision refers to adding a vision system to an industrial automatic production line, simulating human vision by capturing images, extracting information, and processing it, so as to detect, measure, and control the workpieces on the industrial automatic production line, thereby improving production quality and yield by replacing manual inspection. For example, images of workpieces on a factory conveyor belt can be acquired, and the workpieces in the images can be recognized and analyzed to detect unqualified workpieces on the belt and generate a control instruction; the control instruction may, for example, use a programmable logic controller (PLC) to control a robot arm to move the unqualified workpiece off the belt. However, image processing in the industrial field currently uses compression algorithms in the joint photographic experts group (JPEG) format. The quantization table in this kind of compression algorithm is designed according to human visual perception; since the human eye is less sensitive to high-frequency components than to low-frequency components, the compressed image suffers an unpredictable loss of precision relative to the original image. Moreover, since the high-frequency components to some extent include the corners, edges, lines, and other features that subsequent image recognition and analysis need to extract, recognition precision deteriorates, which affects the analysis results.
In this case, on the basis of the compressed image, the difference information between the original image and the image obtained by decompressing the compressed image can be determined, so that after the decompressed image is obtained, the precision lost in the compression process can be compensated through the difference information. Compressing the difference information also ensures that, while the precision of the decompressed image is improved, the amount of transmitted data and the transmission latency are reduced. Refer to FIG. 2, which is a schematic architectural diagram of an image processing system in the field of industrial vision according to an embodiment of this application. As shown in FIG. 2, the image processing system architecture in this application may include a first image device 201 and a second image device 202, where a photographing apparatus 201a and an industrial gateway 201b may be deployed in the first image device 201; the photographing apparatus 201a may be an industrial camera, and the second image device 202 may be a mobile edge computing (MEC) server. The first image device 201 may establish a wired or wireless communication connection with the second image device 202, and within the first image device 201 the photographing apparatus 201a and the industrial gateway 201b may also be connected by wire or wirelessly. The above communication connections may include, but are not limited to, wireless fidelity (Wi-Fi), Bluetooth, near field communication (NFC), and the like. When the photographing apparatus 201a and the industrial gateway 201b in the first image device 201, or the first image device 201 and the second image device 202, establish a wireless communication connection, the image processing system provided in the embodiments of this application may further include a network device, which may be an access network device, a base station, a wireless access point (AP), or the like. As applied in the 5G field, the photographing apparatus 201a and the industrial gateway 201b can use the GigE Vision protocol for high-speed image transmission over a gigabit Ethernet interface.
In a possible implementation, the photographing apparatus 201a may capture an original image of a target object (for example, a workpiece on a conveyor belt of a production line) and send the original image to the industrial gateway 201b. The industrial gateway 201b may compress the acquired original image to obtain a first compressed image; decompress the first compressed image to obtain a first decompressed image; then obtain the target difference information according to the difference information between the original image and the first decompressed image; compress the target difference information to obtain compressed target difference information; and then send the compressed image and the compressed target difference information to the MEC server (that is, the second image device 202). Further, when the industrial gateway 201b transmits the compressed image and the compressed target difference information wirelessly, they can be forwarded through the base station to the user plane function (UPF), and from the UPF to the MEC edge computing server.
In a possible implementation, the photographing apparatus 201a in the image processing system may capture the original image of the target object and directly compress the original image to obtain the compressed image; then decompress the compressed image to obtain the first decompressed image; then obtain the target difference information according to the difference information between the original image and the first decompressed image; and compress the target difference information to obtain the compressed target difference information. The photographing apparatus 201a forwards the compressed image and the compressed target difference information through the base station to the UPF, and then through the UPF to the MEC server.
The edge computing server receives the compressed image and the compressed target difference information, decompresses the compressed image and the compressed target difference information to obtain the first decompressed image and the target difference information, and then performs image processing on the first decompressed image through the target difference information to obtain the second decompressed image. The second decompressed image can then be recognized, analyzed, and processed to obtain a result, and the result triggers a control instruction. Exemplarily, the control instruction generated by the edge computing server may be an instruction to identify unqualified workpieces, or an instruction to identify the unqualified workpieces and report them to a management application, and so on.
It can be understood that the image processing system architecture in FIG. 2 is only an exemplary implementation in the embodiments of this application; the image processing system architectures in the embodiments of this application include, but are not limited to, the above image processing system architecture.
Based on the architecture of the image processing system provided in FIG. 1 and the other image processing system architecture provided in FIG. 2, and with reference to the image processing method provided in this application, the technical problems raised in this application are specifically analyzed and solved. Refer to FIG. 3, which is a schematic flowchart of an image processing method according to an embodiment of this application. The method can be applied to the image processing system architectures in FIG. 1 or FIG. 2, where the first image device in FIG. 1 or FIG. 2 can be used to support and execute steps S301 to S305 of the method flow shown in FIG. 3, and the second image device in FIG. 1 or FIG. 2 can be used to support and execute steps S306 to S307 of the method flow shown in FIG. 3. The method may include the following steps S301 to S307.
Step S301: The first image device obtains a compressed image of an original image.
Specifically, the first image device may obtain the original image sent by the photographing apparatus and compress the original image to obtain the compressed image of the original image. If the first image device includes a photographing apparatus, it may also capture an image of the target object to obtain the original image, and then compress the original image to obtain the compressed image of the original image.
The compression may use an image compression method. The first image device may have a built-in hardware encoder and may compress the original image using compression techniques such as H.264, H.265, or JPEG. For better application in the industrial field, the workpieces captured by the camera on a production line are extremely similar; therefore, the compression method in the embodiments of this application may also use a video compression scheme. Exemplarily, advanced video coding (AVC/H.264) may be used to compress the original image in this application, or high efficiency video coding (HEVC/H.265) may be used to compress the original image in this application; this is not limited here.
It should be noted that in the process of compressing images with the H.264 or H.265 compression algorithm, multiple frames are grouped into a group of pictures (GOP), and a group of pictures includes I-frames, P-frames, and B-frames. As applied in the embodiments of this application, one GOP may include one I-frame and multiple P-frames. When compressing the first original frame, a preset reference image may be obtained; the preset reference image serves as the I-frame, and the first original frame is compressed as a P-frame. By analogy, each subsequent frame may be compressed using its previous frame as the reference image, yielding the compressed images of multiple original frames.
Exemplarily, this can be applied to the field of industrial vision. Refer also to FIG. 4, which is a diagram of an application scenario in which the image processing method provided in this embodiment is applied to industrial vision. As shown in FIG. 4, when applied in the industrial field, workpieces are conveyed on a conveyor belt, such as the 4 workpieces shown in FIG. 4. The first image device may include a photographing apparatus and an industrial gateway. The photographing apparatus may capture the workpieces on the conveyor belt, obtain multiple original frames, and compress the original images to obtain their compressed images. Alternatively, the original images captured by the photographing apparatus may be sent to the industrial gateway in the first image device, and the industrial gateway compresses the original images to obtain their compressed images. As shown in FIG. 4, the first original frame captured by the photographing apparatus is P1; combined with the locally stored reference image obtained from the cloud, the first original frame P1 is compressed, yielding the compressed image of the first original frame P1. Further, the first original frame P1 may serve as the reference image for the second original frame P2 for compression, yielding the compressed image of the second original frame P2; the second original frame P2 may serve as the reference image for the third original frame P3 for compression, yielding the compressed image of the third original frame P3; and so on until the compressed image of every original frame is obtained.
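The reference-image chain described above (P1 compressed against a stored reference, P2 against P1, and so on) can be illustrated with a plain frame-difference stand-in. This is a sketch under the assumption that subtraction against the reference approximates inter-frame prediction; a real encoder such as H.264 performs motion-compensated prediction instead, and all names used here are hypothetical.

```python
# Minimal sketch of the reference-image chain: each frame is represented
# as its difference from the previous frame (a stand-in for reference-based
# inter-frame compression). Names are hypothetical; a real encoder does
# motion-compensated prediction rather than a plain subtraction.

def encode_against_reference(frame, reference):
    # Keep only the per-pixel difference from the reference image.
    return [[f - r for f, r in zip(rf, rr)] for rf, rr in zip(frame, reference)]

def decode_with_reference(delta, reference):
    # Inverse operation: superimpose the difference onto the reference.
    return [[d + r for d, r in zip(rd, rr)] for rd, rr in zip(delta, reference)]

preset_reference = [[0, 0], [0, 0]]          # stored I-frame-like reference
frames = [[[10, 12], [11, 13]],              # P1
          [[10, 13], [11, 13]],              # P2 (nearly identical to P1)
          [[11, 13], [11, 14]]]              # P3

# Encode: P1 against the preset reference, each later frame against the
# previous original frame (mostly zeros when frames are similar).
deltas, ref = [], preset_reference
for frame in frames:
    deltas.append(encode_against_reference(frame, ref))
    ref = frame

# Decode: rebuild the chain in the same order.
decoded, ref = [], preset_reference
for delta in deltas:
    ref = decode_with_reference(delta, ref)
    decoded.append(ref)
```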
步骤S302:第一图像设备对上述压缩图像进行解压处理,得到第一解压图像。
具体的,第一图像设备可以依据对压缩处理的逆操作对压缩图像进行解压处理,得到第一解压图像。以H.264压缩方法为例,可以分别对熵编码进行解压处理,对帧内预测、帧间预测进行解压处理,得到原始图像的第一解压图像。例如,使用熵编码对应的解码方法进行解码处理。进一步的,对帧间压缩和帧内压缩进行解压处理。通过保存的预测模式信息,可以对帧内压缩进行解压处理,通过每一帧图像对应的参考图像可以对帧间压缩进行解压处理,得到第一解压图像。将第一解压图像保存下来,以便后续处理。
Step S303: The first image device determines target difference information, the target difference information being obtained from the difference information between the raw image and the first decompressed image.
Specifically, compressing the raw image of this application with H.264 as described above is lossy: to achieve a higher compression ratio, some information is lost during compression. Therefore, to improve the precision of the decompressed image at the second image device, this application determines, after decompression, target difference information from the raw image and the first decompressed image, compresses the target difference information, and sends it to the second image device, so that the second image device, after decompressing the image, can process the image according to the target difference information and obtain a decompressed image of higher precision.
In a possible implementation, the raw image and the first decompressed image each contain an image matrix. It can be understood that the raw image and the first decompressed image have the same image size, their widths and heights are equal, and each image element value in the image matrix of the raw image corresponds one-to-one with an image element value in the image matrix of the first decompressed image. Subtracting the image matrix of the first decompressed image from the image matrix of the raw image, that is, taking the difference between each image element of the raw image's matrix and the image element at the corresponding position of the first decompressed image's matrix, yields the image matrix of the difference information. The subtraction may be performed by a software program invoked by the first image device.
Further, to reduce the amount of data transmitted, the difference information is pruned to obtain the target difference information. That is, each image element value in the image matrix of the difference information is compared with a preset threshold range, and when the image element value falls within the preset threshold range, it is changed to a preset value, yielding the image matrix of the target difference information. The preset threshold range may be set manually, preset in the first image device, or adjusted for different application scenarios. It can be understood that different preset threshold ranges affect the compression precision of the image processing method of this application, so the compression precision can be controlled. In the embodiments of this application, the preset value may be 0, 1, or the like; this is not limited here.
In a possible implementation, when many image element values are identical between the image matrix of the raw image and that of the first decompressed image, many elements of the difference matrix obtained by the subtraction are 0, and the preset value may then be set to 0 to facilitate compression of the target difference information. Specifically, the most frequent image element value in the difference matrix and its count may be tallied, and that most frequent value used as the preset value, yielding the target difference information.
The target difference information retains the parts where the image element values of the matrices of the raw image and the first decompressed image differ greatly, and prunes the parts where they differ little. Therefore, after the target difference information is sent to the second image device, the decompressed image can be processed with it: the parts that differ greatly from the raw image's matrix are added back into the image obtained by decompression, improving the precision of the decompressed image, while the data volume of the parts that differ little from the raw image's matrix is pruned away, reducing the amount of data transmitted.
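As a rough sketch of the subtraction and pruning described above (not the patented implementation itself), the difference matrix can be computed and thresholded as follows; the threshold of 2 and the preset value 0 are illustrative choices, and the helper name `target_difference` is hypothetical:

```python
import numpy as np

def target_difference(original: np.ndarray, decompressed: np.ndarray,
                      threshold: int = 2, preset: int = 0) -> np.ndarray:
    """Element-wise difference with small entries reduced to a preset value.

    Both images are HxW matrices of the same size; int16 avoids uint8 wraparound.
    """
    diff = original.astype(np.int16) - decompressed.astype(np.int16)
    # Entries inside the preset threshold range are replaced by the preset value,
    # keeping only the larger errors worth transmitting.
    diff[np.abs(diff) <= threshold] = preset
    return diff

original = np.array([[10, 200], [50, 99]], dtype=np.uint8)
decoded  = np.array([[11, 180], [50, 98]], dtype=np.uint8)
td = target_difference(original, decoded)
```

In this miniature example only the large error (20 grayscale levels) survives; the three small errors are zeroed out and will compress essentially for free.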
Step S304: The first image device compresses the target difference information to obtain compressed target difference information.
In a possible implementation, to reduce the amount of data transmitted, the first image device may compress the target difference information. Specifically, the target difference information may be encoded, for example entropy-encoded, to obtain the compressed target difference information. The entropy coding method may be Shannon coding, Huffman coding, arithmetic coding, or the like; the entropy coding scheme is not limited here. It can be understood that no information is lost when the target difference information is encoded. After the compressed target difference information is sent to the second image device, the second image device can decompress it and recover the target difference information losslessly.
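The lossless round trip can be sketched with Python's standard `zlib` compressor standing in for the entropy coder. The embodiment leaves the coder open among Shannon, Huffman, and arithmetic coding; `zlib`'s DEFLATE format also ends in a Huffman stage, which makes it a convenient stand-in for illustration:

```python
import zlib
import numpy as np

# A pruned target-difference matrix: mostly zeros, a few large residuals.
td = np.array([[0, 20], [0, 0], [0, -3]], dtype=np.int16)

# Lossless compression of the raw bytes of the matrix.
compressed = zlib.compress(td.tobytes())

# The receiver recovers the matrix bit-for-bit.
restored = np.frombuffer(zlib.decompress(compressed), dtype=np.int16)
restored = restored.reshape(td.shape)
```

Because the pruning step makes most elements equal to the preset value, the byte stream is highly repetitive and the entropy coder achieves a large reduction.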
Step S305: The first image device sends the compressed image and the compressed target difference information to the second image device.
The first image device sends the compressed image and the compressed target difference information to the second image device, so that the second image device can decompress both and then process the decompressed image with the target difference information to improve the image's precision.
For example, as shown in FIG. 4, the first image device (the capture apparatus and the industrial gateway in FIG. 4) sends the compressed image and the compressed target difference information to the MEC server. After receiving them, the MEC server decompresses the compressed image and the compressed target difference information.
Step S306: The second image device decompresses the compressed image to obtain the first decompressed image.
In a possible implementation, the second image device includes a hardware decoder, which may decompress the received compressed image to obtain the first decompressed image. Specifically, the inverse operations may be performed in the order of the video coding, obtaining the first decompressed image of the raw image. The second image device then saves the first decompressed image for subsequent processing.
Step S307: The second image device performs image processing on the first decompressed image according to the compressed target difference information to obtain a second decompressed image.
In a possible implementation, the second image device may invoke a software program to add the image matrix of the target difference information to the image matrix of the first decompressed image, obtaining the image matrix of the second decompressed image. That is, each image element value of the target difference information's matrix is added to the image element value at the corresponding position of the first decompressed image's matrix, yielding the image element values of the second decompressed image's matrix. Further, the pixel value of each pixel of the second decompressed image may be obtained from the matrix of the second decompressed image, yielding the second decompressed image. As shown in FIG. 4, the MEC server may decompress and restore the received image to obtain the second decompressed image, and then recognize and analyze the second decompressed image in order to generate control instructions.
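The matrix addition at the second image device can be sketched as follows; clipping the sum back into [0, 255] is an assumption for 8-bit grayscale images, not something the embodiment specifies:

```python
import numpy as np

# First decompressed image (lossy) and the received target difference matrix.
first_decompressed = np.array([[11, 180], [50, 98]], dtype=np.uint8)
target_diff        = np.array([[ 0,  20], [ 0,  0]], dtype=np.int16)

# Add the transmitted residual back, then clip to the valid pixel range.
second = np.clip(first_decompressed.astype(np.int16) + target_diff, 0, 255)
second = second.astype(np.uint8)  # the second decompressed image
```

The large error at position (0, 1) is corrected back to the raw value 200, while the pruned positions keep the values from the lossy decode.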
Referring to FIG. 5a and FIG. 5b together, both are schematic diagrams of the precision loss of an image before and after compression according to an embodiment of this application. FIG. 5a shows the result of subtracting the image matrix of the first decompressed image from the image matrix of the raw image, without using the image processing method provided by the embodiments of this application. FIG. 5b shows the result of subtracting the image matrix of the second decompressed image from the image matrix of the raw image, using the image processing method provided by the embodiments of this application.
As shown in FIG. 5a and FIG. 5b, the values 0, 1, and 255 in the black boxes indicate that the grayscale value differs from that of the pixel at the same position in the raw image by 0, 1, and 255 respectively. It should be noted that 0 means the processed image has no grayscale difference at all from the raw image, while 1 and 255 mean the processed second decompressed image differs from the raw image by plus or minus one grayscale level (255 corresponding to -1 under modulo-256 arithmetic), and so on. The number following each of 0, 1, and 255 is the count of such pixels. For example, the first row of FIG. 5a, "0:7568827", indicates that 7,568,827 pixels of the first decompressed image have no grayscale difference from the raw image. Likewise, the first row of FIG. 5b, "0:12900307", indicates that 12,900,307 pixels of the second decompressed image have no grayscale difference from the raw image. Compared with the result shown in FIG. 5a, implementing the image processing method provided by this application increases the number of pixels whose grayscale difference from the pixel at the same position in the raw image is 0; that is, it increases the number of pixels of the second decompressed image whose grayscale value is unchanged relative to the raw image. The precision of the compressed image after decompression is therefore improved, and the precision-loss error between the image restored by the second image device and the raw image is reduced.
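The pixel-difference statistics of FIG. 5a and FIG. 5b can be reproduced in miniature. Under 8-bit wraparound a difference of -1 appears as 255, matching the figure's labels; the tiny 2x2 arrays below are illustrative, not the figures' actual data:

```python
import numpy as np
from collections import Counter

original = np.array([[10, 200], [50, 99]], dtype=np.uint8)
second   = np.array([[11, 200], [50, 98]], dtype=np.uint8)

# uint8 subtraction wraps modulo 256, so a difference of -1 shows up as 255,
# which is exactly the 0/1/255 labeling used in FIG. 5a and FIG. 5b.
diff = original - second
hist = Counter(diff.ravel().tolist())
report = [f"{value}:{count}" for value, count in hist.most_common()]
```

Each entry of `report` has the same "value:count" shape as the rows of the figures, e.g. "0:7568827"; a larger count for value 0 means more pixels survived compression unchanged.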
By implementing the embodiments of this application, on top of the existing compressed image, the compressed image is decompressed, and target difference information is determined from the difference information between the raw image and the first decompressed image thus obtained. In addition to sending the compressed image to the second image device, the target difference information between the raw image and the first decompressed image is also sent; the target difference information captures the compression precision loss between the raw image and the image obtained by compressing and then decompressing it, so that the second image device can process the image after decompressing it. Moreover, the target difference information is compressed before being sent to the second image device, so that the second image device can perform image processing on the image. From the target difference information and the decompressed image (that is, the first decompressed image), a decompressed image of higher precision (that is, the second decompressed image) can be obtained, reducing the precision loss relative to the raw image and improving the precision of the second decompressed image.
Referring to FIG. 6, FIG. 6 is a schematic timing diagram of an image processing method provided by an embodiment of this application. As shown in FIG. 6: P1: The first image device obtains the raw image and the reference image. P2: The first image device compresses the raw image with its built-in hardware encoder to obtain the compressed image of the raw image. P3: The first image device may send the compressed image of the raw image to the second image device. P4: The hardware decoder in the first image device may decompress the compressed image of the raw image to obtain the first decompressed image. P5: The first image device invokes a software program to subtract the image matrix of the first decompressed image from the image matrix of the raw image, obtaining the difference information. P6: The first image device changes the image element values of the difference information's matrix that fall within the preset threshold range to the preset value, obtaining the target difference information. P7: The first image device encodes the target difference information to obtain the compressed target difference information. P8: The first image device sends the compressed target difference information to the second image device. The second image device receives the compressed image of the raw image and the compressed target difference information. P9: The second image device may decompress the compressed image of the raw image with its built-in hardware decoder to obtain the first decompressed image. P10: The second image device decompresses the compressed target difference information to obtain the target difference information. P11: The second image device performs image processing on the first decompressed image according to the target difference information to obtain the second decompressed image.
P3 may be performed before P8, after P8, or simultaneously with P8; this is not limited here.
By implementing the embodiments of this application, on top of the existing compressed image, the compressed image is decompressed, and target difference information is determined from the difference information between the raw image and the first decompressed image thus obtained. In addition to sending the compressed image to the second image device, the target difference information between the raw image and the first decompressed image is also sent; the target difference information captures the compression precision loss between the raw image and the image obtained by compressing and then decompressing it, so that the second image device can process the image after decompressing it. Moreover, the target difference information is compressed before being sent to the second image device, so that the second image device can restore the image. The precision loss between the restored image and the raw image is thereby reduced, and the precision of the restored image is improved.
Referring to FIG. 7, FIG. 7 is a schematic structural diagram of an image processing apparatus provided by an embodiment of this application. The image processing apparatus 700 may include an obtaining unit 701, a decompression unit 702, a determining unit 703, a compression unit 704, and a sending unit 705, each described in detail below.
The image processing apparatus has the functions of the first image device described in the embodiments of this application. For example, the image processing apparatus includes the modules, units, or means corresponding to the steps that the computer device performs as the first image device described in the embodiments of this application; the foregoing functions, units, or means may be implemented by software, by hardware, by hardware executing corresponding software, or by a combination of software and hardware. For details, further refer to the corresponding descriptions in the foregoing method embodiments.
The obtaining unit 701 is configured to obtain a compressed image of a raw image. The decompression unit 702 is configured to decompress the compressed image to obtain a first decompressed image. The determining unit 703 is configured to determine target difference information, the target difference information being obtained from the difference information between the raw image and the first decompressed image. The compression unit 704 is configured to compress the target difference information to obtain compressed target difference information. The sending unit 705 is configured to send the compressed image and the compressed target difference information to a second image device.
In a possible implementation, the difference information and the target difference information each include an image matrix, and each image matrix includes multiple image element values. The determining unit 703 is specifically configured to change the image element values of the difference information's matrix that fall within a preset threshold range to a preset value, obtaining the image matrix of the target difference information.
In a possible implementation, the obtaining unit 701 is specifically configured to: receive a raw image from a capture apparatus; obtain a reference image corresponding to the raw image; and compress the raw image according to the reference image to obtain the compressed image.
In a possible implementation, the obtaining unit 701 is specifically configured to: capture a raw image; obtain a reference image corresponding to the raw image; and compress the raw image according to the reference image to obtain the compressed image.
In a possible implementation, the sending unit 705 is further configured to send the reference image to the second image device.
It should be noted that for the functions of the functional units of the image processing apparatus 700 described in the embodiments of this application, refer to the related descriptions of steps S301 to S305 in the method embodiment of FIG. 3 above; details are not repeated here.
Referring to FIG. 8, FIG. 8 is a schematic structural diagram of another image processing apparatus provided by an embodiment of this application. The image processing apparatus 800 may include a receiving unit 801, a decompression unit 802, and an image processing unit 803, each described in detail below.
The image processing apparatus has the functions of the second image device described in the embodiments of this application. For example, the image processing apparatus includes the modules, units, or means corresponding to the steps that the computer device performs as the second image device described in the embodiments of this application; the foregoing functions, units, or means may be implemented by software, by hardware, by hardware executing corresponding software, or by a combination of software and hardware. For details, further refer to the corresponding descriptions in the foregoing method embodiments.
The receiving unit 801 is configured to receive, from a first image device, a compressed image of a raw image and compressed target difference information, the target difference information being obtained from the difference information between the raw image and a first decompressed image. The decompression unit 802 is configured to decompress the compressed image to obtain the first decompressed image. The image processing unit 803 is configured to perform image processing on the first decompressed image according to the compressed target difference information to obtain a second decompressed image.
In a possible implementation, the receiving unit 801 is further configured to receive, from the first image device, a reference image corresponding to the raw image.
In a possible implementation, the decompression unit 802 is specifically configured to decompress the compressed image according to the reference image to obtain the first decompressed image.
In a possible implementation, the target difference information and the first decompressed image each contain an image matrix, and each image matrix includes multiple image element values. The decompression unit 802 is specifically configured to: decompress the compressed target difference information to obtain the target difference information; add the image matrix of the first decompressed image and the image matrix of the target difference information to obtain the image matrix of the second decompressed image; and determine the second decompressed image from the image matrix of the second decompressed image.
It should be noted that for the functions of the functional units of the image processing apparatus 800 described in the embodiments of this application, refer to the related descriptions of steps S306 to S307 in the method embodiment of FIG. 3 above; details are not repeated here.
As shown in FIG. 9, FIG. 9 is a schematic structural diagram of a computer device provided by an embodiment of this application. The computer device 900 has the functions of the first image device described in the embodiments of this application. The computer device 900 includes at least one processor 901, at least one memory 902, and at least one communication interface 903. In addition, the device may include common components such as an antenna, which are not detailed here.
The processor 901 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the foregoing solutions.
The communication interface 903 is configured to communicate with other devices or communication networks, such as Ethernet, a radio access network (RAN), a core network, or a wireless local area network (WLAN).
The memory 902 may be a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions, a random access memory (RAM) or another type of dynamic storage device capable of storing information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, and the like), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory may exist independently and be connected to the processor through a bus, or may be integrated with the processor.
The memory 902 is configured to store the application program code for executing the foregoing solutions, and the execution is controlled by the processor 901. The processor 901 is configured to execute the application program code stored in the memory 902.
The code stored in the memory 902 can execute the image processing method provided in FIG. 3 above, for example: obtaining a compressed image of a raw image; decompressing the compressed image to obtain a first decompressed image; determining target difference information, the target difference information being obtained from the difference information between the raw image and the first decompressed image; compressing the target difference information to obtain compressed target difference information; and sending the compressed image and the compressed target difference information to a second image device.
It should be noted that for the functions of the functional units of the computer device 900 described in the embodiments of this application, refer to the related descriptions of steps S301 to S305 in the method embodiment of FIG. 3 above; details are not repeated here.
The descriptions of the embodiments each have their own emphasis; for parts not detailed in one embodiment, refer to the related descriptions of other embodiments.
As shown in FIG. 10, FIG. 10 is a schematic structural diagram of another computer device provided by an embodiment of this application. The device 1000 has the functions of the second image device described in the embodiments of this application. The computer device 1000 includes at least one processor 1001, at least one memory 1002, and at least one communication interface 1003. In addition, the device may include common components such as an antenna, which are not detailed here.
The processor 1001 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the foregoing solutions.
The communication interface 1003 is configured to communicate with other devices or communication networks, such as Ethernet, a radio access network (RAN), a core network, or a wireless local area network (WLAN).
The memory 1002 may be a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions, a random access memory (RAM) or another type of dynamic storage device capable of storing information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, and the like), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory may exist independently and be connected to the processor through a bus, or may be integrated with the processor.
The memory 1002 is configured to store the application program code for executing the foregoing solutions, and the execution is controlled by the processor 1001. The processor 1001 is configured to execute the application program code stored in the memory 1002.
The code stored in the memory 1002 can execute the image processing method provided in FIG. 3 above, for example: receiving, from a first image device, a compressed image of a raw image and compressed target difference information, the target difference information being obtained from the difference information between the raw image and a first decompressed image; decompressing the compressed image to obtain the first decompressed image; and performing image processing on the first decompressed image according to the compressed target difference information to obtain a second decompressed image.
It should be noted that for the functions of the functional units of the computer device 1000 described in the embodiments of this application, refer to the related descriptions of steps S306 to S307 in the method embodiment of FIG. 3 above; details are not repeated here.
The descriptions of the embodiments each have their own emphasis; for parts not detailed in one embodiment, refer to the related descriptions of other embodiments.
It should be noted that, for simplicity of description, the foregoing method embodiments are expressed as a series of action combinations. However, those skilled in the art should know that this application is not limited by the described order of actions, because according to this application some steps may be performed in other orders or simultaneously. Those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by this application.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division into the foregoing units is merely a logical functional division, and in actual implementation there may be other division ways: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of this application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, and specifically may be a processor in a computer device) to perform all or some of the steps of the foregoing methods of the embodiments of this application. The aforementioned storage medium may include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a read-only memory (ROM), or a random access memory (RAM).
The terms "first", "second", "third", and "fourth" in the specification, claims, and accompanying drawings of this application are used to distinguish different objects, not to describe a particular order. In addition, the terms "include" and "have" and any variants thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device that includes a series of steps or units is not limited to the listed steps or units, but optionally also includes steps or units not listed, or optionally also includes other steps or units inherent to such a process, method, product, or device.
Reference to "an embodiment" herein means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of this application. The appearance of this phrase in various places in the specification does not necessarily refer to the same embodiment, nor is it an independent or alternative embodiment mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
The terms "component", "module", and "system" used in this specification denote a computer-related entity: hardware, firmware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to, a process running on a processor, a processor, an object, an executable file, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device itself may be components. One or more components may reside within a process and/or thread of execution, and a component may be located on one computer and/or distributed between two or more computers. In addition, these components can execute from various computer-readable media having various data structures stored thereon. The components may communicate by local and/or remote processes, for example according to signals having one or more data packets (such as data from two components interacting with another component in a local system, a distributed system, and/or across a network such as the Internet interacting with other systems by signals).
The foregoing embodiments are merely intended to describe the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements to some technical features thereof, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application.

Claims (20)

  1. An image processing method, applied to a first image device, comprising:
    obtaining a compressed image of a raw image;
    decompressing the compressed image to obtain a first decompressed image;
    determining target difference information, wherein the target difference information is obtained from difference information between the raw image and the first decompressed image;
    compressing the target difference information to obtain compressed target difference information; and
    sending the compressed image and the compressed target difference information to a second image device.
  2. The method according to claim 1, wherein the difference information and the target difference information each comprise an image matrix, and each image matrix comprises a plurality of image element values; and
    the determining target difference information comprises:
    changing image element values that fall within a preset threshold range in the image matrix comprised in the difference information to a preset value, to obtain an image matrix comprised in the target difference information.
  3. The method according to claim 1 or 2, wherein the obtaining a compressed image of a raw image comprises:
    receiving a raw image from a capture apparatus;
    obtaining a reference image corresponding to the raw image; and
    compressing the raw image according to the reference image to obtain the compressed image.
  4. The method according to claim 1 or 2, wherein the obtaining a compressed image of a raw image comprises:
    capturing a raw image;
    obtaining a reference image corresponding to the raw image; and
    compressing the raw image according to the reference image to obtain the compressed image.
  5. The method according to claim 3 or 4, wherein the method further comprises:
    sending the reference image to the second image device.
  6. An image processing method, applied to a second image device, comprising:
    receiving, from a first image device, a compressed image of a raw image and compressed target difference information, wherein the target difference information is obtained from difference information between the raw image and a first decompressed image;
    decompressing the compressed image to obtain the first decompressed image; and
    performing image processing on the first decompressed image according to the compressed target difference information to obtain a second decompressed image.
  7. The method according to claim 6, wherein the method further comprises:
    receiving, from the first image device, a reference image corresponding to the raw image.
  8. The method according to claim 7, wherein the decompressing the compressed image to obtain the first decompressed image comprises:
    decompressing the compressed image according to the reference image to obtain the first decompressed image.
  9. The method according to claim 7 or 8, wherein the target difference information and the first decompressed image each contain an image matrix, and each image matrix comprises a plurality of image element values; and
    the performing image processing on the first decompressed image according to the compressed target difference information to obtain a second decompressed image comprises:
    decompressing the compressed target difference information to obtain the target difference information;
    adding the image matrix comprised in the first decompressed image and the image matrix comprised in the target difference information to obtain an image matrix comprised in the second decompressed image; and
    determining the second decompressed image from the image matrix comprised in the second decompressed image.
  10. An image processing apparatus, applied to a first image device, comprising:
    an obtaining unit, configured to obtain a compressed image of a raw image;
    a decompression unit, configured to decompress the compressed image to obtain a first decompressed image;
    a determining unit, configured to determine target difference information, wherein the target difference information is obtained from difference information between the raw image and the first decompressed image;
    a compression unit, configured to compress the target difference information to obtain compressed target difference information; and
    a sending unit, configured to send the compressed image and the compressed target difference information to a second image device.
  11. The apparatus according to claim 10, wherein the difference information and the target difference information each comprise an image matrix, and each image matrix comprises a plurality of image element values; and
    the determining unit is specifically configured to:
    change image element values that fall within a preset threshold range in the image matrix comprised in the difference information to a preset value, to obtain an image matrix comprised in the target difference information.
  12. The apparatus according to claim 10 or 11, wherein the obtaining unit is specifically configured to:
    receive a raw image from a capture apparatus;
    obtain a reference image corresponding to the raw image; and
    compress the raw image according to the reference image to obtain the compressed image.
  13. The apparatus according to claim 10 or 11, wherein the obtaining unit is specifically configured to:
    capture a raw image;
    obtain a reference image corresponding to the raw image; and
    compress the raw image according to the reference image to obtain the compressed image.
  14. The apparatus according to claim 12 or 13, wherein the sending unit is further configured to:
    send the reference image to the second image device.
  15. An image processing apparatus, applied to a second image device, comprising:
    a receiving unit, configured to receive, from a first image device, a compressed image of a raw image and compressed target difference information, wherein the target difference information is obtained from difference information between the raw image and a first decompressed image;
    a decompression unit, configured to decompress the compressed image to obtain the first decompressed image; and
    an image processing unit, configured to perform image processing on the first decompressed image according to the compressed target difference information to obtain a second decompressed image.
  16. The apparatus according to claim 15, wherein the receiving unit is further configured to:
    receive, from the first image device, a reference image corresponding to the raw image.
  17. The apparatus according to claim 16, wherein the decompression unit is specifically configured to:
    decompress the compressed image according to the reference image to obtain the first decompressed image.
  18. The apparatus according to claim 16 or 17, wherein the target difference information and the first decompressed image each contain an image matrix, and each image matrix comprises a plurality of image element values; and
    the image processing unit is specifically configured to:
    decompress the compressed target difference information to obtain the target difference information;
    add the image matrix comprised in the first decompressed image and the image matrix comprised in the target difference information to obtain an image matrix comprised in the second decompressed image; and
    determine the second decompressed image from the image matrix comprised in the second decompressed image.
  19. A computer device, comprising a processor and a memory coupled to the processor, wherein the memory is configured to store computer instructions, and the processor is configured to execute the computer instructions, so that the image processing apparatus implements the method according to any one of claims 1-5 or 6-9.
  20. A computer-readable storage medium, wherein the computer-readable storage medium is configured to store instructions which, when executed, cause the method according to any one of claims 1-5 or 6-9 to be implemented.