WO2022016756A1 - Image processing method, apparatus, device, and storage medium - Google Patents

Image processing method, apparatus, device, and storage medium Download PDF

Info

Publication number
WO2022016756A1
WO2022016756A1 · PCT/CN2020/130305 · CN2020130305W
Authority
WO
WIPO (PCT)
Prior art keywords
image
frame image
area
sector
scanning area
Prior art date
Application number
PCT/CN2020/130305
Other languages
English (en)
French (fr)
Inventor
刘诣荣
卢涛
Original Assignee
西安万像电子科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 西安万像电子科技有限公司 filed Critical 西安万像电子科技有限公司
Publication of WO2022016756A1 publication Critical patent/WO2022016756A1/zh

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146Data rate or code amount at the encoder output

Definitions

  • the present disclosure relates to the field of image processing, and in particular, to an image processing method, apparatus, device, and storage medium.
  • the radar situation map is a special kind of image, which has the characteristics of single color, basically unchanged background, and only the radar scanning area changes. Therefore, in order to reduce the data volume of the code stream of the radar situation map after encoding and decoding, a specific encoding method can be used for the radar situation map to reduce the redundancy of the code stream, thereby reducing the data volume of the code stream after the radar situation map is encoded and decoded.
  • Since there is currently no encoding and compression method specifically for radar situation maps, radar situation maps are still encoded according to traditional compression protocols, which do not fully exploit the shape and change characteristics of the radar situation map. Therefore, when a radar situation map is encoded according to a traditional compression protocol, the data volume of the code stream of the encoded radar situation map is relatively large.
  • Embodiments of the present disclosure provide an image processing method, apparatus, device, and storage medium, which can solve the problem that the code stream of the encoded radar situation map has a large data volume when the radar situation map is encoded according to a traditional compression protocol.
  • the technical solution is as follows:
  • an image processing method comprising:
  • the current frame image is a dynamically scanned radar situation map
  • the first image data and the second image data are encoded and sent to the decoding end.
  • The image processing method provided by the embodiments of the present disclosure can acquire a current frame image, which is a dynamically scanned radar situation map; determine a first sector scanning area of the current frame image and first image data of the first sector scanning area; determine a second sector scanning area of a target frame image according to the first sector scanning area; acquire the target frame image; acquire second image data from the second sector scanning area of the target frame image; and encode the first image data and the second image data and send them to the decoding end. It is not necessary to encode the current frame image and the target frame image and send them to the decoding end; only the first image data acquired from the first sector scanning area of the current frame image and the second image data acquired from the second sector scanning area of the target frame image need to be encoded and sent to the decoding end, which greatly reduces the data volume of the code stream of the encoded radar situation map.
  • Before acquiring the current frame image, the method further includes:
  • Determining the second sector scanning area of the target frame image according to the first sector scanning area includes:
  • a second sector scan area of the target frame image is determined according to the first sector scan area and the dynamic scan rule.
  • the second sector scanning area of the target frame image can be further determined according to the dynamic scanning rule.
  • Determining the first sector scanning area of the current frame image includes:
  • the initial image is a radar situation map that has not been dynamically scanned
  • the first sector scanning area of the current frame image is determined from the circular scanning area of the current frame image.
  • the sector scanning area can be further determined in the circular scanning area of the current frame image.
  • Determining the first sector scanning area of the current frame image from the circular scanning area of the current frame image includes:
  • Linear fitting is performed on the change area of the current frame image to generate the first sector area.
  • By performing linear fitting on the change area of the current frame image, the first sector area can be accurately generated.
  • Determining the circular scanning area of the initial image includes:
  • Circular detection is performed on the initial image, and a circular scanning area in the initial image is determined.
  • the circular scanning area of the initial image can be accurately determined.
  • an image processing apparatus including:
  • a current frame image acquisition module configured to acquire a current frame image, where the current frame image is a dynamically scanned radar situation map
  • a first sector scanning area determination module configured to determine the first sector scanning area of the current frame image and the first image data of the first sector scanning area
  • a second sector scanning area determining module configured to determine a second sector scanning area of the target frame image according to the first sector scanning area
  • a target frame image acquisition module for acquiring the target frame image
  • a second image data acquisition module configured to acquire second image data from the second sector scanning area of the target frame image
  • the image data sending module is used for encoding the first image data and the second image data and sending them to the decoding end.
  • the apparatus further includes:
  • Dynamic scan rule determination module for:
  • the second sector scanning area determination module is specifically used for:
  • a second sector scan area of the target frame image is determined according to the first sector scan area and the dynamic scan rule.
  • the first sector scanning area determining module is specifically configured to:
  • the initial image is a radar situation map that has not been dynamically scanned
  • the first sector scanning area of the current frame image is determined from the circular scanning area of the current frame image.
  • the first sector scanning area determining module is specifically configured to:
  • Linear fitting is performed on the change area of the current frame image to generate the first sector area.
  • the dynamic scanning rule determination module is specifically used for:
  • Circular detection is performed on the initial image, and a circular scanning area in the initial image is determined.
  • an image processing device includes a processor and a memory, the memory stores at least one computer instruction, and the instruction is loaded and executed by the processor to implement the steps performed in the image processing method according to any one of the first aspects.
  • a computer-readable storage medium, where at least one computer instruction is stored in the storage medium, and the instruction is loaded and executed by a processor to implement the steps performed in the image processing method according to any one of the first aspects.
  • FIG. 1 is a structural diagram of an image processing system provided by an embodiment of the present disclosure
  • FIG. 2 is a flowchart of an image processing method provided by an embodiment of the present disclosure
  • FIG. 3 is a schematic diagram of an initial image provided by an embodiment of the present disclosure.
  • FIG. 4 is a schematic diagram of a region separation provided by an embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram of a circular scanning area and a background area of an initial image provided by an embodiment of the present disclosure
  • FIG. 6 is a schematic diagram 1 of a first sector scanning area of the current frame image provided by an embodiment of the present disclosure
  • FIG. 7 is a second schematic diagram of a first sector scanning area of the current frame image provided by an embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram of a second sector scanning area of a target frame image provided by an embodiment of the present disclosure
  • FIG. 9 is a schematic diagram of an initial image encoded by a color code table provided by an embodiment of the present disclosure.
  • FIG. 10 is a structural diagram 1 of an image processing apparatus provided by an embodiment of the present disclosure.
  • FIG. 11 is a second structural diagram of an image processing apparatus provided by an embodiment of the present disclosure.
  • FIG. 12 is a structural diagram of an image processing device provided by an embodiment of the present disclosure.
  • FIG. 1 is a structural diagram of an image processing system provided by an embodiment of the present disclosure. As shown in Figure 1, the system includes:
  • an encoding end 101 and a decoding end 102, where the encoding end 101 is communicatively connected to the decoding end 102.
  • Exemplarily, the encoding end 101 may be a computer, a mobile phone, a tablet, or other device, which is not specifically limited in this embodiment.
  • Similarly, the decoding end 102 may be a computer, a mobile phone, a tablet, or other device, which is not specifically limited in this embodiment.
  • the encoding terminal 101 is used to acquire the radar situation map, encode the acquired radar situation map, generate an encoded code stream, and then send the encoded code stream to the decoding terminal 102 .
  • After receiving the encoded code stream sent by the encoding end 101, the decoding end 102 decodes it to generate a decoded code stream, and then generates the radar situation map according to the decoded code stream.
  • However, since radar situation maps are still encoded according to traditional compression protocols, the code stream of the radar situation map encoded by the encoding end 101 has a large data volume, which in turn causes the decoding end 102 to decode the encoded code stream and generate a decoded code stream with a correspondingly large data volume.
  • FIG. 2 is a flowchart of an image processing method provided by an embodiment of the present disclosure, and the method is applied to an encoding end. As shown in Figure 2, the method includes:
  • FIG. 3 is a schematic diagram of an initial image provided by an embodiment of the present disclosure. As shown in FIG. 3, the left figure (a) is an initial image with 256-level (8-bit) grayscale.
  • this scheme uses the error diffusion method to convert the initial image of 256-level grayscale into an initial image of 32-level grayscale.
  • Error diffusion is often used when reducing the color depth of an image.
  • The principle is to diffuse the pixel-color quantization error when reducing the color depth of the image, so that the overall error of adjacent pixel sets appears smaller when the image is observed with the naked eye.
  • In this application, the 256-level (8-bit) grayscale original image undergoes error diffusion in a 3:2:3 ratio, its grayscale is downgraded, and it is converted into a 32-level grayscale initial image. The 32-level grayscale initial image generated after downgrading the grayscale of the 256-level (8-bit) grayscale original image is shown in figure (b) on the right side of FIG. 3.
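As a rough illustration of this error-diffusion step, the following Python sketch reduces an 8-bit grayscale image to 32 levels. The patent does not specify the diffusion kernel, so the classic Floyd-Steinberg weights used here are an assumption, as are the function name and the list-of-lists image layout.

```python
def error_diffuse_to_32_levels(img):
    """Reduce an 8-bit grayscale image (list of lists, values 0-255) to
    32 gray levels using error diffusion.  The Floyd-Steinberg kernel is
    an assumption; the patent only names 'error diffusion'."""
    h, w = len(img), len(img[0])
    buf = [[float(v) for v in row] for row in img]
    step = 255.0 / 31.0                    # spacing of the 32 output levels
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = buf[y][x]
            level = min(31, max(0, round(old / step)))
            new = level * step             # nearest of the 32 levels
            out[y][x] = int(round(new))
            err = old - new                # diffuse the quantization error
            if x + 1 < w:                  # to the not-yet-visited neighbours
                buf[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    buf[y + 1][x - 1] += err * 3 / 16
                buf[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    buf[y + 1][x + 1] += err * 1 / 16
    return out
```

Because the error is pushed only to unvisited neighbours, every output pixel is one of exactly 32 levels while the local average brightness is preserved.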
  • After the 256-level (8-bit) grayscale original image is downgraded to generate the 32-level grayscale initial image, region separation is performed on the 32-level grayscale initial image to obtain two parts: the circular scanning area and the background area. Due to the particularity of the radar situation map, after dynamic scanning starts, the main change area of the radar situation map lies in the circular scanning area, while the background area contains only a few simple numeric changes and is mostly unchanged. Therefore, in this application, the radar situation map is divided into two parts: the circular scanning area and the background area. Exemplarily, a schematic diagram of region separation is shown in FIG. 4.
  • This scheme uses the Hough circle detection algorithm to detect the outer frame circle of the circular scanning area in the initial image, so as to achieve the purpose of separating the images.
  • the outer frame circle and the area within the outer frame circle are circular scanning areas, and the area outside the outer frame circle is the background area.
  • Like Hough-transform line detection, the Hough circle detection algorithm transforms the coordinates of each pixel in the initial image, mapping pixel points on the Y-X plane of the initial image to the a-b coordinate system.
  • (x, y) are the coordinates of a pixel in the initial image on the Y-X plane
  • a and b are its coordinates in the a-b coordinate system; if a > 0 && a ≤ IMG_height and b > 0 && b ≤ IMG_width, the accumulator at that position is incremented.
  • IMG_height is the height of the initial image
  • IMG_width is the width of the initial image.
  • After the coordinate system is converted, if many pixels lie on one circular boundary on the Y-X plane, there will correspondingly be many circles in the a-b coordinate system. Since these pixels all lie on the same circle in the initial image, the transformed a and b must also satisfy the equations of all these circles in the a-b coordinate system. Intuitively, the circles corresponding to these many pixels will intersect at one point, and this intersection point may be the circle center (a, b).
  • The number of circles passing through each local intersection is counted and the maximum is taken; the intersection corresponding to the maximum is the center (a, b) of the outer frame circle of the circular scanning area in the initial image. Circles are then drawn at the center (a, b) with different radius steps, yielding multiple circles of different radii. The number of pixels lying on each circle is counted and the maximum is taken; the circle with the maximum pixel count is the outer frame circle of the initial image. The initial image is then segmented according to the outer frame circle to generate the circular scanning area and the background area of the initial image, shown respectively in the left figure (a) and the right figure (b) of FIG. 5.
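The two-stage procedure above (vote for the circle center in the a-b accumulator, then sweep candidate radii about that center) can be sketched as follows. The function name, data layout, and parameter values are illustrative assumptions, not the patent's implementation.

```python
import math
from collections import Counter

def detect_outer_circle(points, radii, theta_step=0.1):
    """Two-stage Hough circle detection: first vote for the center (a, b)
    in an accumulator, then sweep candidate radii about that center.
    `points` are boundary pixels (x, y); `radii` are candidate radii."""
    votes = Counter()
    thetas = [i * theta_step for i in range(int(2 * math.pi / theta_step) + 1)]
    for (x, y) in points:
        for r in radii:
            for t in thetas:
                a = round(x - r * math.cos(t))   # candidate center coords
                b = round(y - r * math.sin(t))
                votes[(a, b)] += 1
    (a, b), _ = votes.most_common(1)[0]          # peak = circle center
    # Stage 2: the radius whose circle passes through the most points
    def hits(r):
        return sum(1 for (x, y) in points
                   if abs(math.hypot(x - a, y - b) - r) < 1.0)
    best_r = max(radii, key=hits)
    return a, b, best_r
```

In the patent the radius range is [450, 550] for a full-size radar image; the tiny ranges used in practice tests here are only for illustration.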
  • After the circular scanning area and the background area are generated, the initial image is encoded, and the position information of the outer frame circle in the initial image is sent to the decoding end, so that the decoding end can decode the encoded initial image and determine the circular scanning area and the background area of the initial image.
  • A current frame image is acquired; the current frame image is a dynamically scanned radar situation map, and a first sector scanning area exists on the current frame image.
  • The circular scanning area of the current frame image is determined according to the circular scanning area of the initial image. Although the initial image is a radar situation map that has not been dynamically scanned and the current frame image is a dynamically scanned radar situation map, the position of the circular scanning area in the current frame image is consistent with the position of the circular scanning area in the initial image, so the circular scanning area of the current frame image can be determined according to the circular scanning area of the initial image.
  • A difference operation is performed between the image data in the circular scanning area of the current frame image and the image data in the circular scanning area of the initial image to obtain the change area of the current frame image; the image data in the change area is the data in which the current frame image differs from the initial image. Since only the circular scanning area is scanned when the radar situation map is dynamically scanned, in this embodiment the difference operation is performed only on the image data within the circular scanning areas of the two images.
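A minimal sketch of this difference operation, assuming a grayscale list-of-lists layout; the hypothetical `change_area` helper, the circle-membership test, and the threshold are illustrative, not the patent's exact criterion.

```python
def change_area(initial, current, center, radius, threshold=0):
    """Difference the current frame against the initial image, restricted
    to the circular scanning area, and return the changed pixel positions.
    `center`/`radius` describe the detected outer frame circle."""
    cx, cy = center
    changed = []
    for y in range(len(initial)):
        for x in range(len(initial[0])):
            inside = (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2
            if inside and abs(current[y][x] - initial[y][x]) > threshold:
                changed.append((x, y))   # pixel differs inside the circle
    return changed
```

The returned positions form the change area on which linear fitting of the sector edges is then performed.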
  • Formulas for the upper and lower edges l1 and l2 of the first sector scanning area are obtained as:
  • the first sector scanning area of the current frame image is shown in FIG. 6 and FIG. 7 . Further, after the first sector-shaped scanning area of the current frame image is determined, first image data is acquired from the first sector-shaped scanning area. The first image data is the image data in which the current frame image is changed compared with the initial image.
  • The second sector scanning area of the target frame image can be calculated from the first sector scanning area of the current frame image and the sector angle θ of the first sector scanning area. The target frame image may be the next frame image, or any frame image acquired at a moment after the current frame image is acquired.
  • the sector angle ⁇ of the first sector scanning area of the current frame image is 10°.
  • the sector angle of the sector scan area of the second frame image is 10°
  • the sector scan area of the second frame image is 11° to 20°
  • the sector angle of the sector scanning area of the third frame image is 10°
  • the sector scanning area of the third frame image is 21° to 30°.
  • The leftmost figure (a) in FIG. 8 is the sector scanning area of the second frame image; the middle figure (b) in FIG. 8 is the sector scanning area of the third frame image; the rightmost figure (c) in FIG. 8 shows the sector scanning areas of the first three frames of images.
  • At least one frame of image may be acquired before the current frame image is acquired; the at least one frame of image is a dynamically scanned radar situation map. A third sector scanning area of the at least one frame of image is determined, and the dynamic scanning rule is then determined according to the third sector scanning area of the at least one frame of image.
  • the second sector scan area of the target frame image is determined according to the first sector scan area of the current frame image and the dynamic scan rule.
  • the method for determining the third sector-shaped scanning area of the at least one frame of image here is similar to the method for determining the first sector-shaped scanning area of the current frame image, and details are not described herein again in this embodiment.
  • Exemplarily, a frame of image may be acquired at a moment before the current frame image is acquired, and the dynamic scanning rule is determined according to the third sector scanning area of that frame image and the first sector scanning area of the current frame image.
  • the third sector scanning area of the one frame image is 1° to 5°
  • the first sector scanning area of the current frame image is 6° to 10°
  • the dynamic scanning rule is clockwise scanning at 5° per moment.
  • the second sector scanning area of the target frame image can be determined according to the first sector scanning area of the current frame image.
  • Alternatively, two consecutive frames of images may be acquired at moments before the current frame image, and the dynamic scanning rule may be determined according to the third sector scanning areas of the two frames of images.
  • For example, if the third sector scanning areas of the two consecutive frames of images are 1° to 5° and 6° to 10° respectively, then the dynamic scanning rule is clockwise scanning at 5° per moment.
  • the second sector scanning area of the target frame image can be determined according to the first sector scanning area of the current frame image.
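The rule derivation and prediction described above can be sketched as follows, with sectors represented as (start, end) angle pairs in degrees; the function names and the pair representation are assumptions for illustration.

```python
def derive_rule(prev_sector, curr_sector):
    """Derive the dynamic scanning rule (degrees swept per moment) from
    the sector scanning areas of two consecutive frames.  Sectors are
    (start_deg, end_deg) pairs; a positive step means clockwise."""
    return (curr_sector[0] - prev_sector[0]) % 360

def predict_next_sector(curr_sector, step):
    """Apply the rule to the current frame's sector to obtain the
    target frame's sector scanning area."""
    start, end = curr_sector
    return ((start + step) % 360, (end + step) % 360)
```

With the text's example (sectors 1°-5° then 6°-10°), the derived rule is a 5° clockwise step, and applying it to 6°-10° predicts 11°-15° for the target frame.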
  • S205 Acquire second image data from the second sector scanning area of the target frame image.
  • Second image data is acquired from the second sector scanning area of the acquired target frame image.
  • The second image data is the data in which the target frame image is changed compared with the initial image.
  • the first image data and the second image data are encoded and sent to the decoding end.
  • Exemplarily, the first image data is encoded, and the encoded first image data and the first sector scanning area are sent to the decoding end, so that the decoding end decodes the first image data and superimposes the decoded first image data on the first sector scanning area of the initial image to generate the current frame image.
  • Similarly, the second image data is encoded, and the encoded second image data and the second sector scanning area are sent to the decoding end, so that the decoding end decodes the second image data and superimposes the decoded second image data on the second sector scanning area of the initial image to generate the target frame image.
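On the decoding end, the superposition step might look like the following sketch, where `sector_pixels` is a hypothetical mapping from (x, y) positions in the sector scanning area to decoded pixel values.

```python
def reconstruct_frame(initial, sector_pixels):
    """Decoding-end sketch: superimpose the decoded sector image data
    onto a copy of the initial image to rebuild the frame.
    `sector_pixels` maps (x, y) -> pixel value (layout is an assumption)."""
    frame = [row[:] for row in initial]      # keep the initial image intact
    for (x, y), value in sector_pixels.items():
        frame[y][x] = value                  # overwrite only changed pixels
    return frame
```

Because the initial image is kept unmodified, the same copy can serve as the base for every subsequent frame's sector data.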
  • the following describes how to encode the initial image, the first image data and the second image data.
  • this scheme encodes the initial image by establishing a color code table.
  • the radar situation map is an image in RGB pixel format.
  • The color code table shown in Table 1 can be built from Huffman-style probability statistics on the initial image: color combinations with a high occurrence probability are assigned the smaller numbers, so that the smallest possible code stream is obtained during entropy coding.
  • Each color pixel in the initial image is encoded with the number corresponding to that color in the established color code table.
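A minimal sketch of building and applying such a color code table, assuming pixels are RGB tuples; the patent's Table 1 is not reproduced here, so the exact ordering and layout are assumptions.

```python
from collections import Counter

def build_color_code_table(image_rows):
    """Build a color code table for an RGB image: colors that occur more
    often receive smaller numbers, so entropy coding of the numbered
    image yields a shorter code stream."""
    freq = Counter(px for row in image_rows for px in row)
    # most_common sorts by descending frequency; enumerate assigns 0, 1, ...
    return {color: code for code, (color, _) in enumerate(freq.most_common())}

def encode_with_table(image_rows, table):
    """Replace every pixel with its code-table number."""
    return [[table[px] for px in row] for row in image_rows]
```

The numbered image can then be entropy coded; small numbers dominating the output is what keeps the code stream short.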
  • JPEG (Joint Photographic Experts Group)
  • the first image data and the second image data are encoded in the same manner, and details are not described herein again in this embodiment.
  • the encoding end only needs to send the encoded code stream of the initial image to the decoding end, and the decoding end decodes and generates the initial image after receiving the code stream of the initial image.
  • After the encoding end sends the code stream of the encoded initial image to the decoding end, for each acquired frame of the dynamically scanned radar situation map it only needs to encode the image data of the change area of that frame and send the resulting code stream of the change-area image data to the decoding end. The decoding end decodes the code stream of the change-area image data and superimposes the image data of the change area on the initial image to generate that frame of the dynamically scanned radar situation map.
  • Thus the encoding end does not need to encode every frame of the dynamically scanned radar situation map and send it to the decoding end, and the decoding end does not need to receive and decode a code stream for every frame of the dynamically scanned radar situation map, which greatly reduces the data volume of the code stream of the encoded radar situation map.
  • The image processing method provided by the embodiments of the present disclosure can acquire a current frame image, which is a dynamically scanned radar situation map; determine a first sector scanning area of the current frame image and first image data of the first sector scanning area; determine a second sector scanning area of a target frame image according to the first sector scanning area; acquire the target frame image; acquire second image data from the second sector scanning area of the target frame image; and encode the first image data and the second image data and send them to the decoding end. It is not necessary to encode the current frame image and the target frame image and send them to the decoding end; only the first image data acquired from the first sector scanning area of the current frame image and the second image data acquired from the second sector scanning area of the target frame image need to be encoded and sent to the decoding end, which greatly reduces the data volume of the code stream of the encoded radar situation map.
  • FIG. 10 is a structural diagram of an image processing apparatus provided by an embodiment of the present disclosure.
  • the device is applied to the encoding end.
  • the device 100 includes:
  • a current frame image acquisition module 1001 is configured to acquire a current frame image, where the current frame image is a dynamically scanned radar situation map;
  • a first sector scanning area determination module 1002 configured to determine a first sector scanning area of the current frame image and first image data of the first sector scanning area
  • a target frame image acquisition module 1004 configured to acquire the target frame image
  • a second image data acquisition module 1005, configured to acquire second image data from the second sector scanning area of the target frame image
  • the image data sending module 1006 is configured to encode the first image data and the second image data and send them to the decoding end.
  • the apparatus 100 further includes:
  • the dynamic scanning rule determination module 1007 is used for:
  • the second sector scanning area determination module 1003 is specifically used for:
  • a second sector scanning area of the target frame image is determined according to the first sector scanning area and the dynamic scanning rule.
  • the first sector scanning area determining module 1002 is specifically configured to:
  • the initial image is a radar situation map that has not been dynamically scanned
  • the first sector scanning area of the current frame image is determined from the circular scanning area of the current frame image.
  • the first sector scanning area determining module 1002 is specifically configured to:
  • Linear fitting is performed on the change area of the current frame image to generate the first sector area.
  • the dynamic scanning rule determination module is specifically used for:
  • Circular detection is performed on the initial image, and a circular scanning area in the initial image is determined.
  • FIG. 12 is a structural diagram of an image processing device provided by an embodiment of the present disclosure.
  • the image processing device 120 includes a processor 1201 and a memory 1202.
  • the memory 1202 stores at least one computer instruction, and the instruction is loaded and executed by the processor 1201 to implement the image processing method described in the above embodiments.
  • An embodiment of the present disclosure further provides a computer-readable storage medium. For example, the non-transitory computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
  • Computer instructions are stored on the storage medium for executing the above-mentioned image processing method, which will not be repeated here.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Provided are an image processing method, apparatus, storage medium, and device, relating to the field of image processing technology, capable of solving the problem that the code stream of an encoded radar situation map has a large data volume when the radar situation map is encoded according to a traditional compression protocol. The specific technical solution is: acquiring a current frame image, the current frame image being a dynamically scanned radar situation map; determining a first sector scanning area of the current frame image and first image data of the first sector scanning area; determining a second sector scanning area of a target frame image according to the first sector scanning area; acquiring the target frame image; acquiring second image data from the second sector scanning area of the target frame image; and encoding the first image data and the second image data and sending them to a decoding end. The above solution is used to reduce the data volume of the code stream of the encoded radar situation map.

Description

Image processing method, apparatus, device, and storage medium

Technical Field
The present disclosure relates to the field of image processing, and in particular to an image processing method, apparatus, device, and storage medium.
Background Art
In video encoding/decoding and transmission, the radar situation map is a rather special kind of image: it has a single color, a basically unchanged background, and only the radar scanning area changes. Therefore, to reduce the data volume of the code stream of the radar situation map after encoding and decoding, a specific encoding method can be applied to the radar situation map to reduce code-stream redundancy, thereby reducing the data volume of the code stream of the radar situation map after encoding and decoding.
Since there is currently no encoding and compression method specifically for radar situation maps, radar situation maps are still encoded according to traditional compression protocols, which do not fully exploit the shape and change characteristics of the radar situation map. Therefore, when a radar situation map is encoded according to a traditional compression protocol, the data volume of the code stream of the encoded radar situation map is relatively large.
Summary of the Invention
Embodiments of the present disclosure provide an image processing method, apparatus, device, and storage medium, which can solve the problem that the code stream of an encoded radar situation map has a large data volume when the radar situation map is encoded according to a traditional compression protocol. The technical solution is as follows:
According to a first aspect of the embodiments of the present disclosure, an image processing method is provided, the method comprising:
acquiring a current frame image, the current frame image being a dynamically scanned radar situation map;
determining a first sector scanning area of the current frame image and first image data of the first sector scanning area;
determining a second sector scanning area of a target frame image according to the first sector scanning area;
acquiring the target frame image;
acquiring second image data from the second sector scanning area of the target frame image;
encoding the first image data and the second image data and sending them to a decoding end.
The image processing method provided by the embodiments of the present disclosure can acquire a current frame image, which is a dynamically scanned radar situation map; determine a first sector scanning area of the current frame image and first image data of the first sector scanning area; determine a second sector scanning area of a target frame image according to the first sector scanning area; acquire the target frame image; acquire second image data from the second sector scanning area of the target frame image; and encode the first image data and the second image data and send them to a decoding end. It is not necessary to encode the current frame image and the target frame image and send them to the decoding end; only the first image data acquired from the first sector scanning area of the current frame image and the second image data acquired from the second sector scanning area of the target frame image need to be encoded and sent to the decoding end, which greatly reduces the data volume of the code stream of the encoded radar situation map.
In one embodiment, before acquiring the current frame image, the method further comprises:
acquiring at least one frame image;
determining a third sector scanning area of the at least one frame image;
determining a dynamic scanning rule according to the third sector scanning area;
determining the second sector scanning area of the target frame image according to the first sector scanning area comprises:
determining the second sector scanning area of the target frame image according to the first sector scanning area and the dynamic scanning rule.
By determining the dynamic scanning rule, the second sector scanning area of the target frame image can further be determined according to the dynamic scanning rule.
In one embodiment, determining the first sector scanning area of the current frame image comprises:
acquiring an initial image, the initial image being a radar situation map that has not been dynamically scanned;
determining a circular scanning area of the initial image;
determining a circular scanning area of the current frame image according to the circular scanning area of the initial image;
determining the first sector scanning area of the current frame image from the circular scanning area of the current frame image.
By determining the circular scanning area, the sector scanning area can further be determined within the circular scanning area of the current frame image.
In one embodiment, determining the first sector scanning area of the current frame image from the circular scanning area of the current frame image comprises:
performing a difference operation between the circular scanning area of the current frame image and the circular scanning area of the initial image to obtain a change area of the current frame image;
performing linear fitting on the change area of the current frame image to generate the first sector area.
By performing linear fitting on the change area of the current frame image, the first sector area can be generated accurately.
In one embodiment, determining the circular scanning area of the initial image comprises:
performing circle detection on the initial image to determine the circular scanning area in the initial image.
By performing circle detection on the initial image, the circular scanning area of the initial image can be determined accurately.
According to a second aspect of the embodiments of the present disclosure, an image processing apparatus is provided, comprising:
a current frame image acquisition module, configured to acquire a current frame image, the current frame image being a dynamically scanned radar situation map;
a first sector scanning area determination module, configured to determine a first sector scanning area of the current frame image and first image data of the first sector scanning area;
a second sector scanning area determination module, configured to determine a second sector scanning area of a target frame image according to the first sector scanning area;
a target frame image acquisition module, configured to acquire the target frame image;
a second image data acquisition module, configured to acquire second image data from the second sector scanning area of the target frame image;
an image data sending module, configured to encode the first image data and the second image data and send them to a decoding end.
In one embodiment, the apparatus further comprises:
a dynamic scanning rule determination module, configured to:
acquire at least one frame image;
determine a third sector scanning area of the at least one frame image;
determine a dynamic scanning rule according to the third sector scanning area;
the second sector scanning area determination module is specifically configured to:
determine the second sector scanning area of the target frame image according to the first sector scanning area and the dynamic scanning rule.
In one embodiment, the first sector scanning area determination module is specifically configured to:
acquire an initial image, the initial image being a radar situation map that has not been dynamically scanned;
determine a circular scanning area of the initial image;
determine a circular scanning area of the current frame image according to the circular scanning area of the initial image;
determine the first sector scanning area of the current frame image from the circular scanning area of the current frame image.
In one embodiment, the first sector scanning area determination module is specifically configured to:
perform a difference operation between the circular scanning area of the current frame image and the circular scanning area of the initial image to obtain a change area of the current frame image;
perform linear fitting on the change area of the current frame image to generate the first sector area.
In one embodiment, the dynamic scanning rule determination module is specifically configured to:
perform circle detection on the initial image to determine the circular scanning area in the initial image.
According to a third aspect of the embodiments of the present disclosure, an image processing device is provided. The image processing device includes a processor and a memory, the memory stores at least one computer instruction, and the instruction is loaded and executed by the processor to implement the steps performed in the image processing method according to any one of the first aspect.
According to a fourth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided. At least one computer instruction is stored in the storage medium, and the instruction is loaded and executed by a processor to implement the steps performed in the image processing method according to any one of the first aspect.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief Description of the Drawings
The accompanying drawings here are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the principles of the present disclosure.
FIG. 1 is a structural diagram of an image processing system provided by an embodiment of the present disclosure;
FIG. 2 is a flowchart of an image processing method provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of an initial image provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of region separation provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a circular scanning area and a background area of an initial image provided by an embodiment of the present disclosure;
FIG. 6 is a first schematic diagram of a first sector scanning area of the current frame image provided by an embodiment of the present disclosure;
FIG. 7 is a second schematic diagram of a first sector scanning area of the current frame image provided by an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of a second sector scanning area of a target frame image provided by an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of an initial image encoded with a color code table provided by an embodiment of the present disclosure;
FIG. 10 is a first structural diagram of an image processing apparatus provided by an embodiment of the present disclosure;
FIG. 11 is a second structural diagram of an image processing apparatus provided by an embodiment of the present disclosure;
FIG. 12 is a structural diagram of an image processing device provided by an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. When the following description refers to the drawings, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
FIG. 1 is a structural diagram of an image processing system provided by an embodiment of the present disclosure. As shown in FIG. 1, the system includes:
an encoding end 101 and a decoding end 102, where the encoding end 101 is communicatively connected to the decoding end 102. Exemplarily, the encoding end 101 may be a computer, a mobile phone, a tablet, or other device, which is not specifically limited in this embodiment. Similarly, the decoding end 102 may be a computer, a mobile phone, a tablet, or other device, which is not specifically limited in this embodiment.
In this embodiment, the encoding end 101 is used to acquire a radar situation map, encode the acquired radar situation map to generate an encoded code stream, and then send the encoded code stream to the decoding end 102.
Further, after receiving the encoded code stream sent by the encoding end 101, the decoding end 102 decodes the encoded code stream to generate a decoded code stream, and then generates the radar situation map according to the decoded code stream.
However, there is currently no encoding and compression method specifically for radar situation maps; radar situation maps are still encoded according to traditional compression protocols. Therefore, the code stream of the radar situation map encoded by the encoding end 101 has a large data volume, which in turn causes the decoding end 102 to decode the encoded code stream and generate a decoded code stream with a large data volume.
发明人注意到这一问题,提出一种图像处理方法,具体如下:
FIG. 2 is a flowchart of an image processing method provided by an embodiment of the present disclosure. The method is applied to an encoding end. As shown in FIG. 2, the method includes:
S201: acquiring a current frame image, where the current frame image is a dynamically scanned radar situation map;
S202: determining a first sector scanning area of the current frame image and first image data of the first sector scanning area.
Exemplarily, before the dynamically scanned radar situation map is acquired, an initial image is acquired first. The initial image is a radar situation map that has not been dynamically scanned, and no sector scanning area exists on the initial image. FIG. 3 is a schematic diagram of an initial image provided by an embodiment of the present disclosure. As shown in FIG. 3, the image (a) on the left is an initial image with 256 (8-bit) gray levels.
Since the colors of a radar situation map are relatively simple, directly compressing and encoding the original image with 256 (8-bit) gray levels would produce a large amount of unnecessary code stream. Therefore, to reduce the amount of encoding as much as possible without affecting the radar situation map to be encoded, this solution uses an error diffusion method to convert the initial image from 256 gray levels to 32 gray levels.
Error diffusion is often used when reducing the color depth of an image. Its principle is to spread the error introduced by each pixel's color change to neighboring pixels while the color depth is being reduced, so that when the image is observed by the naked eye, the overall error of adjacent pixel sets becomes smaller. In the present application, the original image with 256 (8-bit) gray levels is error-diffused according to a 3:2:3 ratio, its gray levels are downgraded, and the image is thereby converted into an initial image with 32 gray levels. The resulting 32-gray-level initial image is shown as image (b) on the right of FIG. 3.
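The downgrade step above can be sketched as a one-dimensional error-diffusion pass. The forward-only neighbourhood and the 3:2:3 weights below are one interpretation of the ratio mentioned in the text, not a confirmed detail of the patented method, and all names are illustrative.

```python
def quantize_row(row, levels=32):
    """Reduce one row of 8-bit gray values (0-255) to `levels` gray levels,
    diffusing each pixel's quantization error forward with 3:2:3 weights
    (a guess at the "3:2:3 ratio" in the text; the exact neighbourhood
    is unspecified there)."""
    step = 256 // levels              # 8 gray values per output level
    vals = [float(v) for v in row]
    out = []
    for i, v in enumerate(vals):
        q = min(levels - 1, max(0, round(v / step))) * step
        out.append(q)
        err = v - q                   # spread the error to the next pixels
        for offset, w in ((1, 3), (2, 2), (3, 3)):
            if i + offset < len(vals):
                vals[i + offset] += err * w / 8.0
    return out
```

Every output value is then one of the 32 levels 0, 8, 16, …, 248, and neighbouring pixels absorb the rounding error so that local averages stay close to the original.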
Further, after the gray levels of the original 256-level (8-bit) image are downgraded to generate the 32-gray-level initial image, area separation is performed on the 32-gray-level initial image to obtain two parts: its circular scanning area and its background area. Because of the special nature of a radar situation map, once dynamic scanning starts, the main changing area of the map lies within the circular scanning area, while the background area only contains a few simple numeric changes and otherwise remains essentially unchanged. Therefore, in the present application, the radar situation map is divided into two parts: a circular scanning area and a background area. Exemplarily, a schematic diagram of area separation is shown in FIG. 4.
The following describes how area separation is performed on the 32-gray-level initial image (hereinafter referred to as the initial image):
This solution uses the Hough circle detection algorithm to detect the outer frame circle of the circular scanning area in the initial image, thereby separating the image. Specifically, the outer frame circle and the area inside it form the circular scanning area, and the area outside the outer frame circle is the background area.
Like Hough line detection, the Hough circle detection algorithm performs a coordinate transformation on each pixel of the initial image, mapping pixels on the Y-X plane of the initial image into the a-b coordinate system.
In this embodiment, the variation range and step of the angle theta, and the variation range and step of the radius r, are set. Exemplarily, theta ∈ [0, 2π] with an angle step theta_step = 0.1, and r ∈ [450, 550] with a radius step, may be set. The coordinate transformation is performed using the following formula:
a = x - r·cos(theta),  b = y - r·sin(theta)          (1)
where (x, y) are the coordinates of a pixel of the initial image on the Y-X plane, and a and b are its coordinates in the a-b coordinate system. If a > 0 && a ≤ IMG_height and b > 0 && b ≤ IMG_width, the vote at that position is accumulated, where IMG_height is the height of the initial image and IMG_width is its width.
After the coordinate system is transformed, if many pixels lie on one circular boundary in the Y-X plane, they correspond to many circles in the a-b coordinate system. Since these pixels all lie on the same circle in the initial image, the transformed a and b must satisfy the equations of all of these circles in the a-b coordinate system. Intuitively, the circles corresponding to these many pixels all intersect at one point, and this intersection point is likely the circle center (a, b).
The number of circles passing through each local intersection point is counted, and the maximum of these counts is taken; the intersection point corresponding to this maximum is the center (a, b) of the outer frame circle of the circular scanning area in the initial image. Then, circles of different radii are drawn at the center (a, b) according to the radius step, yielding multiple circles of different radii. The number of pixels lying on each of these circles is then counted, and the maximum is taken; the circle corresponding to this maximum number of pixels is the outer frame circle of the initial image. The initial image is then segmented according to the outer frame circle to generate its circular scanning area and background area, which are shown as image (a) on the left and image (b) on the right of FIG. 5, respectively.
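The voting procedure described above can be sketched as a brute-force accumulator, assuming a known radius search range. The synthetic point set and function names below are illustrative only; a production implementation would normally use an optimized routine such as OpenCV's `HoughCircles` rather than this exhaustive vote.

```python
import math
from collections import Counter

def hough_circle(points, r_range, theta_step=0.1, r_step=1):
    """Brute-force Hough circle vote: every edge pixel (x, y) votes for the
    candidate centres (a, b) = (x - r*cos(theta), y - r*sin(theta)); the
    most-voted (a, b, r) cell gives the centre and radius of the frame circle."""
    thetas = [i * theta_step for i in range(int(2 * math.pi / theta_step) + 1)]
    radii = list(range(r_range[0], r_range[1] + 1, r_step))
    votes = Counter()
    for x, y in points:
        for r in radii:
            for t in thetas:
                a = round(x - r * math.cos(t))
                b = round(y - r * math.sin(t))
                if a > 0 and b > 0:   # keep votes inside the image, as in the text
                    votes[(a, b, r)] += 1
    (a, b, r), _ = votes.most_common(1)[0]
    return a, b, r

# synthetic edge points on a circle of centre (50, 40), radius 20
pts = [(50 + 20 * math.cos(k / 10), 40 + 20 * math.sin(k / 10))
       for k in range(63)]
```

Running `hough_circle(pts, (18, 22))` on the synthetic points recovers the centre and radius, because all edge points agree only on the true (a, b, r) cell.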
Further, after the circular scanning area and the background area of the initial image are generated, the initial image is encoded, and the position information of the outer frame circle in the initial image is sent to the decoding end, so that the decoding end can decode the encoded initial image and determine the circular scanning area and the background area of the initial image.
Exemplarily, after the initial image is encoded and sent to the decoding end, dynamic scanning of the radar situation map starts. A sector scanning area exists in the circular scanning area of the dynamically scanned radar situation map; this sector scanning area is the radar scanning area.
In this embodiment, a current frame image is acquired. The current frame image is a dynamically scanned radar situation map, and a first sector scanning area exists on the current frame image.
Exemplarily, after the current frame image is acquired, the circular scanning area of the current frame image is determined according to the circular scanning area of the initial image. Since the initial image is a radar situation map that has not been dynamically scanned and the current frame image is a dynamically scanned radar frame image, the position of the circular scanning area in the current frame image is the same as the position of the circular scanning area in the initial image; therefore, the circular scanning area of the current frame image can be determined according to the circular scanning area of the initial image.
Further, after the circular scanning area of the current frame image is determined, a difference operation is performed between the image data in the circular scanning area of the current frame and the image data in the circular scanning area of the initial image to obtain the changed area of the current frame image, in which the image data differs from that of the initial image. Since only the circular scanning area is scanned when the radar situation map is dynamically scanned, in this embodiment the changed area of the current frame image can be obtained simply by performing this difference operation.
After the changed area of the current frame image is determined, linear fitting is performed on the changed area to generate the first sector scanning area.
Exemplarily, a straight-line fitting formula is used to obtain the upper and lower edges l1 and l2 of the first sector scanning area:
l1: y = k1·x + b1,  l2: y = k2·x + b2
Then, according to formula (2), the sector angle θ of the first sector scanning area is obtained:
θ = arctan | (k1 - k2) / (1 + k1·k2) |          (2)
The first sector scanning area of the current frame image is shown in FIG. 6 and FIG. 7. Further, after the first sector scanning area of the current frame image is determined, first image data is acquired from the first sector scanning area. The first image data is the image data of the current frame image that has changed compared with the initial image.
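A minimal sketch of the edge fitting and sector-angle computation might look like the following. The least-squares fit and the angle-between-lines formula are standard choices, but the patent does not spell out its exact fitting formulas, so treat the details and names here as assumptions.

```python
import math

def fit_line(pts):
    """Ordinary least-squares fit y = k*x + b through the pixels of one
    sector edge; returns the slope k and intercept b."""
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    k = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return k, (sy - k * sx) / n

def sector_angle(k1, k2):
    """Angle (radians) between the two fitted edge lines l1 and l2.
    Assumes the edges are not perpendicular (1 + k1*k2 != 0)."""
    return math.atan(abs((k1 - k2) / (1 + k1 * k2)))
```

For example, edge pixels along y = x and along y = 0 fit to slopes 1 and 0, giving a sector angle of π/4 (45°).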
S203: determining a second sector scanning area of a target frame image according to the first sector scanning area.
In one embodiment, since the dynamic scanning of the radar situation map follows a uniform rotational scanning rule, the second sector scanning area of the target frame image can be calculated from the first sector scanning area of the current frame image and its sector angle θ. The target frame image may be the next frame image, or any frame image acquired at a moment after the current frame image is acquired.
For example, if the current frame image is the first frame of the dynamically scanned radar situation map, the dynamic scanning rule is clockwise scanning, and the sector angle θ of the first sector scanning area of the current frame image is 10° — say the first sector scanning area spans 1° to 10° — then the sector scanning area of the second frame image also has a sector angle of 10° and spans 11° to 20°, and the sector scanning area of the third frame image has a sector angle of 10° and spans 21° to 30°. By analogy, the second sector scanning area of a target frame image acquired at any moment after the current frame image can be determined. The second sector scanning area of the target frame image is shown in FIG. 8. Exemplarily, when the current frame image is the first frame image, image (a) on the far left of FIG. 8 is the sector scanning area of the second frame image; image (b) in the middle of FIG. 8 is the sector scanning area of the third frame image; and image (c) on the far right of FIG. 8 shows the sector scanning areas of the first three frame images.
In another embodiment, if it is not known whether the dynamic scanning rule is clockwise or counterclockwise, at least one frame image may be acquired before the current frame image is acquired. The at least one frame image is a dynamically scanned radar situation map. A third sector scanning area of the at least one frame image is determined, and the dynamic scanning rule is then determined according to the third sector scanning area. After the dynamic scanning rule is determined, the second sector scanning area of the target frame image is determined according to the first sector scanning area of the current frame image and the dynamic scanning rule. The method of determining the third sector scanning area of the at least one frame image is similar to that of determining the first sector scanning area of the current frame image, and is not repeated here.
Exemplarily, one frame image may be acquired at a moment before the current frame image is acquired, and the dynamic scanning rule may be determined according to the third sector scanning area of that frame image and the first sector scanning area of the current frame image. For example, if the third sector scanning area of that frame image spans 1° to 5° and the first sector scanning area of the current frame image spans 6° to 10°, the dynamic scanning rule is clockwise scanning at 5° per moment. After the dynamic scanning rule is determined, the second sector scanning area of the target frame image can be determined according to the first sector scanning area of the current frame image.
Exemplarily, two consecutive frame images may also be acquired at moments before the current frame image is acquired, and the dynamic scanning rule may be determined according to the third sector scanning areas of these two frame images. For example, if the third sector scanning areas of the two consecutive frame images span 1° to 5° and 6° to 10°, respectively, the dynamic scanning rule is clockwise scanning at 5° per moment. After the dynamic scanning rule is determined, the second sector scanning area of the target frame image can be determined according to the first sector scanning area of the current frame image.
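Under a uniform-rotation rule, the sector prediction above reduces to modular arithmetic on the sector boundaries. The sketch below assumes sectors are described by (start, end) angles in degrees and that the per-frame step equals the sector's own width unless a step is given; all names are illustrative.

```python
def predict_sector(start, end, frames_ahead, step=None):
    """Predict the sector scanned `frames_ahead` frames later under uniform
    clockwise rotation. Sectors are (start, end) angles in degrees; the
    per-frame step defaults to the sector's own angular width."""
    width = (end - start) % 360
    if step is None:
        step = width
    new_start = (start + frames_ahead * step) % 360
    return new_start, (new_start + width) % 360
```

With a first-frame sector of (0°, 10°], one frame ahead gives (10°, 20°] and two frames ahead gives (20°, 30°], matching the 1°–10° / 11°–20° / 21°–30° degree ranges in the example above; the modulo handles wrap-around past 360°.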
S204: acquiring the target frame image;
S205: acquiring second image data from the second sector scanning area of the target frame image.
Further, after the second sector scanning area of the target frame image is determined, the second image data is acquired from the second sector scanning area of the acquired target frame image. The second image data is the image data of the target frame image that has changed compared with the initial image.
S206: encoding the first image data and the second image data and sending them to the decoding end.
In this embodiment, after the first image data is acquired from the first sector scanning area of the current frame image, the first image data is encoded, and the encoded first image data and the first sector scanning area are sent to the decoding end, so that the decoding end can decode the first image data and superimpose the decoded first image data onto the first sector scanning area of the initial image to generate the current frame image.
Likewise, after the second image data is acquired from the second sector scanning area of the target frame image, the second image data is encoded, and the encoded second image data and the second sector scanning area are sent to the decoding end, so that the decoding end can decode the second image data and superimpose the decoded second image data onto the second sector scanning area of the initial image to generate the target frame image.
The following describes how the initial image, the first image data, and the second image data are encoded.
Since the initial image is a radar situation map that has not been dynamically scanned, its colors are relatively simple, and its gray levels have already been reduced: the original 256-gray-level initial image has been downgraded to a 32-gray-level initial image using the error diffusion method. Therefore, in the final encoding, this solution encodes the initial image by building a color code table.
Exemplarily, in this embodiment, the radar situation map is an image in the RGB pixel format. For the three channels R, G, and B, each channel has 32 gray levels, so the code table size is 32 × 32 × 32 = 32768. The color code table of the initial image is shown in Table 1:
(R, G, B)          Number
(0, 0, 0)          01
(0, 0, 8)          02
(0, 0, 16)         03
...
(248, 248, 248)    32768

Table 1
The color code table shown in Table 1 can be arranged by Huffman probability statistics on the initial image, placing the color combinations with higher occurrence probabilities at the smaller numbers, so that entropy coding later yields as small a code stream as possible.
Further, each pixel of each color in the initial image is encoded with the number corresponding to that color in the established color code table.
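A frequency-ordered color code table, as described above, might be built like this; the table layout and function names are illustrative, and a full implementation would feed the resulting numbers into an entropy coder rather than stop at the index list.

```python
from collections import Counter

def build_code_table(pixels):
    """Assign the smallest numbers to the most frequent (R, G, B) triples, so
    frequent colors get short codes in the later entropy-coding stage."""
    freq = Counter(pixels)
    ordered = sorted(freq, key=lambda c: (-freq[c], c))  # most frequent first
    return {color: i + 1 for i, color in enumerate(ordered)}

def encode_with_table(pixels, table):
    """Replace every pixel color with its number from the code table."""
    return [table[p] for p in pixels]

# toy image: black dominates, so it receives number 1
img = [(0, 0, 0)] * 5 + [(0, 0, 8)] * 3 + [(248, 248, 248)]
table = build_code_table(img)
```

On this toy image, `(0, 0, 0)` maps to 1, `(0, 0, 8)` to 2, and `(248, 248, 248)` to 3, mirroring the small-number-for-frequent-color ordering of Table 1.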
The initial image encoded using the color code table is shown in FIG. 9.
Further, after the initial image is encoded using the color code table, the initial image is further encoded using Joint Photographic Experts Group (JPEG) entropy coding to obtain the final output code stream of the initial image.
In this embodiment, the first image data and the second image data are encoded in the same manner, which is not repeated here.
With the above solution, the encoding end only needs to send the encoded code stream of the initial image to the decoding end; after receiving it, the decoding end decodes it and generates the initial image. After sending the encoded code stream of the initial image, for each acquired frame of the dynamically scanned radar situation map, the encoding end only needs to encode the image data of the changed area of that frame and send the resulting code stream of the changed area to the decoding end. The decoding end decodes that code stream and superimposes the image data of the changed area onto the initial image to generate that frame of the dynamically scanned radar situation map. Therefore, the encoding end does not need to encode and send every frame of the dynamically scanned radar situation map, and the decoding end does not need to receive and decode the code stream generated by encoding every such frame, which greatly reduces the data volume of the encoded code stream of the radar situation map.
The image processing method provided by the embodiments of the present disclosure can acquire a current frame image, where the current frame image is a dynamically scanned radar situation map; determine a first sector scanning area of the current frame image and first image data of the first sector scanning area; determine a second sector scanning area of a target frame image according to the first sector scanning area; acquire the target frame image; acquire second image data from the second sector scanning area of the target frame image; and encode the first image data and the second image data and send them to the decoding end. The current frame image and the target frame image themselves do not need to be encoded and sent to the decoding end; only the first image data acquired from the first sector scanning area of the current frame image and the second image data acquired from the second sector scanning area of the target frame image need to be encoded and sent, which greatly reduces the data volume of the encoded code stream of the radar situation map.
FIG. 10 is a structural diagram of an image processing apparatus provided by an embodiment of the present disclosure. The apparatus is applied to an encoding end. As shown in FIG. 10, the apparatus 100 includes:
a current frame image acquisition module 1001, configured to acquire a current frame image, where the current frame image is a dynamically scanned radar situation map;
a first sector scanning area determination module 1002, configured to determine a first sector scanning area of the current frame image and first image data of the first sector scanning area;
a second sector scanning area determination module 1003, configured to determine a second sector scanning area of a target frame image according to the first sector scanning area;
a target frame image acquisition module 1004, configured to acquire the target frame image;
a second image data acquisition module 1005, configured to acquire second image data from the second sector scanning area of the target frame image;
an image data sending module 1006, configured to encode the first image data and the second image data and send them to a decoding end.
In one embodiment, as shown in FIG. 11, the apparatus 100 further includes:
a dynamic scanning rule determination module 1007, configured to:
acquire at least one frame image;
determine a third sector scanning area of the at least one frame image; and
determine a dynamic scanning rule according to the third sector scanning area;
the second sector scanning area determination module 1003 is specifically configured to:
determine the second sector scanning area of the target frame image according to the first sector scanning area and the dynamic scanning rule.
In one embodiment, the first sector scanning area determination module 1002 is specifically configured to:
acquire an initial image, where the initial image is a radar situation map that has not been dynamically scanned;
determine a circular scanning area of the initial image;
determine a circular scanning area of the current frame image according to the circular scanning area of the initial image; and
determine the first sector scanning area of the current frame image from the circular scanning area of the current frame image.
In one embodiment, the first sector scanning area determination module 1002 is specifically configured to:
perform a difference operation between the circular scanning area of the current frame image and the circular scanning area of the initial image to obtain a changed area of the current frame image; and
perform linear fitting on the changed area of the current frame image to generate the first sector scanning area.
In one embodiment, the dynamic scanning rule determination module is specifically configured to:
perform circle detection on the initial image to determine the circular scanning area in the initial image.
For the implementation process and technical effects of the image processing apparatus provided by the embodiments of the present disclosure, reference may be made to the embodiments of FIG. 2 to FIG. 9 above, which are not repeated here.
FIG. 12 is a structural diagram of an image processing device provided by an embodiment of the present disclosure. As shown in FIG. 12, the image processing device 120 includes a processor 1201 and a memory 1202. At least one computer instruction is stored in the memory 1202, and the instruction is loaded and executed by the processor 1201 to implement the image processing method described in the above method embodiments.
Based on the image processing method described in the embodiments corresponding to FIG. 2 to FIG. 10 above, an embodiment of the present disclosure further provides a computer-readable storage medium. For example, the non-transitory computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like. Computer instructions are stored on the storage medium for executing the above image processing method, which is not repeated here.
Those of ordinary skill in the art can understand that all or part of the steps for implementing the above embodiments may be completed by hardware, or by a program instructing related hardware. The program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
Those skilled in the art will readily conceive of other embodiments of the present disclosure after considering the specification and practicing the disclosure disclosed herein. The present application is intended to cover any variations, uses, or adaptations of the present disclosure that follow the general principles of the present disclosure and include common knowledge or customary technical means in the technical field not disclosed by the present disclosure. The specification and embodiments are to be regarded as exemplary only, and the true scope and spirit of the present disclosure are indicated by the following claims.

Claims (10)

  1. An image processing method, comprising:
    acquiring a current frame image, wherein the current frame image is a dynamically scanned radar situation map;
    determining a first sector scanning area of the current frame image and first image data of the first sector scanning area;
    determining a second sector scanning area of a target frame image according to the first sector scanning area;
    acquiring the target frame image;
    acquiring second image data from the second sector scanning area of the target frame image; and
    encoding the first image data and the second image data and sending them to a decoding end.
  2. The method according to claim 1, wherein before the acquiring a current frame image, the method further comprises:
    acquiring at least one frame image;
    determining a third sector scanning area of the at least one frame image; and
    determining a dynamic scanning rule according to the third sector scanning area;
    wherein the determining a second sector scanning area of a target frame image according to the first sector scanning area comprises:
    determining the second sector scanning area of the target frame image according to the first sector scanning area and the dynamic scanning rule.
  3. The method according to claim 1, wherein the determining a first sector scanning area of the current frame image comprises:
    acquiring an initial image, wherein the initial image is a radar situation map that has not been dynamically scanned;
    determining a circular scanning area of the initial image;
    determining a circular scanning area of the current frame image according to the circular scanning area of the initial image; and
    determining the first sector scanning area of the current frame image from the circular scanning area of the current frame image.
  4. The method according to claim 3, wherein the determining the first sector scanning area of the current frame image from the circular scanning area of the current frame image comprises:
    performing a difference operation between the circular scanning area of the current frame image and the circular scanning area of the initial image to obtain a changed area of the current frame image; and
    performing linear fitting on the changed area of the current frame image to generate the first sector scanning area.
  5. The method according to claim 3, wherein the determining a circular scanning area of the initial image comprises:
    performing circle detection on the initial image to determine the circular scanning area in the initial image.
  6. An image processing apparatus, comprising:
    a current frame image acquisition module, configured to acquire a current frame image, wherein the current frame image is a dynamically scanned radar situation map;
    a first sector scanning area determination module, configured to determine a first sector scanning area of the current frame image and first image data of the first sector scanning area;
    a second sector scanning area determination module, configured to determine a second sector scanning area of a target frame image according to the first sector scanning area;
    a target frame image acquisition module, configured to acquire the target frame image;
    a second image data acquisition module, configured to acquire second image data from the second sector scanning area of the target frame image; and
    an image data sending module, configured to encode the first image data and the second image data and send them to a decoding end.
  7. The apparatus according to claim 6, further comprising:
    a dynamic scanning rule determination module, configured to:
    acquire at least one frame image;
    determine a third sector scanning area of the at least one frame image; and
    determine a dynamic scanning rule according to the third sector scanning area;
    wherein the second sector scanning area determination module is specifically configured to:
    determine the second sector scanning area of the target frame image according to the first sector scanning area and the dynamic scanning rule.
  8. The apparatus according to claim 6, wherein the first sector scanning area determination module is configured to:
    acquire an initial image, wherein the initial image is a radar situation map that has not been dynamically scanned;
    determine a circular scanning area of the initial image;
    determine a circular scanning area of the current frame image according to the circular scanning area of the initial image; and
    determine the first sector scanning area of the current frame image from the circular scanning area of the current frame image.
  9. An image processing device, comprising a processor and a memory, wherein at least one computer instruction is stored in the memory, and the instruction is loaded and executed by the processor to implement the steps performed in the image processing method according to any one of claims 1 to 5.
  10. A computer-readable storage medium, wherein at least one computer instruction is stored in the storage medium, and the instruction is loaded and executed by a processor to implement the steps performed in the image processing method according to any one of claims 1 to 5.
PCT/CN2020/130305 2020-07-23 2020-11-20 Image processing method, apparatus, device and storage medium WO2022016756A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010719810.8 2020-07-23
CN202010719810.8A CN112073722B (zh) 2020-07-23 2020-07-23 Image processing method, apparatus, device and storage medium

Publications (1)

Publication Number Publication Date
WO2022016756A1 true WO2022016756A1 (zh) 2022-01-27

Family

ID=73657415

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/130305 WO2022016756A1 (zh) 2020-07-23 2020-11-20 Image processing method, apparatus, device and storage medium

Country Status (2)

Country Link
CN (1) CN112073722B (zh)
WO (1) WO2022016756A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018055449A2 (en) * 2016-09-20 2018-03-29 Innoviz Technologies Ltd. Lidar systems and methods
US20180299557A1 (en) * 2017-04-17 2018-10-18 Baidu Online Network Technology (Beijing) Co., Ltd Method and apparatus for updating maps
CN110648396A (zh) * 2019-09-17 2020-01-03 西安万像电子科技有限公司 Image processing method, apparatus and system
CN111372080A (zh) * 2020-04-13 2020-07-03 西安万像电子科技有限公司 Radar situation map processing method and apparatus, storage medium and processor

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7602984B2 (en) * 2005-09-28 2009-10-13 Novell, Inc. Adaptive method and system for encoding digital images for the internet
CN102033224A (zh) * 2010-11-30 2011-04-27 电子科技大学 Marine radar video signal encoding method
CN104407339B (zh) * 2014-11-21 2017-01-04 中国人民解放军海军工程大学 Laser deflection jamming environment situation map generation system
CN106162191A (zh) * 2015-04-08 2016-11-23 杭州海康威视数字技术股份有限公司 Target-based video encoding method and system
CN105357494B (zh) * 2015-12-04 2020-06-02 广东中星微电子有限公司 Video encoding and decoding method and apparatus
US20170206434A1 (en) * 2016-01-14 2017-07-20 Ford Global Technologies, Llc Low- and high-fidelity classifiers applied to road-scene images
CN108734171A (zh) * 2017-04-14 2018-11-02 国家海洋环境监测中心 Ocean raft recognition method for synthetic aperture radar remote sensing images based on a deep collaborative sparse coding network
CN110324617B (zh) * 2019-05-16 2022-01-11 西安万像电子科技有限公司 Image processing method and apparatus

Also Published As

Publication number Publication date
CN112073722B (zh) 2024-05-17
CN112073722A (zh) 2020-12-11


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20945893

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20945893

Country of ref document: EP

Kind code of ref document: A1