CN116071421A - Image processing method, device and computer readable storage medium - Google Patents


Info

Publication number
CN116071421A
CN116071421A (application CN202111278879.2A)
Authority
CN
China
Prior art keywords
pixel
pixel point
coordinate
coordinate system
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111278879.2A
Other languages
Chinese (zh)
Inventor
张仁宇
孙旭彤
顾江
左崇彦
池清华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202111278879.2A
Publication of CN116071421A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22 Indexing; Data structures therefor; Storage structures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30 Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/38 Concurrent instruction execution, e.g. pipeline or look ahead
    • G06F9/3885 Concurrent instruction execution, e.g. pipeline or look ahead using a plurality of independent parallel functional units
    • G06F9/3887 Concurrent instruction execution, e.g. pipeline or look ahead using a plurality of independent parallel functional units controlled by a single instruction for multiple data lanes [SIMD]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/20 Processor architectures; Processor configuration, e.g. pipelining
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present application provide an image processing method, apparatus, and computer readable storage medium. The method includes: partitioning an image to be generated into N first areas; for each first area, converting the coordinate values of each second pixel point in the second pixel point set corresponding to the first pixel point set included in that area, from a first coordinate system whose origin is a first vertex of the original image, into a second coordinate system whose origin is a second vertex of the minimum circumscribed rectangle of the second pixel point set; generating a mapping table from the identifiers of the first pixel points in the first area and the coordinate values, under the second coordinate system, of the corresponding second pixel points; and then, according to the mapping table, reading a pixel value from the storage address corresponding to the second-coordinate-system coordinate value of the second pixel point corresponding to each first pixel point and filling that pixel value into the first pixel point. Because the stored coordinate values are block-local, the storage space occupied by the mapping table is reduced and image processing efficiency is improved.

Description

Image processing method, device and computer readable storage medium
Technical Field
The present disclosure relates to the field of electronic technologies, and in particular, to an image processing method, an image processing device, and a computer readable storage medium.
Background
Due to limitations of vehicle resources, cost, and other factors, the computing capacity and storage resources of an autonomous-driving computing platform are often limited. Algorithms related to autonomous driving frequently perform geometric transformations of images, such as distortion correction and perspective transformation, and two approaches are commonly used to implement them: looking up a pixel mapping table, and computing the coordinate transformation in real time. In the mapping-table approach, the pixel mapping relation between the original image and the image to be generated is stored as a pixel mapping table; the coordinate position in the original image of each pixel of the image to be generated is found by looking up the table, and the pixel value is fetched from that position in the original image to serve as the pixel value of the image to be generated. In the real-time approach, the coordinate position in the original image of each pixel of the image to be generated is computed from a transformation matrix, and the pixel value is fetched from that position. Both approaches incur algorithmic latency. To meet latency requirements, single instruction multiple data (SIMD) technology is often applied, for example on central processing units (CPUs) of the ARM and x86 architectures and on digital signal processors (DSPs); vectorizing part of the code with SIMD can effectively reduce the algorithm latency.
However, both methods still have problems. The pixel mapping table occupies a large amount of space and may consume too many storage resources when storage is limited, lowering image processing efficiency. In addition, because SIMD code commonly represents fractional values with fixed-point numbers, the data carry a certain precision error under a limited bit width, which degrades the perception result and affects the user experience of the autonomous driving scenario.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device and a computer readable storage medium, which are beneficial to improving the user experience of an automatic driving scene.
In a first aspect, an embodiment of the present application provides an image processing method, including: the image processing device partitions an image to be generated into N first areas, where N is an integer greater than 1, and performs the following for each of the N first areas. According to a first mapping relation, a second pixel point set in the original image corresponding to the first pixel point set included in the first area is determined; the first pixel point set comprises a plurality of first pixel points, the second pixel point set comprises a plurality of second pixel points corresponding to the plurality of first pixel points, and the first mapping relation comprises the mapping between the identifier of a first pixel point in the image to be generated and the coordinate value, under a first coordinate system, of the corresponding second pixel point in the original image, the first coordinate system being a coordinate system whose origin is a first vertex of the original image. Then, the minimum circumscribed rectangle of all second pixel points in the second pixel point set is determined, and the coordinate value of each second pixel point under the first coordinate system is converted into a coordinate value under a second coordinate system, the second coordinate system being a coordinate system whose origin is a second vertex of the minimum circumscribed rectangle.
And generating a mapping table according to the identifiers of a plurality of first pixel points in a first pixel point set included in the first region and the coordinate values of a plurality of second pixel points in a second pixel point set corresponding to the first region under a second coordinate system respectively, wherein the mapping table comprises the mapping relation between the identifiers of the first pixel points in the image to be generated and the coordinate values of the corresponding second pixel points in the original image under the second coordinate system. For the identification of any first pixel point included in the first area, determining a first coordinate value of a second pixel point corresponding to the identification of the first pixel point under a second coordinate system according to a mapping table, and filling the first pixel point by using a pixel value read from a storage address corresponding to the first coordinate value.
According to this method, the image to be generated is partitioned into N first areas. For each first area, the coordinate value of every second pixel point in the corresponding second pixel point set is converted from the first coordinate system, whose origin is a first vertex of the original image, into the second coordinate system, whose origin is a second vertex of the minimum circumscribed rectangle. The coordinate values of the second pixel points are thus represented by smaller numbers and need a smaller bit width. A mapping table is then generated from the identifiers of the first pixel points in the first area and the block-local coordinate values of the corresponding second pixel points. Compared with storing coordinate values under the first coordinate system, the bit width required for each entry of the mapping table is reduced, for example from 16 bits to 8 bits, so the storage space occupied by the mapping table is reduced and subsequent image processing efficiency is improved.
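The per-block coordinate conversion described above can be sketched as follows. This is a minimal Python illustration under a toy pixel mapping; the function names (`min_bounding_rect`, `build_block_mapping`) and all data are hypothetical, not taken from the patent.

```python
# Illustrative sketch: per-block mapping with block-local source coordinates,
# so each stored coordinate fits in fewer bits. All names and the toy
# mapping are hypothetical, not from the patent.

def min_bounding_rect(points):
    """Minimum axis-aligned rectangle enclosing the given (x, y) points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))  # (x0, y0, x1, y1)

def build_block_mapping(block_pixels, mapping):
    """block_pixels: identifiers of first pixel points in one block;
    mapping: identifier -> (x, y) of the source pixel in the full-image
    (first coordinate system) frame. Returns the rectangle vertex and a
    table of block-local (second coordinate system) coordinates."""
    src = [mapping[p] for p in block_pixels]
    x0, y0, _, _ = min_bounding_rect(src)
    # Re-express every source coordinate relative to the rectangle vertex:
    # values are now small, so they need fewer bits (e.g. 8 instead of 16).
    table = {p: (x - x0, y - y0) for p, (x, y) in zip(block_pixels, src)}
    return (x0, y0), table

# Toy example: four destination pixels whose source positions cluster far
# from the image origin.
mapping = {0: (1000, 500), 1: (1001, 500), 2: (1000, 501), 3: (1002, 502)}
origin, table = build_block_mapping([0, 1, 2, 3], mapping)
print(origin, table[3])  # (1000, 500) (2, 2)
```

The local coordinates in `table` never exceed the rectangle's width and height, which is what allows the narrower storage type.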
In one possible implementation manner, generating the mapping table according to the identifiers of the plurality of first pixels in the first pixel set included in the first area and the coordinate values of the plurality of second pixels in the second pixel set corresponding to the first area under the second coordinate system, may include: and determining index values corresponding to a plurality of second pixels in the second pixel set corresponding to the first region according to the coordinate values of the plurality of second pixels in the second pixel set corresponding to the first pixel set included in the first region under a second coordinate system and the width and height of the minimum circumscribed rectangle. And then, generating a mapping table according to the identifications of a plurality of first pixel points in the first pixel point set included in the first region and index values corresponding to a plurality of second pixel points in the second pixel point set corresponding to the first region. For any first pixel point included in the first area, determining, according to a mapping table, a first coordinate value of a second pixel point corresponding to an identification of the first pixel point in a second coordinate system, and filling the first pixel point with a pixel value read from a storage address corresponding to the first coordinate value, where the first pixel point is located in the first area, may include: for any first pixel point included in the first area, determining a first index value corresponding to a second pixel point corresponding to the identification of the first pixel point according to a mapping table, and filling the first pixel point by using a pixel value read from a storage address corresponding to the first index value.
Through the implementation mode, the coordinate values of all the pixel points in the mapping table can be replaced by index values, so that the index calculation process can be omitted during actual calculation, and the image processing efficiency can be improved.
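The index substitution can be sketched in a few lines, assuming the pixels of the block's bounding rectangle are stored row-major; the function name is illustrative, not from the patent.

```python
# Sketch of replacing an (x, y) coordinate with a single precomputed index,
# assuming a row-major layout of the block's bounding rectangle.

def coord_to_index(x_local, y_local, rect_width):
    # Row-major flattening: baking y * width + x into the mapping table
    # means the lookup step no longer computes it at run time.
    return y_local * rect_width + x_local

rect_width = 16  # hypothetical bounding-rectangle width
index_table = [coord_to_index(x, y, rect_width)
               for (x, y) in [(0, 0), (3, 1), (15, 2)]]
print(index_table)  # [0, 19, 47]
```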
In one possible implementation manner, if it is determined that, according to the coordinate value of the second pixel point in the original image under the first coordinate system and the width and height of the original image, the maximum bit width of the index value corresponding to the second pixel point in the original image is a first preset value, the maximum bit width of the index values corresponding to the second pixel points in the second pixel point set corresponding to the first area is smaller than the first preset value, where the first preset value is 32 bits, and the maximum bit width of the index value is smaller than 32 bits, for example, the maximum bit width may be 16 bits. By limiting the maximum bit width of the index values corresponding to the second pixel points in the second pixel point set corresponding to each first region, it is ensured that each index value does not exceed the range supported by aggregation (Gather) operation, and the number of parallel read data can be increased.
In one possible implementation manner, pixel values of a plurality of second pixels in the second pixel sets corresponding to the first pixel sets included in the N first areas are stored in N memory blocks, where the pixel values of the plurality of second pixels in each second pixel set are stored in the same memory block. Therefore, the Gather operation can be ensured to read data in parallel by adopting the index value in the mapping table.
In one possible implementation manner, for any first pixel point included in the first area, determining, according to a mapping table, a first index value corresponding to a second pixel point corresponding to an identifier of the first pixel point, and filling the first pixel point with a pixel value read from a storage address corresponding to the first index value, where the first index value includes: determining a memory block corresponding to the first area according to a start address corresponding to the first area, determining a first index value corresponding to a second pixel point corresponding to an identifier of the first pixel point according to a mapping table for any first pixel point included in the first area, reading the pixel value from a memory address corresponding to the first index value in the memory block corresponding to the first area, and filling the first pixel point with the pixel value of the second pixel point corresponding to the identifier of the first pixel point. According to the implementation mode, after the initial address corresponding to the first area and the index value corresponding to the second pixel corresponding to the identifier of any first pixel included in the first area are obtained, the pixel value of the second pixel corresponding to the identifier of the first pixel can be read from the memory block corresponding to the first area, so that the pixel value used for being filled into the first pixel can be obtained quickly.
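The fill step can be sketched as follows; in real SIMD code the per-pixel loop would be a single Gather over a vector of indices, and the sequential Python loop here only illustrates the addressing scheme. All names and values are hypothetical.

```python
# Sketch of the fill step: each first area has its own contiguous buffer of
# source pixel values (the memory block, addressed from the area's start
# address); a table index selects the value for each first pixel point.

def fill_block(dst, block_pixel_ids, index_table, block_buffer):
    """block_buffer stands in for the memory block corresponding to the
    first area; index_table maps pixel identifier -> index into it."""
    for pid in block_pixel_ids:
        dst[pid] = block_buffer[index_table[pid]]
    return dst

block_buffer = [10, 20, 30, 40]   # source pixel values for this block
index_table = {0: 2, 1: 0, 2: 3}  # per-destination-pixel indices
dst = fill_block({}, [0, 1, 2], index_table, block_buffer)
print(dst)  # {0: 30, 1: 10, 2: 40}
```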
In a second aspect, an embodiment of the present application provides an image processing method, including: the image processing device partitions an image to be generated into N first areas, where N is an integer greater than 1, and performs the following for each of the N first areas: determining a coordinate system transformation relation between a third coordinate system whose origin is a third vertex of the first area and a fourth coordinate system whose origin is a fourth vertex of the image to be generated, the relation transforming between the coordinate value of a first pixel point under the fourth coordinate system and its coordinate value under the third coordinate system; generating a second coordinate transformation relation from the first coordinate transformation relation and the coordinate system transformation relation, where the first coordinate transformation relation transforms between coordinate values of a first pixel point in the image to be generated under the fourth coordinate system and coordinate values of a second pixel point in the original image under the first coordinate system, the second coordinate transformation relation transforms between coordinate values of a first pixel point included in the first area under the third coordinate system and coordinate values of a second pixel point in the original image under the first coordinate system, and the first coordinate system is a coordinate system whose origin is a first vertex of the original image; and, for any first pixel point included in the first area, determining, according to the second coordinate transformation relation and the coordinate value of the first pixel point under the third coordinate system, a second coordinate value of the corresponding second pixel point under the first coordinate system, and filling the first pixel point with the pixel value read from the storage address corresponding to the second coordinate value.
According to this method, the image to be generated is partitioned into N first areas, and the first pixel points in each first area are represented by coordinate values under the third coordinate system, so that the coordinate values are smaller. Passing the coordinate value of a first pixel point under the third coordinate system through the second coordinate transformation relation locates the same second pixel point, under the first coordinate system, as passing its coordinate value under the fourth coordinate system through the first coordinate transformation relation. However, because the coordinate values entering the transformation are smaller, the computation precision in subsequent processing is improved; in particular, the precision error of multiplication operations is reduced, which improves the user experience of the autonomous driving scenario.
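The composition of the coordinate system transformation relation with the first coordinate transformation relation can be illustrated with 3x3 homogeneous matrices. The translations standing in for the real warp, and all numeric values, are illustrative assumptions.

```python
# Sketch of composing the block-local offset with the global transform,
# using 3x3 homogeneous matrices. Values are illustrative.

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translation(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

# First coordinate transformation relation: image-to-generate frame ->
# original-image frame. A plain translation stands in for a real warp.
first_relation = translation(100, 50)

# A block whose vertex sits at (64, 32) in the image to be generated:
# composing the block-to-image shift once yields the second relation.
second_relation = mat_mul(first_relation, translation(64, 32))

def apply(m, x, y):
    return (m[0][0]*x + m[0][1]*y + m[0][2],
            m[1][0]*x + m[1][1]*y + m[1][2])

# Pixel (3, 4) inside the block, expressed with small local coordinates,
# lands at the same original-image position as the global computation:
print(apply(second_relation, 3, 4))  # (167, 86)
```

The composition is done once per block, so the per-pixel work operates only on the small local coordinates.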
In a third aspect, embodiments of the present application provide an image processing apparatus, which includes modules/units that perform the method of the first aspect and any one of the possible designs of the first aspect, or perform the method of the second aspect; these modules/units may be implemented by hardware, or by hardware executing corresponding software.
In a fourth aspect, embodiments of the present application further provide an image processing apparatus, including a processor and a memory; the memory is used for storing programs; the processor is configured to execute the program stored in the memory, so that the image processing apparatus executes the technical solutions of the first aspect of the embodiments of the present application and any possible designs of the first aspect, or execute the technical solutions of the second aspect of the embodiments of the present application.
In a fifth aspect, embodiments of the present application further provide a computer readable storage medium, where the computer readable storage medium includes a computer program, where the computer program when executed on an image processing apparatus causes the image processing apparatus to perform the first aspect of the embodiments of the present application and any possible technical solutions of the first aspect, or perform the technical solutions of the second aspect of the embodiments of the present application.
In a sixth aspect, embodiments of the present application further provide a program product, including instructions that, when executed on an in-vehicle device, cause the in-vehicle device to perform the technical solutions of the first aspect of the embodiments of the present application and any one of the possible designs of the first aspect, or perform the technical solutions of the second aspect of the embodiments of the present application.
In a seventh aspect, a system on a chip is provided, which may include a processor. The processor is coupled to the memory and is operable to perform the method of the first aspect, and any of the possible implementations of the first aspect, or to perform the method of the second aspect of the embodiments of the present application. Optionally, the system on a chip further comprises a memory. Memory for storing a computer program (which may also be referred to as code, or instructions). A processor for invoking and running a computer program from memory, such that a device on which the system-on-chip is installed performs the method of the first aspect, and any possible implementation of the first aspect, or performs the method of the second aspect of an embodiment of the present application.
The technical effects related to the third aspect to the seventh aspect described above may be seen in the technical effects of the first aspect, and the methods in any one of the possible implementations of the first aspect, or the methods of the second aspect.
Drawings
FIG. 1 is a schematic diagram of a vehicle sensing system provided in an embodiment of the present application;
fig. 2 is a schematic view of an application scenario provided in an embodiment of the present application;
fig. 3 is a schematic flow chart of an image processing method according to an embodiment of the present application;
Fig. 4 is a schematic diagram of an image to be generated according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an original image according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram of a blocking process of an image to be generated according to an embodiment of the present application;
fig. 7 is a flowchart of another image processing method according to an embodiment of the present application;
fig. 8 is a schematic diagram of an image to be generated according to an embodiment of the present application;
fig. 9 is a schematic diagram of an image processing apparatus according to an embodiment of the present application;
fig. 10 is a schematic diagram of an image processing apparatus according to an embodiment of the present application;
fig. 11 is a schematic diagram of an image processing apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
Wherein in the description of embodiments of the present application, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relationship. "At least one of" the following items means any combination of these items, including any combination of single items or plural items. For example, at least one of a, b, or c may represent: a, b, c, a and b, a and c, b and c, or a and b and c, where a, b, and c may be singular or plural.
And, unless otherwise specified, references to "first," "second," etc. ordinal words of the embodiments are used for distinguishing between multiple objects and are not used for limiting the priority or importance of the multiple objects. For example, the first area and the second area are only for distinguishing different areas, and are not different in priority, importance, and the like.
In the algorithm related to automatic driving of the vehicle, geometric transformation of the image, such as distortion correction, perspective transformation and the like, is often carried out, and at present, a mode of looking up a pixel mapping table and calculating coordinate transformation in real time is often used for realizing the geometric transformation of the image.
(1) Pixel mapping table lookup method
When the pixel mapping table lookup method is used, a fixed pixel mapping relation between the original image and the image to be generated is required, and a mapping table can be generated from that relation. Each first pixel point in the image to be generated corresponds to one coordinate in the mapping table, which gives the coordinate position of that first pixel point in the original image. By looking up this position, the pixel value in the original image can be obtained (since the mapped position may be fractional, the pixel value may need to be interpolated first) and used to fill the first pixel point in the image to be generated. In SIMD technology, the Gather operation is often used to fetch data that is non-contiguously distributed in memory. A Gather instruction takes a base address and a set of indices, converts them to a set of addresses, and retrieves data from those addresses in parallel; each index corresponds to one address and hence one data element. Because the bit width of vector registers is fixed in SIMD technology, the larger the bit width of the data and address indices to be read, the fewer of them a single vector register can hold. In image processing, the data bit width of a pixel value is typically 8 or 16 bits. If the whole image is to be indexed, for example at a resolution of 1920×1080, the index range is 0-2073599, which exceeds the 16-bit range, so an index type of at least 32 bits is needed; an overly wide index value may exceed the range supported by the Gather operation.
Meanwhile, because the index value is usually wider than the pixel value, fewer indices than data elements fit into vector registers of the same width, so the index bit width becomes an important factor limiting the number of elements that can be read in parallel.
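The index bit-width arithmetic above is easy to check: a whole-image index for a 1920×1080 frame needs more than 16 bits and is therefore stored as a 32-bit value, while a per-block index can stay within 16 (or even 8) bits. The 256×256 block rectangle is a hypothetical size for illustration.

```python
# Bit widths required to index a whole image versus one block's
# bounding rectangle (block size is a hypothetical example).

def bits_needed(max_index):
    return max_index.bit_length()

whole_image_max = 1920 * 1080 - 1  # 2073599, largest whole-image index
block_max = 256 * 256 - 1          # largest index within a 256x256 rect
print(bits_needed(whole_image_max), bits_needed(block_max))  # 21 16
```

Since common integer types are 8, 16, or 32 bits wide, a 21-bit requirement forces a 32-bit index, whereas the block-local index fits in 16 bits, doubling the number of indices per vector register.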
(2) Real-time coordinate transformation method
In a typical geometric transformation, all pixels share a single transformation relation from the original image to the image to be generated: the coordinates of the pixels of the image to be generated are given with the upper-left corner of the image as the coordinate origin, and the position of each pixel in the original image is then computed from the transformation matrix. Because fixed-point numbers are commonly used to represent fractional values in SIMD code, the data carry a certain precision error under a limited bit width. In operations such as multiplication and division, this precision error is easily amplified, making the perception result inaccurate and affecting the user experience of the autonomous driving scenario.
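The amplification of fixed-point error by multiplication can be demonstrated with a small experiment. The Q8.8 format (8 fractional bits) and the coefficient value are assumptions for illustration; the patent does not fix a particular fixed-point format.

```python
# Sketch of why smaller coordinate values help fixed-point accuracy:
# the coefficient's rounding error is multiplied by the coordinate, so a
# large global coordinate amplifies it far more than a small local one.
# Q8.8-style fixed point is an illustrative assumption.

FRAC_BITS = 8
SCALE = 1 << FRAC_BITS

def to_fixed(v):
    return round(v * SCALE)

def fixed_mul(a, b):
    return (a * b) >> FRAC_BITS

scale_factor = 1.003          # transform coefficient, inexact in Q8.8
coeff = to_fixed(scale_factor)

for x in (4, 1900):           # small block-local vs large global coordinate
    exact = x * scale_factor
    approx = fixed_mul(to_fixed(x), coeff) / SCALE
    print(x, abs(exact - approx))
```

Running this shows an error of a few thousandths of a pixel for the small coordinate but well over a pixel for the large one, even though both use the same fixed-point format.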
To solve the problems of these two approaches, the present application provides an image processing method in which the image processing device of a vehicle partitions the image to be generated into a plurality of first areas. In one embodiment, for each first area, the bit width required by the coordinate values of the second pixel points in the generated mapping table is smaller, for example 8 bits instead of 16 bits, so the space used to store the mapping table is reduced, subsequent image processing efficiency is improved, and the user experience of the autonomous driving scenario is improved. In another embodiment, for each first area in the image to be generated, the coordinate value of a first pixel point under a third coordinate system, whose origin is a third vertex of that first area, is passed through a coordinate transformation relation; this achieves the same positioning effect as passing its coordinate value under a coordinate system whose origin is a fourth vertex of the image to be generated through the existing coordinate transformation relation, namely locating the coordinate value, under a first coordinate system whose origin is a first vertex of the original image, of the second pixel point corresponding to the first pixel point. However, because the coordinate value of the first pixel point is expressed by a smaller number under the third coordinate system, the computation precision in subsequent processing is improved; in particular, the precision error of multiplication operations is reduced, which improves the user experience of the autonomous driving scenario.
In this embodiment, as shown in fig. 1, various sensors, such as a camera device, a laser radar, a millimeter wave radar, an ultrasonic sensor, and the like, may be installed on a vehicle, so as to obtain environmental information around the vehicle through the sensors, and analyze and process the obtained information, so as to implement functions such as obstacle sensing, target recognition, vehicle positioning, path planning, driver monitoring/reminding, and the like, thereby improving safety, automation degree, and comfort level of vehicle driving.
The camera device is used for acquiring image information of the environment where the vehicle is located; at present, a plurality of cameras can be installed on a vehicle to acquire information from more angles. The original image in the embodiments of the present application is acquired by the camera device.
The laser radar is short for a light detection and ranging (Light Detection and Ranging, LiDAR) system, and mainly comprises a transmitter, a receiver and a signal processing unit, wherein the transmitter is the laser emitting mechanism in the laser radar. After the laser emitted by the transmitter irradiates a target object, it is reflected by the target object, and the reflected light converges on the receiver through a lens group. The signal processing unit is responsible for controlling the emission of the transmitter, processing the signals received by the receiver, and calculating the position, speed, distance, size and/or other information of the target object.
The millimeter wave radar uses millimeter waves as the detection medium, and can measure the distance, angle, relative speed and the like from the millimeter wave radar to a measured object. According to detection range, millimeter wave radars may be classified into long range radar (Long Range Radar, LRR), medium range radar (Mid-Range Radar, MRR) and short range radar (Short Range Radar, SRR). The application scenarios of the LRR mainly include active cruising, braking assistance and the like; the LRR has low requirements on the width of the detected angular domain, and accordingly places low requirements on the 3 dB beam width of the antenna. The MRR/SRR mainly faces application scenarios including automatic parking, lane merging assistance, blind spot detection and the like; it has higher requirements on the width of the detected angular domain, and accordingly places higher requirements on the 3 dB beam width of the antenna and requires the antenna to have a lower side lobe level. The beam width guarantees the range of the detectable angular domain, while the low side lobe reduces the clutter energy reflected by the ground and lowers the false alarm probability, thereby guaranteeing driving safety. The LRR can be arranged at the front of the vehicle body, and the MRR/SRR can be arranged at the four corners of the vehicle; used together, they can achieve 360° coverage around the vehicle body.
The millimeter wave radar may include a housing with at least one printed circuit board (Printed circuit board, PCB) built-in, for example, a power PCB and a radar PCB, wherein the power PCB may provide internal use voltage for the radar, and may also provide an interface and security functions for communication with other devices; the radar PCB may provide transceiving and processing of millimeter wave signals, with components for millimeter wave signal processing and antennas (a transmitting antenna Tx and a receiving antenna Rx) for transceiving millimeter wave signals integrated thereon. The antenna may be formed in a microstrip array on the back of the radar PCB for transmitting and receiving millimeter waves.
The ultrasonic sensor, which may be also called an ultrasonic radar, is a sensing device using ultrasonic detection, and its working principle is that ultrasonic waves are emitted outwards through an ultrasonic emission device, ultrasonic waves reflected back through an obstacle are received through a receiving device, and the distance is calculated according to the time difference of the reflected ultrasonic waves. The distance measured and calculated by the ultrasonic sensor can be used for prompting the distance from the vehicle body to the obstacle, assisting parking or reducing unnecessary collision. It should be understood that the above-described sensors are merely illustrative of sensors that may be configured on a vehicle in embodiments of the present application and are not intended to be limiting, and in other embodiments, the sensors may include, but are not limited to, the above-described examples.
It should be noted that the geometric transformation referred to in the embodiments of the present application generally occurs in a visual perception scenario. As shown in fig. 2, after the raw (Raw) image information acquired by the camera module is processed by the ISP, it undergoes geometric transformation for de-distortion or perspective transformation, and is then fed to a visual perception algorithm.
In this embodiment of the present application, the image processing apparatus may be an application program, which may be installed or run in a chip or a component of a vehicle, or on an intelligent device on the vehicle such as a mobile phone or a tablet computer. Alternatively, the image processing apparatus may be a software module, which may be disposed in an electronic control unit (ECU) of the vehicle. Alternatively, the image processing apparatus may be a newly added hardware module in the vehicle, where the hardware module may be configured with related judgment logic or algorithms, may serve as one ECU in the vehicle, and may exchange information with other ECUs through the automobile bus, so as to implement driving control of the vehicle.
The image processing method according to the embodiment of the present application is described below with reference to the drawings.
Fig. 3 is a flowchart of an image processing method according to an embodiment of the present application. Wherein the method may be implemented by an image processing device, which may be deployed in a vehicle. As shown in fig. 3, the method may include the steps of:
step 301, partitioning an image to be generated to obtain N first regions, where N is an integer greater than 1.
In step 301, there are various ways of partitioning the image to be generated: the image may be uniformly divided into N first areas, i.e. the shapes and sizes of the N first areas are consistent, or non-uniform partitioning may be performed, i.e. the shapes or sizes of the N first areas are not consistent. It should be understood that in a specific implementation process, the partitioning may be performed according to actual needs, and the specific number of partitions is not limited.
For example, taking uniform partitioning of the image to be generated as an example, as shown in fig. 4, the image to be generated 410 is divided into 16 rectangular areas. The rectangular shape is not a limitation on the partitioning manner in the embodiments of the present application; in a specific implementation, the first areas in step 301 may also have other shapes, for example triangles, hexagons, or other shapes.
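As a sketch of the uniform case (the helper name and the policy of letting the last row/column absorb remainder pixels are assumptions, not specified by the embodiment), partitioning an image into a grid of rectangular first areas might look like:

```python
def partition_uniform(width, height, cols, rows):
    """Split a width x height image into cols*rows rectangular first
    areas, returned as (x0, y0, w, h) tuples (illustrative helper)."""
    bw, bh = width // cols, height // rows
    regions = []
    for r in range(rows):
        for c in range(cols):
            # The last column/row absorbs any remainder pixels.
            w = bw if c < cols - 1 else width - bw * (cols - 1)
            h = bh if r < rows - 1 else height - bh * (rows - 1)
            regions.append((c * bw, r * bh, w, h))
    return regions

regions = partition_uniform(640, 480, 4, 4)  # 16 first areas, as in fig. 4
```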
Step 302, for each first region of the N first regions, determining, according to the first mapping relationship, a second set of pixel points in the original image, where the second set of pixel points corresponds to the first set of pixel points included in the first region.
The first pixel point set comprises a plurality of first pixel points, the second pixel point set comprises a plurality of second pixel points, and the plurality of second pixel points correspond to the plurality of first pixel points.
A first mapping relation exists between each first pixel point in the image to be generated and a second pixel point of the original image. The first mapping relation comprises a mapping relation between the identification of a first pixel point in the image to be generated and the coordinate value of the corresponding second pixel point in the original image under a first coordinate system, where the first coordinate system is a coordinate system taking the first vertex of the original image as origin. Taking a first pixel point A as an example of any first pixel point in the image to be generated: the coordinate value of the corresponding second pixel point B of the first pixel point A in the original image is determined according to the first mapping relation, the pixel value of the second pixel point B is acquired from the corresponding storage position according to the coordinate value of the second pixel point B in the original image, and the pixel value of the second pixel point B is filled into the position of the first pixel point A in the image to be generated, so that the image to be generated can be obtained.
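The per-pixel fill described above can be sketched as follows (the dict-based mapping and the nested-list images are illustrative assumptions; the embodiment does not prescribe these data structures):

```python
def fill_from_mapping(dst_shape, src, mapping):
    """Fill the image to be generated using a first mapping relation.

    mapping: dict from a destination pixel identification (y, x) to the
    coordinate (sy, sx) of the corresponding second pixel point in the
    original image src.
    """
    h, w = dst_shape
    dst = [[0] * w for _ in range(h)]
    for (y, x), (sy, sx) in mapping.items():
        dst[y][x] = src[sy][sx]  # read pixel value of B, fill position of A
    return dst

src = [[1, 2], [3, 4]]
mapping = {(0, 0): (1, 1), (0, 1): (1, 0), (1, 0): (0, 1), (1, 1): (0, 0)}
out = fill_from_mapping((2, 2), src, mapping)  # -> [[4, 3], [2, 1]]
```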
In step 302, one first region may correspond to one second set of pixels, and N first regions correspond to N second sets of pixels.
Step 303, determining the minimum bounding rectangle of all the second pixels in the second pixel set.
In this embodiment of the present application, a first pixel set corresponding to a first area corresponds to a second pixel set, and a second pixel set corresponds to a minimum bounding rectangle, where the shape and size of the minimum bounding rectangle are related to the selection of the first area and the first mapping relationship.
It should be understood that the second pixels in the second pixel set corresponding to the first region in the original image may be pixels in continuous positions, or may not be pixels in continuous positions, and all the second pixels in the second pixel set corresponding to the first region may be covered by a minimum circumscribed rectangle.
Taking the image to be generated shown in fig. 4 as an example, the corresponding original image is 510 as shown in fig. 5. Taking the first area 411 in fig. 4 as an example, the second pixel set corresponding to the first pixel set included in the first area 411 has a minimum bounding rectangle, shown as minimum bounding rectangle 511 in fig. 5.
It should be noted that, for acquiring the bounding rectangle in the original image, different geometric transformations have different acquisition methods. For example, for perspective transformation, the bounding rectangle can be determined by the positions, in the original image, of the four vertices of a rectangular area obtained by dividing the image to be generated. For example, for lens distortion, the bounding rectangle can be determined by the outline of the shape formed in the original image by the second pixels corresponding to the first pixels at the edge of the first area, without regard to the first pixels located inside the first area; this is not limited herein. For all geometric transformations, the coordinate values of all second pixel points included in the second pixel point set corresponding to the first region can be counted to find the maximum and minimum values of x and the maximum and minimum values of y, and the bounding rectangle is then determined according to these maximum and minimum values.
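The general min/max method works for any geometric transformation. A minimal sketch (function name and sample points are illustrative):

```python
def min_bounding_rect(points):
    """Minimum bounding rectangle of a second pixel set, found by
    scanning the x and y extremes of all its coordinate values.
    Returns (x_min, y_min, width, height); width/height count pixels
    inclusively."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x_min, y_min = min(xs), min(ys)
    return x_min, y_min, max(xs) - x_min + 1, max(ys) - y_min + 1

rect = min_bounding_rect([(500, 12), (400, 5), (450, 30)])
# -> (400, 5, 101, 26)
```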
Step 304, converting the coordinate value of each second pixel point included in the second pixel point set under the first coordinate system into the coordinate value under the second coordinate system.
The first coordinate system takes a first vertex of the original image as an origin, and the second coordinate system takes a second vertex of the minimum circumscribed rectangle as an origin. Optionally, the transverse axis of the second coordinate system is consistent with the transverse axis direction of the first coordinate system, and the longitudinal axis of the second coordinate system is consistent with the longitudinal axis direction of the first coordinate system.
Taking the original image shown in fig. 5 as an example, the first coordinate system xOy takes the top left corner vertex of the original image as origin and two perpendicularly intersecting sides of the original image as axes, and the second coordinate system x'O1'y' takes the top left corner vertex of the minimum bounding rectangle 511 as origin and two perpendicularly intersecting sides of the minimum bounding rectangle 511 as axes, wherein the x'-axis of the second coordinate system x'O1'y' coincides with the x-axis direction of the first coordinate system xOy, and the y'-axis of the second coordinate system x'O1'y' coincides with the y-axis direction of the first coordinate system xOy.
The coordinate value of the second pixel may be converted in step 304 to reduce the bit width required by the coordinate value of the second pixel, for example from 16 bits to 8 bits. For example, the origin (0, 0) of the second coordinate system x'O1'y' has the coordinate value (400, 5) in the first coordinate system xOy, and the second pixel point K in the minimum bounding rectangle 511 has the coordinate value (500, 12) in the first coordinate system xOy, so the second pixel point K has the coordinate value (100, 7) in the second coordinate system x'O1'y'. Thus, the coordinate value of the second pixel point K becomes a smaller value, so the bit width required by the coordinate value of the second pixel point K becomes smaller, and storage space can be saved.
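This conversion is a simple translation by the rectangle's origin. A minimal sketch reproducing the numbers from the example above (function name hypothetical):

```python
def to_second_coords(pixel, origin):
    """Re-express a second pixel's first-coordinate-system value relative
    to the minimum bounding rectangle's top-left vertex, shrinking the
    values and hence the bit width needed to store them."""
    return pixel[0] - origin[0], pixel[1] - origin[1]

# Origin O1' at (400, 5) in xOy; pixel K at (500, 12) in xOy.
print(to_second_coords((500, 12), (400, 5)))  # -> (100, 7)
```

Both components of the result fit in 8 bits (values below 256), whereas the original coordinate (500, 12) needs more than 8 bits for its x component.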
In an example of an embodiment of the present application, a second coordinate system may be constructed for the second pixel point set corresponding to each first region according to the minimum bounding rectangle corresponding to that first region, and the coordinate origins of the second coordinate systems corresponding to different second pixel point sets have different coordinate values in the first coordinate system. For example, the first region 411 corresponds to the minimum bounding rectangle 511, and for the plurality of second pixels in the second pixel set corresponding to the first region 411, a second coordinate system x'O1'y' with the top left corner vertex O1 of the minimum bounding rectangle 511 as origin may be constructed. For another example, the first region 412 corresponds to the minimum bounding rectangle 512, and for the plurality of second pixels in the second pixel set corresponding to the first region 412, a second coordinate system x'O2'y' with the top left corner vertex O2 of the minimum bounding rectangle 512 as origin may be constructed.
In step 305, a mapping table is generated according to the identifiers of the plurality of first pixels in the first pixel set included in the first region and the coordinate values of the plurality of second pixels in the second pixel set corresponding to the first region under the second coordinate system. The mapping table comprises a mapping relation between the identification of a first pixel point in the image to be generated and the coordinate value of a corresponding second pixel point in the original image under a second coordinate system.
In a possible implementation manner, taking the first area 411 and the minimum bounding rectangle 511 corresponding to the first area 411 as an example, a mapping table may be generated according to the identifications of the first pixel points included in the first area 411 and the coordinate values of the second pixel points included in the minimum bounding rectangle 511 in the second coordinate system x'O1'y' with the second vertex O1 of the minimum bounding rectangle 511 as origin. The image to be generated also comprises other first areas; from the identifications of the first pixel points included in those other first areas and the coordinate values, in their respective second coordinate systems x'On'y', of the second pixel points included in the minimum bounding rectangles corresponding to those other first areas, mapping tables may likewise be generated. The mapping tables correspondingly generated for the N first areas included in the image to be generated may constitute one large mapping table.
In this embodiment of the present application, when an image to be generated needs to be generated according to an original image, a pixel value of a second pixel corresponding to a first pixel may be found from the original image and used as the pixel value of the first pixel in the image to be generated. The pixel values of the second pixel points in the original image are stored in the storage address of the memory, and the pixel value of each second pixel point can be searched through the coordinate value of each second pixel point under the second coordinate system, and one possible implementation can be seen in step 306.
Step 306, for any first pixel point included in the first area, determining a first coordinate value of a second pixel point corresponding to the identification of the first pixel point in the second coordinate system according to the mapping table, and filling the first pixel point with a pixel value read from a storage address corresponding to the first coordinate value.
In this embodiment of the present application, N first areas are obtained by partitioning the image to be generated. For each first area, the coordinate value of each second pixel in the second pixel set corresponding to the first pixel set included in the first area is converted from the first coordinate system, which takes the first vertex of the original image as origin, into a coordinate value in the second coordinate system, which takes a vertex of the minimum bounding rectangle as origin; in this way the coordinate value of each second pixel can be represented by a smaller value. Then, a mapping table is generated according to the identifications of the plurality of first pixels in the first pixel set included in the first area and the coordinate values, under the second coordinate system, of the plurality of second pixels in the second pixel set corresponding to the first area. Compared with a mapping table generated directly from the coordinate values of those second pixels under the first coordinate system, the coordinate values in this mapping table are represented by smaller values, so the bit width required by the coordinate values is smaller, for example reduced from 16 bits to 8 bits, the space used for storing the mapping table can be reduced, and the subsequent image processing efficiency can be further improved.
In other embodiments, in step 305, the coordinate values of the second pixel point set corresponding to the first region in the second coordinate system may be converted into the index values, and the index values and the identification of the first pixel point included in the first region may be used to generate the mapping table. The coordinate values of all pixels in the mapping table are replaced by index values, so that the index calculation process can be omitted in actual calculation, and the image processing efficiency can be improved.
In this embodiment, the above step 305 may be implemented by: according to the coordinate values of a plurality of second pixel points in a second pixel point set corresponding to the first region under a second coordinate system and the width and height of the minimum circumscribed rectangle, index values corresponding to a plurality of second pixel points in the second pixel point set corresponding to the first region are determined, and according to the identification of a plurality of first pixel points in the first pixel point set included in the first region and index values corresponding to a plurality of second pixel points in the second pixel point set corresponding to the first region, a mapping table is generated.
Taking the minimum bounding rectangle 511 corresponding to the first area 411 as an example, determining coordinate values (x ', y') of the second pixel point set in the minimum bounding rectangle 511 under the second coordinate system, where the width of the minimum bounding rectangle 511 on the x 'axis is w, the height on the y' axis is h, and for any one of the second pixel points (x ', y') in the minimum bounding rectangle 511, the calculation formula of the index value is as follows:
index=y′*w+x′;
Wherein index is an index value, y ' is a y value under a second coordinate system corresponding to a minimum bounding rectangle to which the second pixel point belongs, x ' is an x value under the second coordinate system corresponding to the minimum bounding rectangle to which the second pixel point belongs, and w is a width of the minimum bounding rectangle to which the second pixel point belongs along the x ' axis direction.
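A minimal sketch of this index calculation (function name hypothetical; the width w = 101 for rectangle 511 is an assumed value for illustration, since the text does not give it):

```python
def index_value(x2, y2, w):
    """index = y' * w + x' for a second pixel at (x', y') in the second
    coordinate system, where w is the minimum bounding rectangle's width
    along the x'-axis."""
    return y2 * w + x2

# Pixel K at (100, 7) in a rectangle of assumed width 101:
print(index_value(100, 7, 101))  # -> 807 (i.e. 7 * 101 + 100)
```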
By the implementation manner, the index value in the mapping table is smaller, the bit width required by the index value in the mapping table is reduced, for example, from 16 bits to 8 bits, so that the memory space occupied by the mapping table is smaller.
In this embodiment, when the image to be generated needs to be generated according to the original image, the pixel value of the second pixel corresponding to a first pixel may be found from the original image and used as the pixel value of that first pixel in the image to be generated. The pixel value of each second pixel in the original image is stored at a storage address in a memory, and the pixel value of each second pixel may be found through the index value corresponding to that second pixel. In a possible implementation manner, step 306 may be implemented in the following manner: for any first pixel point included in the first area, a first index value corresponding to the second pixel point corresponding to the identification of the first pixel point is determined according to the mapping table, and the first pixel point is filled with the pixel value read from the storage address corresponding to the first index value.
When the geometric transformation operation is actually performed, each minimum bounding rectangle is carried into a new memory according to the recorded top left corner coordinates, width and height of the minimum bounding rectangle corresponding to each region in the original image, for example using DMA (direct memory access). In the original image, the pixel values corresponding to the second pixel point set included in one bounding rectangle are not stored contiguously in memory; carrying the minimum bounding rectangle using DMA ensures that all pixel values corresponding to the second pixel point set included in that bounding rectangle are stored contiguously in memory. At this point, a Gather operation can read data in parallel using the index values in the newly generated mapping table.
In one possible implementation manner, the image to be generated is partitioned to obtain N first areas, and the maximum bit width of the index values corresponding to the plurality of second pixels in the second pixel set corresponding to each of the N first areas may be required to meet a certain condition, so that the bit width of the index values corresponding to the second pixels becomes smaller, in order to meet the bit width requirement of a vector register or to increase the number of data items read in parallel. For example, if the index value corresponding to a second pixel point, determined according to the coordinate value of the second pixel point in the original image under the first coordinate system and the width and height of the original image, requires a first preset value of bits, then the maximum bit width of the index values corresponding to the plurality of second pixel points in the second pixel set corresponding to the first area should be smaller than the first preset value. Taking the first preset value as 32 bits as an example, the maximum bit width is smaller than 32 bits, for example 16 bits. The maximum bit width can be calculated from w*h.
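Since the largest possible index in a w x h rectangle is w*h - 1, the maximum bit width follows directly from w*h. A small sketch (function name assumed):

```python
def index_bit_width(w, h):
    """Maximum bit width needed for index values in a w x h minimum
    bounding rectangle: enough bits to represent w*h - 1."""
    return max(1, (w * h - 1).bit_length())

# An assumed 256 x 128 rectangle needs 15-bit indices,
# which satisfies a 32-bit (or 16-bit) first preset value.
print(index_bit_width(256, 128))  # -> 15
```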
In specific implementation, the method can be completed by one-time blocking, or can be completed by multiple times of blocking. The following describes the blocking process of the image to be generated.
Fig. 6 is a schematic diagram of a blocking process of an image to be generated according to an embodiment of the present application. As shown in fig. 6, the blocking process includes the steps of:
In step 601, the image to be generated is partitioned to obtain a plurality of regions.
Step 602, obtaining a minimum bounding rectangle of a corresponding second pixel point set of the first pixel point set included in each region in the original image.
Step 603, if the index value corresponding to a second pixel point, determined according to the coordinate value of the second pixel point in the original image in the first coordinate system and the width and height of the original image, requires a first preset value of bits, determine whether the maximum bit width of the index values corresponding to the plurality of second pixel points in the second pixel point set corresponding to each region is smaller than the first preset value. If yes, execute step 604, and perform no further partitioning; if not, execute step 601 and further partition the image to be generated, until the maximum bit width of the index values corresponding to the second pixel points included in the minimum bounding rectangle corresponding to each region is smaller than the first preset value, at which point the partitioning stops. The first preset value may include, but is not limited to, 32 bits, 16 bits, etc.; for example, if the first preset value is 32 bits and the maximum bit width of the index values is 16 bits, which is smaller, the partitioning is stopped.
Here, if the determination result in step 603 is no, the image to be generated may be re-segmented, or the segmented regions may be segmented to obtain smaller regions.
Step 604, converting the coordinate value of each second pixel point in the second pixel point set corresponding to each region under the first coordinate system into the coordinate value under the second coordinate system, and then generating the mapping table.
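The loop of fig. 6 can be sketched as follows. The grid-doubling refinement policy and the callback returning each region's bounding-rectangle size are assumptions made for illustration; the embodiment only requires re-partitioning until the bit-width condition holds:

```python
def partition_until_fits(width, height, bit_limit, rect_for_region):
    """Sketch of the fig. 6 loop: keep partitioning the image to be
    generated more finely until every region's minimum bounding
    rectangle yields index values narrower than bit_limit.

    rect_for_region(x0, y0, w, h) is an assumed callback returning the
    (w, h) of the minimum bounding rectangle for a region."""
    cols = rows = 2
    while True:
        bw, bh = width // cols, height // rows
        rects = [rect_for_region(c * bw, r * bh, bw, bh)
                 for r in range(rows) for c in range(cols)]
        max_bits = max((w * h - 1).bit_length() for w, h in rects)
        if max_bits < bit_limit:
            return cols, rows           # step 604: stop partitioning
        cols *= 2
        rows *= 2                       # step 601: partition more finely

# Identity transform: each region's rectangle is the region itself.
grid = partition_until_fits(1024, 1024, 16, lambda x, y, w, h: (w, h))
print(grid)  # -> (8, 8): 128x128 regions need only 14-bit indices
```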
In this embodiment of the present application, when a pixel value needs to be read from a storage address corresponding to the original image, the pixel value may be read from the memory block corresponding to the first area. In one possible implementation manner, the memory block corresponding to the first area is determined according to the start address corresponding to the first area. For any first pixel point in the first pixel point set included in the first area, the first index value corresponding to the second pixel point corresponding to the identification of the first pixel point is determined according to the mapping table, the pixel value of that second pixel point is read from the storage address, within the memory block corresponding to the first area, that corresponds to the first index value, and that pixel value is then filled into the first pixel point. According to this embodiment, after the start address corresponding to the first area and the index value corresponding to the second pixel corresponding to the identification of any first pixel included in the first area are obtained, the pixel value of that second pixel can be read from the memory block corresponding to the first area, so that the pixel value for filling the first pixel can be obtained quickly.
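The block-relative read can be sketched with a flat-memory model (all names, the memory layout, and the pixel identifications are illustrative assumptions, not the actual hardware path):

```python
def gather_fill(region_pixels, mapping_table, block_base, memory):
    """Read pixel values for one first area: block_base is the start
    address of the area's memory block (the DMA-copied bounding
    rectangle, stored contiguously), and mapping_table maps first-pixel
    identifications to index values within that block."""
    return {pid: memory[block_base + mapping_table[pid]]
            for pid in region_pixels}

memory = [0] * 100
memory[10:14] = [9, 8, 7, 6]          # the area's block starts at address 10
table = {"p0": 0, "p1": 3}            # index values from the mapping table
print(gather_fill(["p0", "p1"], table, 10, memory))  # -> {'p0': 9, 'p1': 6}
```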
In the above embodiment, by partitioning the image to be generated so that the pixels in the mapping table are represented by smaller index values, the bit width required for the index values of the second pixel points in the mapping table becomes smaller, for example reduced from 16 bits to 8 bits, so that the space used for storing the mapping table can be reduced; in addition, reducing the bit width of the index values raises the upper limit on the number of data items read in parallel, thereby improving the image processing efficiency.
Fig. 7 is a flowchart of another image processing method according to an embodiment of the present application. The method may be implemented by an image processing device, which may be deployed in a vehicle. As shown in fig. 7, the method may include the steps of:
In step 701, the image to be generated is partitioned to obtain N first regions, where N is an integer greater than 1.
In step 701, there are various ways of partitioning the image to be generated: the image may be uniformly divided into N first areas, i.e. the shapes and sizes of the N first areas are consistent, or non-uniform partitioning may be performed, i.e. the shapes or sizes of the N first areas are not consistent. It should be understood that in a specific implementation process, the partitioning may be performed according to actual needs, and the specific number of partitions is not limited.
For example, taking uniform partitioning of the image to be generated as an example, as shown in fig. 8, the image to be generated 810 is divided into 16 first rectangular areas. The rectangular shape is not a limitation on the partitioning manner in the embodiments of the present application; in a specific implementation, the first areas in step 701 may also have other shapes, for example triangles, hexagons, or other shapes.
In step 702, for each of the N first regions, a coordinate system transformation relationship is determined according to a third coordinate system that takes a third vertex of the first region as origin and a fourth coordinate system that takes a fourth vertex of the image to be generated as origin, where the coordinate system transformation relationship is used to transform between the coordinate values of a first pixel point under the fourth coordinate system and the coordinate values of that first pixel point under the third coordinate system.
In step 703, a second coordinate transformation relationship is generated based on the first coordinate transformation relationship and the coordinate system transformation relationship.
The first coordinate transformation relation is used for transforming coordinate values of a first pixel point in the image to be generated under a fourth coordinate system and coordinate values of a second pixel point in the original image under the first coordinate system; the second coordinate transformation relation is used for transforming coordinate values of the first pixel points included in the first rectangular area under the third coordinate system and coordinate values of the second pixel points in the original image under the first coordinate system; the first coordinate system is a coordinate system with the first vertex of the original image as the origin.
Specifically, the coordinate value of the first pixel point C in the fourth coordinate system may be converted into the coordinate value of the second pixel point C 'corresponding to the first pixel point C in the original image in the first coordinate system by the first coordinate conversion relationship, or the coordinate value of the first pixel point C in the third coordinate may be converted into the coordinate value of the second pixel point C' corresponding to the first pixel point C in the original image in the first coordinate system by the second coordinate conversion relationship. Here, the second pixel point C 'corresponding to the first pixel point C may be understood as a second pixel point C' that satisfies a first mapping relationship with the first pixel point C, where the first mapping relationship includes a mapping relationship between an identification of the first pixel point in the image to be generated and a coordinate value of the corresponding second pixel point in the original image in a first coordinate system, where the first coordinate system is a coordinate system with the first vertex of the original image as an origin.
As shown in fig. 8, the image to be generated is divided into 16 rectangular areas, and each rectangular area has a corresponding second coordinate mapping relationship, so that coordinate values of a first pixel point in the rectangular area, under a coordinate system with the upper left corner vertex of that rectangular area as the origin, can be converted into coordinate values of the corresponding second pixel point in the original image under the first coordinate system. The 16 rectangular areas correspond to 16 different second coordinate mapping relationships; in one example, rectangular area 811 corresponds to second coordinate mapping relationship 1, e.g., equation B, and rectangular area 812 corresponds to second coordinate mapping relationship 2, e.g., equation C.
Taking the rectangular area 811 in the image to be generated in fig. 8 as an example of the first area, the fourth coordinate system is the XOY coordinate system with the top left corner of the image to be generated as the origin, and the third coordinate system corresponding to the first area 811 is the X'O'Y' coordinate system with the top left corner of the first area 811 as the origin. Specific examples of the coordinate system transformation relationship, the first coordinate transformation relationship, and the second coordinate transformation relationship are given below, taking fig. 8 as an example.
Illustratively, the coordinate system transformation relationship between the third coordinate system X'O'Y' and the fourth coordinate system XOY is as shown in the following formula (1):

    x_1 = x'_1 + u,  y_1 = y'_1 + v    (1)

In the above formula (1), x'_1 is the abscissa of a first pixel point in the first region 811 under the third coordinate system X'O'Y', y'_1 is the ordinate of the first pixel point in the first region 811 under the third coordinate system X'O'Y', x_1 is the abscissa of the first pixel point in the first region 811 under the fourth coordinate system, y_1 is the ordinate of the first pixel point in the first region 811 under the fourth coordinate system, u is the abscissa offset between the third coordinate system X'O'Y' and the fourth coordinate system XOY, and v is the ordinate offset between the third coordinate system X'O'Y' and the fourth coordinate system XOY.
The first coordinate transformation relationship is as shown in the following formula (2):

    s·[x_2, y_2, 1]^T = [[H_00, H_01, H_02], [H_10, H_11, H_12], [H_20, H_21, H_22]]·[x_1, y_1, 1]^T    (2)

The above formula (2) can be used to transform the coordinate value (x_1, y_1) of a first pixel point in the image to be generated under the fourth coordinate system into the coordinate value (x_2, y_2) of the corresponding second pixel point in the original image under the first coordinate system, where H_00, H_01, H_02, H_10, H_11, H_12, H_20, H_21 and H_22 are constants and s is a scale factor.
The second coordinate transformation relationship is as shown in the following formula (3):

    s·[x_2, y_2, 1]^T = [[H'_00, H'_01, H'_02], [H'_10, H'_11, H'_12], [H'_20, H'_21, H'_22]]·[x'_1, y'_1, 1]^T    (3)

The above formula (3) can be used to transform the coordinate value (x'_1, y'_1) of a first pixel point in the image to be generated under the third coordinate system into the coordinate value (x_2, y_2) of the corresponding second pixel point in the original image under the first coordinate system, where H'_00 through H'_22 are constants obtained by substituting formula (1) into formula (2) (for example, H'_02 = H_00·u + H_01·v + H_02), and s is a scale factor.
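As a minimal NumPy sketch (not part of the patent) of how the second coordinate transformation relationship can be generated from the first coordinate transformation relationship and the coordinate system transformation relationship, assuming the first transformation is a 3x3 homography H and the coordinate system transformation is the translation by the offsets (u, v); all numeric values below are made-up examples:

```python
import numpy as np

def compose_second_transform(H, u, v):
    """Compose the first coordinate transformation H with the coordinate-system
    translation (u, v) to obtain the second coordinate transformation H' = H @ T."""
    T = np.array([[1.0, 0.0, u],
                  [0.0, 1.0, v],
                  [0.0, 0.0, 1.0]])
    return H @ T

def apply_homography(H, x, y):
    """Map (x, y) through a 3x3 homography, dividing by the scale factor s."""
    sx, sy, s = H @ np.array([x, y, 1.0])
    return sx / s, sy / s

# Illustrative values only: an identity-like H with a small shift.
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])
u, v = 128.0, 256.0  # assumed offsets of region 811's origin in the XOY system
H2 = compose_second_transform(H, u, v)

# A first pixel point at (10, 20) in the region's X'O'Y' system sits at
# (10 + u, 20 + v) in XOY; both routes must yield the same second pixel point.
assert apply_homography(H2, 10.0, 20.0) == apply_homography(H, 10.0 + u, 20.0 + v)
```

Because the composition is a single 3x3 matrix product, it can be done once per region offline; per-pixel work then uses only the smaller region-local coordinates.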
Step 704, for any first pixel point included in the first area, determining a second coordinate value of a second pixel point corresponding to the first pixel point in the first coordinate system according to the second coordinate transformation relationship and the coordinate value of the first pixel point in the third coordinate system, and filling the first pixel point with the pixel value read from the storage address corresponding to the second coordinate value.
For the specific implementation of step 704, reference may be made to the relevant description of step 306, which is not repeated here.
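A hypothetical sketch of step 704: for each first pixel point of a first region (coordinates in the region's own third coordinate system), map it through the second coordinate transformation to a second pixel point in the original image and copy that pixel value. The nearest-neighbour rounding, transform values, and image sizes are assumptions for illustration:

```python
import numpy as np

def fill_region(region_h, region_w, H2, original):
    """Fill a first region by reading, for each first pixel point, the pixel
    value of the corresponding second pixel point in the original image."""
    out = np.zeros((region_h, region_w), dtype=original.dtype)
    for yp in range(region_h):
        for xp in range(region_w):
            sx, sy, s = H2 @ np.array([xp, yp, 1.0])
            x2, y2 = int(round(sx / s)), int(round(sy / s))
            # Reading "from the storage address corresponding to the
            # coordinate value" reduces here to a plain array access.
            out[yp, xp] = original[y2, x2]
    return out

original = np.arange(64, dtype=np.uint8).reshape(8, 8)
H2 = np.array([[1.0, 0.0, 2.0],   # made-up second transform: shift by (2, 3)
               [0.0, 1.0, 3.0],
               [0.0, 0.0, 1.0]])
region = fill_region(4, 4, H2, original)
assert region[0, 0] == original[3, 2]
```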
By the above method, after the image to be generated is divided into blocks, the plurality of first pixel points in the first pixel point set included in each first region are represented by coordinate values under the third coordinate system, so the coordinate values (x', y') of the first pixel points are smaller. The coordinate values of the first pixel points under the third coordinate system are transformed through the second coordinate transformation relationship, instead of the coordinate values under the coordinate system with the fourth vertex of the image to be generated as the origin being transformed through the first coordinate transformation relationship. Because the coordinate values of the first pixel points are represented by smaller values during the coordinate transformation using the second coordinate transformation relationship in the embodiment of the application, the calculation accuracy can be improved in subsequent processing; in particular, the accuracy error of multiplication operations can be reduced. For example, if the maximum coordinate value used in the calculation is reduced from the maximum coordinate value of the whole image to be generated to 128 within a first region, the maximum error of the multiplication operations can be reduced, for example, by 8 times.
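The accuracy point can be illustrated with a small fixed-point sketch. The 8-fractional-bit quantization and the coefficient value are arbitrary assumptions, not from the patent:

```python
def quantize(value, frac_bits=8):
    """Round a coefficient onto a fixed-point grid with `frac_bits` fractional bits."""
    step = 1.0 / (1 << frac_bits)
    return round(value / step) * step

h = 0.7071067811865476          # an example transform coefficient
err = abs(quantize(h) - h)      # quantization error of the coefficient itself

# The absolute error of h*x scales with |x|: capping per-region coordinates
# at 128 instead of a whole-image coordinate of 1024 shrinks the worst-case
# multiplication error proportionally (8x in this example).
assert abs(quantize(h) * 1024 - h * 1024) == err * 1024
assert err * 1024 / (err * 128) == 8.0
```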
In addition to the autonomous driving field, the image processing method described above can also be applied to fields such as augmented reality (AR) and intelligent robot navigation.
Based on the above embodiments and the same conception, the embodiments of the present application further provide an image processing apparatus, which is configured to execute the method executed by the image processing apparatus in the above embodiments, and relevant features may be referred to the above method embodiments, which are not described herein again. The image processing apparatus may be an in-vehicle apparatus, or may be a chip or a circuit, for example, a chip or a circuit that may be provided in the in-vehicle apparatus.
As shown in fig. 9, the image processing apparatus 900 may include:
the blocking unit 901 is configured to block an image to be generated to obtain N first areas, where N is an integer greater than 1;
a determining unit 902, configured to perform, for each of the N first areas: determining a second pixel point set corresponding to a first pixel point set included in the first region in the original image according to the first mapping relation, and determining the minimum circumscribed rectangle of all second pixel points in the second pixel point set; the first pixel point set comprises a plurality of first pixel points, the second pixel point set comprises a plurality of second pixel points, the plurality of second pixel points correspond to the plurality of first pixel points, the first mapping relation comprises a mapping relation between the identification of the first pixel points in the image to be generated and the coordinate values of the corresponding second pixel points in the original image under a first coordinate system, and the first coordinate system is a coordinate system taking the first vertex of the original image as an origin;
a conversion unit 903, configured to convert coordinate values of each second pixel point included in the second pixel point set under the first coordinate system into coordinate values under a second coordinate system, where the second coordinate system is a coordinate system with a second vertex of the minimum circumscribed rectangle as an origin;
a generating unit 904, configured to generate a mapping table according to the identifications of a plurality of first pixel points in the first pixel point set included in the first area and the coordinate values, under the second coordinate system, of a plurality of second pixel points in the second pixel point set corresponding to the first pixel point set included in the first area; the mapping table comprises a mapping relation between the identification of a first pixel point in the image to be generated and the coordinate value of the corresponding second pixel point in the original image under the second coordinate system;
the determining unit 902 is further configured to determine, for an identifier of any first pixel point included in the first area, a first coordinate value of a second pixel point corresponding to the identifier of the first pixel point in a second coordinate system according to the mapping table;
and a filling unit 905 for filling the first pixel point with the pixel value read from the storage address corresponding to the first coordinate value.
In a possible implementation manner, the determining unit 902 is specifically configured to: determine, according to the coordinate values, under the second coordinate system, of the plurality of second pixel points in the second pixel point set corresponding to the first region and the width and height of the minimum circumscribed rectangle, index values corresponding to the plurality of second pixel points in the second pixel point set corresponding to the first region. The generating unit 904 is specifically configured to: generate the mapping table according to the identifications of the plurality of first pixel points in the first pixel point set included in the first region and the index values corresponding to the plurality of second pixel points in the second pixel point set corresponding to the first pixel point set included in the first region; the mapping table comprises a mapping relation between the identification of a first pixel point in the image to be generated and the index value corresponding to the corresponding second pixel point in the original image. The determining unit 902 is further configured to: for the identification of any first pixel point included in the first area, determine, according to the mapping table, a first index value corresponding to the second pixel point corresponding to the identification of the first pixel point. The filling unit 905 is configured to: fill the first pixel point with the pixel value read from the storage address corresponding to the first index value.
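A minimal sketch of the index-value variant described above, assuming row-major index values within the minimum circumscribed rectangle (index = y·w + x); the identifiers, rectangle size, and coordinates are invented for illustration:

```python
def build_index_mapping(second_coords, rect_w):
    """Turn a per-region mapping (first pixel point id -> second pixel point
    coordinate in the minimum circumscribed rectangle's coordinate system)
    into a mapping table of row-major index values: index = y * rect_w + x."""
    return {pid: y * rect_w + x for pid, (x, y) in second_coords.items()}

# Made-up example: a 16x16 minimum circumscribed rectangle.
coords = {0: (3, 2), 1: (4, 2), 2: (3, 3)}
table = build_index_mapping(coords, rect_w=16)
assert table == {0: 35, 1: 36, 2: 51}

# Index values stay below rect_w * rect_h (256 here, i.e. at most 8 bits),
# smaller than whole-image index values (e.g. 21 bits for a 1920x1080 image).
assert max(table.values()).bit_length() <= (16 * 16 - 1).bit_length()
```

The smaller bit width of the per-rectangle index values is exactly the property stated in the next paragraph.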
In a possible implementation manner, if it is determined, according to the coordinate values of the second pixel points in the original image under the first coordinate system and the width and height of the original image, that the maximum bit width of the index values corresponding to the second pixel points in the original image is a first preset value, then the maximum bit width of the index values corresponding to the plurality of second pixel points in the second pixel point set corresponding to the first pixel point set included in the first area is smaller than the first preset value.
In a possible implementation manner, the pixel values of the plurality of second pixel points in the second pixel point sets corresponding to the first pixel point sets included in the N first areas are respectively stored in N memory blocks, where the pixel values of the plurality of second pixel points in each second pixel point set are stored in the same memory block.
In a possible implementation manner, the determining unit 902 is specifically configured to: determining a memory block corresponding to the first area according to the initial address corresponding to the first area; for any first pixel point included in the first area, determining a first index value corresponding to a second pixel point corresponding to the identification of the first pixel point according to a mapping table; the image processing apparatus further includes a reading unit 906 for: reading a pixel value from a storage address which corresponds to the first index value and is in a memory block corresponding to the first region; a filling unit 905, specifically for: and filling the first pixel point by using the read pixel value.
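A hypothetical sketch of the per-region read path: the memory-block layout, address values, and lookup structure below are invented for illustration; real hardware would use actual memory addressing rather than Python dictionaries:

```python
def read_pixel(memory_blocks, base_addresses, region_id, index_value):
    """Locate the memory block of a first region via the region's initial
    (base) address, then read the pixel value stored at the offset
    corresponding to the first index value."""
    block = memory_blocks[base_addresses[region_id]]
    return block[index_value]

# Made-up layout: two first regions, each with its own block of pixel values.
memory_blocks = {0x1000: [10, 11, 12], 0x2000: [20, 21, 22]}
base_addresses = {0: 0x1000, 1: 0x2000}
assert read_pixel(memory_blocks, base_addresses, 1, 2) == 22
```

Keeping each region's second pixel values in one block means the small per-rectangle index value alone suffices once the region's base address is known.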
The embodiment of the present application further provides an image processing apparatus, which is configured to execute the method executed by the image processing apparatus in the above embodiment of the method, and related features may be referred to the above embodiment of the method, which is not described herein again.
As shown in fig. 10, the image processing apparatus 1000 may include a blocking unit 1001, a determination unit 1002, a generation unit 1003, and a filling unit 1004. Wherein:
a partitioning unit 1001, configured to partition an image to be generated to obtain N first areas, where N is an integer greater than 1;
a determining unit 1002, configured to perform, for each of the N first areas: determining a coordinate system transformation relation according to a third coordinate system taking a third vertex of the first area as an origin and a fourth coordinate system taking a fourth vertex of the image to be generated as the origin, wherein the coordinate system transformation relation is used for transforming between the coordinate value of a first pixel point under the fourth coordinate system and the coordinate value of the first pixel point under the third coordinate system;
a generating unit 1003, configured to generate a second coordinate transformation relationship according to the first coordinate transformation relationship and the coordinate system transformation relationship; the first coordinate transformation relation is used for transforming between coordinate values of a first pixel point in the image to be generated under the fourth coordinate system and coordinate values of the corresponding second pixel point in the original image under the first coordinate system; the second coordinate transformation relation is used for transforming between coordinate values of the first pixel points included in the first rectangular area under the third coordinate system and coordinate values of the corresponding second pixel points in the original image under the first coordinate system; the first coordinate system is a coordinate system taking a first vertex of the original image as an origin;
The determining unit 1002 is further configured to determine, for any first pixel point included in the first area, a second coordinate value of a second pixel point corresponding to the first pixel point in the first coordinate system according to the second coordinate transformation relationship and the coordinate value of the first pixel point in the third coordinate system;
and a filling unit 1004 for filling the first pixel point with the pixel value read from the storage address corresponding to the second coordinate value.
It should be noted that, in the embodiment of the present application, the division of the units is schematic, which is merely a logic function division, and other division manners may be implemented in actual practice. The functional units in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.
In one embodiment, the image processing apparatus in the above embodiment may take the form shown in fig. 11.
The image processing apparatus 1100 shown in fig. 11 includes at least one processor 1110 and a memory 1120, and the specific connection medium between the processor 1110 and the memory 1120 is not limited in the embodiment of the present application. Optionally, the image processing apparatus may further include a communication interface 1130, and the processor 1110 may perform data transmission through the communication interface 1130 when communicating with other devices.
When the image processing apparatus takes the form shown in fig. 11, the processor 1110 in fig. 11 may cause the image processing apparatus 1100 to execute the method executed by the image processing apparatus in any of the above-described method embodiments by calling the computer-executable instructions stored in the memory 1120. It should be understood that the processors mentioned in the embodiments of the present application may be implemented by hardware or may be implemented by software. When implemented in hardware, the processor may be a logic circuit, an integrated circuit, or the like. When implemented in software, the processor may be a general purpose processor, implemented by reading software code stored in a memory.
By way of example, the processor may be a central processing unit (Central Processing Unit, CPU), but may also be another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
It should be understood that the memories mentioned in the embodiments of the present application may be volatile memories or nonvolatile memories, or may include both volatile and nonvolatile memories. The nonvolatile memory may be a read-only memory (Read-Only Memory, ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (Random Access Memory, RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate synchronous DRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DR RAM).
It should be noted that when the processor is a general purpose processor, DSP, ASIC, FPGA or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, the memory (storage module) may be integrated into the processor.
It should be noted that the memory described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
According to the method provided by the embodiment of the application, the application further provides a computer program product, which comprises: computer program code or instructions which, when run on a computer, cause the computer to perform the method of any of the embodiments shown in fig. 3, 6 or 7.
According to the method provided in the embodiments of the present application, there is further provided a computer readable storage medium storing a program code, which when run on a computer, causes the computer to perform the method of any one of the embodiments shown in fig. 3, 6 or 7.
According to the method provided by the embodiment of the application, the application further provides a chip system, and the chip system can comprise a processor. The processor is coupled to the memory and is operable to perform the method of any of the embodiments shown in fig. 3, 6 or 7. Optionally, the system on a chip further comprises a memory. Memory for storing a computer program (which may also be referred to as code, or instructions). A processor for calling and running a computer program from a memory, causing a device on which the chip system is installed to perform the method of any of the embodiments shown in fig. 3, 6 or 7.
In the above embodiments, the implementation may be in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line (Digital Subscriber Line, DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a high-density digital video disc (Digital Video Disc, DVD)), or a semiconductor medium (e.g., a solid state disk (Solid State Disk, SSD)), or the like.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.

Claims (14)

1. An image processing method, comprising:
partitioning an image to be generated to obtain N first areas, wherein N is an integer greater than 1;
for each of the N first regions, performing:
determining a second pixel point set corresponding to a first pixel point set included in the first area in an original image according to a first mapping relation, wherein the first pixel point set comprises a plurality of first pixel points, the second pixel point set comprises a plurality of second pixel points, the plurality of second pixel points correspond to the plurality of first pixel points, the first mapping relation comprises a mapping relation between an identification of the first pixel point in the image to be generated and a coordinate value of the corresponding second pixel point in the original image under a first coordinate system, and the first coordinate system is a coordinate system taking a first vertex of the original image as an origin;
determining the minimum circumscribed rectangle of all second pixel points in the second pixel point set;
converting coordinate values of each second pixel point included in the second pixel point set under a first coordinate system into coordinate values under a second coordinate system, wherein the second coordinate system is a coordinate system taking a second vertex of the minimum circumscribed rectangle as an origin;
generating a mapping table according to the identifications of a plurality of first pixel points in a first pixel point set included in the first region and the coordinate values of a plurality of second pixel points in a second pixel point set corresponding to the first pixel point set included in the first region under a second coordinate system; the mapping table comprises a mapping relation between the identification of the first pixel point in the image to be generated and the coordinate value of the corresponding second pixel point in the original image under the second coordinate system;
and determining a first coordinate value of a second pixel point corresponding to the identification of the first pixel point in the second coordinate system according to the mapping table aiming at the identification of any first pixel point in the first area, and filling the first pixel point by using the pixel value read from the storage address corresponding to the first coordinate value.
2. The method according to claim 1, wherein the generating the mapping table according to the identifications of the plurality of first pixel points in the first pixel point set included in the first region and the coordinate values, under the second coordinate system, of the plurality of second pixel points in the second pixel point set corresponding to the first pixel point set included in the first region comprises:

determining, according to the coordinate values, under the second coordinate system, of the plurality of second pixel points in the second pixel point set corresponding to the first pixel point set included in the first region and the width and height of the minimum circumscribed rectangle, index values corresponding to the plurality of second pixel points in the second pixel point set;
generating the mapping table according to the identifications of a plurality of first pixel points in the first pixel point set included in the first region and index values corresponding to a plurality of second pixel points in the second pixel point set;
the determining, for any first pixel point included in the first area, a first coordinate value of a second pixel point corresponding to an identifier of the first pixel point in the second coordinate system according to the mapping table, and filling the first pixel point with a pixel value read from a storage address corresponding to the first coordinate value, including:
and for any first pixel point included in the first area, determining a first index value corresponding to a second pixel point corresponding to the identification of the first pixel point according to the mapping table, and filling the first pixel point by using the pixel value read from the storage address corresponding to the first index value.
3. The method according to claim 2, wherein if it is determined that a maximum value of bit widths of index values corresponding to second pixel points in the original image is a first preset value according to coordinate values of the second pixel points in the original image in the first coordinate system and widths and heights of the original image, the maximum value of bit widths of index values corresponding to the second pixel points in the second pixel point set corresponding to the first pixel point set included in the first region is smaller than the first preset value.
4. The method according to claim 2, wherein pixel values of the plurality of second pixel points in the second pixel point sets corresponding to the first pixel point sets included in the N first regions are respectively stored in N memory blocks, and wherein the pixel values of the plurality of second pixel points in each second pixel point set are stored in the same memory block.
5. The method according to claim 2, 3 or 4, wherein determining, for any first pixel included in the first area, a first index value corresponding to a second pixel corresponding to an identification of the first pixel according to the mapping table, and filling the first pixel with a pixel value read from a storage address corresponding to the first index value, includes:
Determining a memory block corresponding to the first area according to the initial address corresponding to the first area;
for any first pixel point included in the first area, determining a first index value corresponding to a second pixel point corresponding to the identification of the first pixel point according to the mapping table, and reading the pixel value from a memory block corresponding to the first area and a storage address corresponding to the first index value;
and filling any first pixel point by using the read pixel value.
6. An image processing method, comprising:
partitioning an image to be generated to obtain N first areas, wherein N is an integer greater than 1;
for each of the N first regions, performing:
determining a coordinate system transformation relation according to a third coordinate system taking a third vertex of the first area as an origin and a fourth coordinate system taking a fourth vertex of the image to be generated as the origin, wherein the coordinate system transformation relation is used for transforming between a coordinate value of a first pixel point under the fourth coordinate system and a coordinate value of the first pixel point under the third coordinate system;
generating a second coordinate transformation relation according to a first coordinate transformation relation and the coordinate system transformation relation; the first coordinate transformation relation is used for transforming between coordinate values of a first pixel point in the image to be generated under the fourth coordinate system and coordinate values of the corresponding second pixel point in an original image under a first coordinate system; the second coordinate transformation relation is used for transforming between coordinate values of the first pixel points included in the first rectangular area under the third coordinate system and coordinate values of the corresponding second pixel points in the original image under the first coordinate system; the first coordinate system is a coordinate system taking a first vertex of the original image as an origin;
for any first pixel point included in the first area, determining a second coordinate value of a second pixel point corresponding to the first pixel point in the first coordinate system according to the second coordinate transformation relation and the coordinate value of the first pixel point in the third coordinate system, and filling the first pixel point with the pixel value read from a storage address corresponding to the second coordinate value.
7. An image processing apparatus, comprising:
a partitioning unit, configured to partition an image to be generated to obtain N first areas, wherein N is an integer greater than 1;
a determining unit, configured to perform, for each of the N first areas: determining, according to a first mapping relation, a second pixel point set corresponding to a first pixel point set included in the first area in an original image, and determining a minimum circumscribed rectangle of all second pixel points in the second pixel point set; wherein the first pixel point set comprises a plurality of first pixel points, the second pixel point set comprises a plurality of second pixel points corresponding to the plurality of first pixel points, the first mapping relation comprises a mapping relation between identifications of first pixel points in the image to be generated and coordinate values, under a first coordinate system, of corresponding second pixel points in the original image, and the first coordinate system is a coordinate system taking a first vertex of the original image as an origin;
a conversion unit, configured to convert the coordinate value of each second pixel point included in the second pixel point set under the first coordinate system into a coordinate value under a second coordinate system, wherein the second coordinate system is a coordinate system taking a second vertex of the minimum circumscribed rectangle as an origin;
a generating unit, configured to generate a mapping table according to the identifiers of a plurality of first pixel points in a first pixel point set included in the first area and coordinate values of a plurality of second pixel points in a second pixel point set corresponding to the first pixel point set included in the first area under a second coordinate system; the mapping table comprises a mapping relation between the identification of the first pixel point in the image to be generated and the coordinate value of the corresponding second pixel point in the original image under the second coordinate system;
the determining unit is further configured to determine, for an identifier of any first pixel point included in the first area, a first coordinate value of a second pixel point corresponding to the identifier of the first pixel point in a second coordinate system according to the mapping table;
and a filling unit, configured to fill the first pixel point by using the pixel value read from a storage address corresponding to the first coordinate value.
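The construction of the per-area mapping table can be sketched as follows (an illustrative sketch only, not the claimed implementation; `first_mapping` and `tile_pixel_ids` are hypothetical names):

```python
def build_tile_mapping_table(first_mapping, tile_pixel_ids):
    """Build the per-area mapping table of one first area.

    first_mapping: identification -> (x, y) of the corresponding second
    pixel point under the first coordinate system (whole original image);
    tile_pixel_ids: identifications of the first pixel points in the area.
    """
    coords = [first_mapping[p] for p in tile_pixel_ids]
    min_x = min(x for x, _ in coords)   # vertex of the minimum
    min_y = min(y for _, y in coords)   # circumscribed rectangle
    width = max(x for x, _ in coords) - min_x + 1
    height = max(y for _, y in coords) - min_y + 1
    # Rebase to the second coordinate system (rectangle vertex as origin)
    table = {p: (x - min_x, y - min_y)
             for p, (x, y) in zip(tile_pixel_ids, coords)}
    return table, (min_x, min_y, width, height)
```

Because every rebased coordinate is measured from the rectangle's own vertex, the stored values are bounded by the rectangle's width and height rather than by the full original-image size.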
8. The image processing apparatus according to claim 7, wherein the determining unit is specifically configured to:
determining, according to coordinate values, under the second coordinate system, of the plurality of second pixel points in the second pixel point set corresponding to the first pixel point set included in the first area, and the width and height of the minimum circumscribed rectangle, index values corresponding to the plurality of second pixel points in the second pixel point set;
the generating unit is specifically configured to:
generating the mapping table according to the identifications of the plurality of first pixel points in the first pixel point set included in the first area and the index values corresponding to the plurality of second pixel points in the second pixel point set;
the determining unit is further configured to:
for any first pixel point included in the first area, determining, according to the mapping table, a first index value of the second pixel point corresponding to the identification of the first pixel point;
the filling unit is specifically configured to fill the first pixel point by using the pixel value read from a storage address corresponding to the first index value.
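An index value of this kind is, under a row-major storage assumption (an illustration, not mandated by the claim), simply a linearization of the rectangle-local coordinate by the rectangle width:

```python
def index_value(x, y, rect_width):
    """Row-major index of a rectangle-local coordinate (x, y); used as
    the offset of the pixel inside the stored circumscribed rectangle."""
    return y * rect_width + x

# A hypothetical 10-pixel-wide circumscribed rectangle stored row-major
rect_width = 10
storage = list(range(100))  # stand-in pixel values
pixel = storage[index_value(3, 2, rect_width)]
```

Reading a pixel then reduces to one multiply-add and one memory access per first pixel point.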
9. The image processing apparatus according to claim 8, wherein, if a maximum value of bit widths of index values corresponding to second pixel points in the original image is determined to be a first preset value according to coordinate values of the second pixel points in the original image under the first coordinate system and the width and height of the original image, a maximum value of bit widths of the index values corresponding to the plurality of second pixel points in the second pixel point set corresponding to the first pixel point set included in the first area is smaller than the first preset value.
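The bit-width saving stated in this claim can be illustrated with a hypothetical calculation (the image and rectangle sizes below are examples, not taken from the patent): a row-major index over the whole original image needs more bits than an index confined to one area's minimum circumscribed rectangle.

```python
def index_bit_width(width, height):
    """Bits needed for a row-major index over width x height pixels."""
    return max(1, (width * height - 1).bit_length())

# Hypothetical sizes: a 1920x1080 original image versus a 64x64
# minimum circumscribed rectangle for one first area.
full_image_bits = index_bit_width(1920, 1080)  # the first preset value
tile_bits = index_bit_width(64, 64)            # per-area index width
```

In this example each mapping-table entry shrinks from 21 bits to 12 bits, which is the kind of reduction the claim relies on.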
10. The image processing apparatus according to claim 8, wherein pixel values of the plurality of second pixel points in the second pixel point sets respectively corresponding to the first pixel point sets included in the N first areas are respectively stored in N memory blocks, and pixel values of the plurality of second pixel points in each second pixel point set are stored in a same memory block.
11. The image processing apparatus according to claim 8, 9 or 10, wherein the determining unit is specifically configured to:
determining a memory block corresponding to the first area according to a start address corresponding to the first area; and for any first pixel point included in the first area, determining, according to the mapping table, a first index value of the second pixel point corresponding to the identification of the first pixel point;
the image processing apparatus further includes a reading unit configured to:
reading a pixel value from a storage address which is in the memory block corresponding to the first region and corresponds to the first index value;
the filling unit is specifically configured to fill the first pixel point by using the read pixel value.
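Put together, the per-area read path of claims 10 and 11 amounts to a base-plus-offset lookup; a minimal sketch, assuming a flat memory emulated by a list and hypothetical names throughout:

```python
def read_pixel_from_tile(memory, start_addresses, tile_id, first_index):
    """Read the pixel value for one first pixel point: locate the memory
    block of its first area via the area's start address, then offset by
    the first index value taken from the mapping table."""
    return memory[start_addresses[tile_id] + first_index]

# Two hypothetical 3-pixel memory blocks laid out back to back
memory = [10, 11, 12, 20, 21, 22]
start_addresses = [0, 3]
```

Keeping each area's pixels in one contiguous block means the reading unit only needs the area's start address and the small per-area index, never a full-image address.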
12. An image processing apparatus, comprising:
a partitioning unit, configured to partition an image to be generated to obtain N first areas, wherein N is an integer greater than 1;
a determining unit, configured to determine, for each of the N first areas, a coordinate system transformation relation according to a third coordinate system taking a third vertex of the first area as an origin and a fourth coordinate system taking a fourth vertex of the image to be generated as an origin, wherein the coordinate system transformation relation is used for conversion between the coordinate value of a first pixel point under the fourth coordinate system and the coordinate value of the first pixel point under the third coordinate system;
a generating unit, configured to generate a second coordinate transformation relation according to a first coordinate transformation relation and the coordinate system transformation relation; the first coordinate transformation relation is used for conversion between coordinate values of a first pixel point in the image to be generated under the fourth coordinate system and coordinate values of a corresponding second pixel point in an original image under a first coordinate system; the second coordinate transformation relation is used for conversion between coordinate values of the first pixel points included in the first area under the third coordinate system and coordinate values of the second pixel points in the original image under the first coordinate system; the first coordinate system is a coordinate system taking a first vertex of the original image as an origin;
the determining unit is further configured to determine, for any first pixel point included in the first area, a second coordinate value, under the first coordinate system, of the second pixel point corresponding to the first pixel point, according to the second coordinate transformation relation and the coordinate value of the first pixel point under the third coordinate system;
and a filling unit, configured to fill the first pixel point by using the pixel value read from a storage address corresponding to the second coordinate value.
13. An image processing apparatus, comprising a processor and a memory;
the memory is used for storing programs;
the processor is configured to execute the program stored in the memory, to cause the image processing apparatus to perform the method according to any one of claims 1 to 5 or the method according to claim 6.
14. A computer readable storage medium, characterized in that the computer readable storage medium comprises a computer program which, when run on an image processing apparatus, causes the image processing apparatus to perform the method according to any one of claims 1 to 5 or to perform the method according to claim 6.
CN202111278879.2A 2021-10-31 2021-10-31 Image processing method, device and computer readable storage medium Pending CN116071421A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111278879.2A CN116071421A (en) 2021-10-31 2021-10-31 Image processing method, device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111278879.2A CN116071421A (en) 2021-10-31 2021-10-31 Image processing method, device and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN116071421A true CN116071421A (en) 2023-05-05

Family

ID=86180749

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111278879.2A Pending CN116071421A (en) 2021-10-31 2021-10-31 Image processing method, device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN116071421A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117437126A (en) * 2023-12-21 2024-01-23 珠海鸿芯科技有限公司 Image conversion method, computer device, and computer-readable storage medium
CN117437126B (en) * 2023-12-21 2024-04-12 珠海鸿芯科技有限公司 Image conversion method, computer device, and computer-readable storage medium

Similar Documents

Publication Publication Date Title
US20210263138A1 (en) Position detecting method, device and storage medium for vehicle ladar
CN111712731A (en) Target detection method and system and movable platform
CN112513679B (en) Target identification method and device
CN113658256B (en) Target detection method and device based on laser radar and electronic equipment
CN109784250B (en) Positioning method and device of automatic guide trolley
CN113256729B (en) External parameter calibration method, device and equipment for laser radar and camera and storage medium
CN113111513B (en) Sensor configuration scheme determining method and device, computer equipment and storage medium
CN113569958B (en) Laser point cloud data clustering method, device, equipment and medium
CN114187579A (en) Target detection method, apparatus and computer-readable storage medium for automatic driving
US20230351635A1 (en) Optical axis calibration method and apparatus of optical axis detection system, terminal, system, and medium
CN113325388A (en) Method and device for filtering floodlight noise of laser radar in automatic driving
CN112799091A (en) Algorithm evaluation method, device and storage medium
CN115147333A (en) Target detection method and device
CN116071421A (en) Image processing method, device and computer readable storage medium
CN114705121A (en) Vehicle pose measuring method and device, electronic equipment and storage medium
CN115617042A (en) Collision detection method and device, terminal equipment and computer-readable storage medium
EP4123337A1 (en) Target detection method and apparatus
WO2021129483A1 (en) Method for determining point cloud bounding box, and apparatus
CN114862961B (en) Position detection method and device for calibration plate, electronic equipment and readable storage medium
CN116299401A (en) Constant false alarm method and device based on target scattering point position and storage medium thereof
CN113759363B (en) Target positioning method, device, monitoring system and storage medium
CN112364888A (en) Point cloud data processing method and device, computing equipment and computer storage medium
CN117677862A (en) Pseudo image point identification method, terminal equipment and computer readable storage medium
CN112630750A (en) Sensor calibration method and sensor calibration device
US20240193961A1 (en) Parking space detection method, electronic device and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination