CN114445307A - Method, device, MEC and medium for acquiring target information based on radar and visible light image


Info

Publication number
CN114445307A
Authority
CN
China
Prior art keywords
information
target
visible light
radar
image
Prior art date
2020-10-30
Legal status
Pending
Application number
CN202011197770.1A
Other languages
Chinese (zh)
Inventor
陈利军
董振江
付洪涛
吴冬升
孟祥宏
程庆
林焕凯
周谦
Current Assignee
Gosuncn Technology Group Co Ltd
Original Assignee
Gosuncn Technology Group Co Ltd
Priority date
2020-10-30
Filing date
2020-10-30
Publication date
2022-05-06
Application filed by Gosuncn Technology Group Co Ltd
Priority to CN202011197770.1A
Publication of CN114445307A
Pending legal-status Current

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 5/00 Image enhancement or restoration
                    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
                • G06T 7/00 Image analysis
                    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
                        • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
                            • G06T 7/337 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
                    • G06T 7/70 Determining position or orientation of objects or cameras
                        • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
                            • G06T 7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
                • G06T 2207/00 Indexing scheme for image analysis or image enhancement
                    • G06T 2207/10 Image acquisition modality
                        • G06T 2207/10004 Still image; Photographic image
                        • G06T 2207/10032 Satellite or aerial image; Remote sensing
                            • G06T 2207/10044 Radar image
                    • G06T 2207/20 Special algorithmic details
                        • G06T 2207/20212 Image combination
                            • G06T 2207/20221 Image fusion; Image merging
                    • G06T 2207/30 Subject of image; Context of image processing
                        • G06T 2207/30248 Vehicle exterior or interior
                            • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)

Abstract

A method for acquiring target information based on radar and visible light images comprises the following steps: S1, acquiring a radar image and a visible light image; S2, registering the radar image and the visible light image according to a coordinate conversion relation; S3, acquiring, in the registered visible light image, first information of the target corresponding to the target in the radar image, identifying second information of the target in the visible light image, and storing the first information and the second information in association to obtain first associated information; and S4, sending the associated information to a display unit so that the display unit displays the target information in the visible light image in a first display form. According to the method and the device, the first information of a vehicle is acquired by the radar, the position of the corresponding vehicle in the visible light image is obtained by registering the radar image with the visible light image, the corresponding second information of the target is identified, and the first information and the second information are displayed in the visible light image. The first information can thus be clearly presented in the visible light image, and monitoring personnel can obtain comprehensive information.

Description

Method, device, MEC and medium for acquiring target information based on radar and visible light image
Technical Field
The invention relates to the technical field of vehicle-road cooperation, in particular to a method, equipment, MEC and medium for acquiring target information based on radar and visible light images.
Background
In road monitoring, a visible light camera is often used to acquire a visible light image of the road ahead, and a radar is used to acquire a radar image of the road ahead. The visible light image is easy for humans to read and understand, while the radar image can provide the speed, distance and the like of a vehicle but is not easy for humans to interpret.
The background description provided herein is for the purpose of generally presenting the context of the disclosure. Unless otherwise indicated herein, the material described in this section is not prior art to the claims in this application and is not admitted to be prior art by inclusion in this section.
Disclosure of Invention
The invention aims to provide a method for acquiring target information based on radar and visible light images, which comprises the following steps:
S1, acquiring a radar image and a visible light image;
S2, registering the radar image and the visible light image according to a coordinate conversion relation;
S3, acquiring, in the registered visible light image, first information of the target corresponding to the target in the radar image, identifying second information of the target in the visible light image, and storing the first information and the second information in association to obtain first associated information; and
S4, sending the first associated information to a display unit.
Specifically, the display unit displays the target information in a first display form in the visible light image.
Specifically, step S3 further includes: acquiring the position of the target in the visible light image to obtain a first position, and storing the first position, the first information of the target and the second information of the target in association.
Specifically, the first information includes at least one of the following: speed of the target, position of the target; the second information includes at least one of: the license plate of the target, the color of the target, the type of the target.
Specifically, the first display form in step S4 is the color of the label box or the color of the label content.
Specifically, in step S4, only the target information indicating that the target speed is abnormal is displayed.
In a second aspect, an embodiment of the present invention further provides an apparatus for acquiring target information based on radar and a visible light image, including the following units:
the image acquisition unit is used for acquiring a radar image and a visible light image;
the image registration unit is used for registering the radar image and the visible light image according to the coordinate conversion relation;
the target information acquisition unit is used for acquiring, in the registered visible light image, first information of the target corresponding to the target in the radar image, identifying second information of the target in the visible light image, and storing the first information and the second information in association; and
the sending unit is used for sending the associated information to the display unit.
Specifically, the display unit displays the target information in a first display form in the visible light image.
Specifically, the target information acquisition unit is further configured to: acquire the position of the target in the visible light image to obtain a first position, and store the first position, the first information of the target and the second information of the target in association.
Specifically, the first information includes at least one of the following: speed of the target, position of the target; the second information includes at least one of: the license plate of the target, the color of the target, the type of the target.
In a third aspect, embodiments of the present invention also provide an MEC, which includes a processor, a memory, and executable instructions stored in the memory, wherein the executable instructions, when executed, are configured to implement the method as described above.
In a fourth aspect, an embodiment of the present invention further provides a medium, on which executable instructions are stored, and when executed, the executable instructions are used for implementing the method described above.
According to the method and the device, the first information of the vehicle, for example the vehicle speed, is acquired by the radar; the position of the corresponding vehicle in the visible light image is obtained by registering the radar image with the visible light image; the corresponding second information, such as the license plate and the vehicle type, is identified; and the first information and the second information are displayed in the visible light image. The first information can thus be clearly presented in the visible light image, that is, the radar information is fused into the visible light image, so that monitoring personnel can obtain comprehensive information.
Drawings
The present invention will be described in further detail with reference to the accompanying drawings;
fig. 1 is a schematic diagram of a system architecture for acquiring target information based on radar and visible light images according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a method for acquiring target information based on radar and visible light images according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an apparatus for acquiring target information based on radar and visible light images according to an embodiment of the present invention;
FIG. 4 is a timing diagram illustrating the registration of a device with a visible light camera according to an embodiment of the present invention;
fig. 5 is a schematic diagram of an apparatus for acquiring target information based on radar and visible light images according to an embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to specific embodiments and with reference to the attached drawings.
Example one
With reference to fig. 1, in order to better understand the system and apparatus for acquiring target information based on radar and visible light images disclosed by the embodiments of the present invention, the architecture to which the embodiments are applicable is described first. Referring to fig. 1, fig. 1 is a schematic diagram of a system architecture for acquiring vehicle speed based on radar and visible light images according to an embodiment of the present invention. The system comprises one or more vehicles 10, a visible light camera 20 arranged at the roadside or mounted on a gantry above the road, and a radar 30 arranged at the roadside or mounted on the gantry. The visible light camera 20 and the radar 30 can be arranged separately or integrated into one device; integrating them into one device reduces the calibration workload.
The visible light camera 20 is used for acquiring visible light images of the road ahead, and the radar 30 is used for acquiring radar images of the road ahead. The visible light image is easy for humans to read and understand, while the radar image can provide the speed, distance and the like of a vehicle but is not easy for humans to interpret.
The roadside unit (RSU) 40 is used to broadcast corresponding information to the vehicles and may communicate with them over LTE-V2X or 5G NR-V2X.
The edge computing unit (MEC, mobile edge computing) 50 is used for registering and fusing the radar image and the visible light image, recognizing license plates and the like, and sending the corresponding information to the RSU.
The system may further include a server 60 (not shown) in communication with the edge computing unit 50, the camera 20, the radar 30 and the roadside unit 40, so as to obtain the image acquired by the camera 20, the radar image acquired by the radar 30, and the fused image of the camera 20 and the radar 30 produced by the edge computing unit 50. The server 60 may obtain the image acquired by the camera 20 and the radar image acquired by the radar 30 through a wired or wireless network and perform the fusion of the camera 20 and radar 30 images on the server 60 itself. In another embodiment, the server 60 may communicate with the edge computing unit, for example via the RSU, to obtain the images of the camera 20 and the radar 30 fused by the edge computing unit.
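By way of illustration only, the roadside deployment described above can be summarized in a small configuration structure; the class and field names in the following Python sketch are hypothetical and not part of the patent.

    from dataclasses import dataclass

    @dataclass
    class RoadsideSite:
        """Hypothetical description of one roadside installation (illustrative names only)."""
        camera_id: str        # visible light camera 20
        radar_id: str         # radar 30
        co_located: bool      # True if camera and radar are integrated in one device
        rsu_id: str           # roadside unit 40, broadcasts over LTE-V2X / 5G NR-V2X
        mec_id: str           # edge computing unit 50, performs registration and fusion
        server_url: str = ""  # optional central server 60

    site = RoadsideSite(camera_id="cam-01", radar_id="radar-01",
                        co_located=True, rsu_id="rsu-01", mec_id="mec-01")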
Referring to fig. 2, the present embodiment provides a method of acquiring target information based on radar and visible light images, which comprises the following steps:
S1, acquiring a radar image and a visible light image;
The camera 20 and the radar 30 acquire frames at different times; for example, a typical lidar acquires point cloud frames at 10 frames per second, while the camera can acquire 30 frames per second, so the two are generally asynchronous in time. Therefore, in the present embodiment, the radar point cloud image and the visible light image are temporally registered with the slower radar frame as the reference.
Specifically, referring to fig. 3, fig. 3(a) shows the frame rate of the camera, assumed to be 30 frames per second; fig. 3(b) shows the frame rate of the radar, assumed to be 10 frames per second; and fig. 3(c) is a schematic diagram of registering the camera frames to the radar frames. Referring to fig. 3(a) and (b), if the camera and the radar are not registered in time, a large misalignment occurs when their images are merged, because there is a time difference between the moment the radar acquires a frame and the moment the camera acquires a frame, so the camera and radar images deviate in spatial position for a moving object such as a vehicle.
Specifically, when the system starts to work, an instruction to start acquiring images simultaneously is issued to ensure that the two acquire images in the manner of fig. 3(c).
In this embodiment, the instruction to start image capturing may be given by the MEC, or may be given by another control unit, for example, for a device integrating a radar and a camera, the instruction to start image capturing simultaneously may be given by the corresponding control unit.
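The embodiment only specifies that both sensors are started at the same time and that the slower radar frame serves as the temporal reference. Purely as an illustrative sketch (the function and variable names are assumptions, not the patented implementation), pairing each radar frame with the nearest camera frame by timestamp could look like this in Python:

    import bisect

    def align_to_radar(radar_timestamps, camera_timestamps):
        """Pair each radar frame (the slower stream) with the camera frame whose
        timestamp is closest, as suggested by fig. 3(c). Inputs are sorted seconds."""
        pairs = []
        for t_radar in radar_timestamps:
            i = bisect.bisect_left(camera_timestamps, t_radar)
            # compare the neighbours around the insertion point
            candidates = [j for j in (i - 1, i) if 0 <= j < len(camera_timestamps)]
            j_best = min(candidates, key=lambda j: abs(camera_timestamps[j] - t_radar))
            pairs.append((t_radar, camera_timestamps[j_best]))
        return pairs

    # 10 Hz radar versus 30 Hz camera started at the same instant:
    radar_ts = [k / 10.0 for k in range(5)]
    camera_ts = [k / 30.0 for k in range(15)]
    print(align_to_radar(radar_ts, camera_ts))  # each radar frame pairs with every third camera frame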
S2, registering the radar image and the visible light image according to the coordinate conversion relation;
First, the radar and the visible light camera erected at the roadside are calibrated to obtain a coordinate conversion relation, which is used for registering the radar image and the visible light image; for the specific calibration method, reference may be made to calibration methods known in the art, and this embodiment does not further limit it.
The registration itself can be performed in the MEC; in this case the coordinate conversion relation is stored in the MEC, the camera and the radar send the acquired images to the MEC, and the MEC performs the registration. In another embodiment, the registration may also be performed by a server or other execution unit, for example, for a device that integrates the radar and the camera, by the corresponding execution unit.
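The coordinate conversion relation itself is left to calibration methods known in the art. Purely as an assumption for illustration, if the calibration were to yield a 3x3 homography from the radar ground plane to image pixels, projecting a radar target into the visible light image could be sketched as follows (the matrix values below are placeholders, not calibration results):

    import numpy as np

    # Placeholder 3x3 homography standing in for the coordinate conversion relation
    # obtained from the roadside calibration step.
    H_RADAR_TO_IMAGE = np.array([
        [12.0, 0.5,   960.0],
        [0.2,  -9.0,  1050.0],
        [0.0,  0.001, 1.0],
    ])

    def radar_to_pixel(x_m, y_m, H=H_RADAR_TO_IMAGE):
        """Project a radar ground-plane point (in metres) to image pixel coordinates."""
        p = H @ np.array([x_m, y_m, 1.0])
        return p[0] / p[2], p[1] / p[2]

    u, v = radar_to_pixel(3.5, 40.0)  # a target 40 m ahead and 3.5 m to the side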
S3, acquiring, in the registered visible light image, first information of the target corresponding to the target in the radar image, identifying second information of the target in the visible light image, and storing the first information and the second information in association to obtain first associated information.
The radar can provide the speed and position information of a vehicle, where the position information includes a relative position and an absolute position: the relative position is the position of the target relative to the radar, and the absolute position can be GPS position information; acquiring such positions is prior art in the field and is not repeated in this embodiment. That is, the target first information in the present embodiment includes at least one of the vehicle speed, the position, and the like.
The visible light image can provide the license plate information, vehicle type information and the like of the target, and this vehicle information can be identified by computer vision. That is, the target second information in this embodiment includes at least one of the license plate, the vehicle type, and the like.
The embodiment of the present invention is described in terms of the license plate and the vehicle speed information, and it should be understood by those skilled in the art that the vehicle speed information may be extended to other information in the first information, and the license plate information may be extended to other information in the second information.
Specifically, the vehicle speeds of all targets in the radar image are first obtained, then the corresponding license plate information in the visible light image is obtained according to the registered images, and the recognized license plate and the vehicle speed are recorded, for example in the following form:
yue AXXXX1, 30;
yue AXXXX2, 40.
Further, if the resolution of the camera is known, this embodiment may also record the position of the corresponding vehicle in the visible light image, where the upper left corner of the image is taken as the origin, downward is the X axis, rightward is the Y axis, and positions are expressed in pixels. The record is as follows:
yue AXXXX1, 30, 300, 500;
yue AXXXX2, 40, 400, 600.
Where 300,500 are the XY coordinates of the first vehicle and 400,600 are the XY coordinates of the second vehicle.
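A minimal sketch of how such associated records could be built is given below; the nearest-pixel matching rule, the 50-pixel radius and all names are assumptions for illustration, since the embodiment only specifies that the first and second information are stored in association.

    from dataclasses import dataclass

    @dataclass
    class TargetRecord:
        """One associated record, in the same order as the example records above:
        license plate (second information), speed (first information), pixel position."""
        plate: str        # e.g. "yue AXXXX1"
        speed_kmh: float  # from the radar
        x_px: int         # pixel X coordinate in the visible light image
        y_px: int         # pixel Y coordinate in the visible light image

    def associate(radar_targets, plates_by_pixel, radius_px=50):
        """Match each radar target (already projected to pixels) to the nearest
        recognised license plate within radius_px and store them together."""
        records = []
        for speed, (x, y) in radar_targets:
            best = None
            for (px, py), plate in plates_by_pixel.items():
                d2 = (px - x) ** 2 + (py - y) ** 2
                if d2 <= radius_px ** 2 and (best is None or d2 < best[0]):
                    best = (d2, plate)
            if best is not None:
                records.append(TargetRecord(best[1], speed, x, y))
        return records

    records = associate([(30, (300, 500)), (40, (400, 600))],
                        {(305, 495): "yue AXXXX1", (398, 610): "yue AXXXX2"})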
S4, sending the first associated information to a display unit so that the display unit can display the target information in a first display form in a visible light image.
The display unit of the present embodiment may be any device that can display, such as a display.
Further, the vehicle speed information of the vehicle is displayed in the visible light image according to the association among the license plate, the speed and the position in the visible light image determined in step S3.
The corresponding display mode can be seen in fig. 4: a label is added in the vicinity of the vehicle 10, and the label displays the license plate and the vehicle speed information.
Specifically, the position of the label may be determined according to the XY coordinates of the vehicle in the image, and the label may be displayed within a preset distance from the XY coordinates of the vehicle.
Further, the border of the label may be drawn in different colors: black in the drawings, and red in another embodiment to indicate that the vehicle is speeding.
In another embodiment, it is determined whether the speed of the vehicle exceeds the speed limit of the current road segment, and if the speed limit is exceeded, the label is displayed using a second display form, such as red.
In yet another embodiment, whether the speed of the vehicle exceeds the speed limit of the current road section is judged, and if it does, the label is displayed using a third display form, for example, the content of the label is displayed in red, that is, the vehicle speed in the label is shown in a red font.
Displaying in red makes the abnormality of the vehicle easy to notice and reminds monitoring personnel to pay attention to the abnormal vehicle information in time.
In another embodiment, the label may display the license plate information, so that monitoring personnel can also discover the license plate of an abnormal vehicle in time.
In another embodiment, the system determines whether there is a vehicle whose speed exceeds the speed limit of the current road segment and displays only the information of vehicles that exceed it. The information provided by the system is then more targeted; in particular, when too many labels are displayed in the monitoring interface, the labels occlude one another, so displaying only the abnormal vehicle information makes it easier for monitoring personnel to notice an abnormal situation.
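As an illustrative sketch only (not the patented implementation), drawing the labels with a black border normally, a red border as the second display form for speeding vehicles, and an option to show only abnormal vehicles could be done with OpenCV roughly as follows; the speed limit value and the record fields are assumptions carried over from the sketch above.

    import cv2

    SPEED_LIMIT_KMH = 60  # assumed speed limit of the current road segment

    def draw_labels(image, records, only_abnormal=False):
        """Draw a label near each vehicle: black border normally, red border when the
        speed exceeds the limit; optionally show only speeding vehicles."""
        for r in records:
            speeding = r.speed_kmh > SPEED_LIMIT_KMH
            if only_abnormal and not speeding:
                continue
            color = (0, 0, 255) if speeding else (0, 0, 0)  # BGR: red vs. black
            text = f"{r.plate}  {r.speed_kmh:.0f} km/h"
            org = (r.x_px + 10, r.y_px - 10)  # within a preset offset of the vehicle position
            (w, h), _ = cv2.getTextSize(text, cv2.FONT_HERSHEY_SIMPLEX, 0.6, 2)
            cv2.rectangle(image, (org[0] - 4, org[1] - h - 4), (org[0] + w + 4, org[1] + 4), color, 2)
            cv2.putText(image, text, org, cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2)
        return image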
According to the method and the device, the vehicle speed is acquired by the radar, the position of the corresponding vehicle in the visible light image is obtained by registering the radar image with the visible light image, the corresponding license plate information is identified, and the license plate and vehicle speed information are displayed in the visible light image. The vehicle speed information can thus be clearly presented in the visible light image, and monitoring personnel can obtain comprehensive information.
Example two
The embodiment provides a device for acquiring target information based on radar and visible light images, which comprises the following units:
the image acquisition unit is used for acquiring a radar image and a visible light image;
the image registration unit is used for registering the radar image and the visible light image according to the coordinate conversion relation;
the target information acquisition unit is used for acquiring, in the registered visible light image, first information of the target corresponding to the target in the radar image, identifying second information of the target in the visible light image, and storing the first information and the second information in association;
The radar can provide the speed and position information of a vehicle, where the position information includes a relative position and an absolute position: the relative position is the position of the target relative to the radar, and the absolute position can be GPS position information; acquiring such positions is prior art in the field and is not repeated in this embodiment. That is, the target first information in the present embodiment includes at least one of the vehicle speed, the position, and the like.
The visible light image can provide the license plate information, vehicle type information and the like of the target, and this vehicle information can be identified by computer vision. That is, the target second information in this embodiment includes at least one of the license plate, the vehicle type, and the like.
The embodiment of the present invention is described in terms of the license plate and the vehicle speed information, and it should be understood by those skilled in the art that the vehicle speed information may be extended to other information in the first information, and the license plate information may be extended to other information in the second information.
Specifically, the speed information of all targets in the radar image is first acquired, then the corresponding license plate information in the visible light image is acquired according to the registered images, and the recognized license plate and the vehicle speed are recorded, for example in the following form:
yue AXXXX1, 30;
yue AXXXX2, 40.
Further, if the resolution of the camera is known, this embodiment may also record the position of the corresponding vehicle in the visible light image, where the upper left corner of the image is taken as the origin, downward is the X axis, rightward is the Y axis, and positions are expressed in pixels. The record is as follows:
yue AXXXX1, 30, 300, 500;
yue AXXXX2, 40, 400, 600.
Where 300,500 are the XY coordinates of the first vehicle and 400,600 are the XY coordinates of the second vehicle.
And the sending unit is used for sending the associated information to the display unit, so that the display unit displays the target speed information in a first display form in the visible light image; one possible organization of these units is sketched below.
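Purely as an organizational sketch, the four units of this embodiment could be laid out as methods of a single class; the class name and the placeholder method bodies below are illustrative and not the patented implementation.

    class TargetInfoDevice:
        """Skeleton of the device of this embodiment; unit names follow the description."""

        def acquire_images(self):
            """Image acquisition unit: return one radar frame and one visible light frame."""
            raise NotImplementedError

        def register(self, radar_img, visible_img):
            """Image registration unit: apply the coordinate conversion relation."""
            raise NotImplementedError

        def acquire_target_info(self, registered):
            """Target information acquisition unit: first information from the radar,
            second information from the visible light image, stored in association."""
            raise NotImplementedError

        def send(self, associated_info, display_unit):
            """Sending unit: forward the associated information to the display unit."""
            display_unit.show(associated_info)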
The display unit of the present embodiment may be any device that can display, such as a display.
Further, the vehicle speed information of the vehicle is displayed in the visible light image according to the association among the license plate, the speed and the position in the visible light image determined by the target information acquisition unit.
The corresponding display mode can be seen in fig. 4: a label is added in the vicinity of the vehicle 10, and the label displays the license plate and the vehicle speed information.
Specifically, the position of the label may be determined according to the XY coordinates of the vehicle in the image, and the label may be displayed within a preset distance from the XY coordinates of the vehicle.
Further, the border of the label may be drawn in different colors: black in the drawings, and red in another embodiment to indicate that the vehicle is speeding.
In another embodiment, it is determined whether the speed of the vehicle exceeds the speed limit of the current road segment, and if the speed limit is exceeded, the label is displayed using a second display form, such as red.
In yet another embodiment, whether the speed of the vehicle exceeds the speed limit of the current road section is judged, and if it does, the label is displayed using a third display form, for example, the content of the label is displayed in red, that is, the vehicle speed in the label is shown in a red font.
Displaying in red makes the abnormality of the vehicle easy to notice and reminds monitoring personnel to pay attention to the abnormal vehicle information in time.
In another embodiment, the label may display the license plate information, so that monitoring personnel can also discover the license plate of an abnormal vehicle in time.
In another embodiment, the system determines whether there is a vehicle whose speed exceeds the speed limit of the current road segment and displays only the information of vehicles that exceed it. The information provided by the system is then more targeted; in particular, when too many labels are displayed in the monitoring interface, the labels occlude one another, so displaying only the abnormal vehicle information makes it easier for monitoring personnel to notice an abnormal situation.
According to the method and the device, the vehicle speed is acquired by the radar, the position of the corresponding vehicle in the visible light image is obtained by registering the radar image with the visible light image, the corresponding license plate information is identified, and the license plate and vehicle speed information are displayed in the visible light image. The vehicle speed information can thus be clearly presented in the visible light image, and monitoring personnel can obtain comprehensive information.
Example three
Referring to fig. 5, the present embodiment discloses a schematic structural diagram of an apparatus 70 for acquiring target information based on radar and a visible light image, and specifically, the apparatus 70 for acquiring target information based on radar and a visible light image may be an MEC. The apparatus 70 for acquiring target information based on radar and visible light images of this embodiment includes a processor 71, a memory 72, and a computer program stored in the memory 72 and executable on the processor 71. The processor 71, when executing the computer program, implements the steps in the above-described method embodiment for acquiring target information based on radar and visible light images, such as step S1 shown in fig. 2. Alternatively, the processor 71, when executing the computer program, implements the functions of the modules/units in the above-mentioned device embodiments, such as an information receiving module.
Illustratively, the computer program may be divided into one or more modules/units, which are stored in the memory 72 and executed by the processor 71 to accomplish the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing certain functions for describing the execution of the computer program in the apparatus 70 for acquiring target information based on radar and visible light images.
The apparatus 70 for acquiring target information based on radar and visible light images may include, but is not limited to, a processor 71 and a memory 72. It will be understood by those skilled in the art that the schematic diagram is merely an example of the apparatus 70 for acquiring target information based on radar and visible light images, and does not constitute a limitation of the apparatus 70 for acquiring target information based on radar and visible light images, and may include more or less components than those shown, or combine some components, or different components, for example, the apparatus 70 for acquiring target information based on radar and visible light images may further include an input-output device, a network access device, a bus, etc.
The processor 71 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The processor 71 is the control center of the apparatus 70 for acquiring target information based on radar and visible light images, and various interfaces and lines are used to connect all parts of the entire apparatus 70.
The memory 72 may be used to store the computer programs and/or modules, and the processor 71 may implement various functions of the apparatus 70 for acquiring target information based on radar and visible light images by running or executing the computer programs and/or modules stored in the memory 72 and calling data stored in the memory 72. The memory 72 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the device, and the like. In addition, the memory 72 may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), at least one magnetic disk storage device, a Flash memory device, or other non-volatile solid-state storage device.
If the integrated modules/units of the apparatus 70 for acquiring target information based on radar and visible light images are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow of the method according to the above embodiments may be implemented by a computer program, which may be stored in a computer readable storage medium and used by the processor 71 to implement the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk (U disk), a removable hard disk, a magnetic diskette, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier wave signal, a telecommunications signal, a software distribution medium, etc. It should be noted that the content of the computer readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer readable media do not include electrical carrier signals and telecommunications signals, as required by legislation and patent practice.
It should be noted that the above-described device embodiments are merely illustrative, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. In addition, in the drawings of the embodiment of the apparatus provided by the present invention, the connection relationship between the modules indicates that there is a communication connection between them, and may be specifically implemented as one or more communication buses or signal lines. One of ordinary skill in the art can understand and implement it without inventive effort.
The above-mentioned embodiments are provided to further explain the objects, technical solutions and advantages of the present invention in detail, and it should be understood that the above-mentioned embodiments are only examples of the present invention and are not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement and the like made without departing from the spirit and scope of the invention are also within the protection scope of the invention.

Claims (10)

1. A method for acquiring target information based on radar and visible light images comprises the following steps:
S1, acquiring a radar image and a visible light image;
S2, registering the radar image and the visible light image according to a coordinate conversion relation;
S3, acquiring, in the registered visible light image, first information of the target corresponding to the target in the radar image, identifying second information of the target in the visible light image, and storing the first information and the second information in association to obtain first associated information; and
S4, sending the first associated information to a display unit.
2. The method of claim 1, the display unit displaying the target information in a first display form in the visible light image.
3. The method according to claim 1, wherein step S3 further comprises: acquiring the position of the target in the visible light image to obtain a first position, and storing the first position, the first information of the target and the second information of the target in association.
4. The method of claim 2, the first information comprising at least one of: speed of the target, position of the target; the second information includes at least one of: the license plate of the target, the color of the target, and the type of the target.
5. An apparatus for acquiring target information based on radar and visible light images, comprising the following units:
the image acquisition unit is used for acquiring a radar image and a visible light image;
the image registration unit is used for registering the radar image and the visible light image according to the coordinate conversion relation;
the target information acquisition unit is used for acquiring, in the registered visible light image, first information of the target corresponding to the target in the radar image, identifying second information of the target in the visible light image, and storing the first information and the second information in association; and
the sending unit is used for sending the associated information to the display unit.
6. The apparatus according to claim 5, said display unit displaying said target information in a first display form in said visible light image.
7. The apparatus of claim 5, wherein the target information acquisition unit is further configured to: acquire the position of the target in the visible light image to obtain a first position, and store the first position, the first information of the target and the second information of the target in association.
8. The apparatus of claim 5, the first information comprising at least one of: speed of the target, position of the target; the second information includes at least one of: the license plate of the target, the color of the target, the type of the target.
9. An MEC comprising a processor and a memory, the memory having stored thereon executable instructions which, when executed, are used for implementing the method of any one of claims 1 to 4.
10. A medium having stored thereon executable instructions which, when executed, are used for implementing the method of any one of claims 1 to 4.
CN202011197770.1A 2020-10-30 2020-10-30 Method, device, MEC and medium for acquiring target information based on radar and visible light image Pending CN114445307A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011197770.1A CN114445307A (en) 2020-10-30 2020-10-30 Method, device, MEC and medium for acquiring target information based on radar and visible light image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011197770.1A CN114445307A (en) 2020-10-30 2020-10-30 Method, device, MEC and medium for acquiring target information based on radar and visible light image

Publications (1)

Publication Number Publication Date
CN114445307A true CN114445307A (en) 2022-05-06

Family

ID=81357434

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011197770.1A Pending CN114445307A (en) 2020-10-30 2020-10-30 Method, device, MEC and medium for acquiring target information based on radar and visible light image

Country Status (1)

Country Link
CN (1) CN114445307A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101140703A (en) * 2007-10-26 2008-03-12 电子科技大学 Hand-hold vehicle speed testing grasp shoot device
CN106710240A (en) * 2017-03-02 2017-05-24 公安部交通管理科学研究所 Passing vehicle tracking and speed measuring method integrating multiple-target radar and video information
CN109359409A (en) * 2018-10-31 2019-02-19 张维玲 A kind of vehicle passability detection system of view-based access control model and laser radar sensor
CN110515073A (en) * 2019-08-19 2019-11-29 南京慧尔视智能科技有限公司 The trans-regional networking multiple target tracking recognition methods of more radars and device
CN111476099A (en) * 2020-03-09 2020-07-31 深圳市人工智能与机器人研究院 Target detection method, target detection device and terminal equipment

Similar Documents

Publication Publication Date Title
US11214248B2 (en) In-vehicle monitoring camera device
US10255804B2 (en) Method for generating a digital record and roadside unit of a road toll system implementing the method
US11042761B2 (en) Method and system for sensing an obstacle, and storage medium
CN107564329B (en) Vehicle searching method and terminal
CN112990162B (en) Target detection method and device, terminal equipment and storage medium
JP2001023091A (en) Picture display device for vehicle
CN113240939A (en) Vehicle early warning method, device, equipment and storage medium
CN113903188B (en) Parking space detection method, electronic device and computer readable storage medium
JP3910345B2 (en) Position detection device
CN115082565A (en) Camera calibration method, device, server and medium
CN114445307A (en) Method, device, MEC and medium for acquiring target information based on radar and visible light image
CN113945219A (en) Dynamic map generation method, system, readable storage medium and terminal equipment
EP4261565A1 (en) Object detection method and apparatus for vehicle, device, vehicle and medium
CN114463229A (en) Method, device and medium for displaying vehicle speed based on radar and visible light fusion
CN116101174A (en) Collision reminding method and device for vehicle, vehicle and storage medium
CN114299760A (en) Driving reminding method, device and medium based on cloud
CN110884501B (en) Vehicle perception data processing method and device, electronic equipment and storage medium
CN115953328B (en) Target correction method and system and electronic equipment
CN115097628B (en) Driving information display method, device and system
CN114241775B (en) Calibration method for mobile radar and video image, terminal and readable storage medium
CN114627651B (en) Pedestrian protection early warning method and device, electronic equipment and readable storage medium
CN115700795A (en) Training set construction method and device and computer readable storage medium
CN117762365A (en) Navigation display method, device, vehicle and storage medium
CN116486342A (en) Identity binding method, device, terminal equipment and storage medium
CN115565371A (en) Emergency parking detection method and device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination