CN113496514B - Data processing method, monitoring system, electronic equipment and display equipment - Google Patents

Data processing method, monitoring system, electronic equipment and display equipment

Info

Publication number
CN113496514B
Authority
CN
China
Prior art keywords
data
image
radar
distance
pixel points
Prior art date
Legal status
Active
Application number
CN202010250778.3A
Other languages
Chinese (zh)
Other versions
CN113496514A (en)
Inventor
冯亚闯
刘云夫
熊晔颖
夏循龙
邓兵
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN202010250778.3A
Publication of CN113496514A
Application granted
Publication of CN113496514B
Status: Active


Classifications

    • G06T7/74: Image analysis; determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G01S13/08: Systems determining position data of a target; systems for measuring distance only
    • G01S13/89: Radar or analogous systems specially adapted for mapping or imaging
    • G01S7/02: Details of radar systems according to group G01S13/00
    • G06T2207/30232: Indexing scheme for image analysis or image enhancement; subject of image: surveillance

Abstract

Embodiments of the present application provide a data processing method, a monitoring system, an electronic device, and a display device. The method comprises the following steps: acquiring image data and radar data of an object area; obtaining a distance map based on the image data and the radar data, wherein the distance map contains a mapping relation between image pixel points and distance data, and the distance data is determined from the radar data; acquiring characteristic pixel points of content to be identified based on the image data; and acquiring, according to the distance map, the distance data that has a mapping relation with the characteristic pixel points. In this technical scheme, the distance map is used to obtain the distance data of the pixel points corresponding to the content to be identified in the image data; the scheme is simple and easy to implement, and offers high computational efficiency and accuracy.

Description

Data processing method, monitoring system, electronic equipment and display equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a data processing method, a monitoring system, an electronic device, and a display device.
Background
Currently, visual detection and tracking algorithms are used to track monitored objects, such as people or vehicles, and the number of monitored objects in a queue can be counted when the monitored objects form a queue.
Taking an existing public transportation video monitoring system as an example, such a system can only output the number of queued vehicles; it cannot effectively report the distance from a given vehicle in the queue to the head of the queue, or the distance from the vehicle at the tail of the queue to the head of the queue (i.e., the total queue length), so its practicality is limited.
Disclosure of Invention
In view of the above, the present application is proposed to provide a data processing method, a monitoring system, an electronic device, and a display device that solve the above problems or at least partially solve the above problems.
Thus, in one embodiment of the present application, a data processing method is provided. The method comprises the following steps:
acquiring image data and radar data of an object area;
obtaining a distance map based on the image data and the radar data, wherein the distance map contains a mapping relation between image pixel points and the distance data, and the distance data is determined by the radar data;
acquiring characteristic pixel points of the content to be identified based on the image data;
and acquiring distance data with a mapping relation with the characteristic pixel points according to the distance map.
In another embodiment of the present application, a data processing method is provided. The method comprises the following steps:
acquiring image data and radar data of an object area;
obtaining a distance map based on the image data and the radar data, wherein the distance map contains a mapping relation between image pixel points and the distance data, and the distance data is determined by the radar data;
acquiring characteristic pixel points of a queuing team based on the image data;
and determining the queuing length according to the distance data in the distance map, which has a mapping relation with the characteristic pixel points of the queuing team.
In another embodiment of the present application, a data processing method is provided. The method comprises the following steps:
acquiring image data and radar data of an object area;
obtaining a distance map based on the image and the radar data to provide data support when distance data needs to be determined based on image data;
the distance map contains a mapping relation between image pixel points and distance data, and the distance data is determined by the radar data.
In an embodiment of the present application, a monitoring system is provided. The system comprises: a radar, an image sensor and a processing device; wherein,
the radar is used for measuring the object area to obtain radar data;
an image sensor for acquiring image data of the object region;
the processing device is used for obtaining a distance map according to the radar data and the image data; the distance map contains a mapping relation between image pixel points and distance data, and the distance data is determined by the radar data; acquiring characteristic pixel points of the content to be identified based on the image data; and acquiring distance data with a mapping relation with the characteristic pixel points according to the distance map.
In one embodiment of the present application, a data processing method is provided. The method comprises the following steps:
displaying an interactive interface;
responding to an object area designated by a user through an interactive interface, and acquiring image data and radar data of the object area;
determining distance data corresponding to contents to be identified in the image data according to the image data and the radar data; wherein the distance data is determined from the radar data;
and displaying the distance data.
In one embodiment of the present application, an electronic device is provided. The electronic device includes: the device comprises a device body, an image sensor, a radar and a processor. The image sensor is arranged on the equipment body and used for acquiring image data of the object area; the radar is arranged on the equipment body and used for measuring the object area to obtain radar data; the processor is arranged in the equipment body and used for acquiring the image data and the radar data; obtaining a distance map based on the image data and the radar data, wherein the distance map contains a mapping relation between image pixel points and the distance data, and the distance data is determined by the radar data; acquiring characteristic pixel points of the content to be identified based on the image data; and acquiring distance data with a mapping relation with the characteristic pixel points according to the distance map.
In another embodiment of the present application, an electronic device is provided. The apparatus, comprising: a memory and a processor, wherein,
the memory is used for storing programs;
the processor, coupled with the memory, to execute the program stored in the memory to:
acquiring image data and radar data of an object area;
obtaining a distance map based on the image data and the radar data, wherein the distance map contains a mapping relation between image pixel points and the distance data, and the distance data is determined by the radar data;
acquiring characteristic pixel points of the content to be identified based on the image data;
and acquiring distance data with a mapping relation with the characteristic pixel points according to the distance map.
In another embodiment of the present application, an electronic device is provided. The apparatus, comprising: a memory and a processor, wherein,
the memory is used for storing programs;
the processor, coupled with the memory, to execute the program stored in the memory to:
acquiring image data and radar data of an object area;
obtaining a distance map based on the image data and the radar data, wherein the distance map contains a mapping relation between image pixel points and the distance data, and the distance data is determined by the radar data;
acquiring characteristic pixel points of a queuing team based on the image data;
and determining the queuing length according to the distance data in the distance map, which has a mapping relation with the characteristic pixel points of the queuing team.
In another embodiment of the present application, an electronic device is provided. The apparatus, comprising: a memory and a processor, wherein,
the memory is used for storing programs;
the processor, coupled with the memory, to execute the program stored in the memory to:
acquiring image data and radar data of an object area;
obtaining a distance map based on the image and the radar data to provide data support when distance data needs to be determined based on image data;
the distance map contains a mapping relation between image pixel points and distance data, and the distance data is determined by the radar data.
In one embodiment of the present application, a display device is provided. The display device includes: a memory, a processor, and a display; wherein,
the memory is used for storing programs;
the processor, coupled with the memory, to execute the program stored in the memory to:
controlling the display to display an interactive interface;
responding to an object area designated by a user through an interactive interface, and acquiring image data and radar data of the object area;
determining distance data corresponding to contents to be identified in the image data according to the image data and the radar data; wherein the distance data is determined from the radar data;
and controlling the display to display the distance data.
In one technical scheme provided by the embodiment of the application, a distance map is obtained by fusing image data and radar data of an object area; after the characteristic pixel points of the contents to be identified in the image data are obtained, the distance data which has a mapping relation with the characteristic pixel points can be obtained by using the distance map, the scheme is simple and easy to realize, and the calculation efficiency and the calculation precision are higher.
In another technical scheme provided by the embodiment of the application, the distance map is obtained according to the image data and the radar data of the object area, the distance map provides convenience for determining the distance corresponding to the content to be identified in the image data based on the image data, the accuracy is high, and the problem of low accuracy of distance measurement based on visual images in the prior art is favorably solved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic flow chart of a data processing method according to an embodiment of the present application;
fig. 2a is a schematic flowchart of a data processing method according to another embodiment of the present application;
fig. 2b is a schematic diagram of a specific implementation of the data processing method provided in the embodiment of the present application in a specific application scenario;
fig. 2c is a schematic diagram of an implementation of inputting an object region and content to be recognized by a user through an interactive interface according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a data processing method according to another embodiment of the present application;
FIG. 4 is a diagram of a visual scene provided by an embodiment of the present application;
fig. 5a is a schematic view of an application scenario provided in an embodiment of the present application;
fig. 5b is a schematic view of an application scenario provided in another embodiment of the present application;
fig. 6a is a schematic diagram illustrating a queue length calculation in the data processing method according to an embodiment of the present application;
FIG. 6b is a schematic flowchart illustrating a data processing method for determining a queue length according to an embodiment of the present application;
fig. 7 is a schematic flowchart of a data processing method according to another embodiment of the present application;
fig. 8 is a schematic diagram of a theoretical flow of distance map generation in the data processing method according to an embodiment of the present application;
FIG. 9 is a schematic diagram illustrating a theoretical flow of vehicle queue length determination in a data processing method according to an embodiment of the present application;
fig. 10 is a schematic flowchart of a data processing method according to an embodiment of the present application;
fig. 11 is a block diagram of a monitoring system according to an embodiment of the present application;
fig. 12 is a schematic diagram illustrating an implementation of distance data determination by an unmanned aerial vehicle according to an embodiment of the present application;
fig. 13 is a schematic flowchart of a data processing method according to an embodiment of the present application;
fig. 14 is a block diagram of a data processing apparatus according to an embodiment of the present application;
fig. 15 is a block diagram of a data processing apparatus according to another embodiment of the present application;
fig. 16 is a block diagram of a data processing apparatus according to another embodiment of the present application;
fig. 17 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Although millimeter-wave radar-based methods, both alone and in combination with video data, have been used in a variety of scenarios, such as traffic situation awareness, it is difficult to effectively distinguish stationary objects (e.g., vehicles) from the background (e.g., roads) due to the limited ranging accuracy of millimeter-wave radar. A simple and effective fusion method for improving the accuracy of distance measurement based on visual images is therefore still lacking.
To this end, the present application provides the following embodiments to solve or partially solve the above-mentioned existing problems. In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Further, in some flows described in the specification, claims, and above-described figures of the present application, a number of operations are included that occur in a particular order, which operations may be performed out of order or in parallel as they occur herein. The sequence numbers of the operations, e.g., 101, 102, etc., are used merely to distinguish between the various operations, and do not represent any order of execution per se. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel. It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor limit the types of "first" and "second" to be different.
Fig. 1 shows a schematic flow chart of a data processing method according to an embodiment of the present application. The execution subject of the method may be a server connected with the image sensor and the radar; an image sensor with processing capability (such as a camera); a client connected with the image sensor and the radar; or a movable device capable of acquiring image data and radar data. The client may include any terminal device such as a mobile phone, a tablet computer or a smart wearable device; the server may be a common server, a cloud server, a virtual server, or the like; and the movable device may be, for example, an unmanned aerial vehicle or a robot. As shown in fig. 1, the method includes:
101. and acquiring image data and radar data of the object area.
102. And obtaining a distance map based on the image data and the radar data, wherein the distance map contains a mapping relation between image pixel points and the distance data, and the distance data is determined by the radar data.
103. And acquiring the characteristic pixel points of the content to be identified based on the image data.
104. And acquiring distance data with a mapping relation with the characteristic pixel points according to the distance map.
In the above 101, the image data may be collected by an image sensor, and the radar data may be obtained by measuring the object region with a radar, such as a laser radar. In a specific implementation, the object region may be input by a user through an interactive interface. For example, in a public transportation video monitoring system, image sensors and radars are arranged at a plurality of roads and intersections; a user can input a road name or label through the interactive interface, and the image data and radar data acquired by the image sensor and radar arranged on the roadside, or on a portal frame erected above the road, can then be obtained. Alternatively, the execution subject of the method provided by this embodiment is a movable device, such as an unmanned aerial vehicle or a robot; the movable device is provided with an image sensor and a radar, and a user can control the movable device to move to the object area through an interaction means provided by a client (such as remote control), and then control the image sensor and radar on the movable device to acquire image data and radar data of the object area.
In an implementation manner, in 102, the step of obtaining the distance map based on the image data and the radar data may specifically include:
1021. converting distance data corresponding to any monitoring object contained in the radar data into an image coordinate system in which the image data is located through projection to obtain pixel points of the monitoring object mapped in the image coordinate system;
1022. associating the pixel points with the distance data for addition to the distance map.
At 1021, the monitoring object may be any type of object, such as a person, a vehicle, an object, and so on. The process of projective transformation of the distance data into the image coordinate system will be described further below, and reference is made to the corresponding contents below.
In 103, the content to be identified may be a system default content. For example, in a road monitoring scene, the content to be identified may be a target vehicle, a vehicle at the tail of a queue, a vehicle at the head of the queue, or the like; under the scene of monitoring the flow of people, the content to be identified can be target personnel, personnel at the tail of a queue of people, personnel at the head of the queue of people and the like. Of course, the content to be recognized may also be input by the user, for example, the user wants to know a target object in the image data, such as a person, a vehicle or an object, and the distance data from the image data and the acquisition position of the radar data; the user can input the content to be identified through modes provided on the interactive interface, such as clicking, inputting in an input box and the like; still alternatively, the user wants to know the distance between two persons in the image data; at this time, the user can use the images of the two people as the content to be identified through the interactive interface.
In specific implementation, the step 103 of obtaining the feature pixel point of the content to be identified based on the image data may specifically include: performing image recognition on the image data to identify target content; and extracting characteristic pixel points of the target content from the image data. Wherein the target content comprises at least one of: people, vehicles, objects, people queue, vehicles queue, objects display queue.
In general, the target content identified from the image data is composed of a plurality of pixel points. If the target content is a person or a vehicle, the pixel points of the person or the vehicle in the image data, which are concentrated in the center, can be used as the characteristic pixel points of the target content.
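As an illustration only, the following Python sketch shows one possible way to represent such a distance map and to look up the distance data for a characteristic pixel point. The data structure, the function names and the nearest-neighbour fallback are assumptions made for the sketch and are not part of the claimed method.

```python
from typing import Dict, Optional, Tuple

Pixel = Tuple[int, int]            # (u, v) image coordinates
DistanceMap = Dict[Pixel, float]   # image pixel point -> distance data (e.g., metres)

def feature_pixel_of_box(box: Tuple[int, int, int, int]) -> Pixel:
    """Use the centre pixel of a detected object's bounding box (x1, y1, x2, y2)
    as its characteristic pixel point."""
    x1, y1, x2, y2 = box
    return (x1 + x2) // 2, (y1 + y2) // 2

def lookup_distance(distance_map: DistanceMap, pixel: Pixel,
                    max_radius: int = 5) -> Optional[float]:
    """Return the distance mapped to `pixel`; because radar projections are sparse,
    fall back to the nearest mapped pixel within `max_radius` pixels."""
    if pixel in distance_map:
        return distance_map[pixel]
    candidates = [(abs(u - pixel[0]) + abs(v - pixel[1]), d)
                  for (u, v), d in distance_map.items()
                  if abs(u - pixel[0]) <= max_radius and abs(v - pixel[1]) <= max_radius]
    return min(candidates)[1] if candidates else None

# Example: a vehicle detected at box (100, 220, 180, 300), with a radar projection
# stored at the nearby pixel (140, 262).
print(lookup_distance({(140, 262): 37.5}, feature_pixel_of_box((100, 220, 180, 300))))
```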
In one technical scheme provided by the embodiment of the application, a distance map is obtained by fusing image data and radar data of an object area; after the characteristic pixel points of the content to be identified in the image data are obtained, the distance data which has a mapping relation with the characteristic pixel points can be obtained by using the distance map.
Fig. 2a is a schematic flow chart illustrating a data processing method according to another embodiment of the present application. As shown in fig. 2a, the method includes:
201. receiving the object region input by a user based on an interactive interface.
202. And acquiring image data and radar data of the object area.
203. Obtaining a range map based on the image data and the radar data.
The distance map contains a mapping relation between image pixel points and distance data, and the distance data is determined by the radar data.
204. And acquiring the characteristic pixel points of the content to be identified according to the image data.
It is added here that the content to be recognized may also be input by the user via an interactive interface.
205. And acquiring distance data with a mapping relation with the characteristic pixel points according to the distance map.
206. And displaying the distance data on a display interface.
Referring to fig. 2b, a user may input an object region through the interactive interface 41 provided by the client device 400. As shown in fig. 2b, the user may input the corresponding information, e.g., a road, in the input box corresponding to the name or label of the object area on the interactive interface 41. Of course, the user can also click a control of that input box (such as a drop-down control) and select the target object area from the drop-down list.
Further, as shown in fig. 2b, the user can also input the content to be recognized through the interactive interface 41. As shown in fig. 2b, the user may input the corresponding content "vehicle queue tail vehicle" through the input box corresponding to the content to be recognized on the interactive interface 41.
After the steps 202, 203, 204 and 205, the acquired distance data is displayed in the display interface 42 as shown in fig. 2 b.
In another implementation, the above-mentioned interactive interface 41 and display interface 42 can also be implemented by using the style shown in fig. 2 c. For example, in the example shown in fig. 2c, page elements corresponding to a plurality of regions are displayed on the first interface 401 ', but in specific implementation, image data of each region may be directly used as a page element to be displayed on the first interface 401'. The user can select the object region by clicking. Assume that the user selects region 6 as the object region; the image data of the object region is displayed on the second interface 402'; the user may specify the content to be identified by moving a frame, which the user moves to the avatar of a target character in a queue of people, as in the example shown in fig. 2 c; after the above steps 202, 203, 204 and 205, the distance data of the target person from the image capturing position can be obtained and displayed on the second interface 402'.
The technical scheme shown in each embodiment provided by the application is suitable for various scenarios in which distance data is acquired by using image data. If a user wants to know the distance between a person, a vehicle or an object in the image data and the acquisition position of the image data and radar data, or wants to know the distance between two persons, two vehicles or two objects in the image data, this can be implemented by adopting the scheme provided by the above embodiments. After the distance data corresponding to the characteristic pixel points is acquired, corresponding information, such as a queue length, can be determined from the acquired distance data. The following embodiments provide a queue-length determination scheme implemented based on the methods of the above embodiments, and some steps of the above embodiments are described in detail in the following embodiments.
Fig. 3 is a schematic flowchart illustrating a data processing method according to an embodiment of the present application. The execution subject of the method may be a server connected with the image sensor and the radar, an image sensor with processing capability (such as a camera), or a client connected with the image sensor and the radar. The client may include any terminal device such as a mobile phone, a tablet computer or a smart wearable device. The server may be a common server, a cloud server, a virtual server, or the like, which is not specifically limited in the embodiments of the present application. Specifically, as shown in fig. 3, the method provided in this embodiment includes:
301. image data and radar data of the object region are acquired.
302. And obtaining a distance map based on the image data and the radar data, wherein the distance map contains a mapping relation between image pixel points and the distance data, and the distance data is determined by the radar data.
303. And acquiring characteristic pixel points of the queuing team based on the image data.
304. And determining the queuing length according to the distance data in the distance map, which has a mapping relation with the characteristic pixel points of the queuing team.
In 301, image data is acquired by an image sensor. The radar data may be obtained by any one of a millimeter wave radar, a microwave radar, a laser radar, and the like, which is not specifically limited herein. For example, the radar data in the embodiments of the present application may be measured by a millimeter wave radar. A millimeter wave radar is a radar that detects targets in the millimeter wave band; it has the characteristics of small size, high resolution and all-weather operation, and is less affected by illumination conditions and weather. When a target object (such as a vehicle, a pedestrian or another object) is detected by the millimeter wave radar, the signal reflected by the target, corresponding to the millimeter wave signal transmitted toward the forward target, is subjected to a series of processing operations such as amplification, coherent detection and mixing based on the Doppler principle to extract an effective signal. From this effective signal, the millimeter wave radar can accurately obtain the motion state information of the target object relative to the radar, such as the relative distance, the relative speed and the direction angle. The direction angle may include an azimuth angle and an elevation angle: the azimuth angle is the angle, in the horizontal plane, between the projection of the relative distance onto the horizontal plane and a reference starting direction (such as true north), and the elevation angle is the angle, in the vertical plane, between the relative distance and its projection onto the horizontal plane.
In addition, the step of 302 "acquiring a distance map based on the image data and the radar data" may be a step that is continuously performed over time. For example, the updating is performed once at regular intervals (e.g., 1s, 10s, 30s, etc.) to continuously update the distance map. Namely, the method provided by the present embodiment further includes:
and periodically fusing images acquired by the image sensor and radar data measured by the radar to update the distance map.
The radar data measured by the radar can comprise distance data corresponding to at least one monitored object. In one embodiment, as shown in fig. 4, the distance data may comprise motion state information such as a range value R (i.e., the relative distance between the vehicle 11, as the monitored object, and the radar 12), a relative speed, a direction angle, and the like. The direction angle may include an azimuth angle β and an elevation angle α: the azimuth angle β is the angle between the projection of the range value R onto the horizontal plane (the plane of the road shown in fig. 4) and the driving direction of the vehicle 11, and the elevation angle α is the angle between the range value R and its projection onto the horizontal plane. Accordingly, in one implementation, the distance data may be directly associated with image pixel points to obtain the distance map. That is, in the above 302, "obtaining a distance map based on the image data and the radar data" may be specifically implemented by the following steps:
3011. and converting the distance data corresponding to any monitoring object contained in the radar data into an image coordinate system where the image is located through projection to obtain pixel points of the monitoring object mapped in the image coordinate system.
3012. Associating the pixel points with the distance data for addition to the distance map.
3011, a first conversion relationship between a radar coordinate system corresponding to the radar and a sensor coordinate system corresponding to the image sensor (e.g., a camera) may be obtained; acquiring a second conversion relation between a sensor coordinate system and an image coordinate system (also called as a pixel coordinate system) according to an imaging principle of the image sensor; and then, combining the first conversion relation and the second conversion relation to obtain a third conversion relation between the radar coordinate system and the image coordinate system. The transformation relationship of the coordinate system can be understood as a transformation matrix. Based on the obtained third conversion relationship, the projection point (i.e. pixel point) of the monitoring object (i.e. receiving the echo and obtaining the distance data based on the echo) measured by the radar on the image coordinate system can be obtained. It can be understood that: the distance data of any monitored object corresponds to a coordinate value (here, it is assumed to be a first coordinate value) in the radar coordinate system, and a second coordinate value in the image coordinate system can be obtained by using the third conversion relationship, and a pixel point corresponding to the second coordinate value is a mapping point corresponding to the monitored object. Wherein the first conversion relation is obtained according to installation parameters of the radar and the image sensor; the installation parameters may include: the distance difference between the image sensor and the radar in the left-right direction, the up-down direction, the front-back direction, the included angle between the image sensor and the road direction, the included angle between the image sensor and the horizontal plane, the radar installation height and the like. At the same time, the acquired images and radar data need to be aligned in time.
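For illustration, the sketch below expresses the three conversion relationships as matrices. The intrinsic values, the identity extrinsics and the function name are assumptions used only to make the example self-contained; the actual matrices would come from the installation parameters and calibration described above.

```python
import numpy as np

# First conversion relationship: radar coordinate system -> sensor (camera) coordinate
# system, a 4x4 rigid transform built from the installation parameters (offsets, angles).
T_radar_to_cam = np.eye(4)   # placeholder extrinsics, to be filled in from calibration

# Second conversion relationship: sensor coordinate system -> image (pixel) coordinate
# system, i.e. the camera intrinsic matrix obtained from the imaging principle.
K = np.array([[1000.0,    0.0, 960.0],
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])   # assumed focal lengths and principal point

def radar_point_to_pixel(point_radar_xyz):
    """Third conversion relationship: map a 3-D point in the radar coordinate system
    (the first coordinate value) to an image pixel (the second coordinate value)."""
    p = np.append(np.asarray(point_radar_xyz, dtype=float), 1.0)  # homogeneous coords
    p_cam = T_radar_to_cam @ p                                    # first conversion
    if p_cam[2] <= 0:
        return None                                               # behind the camera
    u, v, w = K @ p_cam[:3]                                       # second conversion
    return int(round(u / w)), int(round(v / w))

# A radar detection given as (R, alpha, beta) would first be converted to Cartesian
# coordinates in the radar frame before calling radar_point_to_pixel.
```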
In another implementation, the distance data associated with image pixels in the distance map is a horizontal distance of the monitored object relative to the radar in a set direction parallel to a horizontal plane. That is, the step 3012 "associate the pixel point with the distance data to be added to the distance map" may specifically include:
according to the distance measurement value and the direction angle, calculating the horizontal distance of the monitored object relative to the radar in a set direction parallel to the horizontal plane;
associating the pixel points with the horizontal distance for addition to the distance map.
With continued reference to the schematic diagram of the specific application scenario shown in fig. 4, the radar 12 is installed beside the road on a vertical rod, and the radar data obtained with the side-mounted radar 12 contain the range value R and the direction angle of the monitored object (i.e., the vehicle 11 in fig. 4) relative to the radar 12, where the direction angle includes an azimuth angle β and an elevation angle α. Referring to fig. 4, the radar 12, the vehicle 11 and the mounting position A of the upright form a first right triangle. Using the triangle relationship of this right triangle, with the height of the vertical rod (i.e., the distance h between points A and B), the elevation angle α and the range value R all known, the distance O'A in fig. 4 can be calculated as
O'A = sqrt(R^2 - h^2) = R·cos(α).
AOO' is a second right triangle, in which the side O'A obtained above is the hypotenuse; knowing the azimuth angle β, the horizontal distance D can be calculated as
D = O'A·cos(β) = sqrt(R^2 - h^2)·cos(β).
Here, it should be noted that: the radar is installed in different ways, and the corresponding horizontal distance is calculated in different ways. For example, the radar extends into the roadway through an extension rod arranged on a vertical rod beside the roadway. In this case, the azimuth β in the direction angle of the range data measured by the radar may be small (e.g., within an error range) or zero, and the horizontal range corresponding to the monitored object (e.g., the vehicle 11 in fig. 4) is the projected range of the range value R.
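The calculation above can be sketched in a few lines of Python; the numeric values in the example are assumptions, and for an extension-rod installation (β close to zero) the function simply reduces to the projected range of R.

```python
import math

def horizontal_distance(range_m: float, azimuth_deg: float, mount_height_m: float) -> float:
    """Horizontal distance D along the set direction for a side-mounted radar on a
    pole of height h (geometry of fig. 4): O'A = sqrt(R^2 - h^2), D = O'A * cos(beta)."""
    oa = math.sqrt(max(range_m ** 2 - mount_height_m ** 2, 0.0))
    return oa * math.cos(math.radians(azimuth_deg))

# Assumed example values: R = 52.3 m, beta = 4 degrees, pole height h = 6.0 m.
print(round(horizontal_distance(52.3, 4.0, 6.0), 2))
```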
In this embodiment, associating the pixel points with the horizontal distance makes the subsequent queue-length calculation more convenient. For example, the distance corresponding to the pixel point at the head of the queue and the distance corresponding to the pixel point at the tail of the queue, both obtained from the distance map, are distances to a fixed position (e.g., the position of point O shown in fig. 4); when calculating the queue length, the two distances only need to be subtracted. However, the distance map needs to be updated over time by continuously fusing newly acquired image data and radar data, which tends to increase the data throughput. Since the distance map is only used under specific conditions (such as queuing, or on user request), the data processing amount can be reduced by associating only the raw distance data of the monitored objects in the radar data with the image pixel points when the distance map is generated, and by computing the queue length from the distance data obtained through the distance map only when such an event occurs.
In 303, the characteristic pixel points of the queuing team may include only the queue tail pixel points, or may include both the queue tail pixel points and the queue head pixel points. Including only the queue tail pixel points is suitable for scenes in which the position of the queue head is fixed. For example, the image sensor and the radar arranged at an intersection may be installed at the queue head position, i.e., at the stop line. In this case, the queue head pixel point in the image does not need to be identified, and the distance between the stop line and the image sensor and radar is a known quantity; the queue length can be calculated by only identifying the queue tail pixel points in the image and acquiring their corresponding distances from the distance map. Including both the queue tail pixel points and the queue head pixel points is suitable for scenes in which the position of the queue head is not fixed. For example, on a highway, vehicles may queue because of a traffic accident, and the accident location is not fixed, so the queue tail pixel points and the queue head pixel points both need to be identified.
In addition, this embodiment can identify the monitored objects (such as people or vehicles) arranged at the head or the tail of the queue in the image data by using an image recognition technology, and then take a pixel point on the monitored object at the head or tail of the queue as the queue head or queue tail pixel point, respectively. For example, the image recognition can be realized with a neural network model: a trained neural network model is used to recognize the monitored objects in a queuing state in the image and to identify the monitored object arranged at the tail of the queue; the queue tail pixel point is then determined according to the recognition result.
In 304, the queuing length can be determined by directly querying the distance map for the distance corresponding to the characteristic pixel points of the queuing team and using that distance.
In one technical solution provided in this embodiment, a distance map is obtained by fusing the image and the radar data in a monitored area, and when a queue of monitored objects occurs in the area, the image is subjected to image recognition to determine characteristic pixel points of the queue; further, according to the distance of the characteristic pixel points of the queuing team in the distance map, the calculation of the queuing length is completed; the scheme is simple and easy to realize, and has higher computational efficiency and precision.
In an implementable solution, the characteristic pixel points of the queuing team include the queue tail pixel points. Correspondingly, step 303 of performing image recognition on the image to obtain the characteristic pixel points of the queuing team in the image specifically includes the following steps:
3021. identifying a monitoring object in a queuing state in the image;
3022. determining a monitoring object arranged at the tail of the queue as a target object based on the recognition result;
3023. and taking a pixel point on the target object as a pixel point at the tail of the queuing queue.
In 3021, the monitored objects in the captured image may be identified by using an existing image recognition technology; for example, a deep-learning target detection technology (i.e., a neural network technology) is used to identify the monitored objects in a queuing state in the image. Specifically, as shown in fig. 5a, the monitored objects in an image A acquired by an image sensor on a road are vehicles. The image A is input into a pre-trained target deep learning network model, and each vehicle in the visual scene and the corresponding position information of each vehicle are determined according to the output of the model. The vehicles in a queuing state are determined according to the position information, and the vehicle arranged at the tail of the queue is determined as the target object.
In an embodiment, in 3023, the "taking a pixel on the target object as the queue tail pixel" may specifically be implemented by the following steps:
s11, acquiring a pixel point set belonging to the target object in the image;
and S12, taking the pixel point in the center of the pixel point set as the tail pixel point of the queue, or taking a pixel point in the pixel point set closest to the upper boundary of the image as the tail pixel point of the queue, or taking a pixel point in the pixel point set closest to the lower boundary of the image as the tail pixel point of the queue.
For example, fig. 5a shows an image a with a vehicle 01 as the determined target object; after the pixel point set belonging to the target object 01 in the image A is obtained, a pixel point a in the center of the pixel point set can be used as the queue tail pixel point, or a pixel point b closest to the upper boundary 1 of the image A in the pixel points can be used as the queue tail pixel point. In another image B shown in fig. 5B, the vehicle 02 is the determined target object; after the pixel point set belonging to the target object 02 in the image B is obtained, the pixel point c in the center of the pixel point set can be used as the queue tail pixel point, or the pixel point d closest to the lower boundary 2 of the image B in the pixel point set can be used as the queue tail pixel point. The image a and the image B are different in the arrangement position of the image sensor. During specific implementation, the vehicles at the tail of the queue and the corresponding pixels at the tail of the queue can be determined according to the running direction of the actual vehicles in the image.
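A possible implementation of this pixel selection is sketched below. The pixel set would come from the detected mask or bounding box of the tail vehicle; the mode names are assumptions of the sketch.

```python
from typing import Iterable, Tuple

Pixel = Tuple[int, int]   # (u, v); v grows towards the lower boundary of the image

def queue_tail_pixel(pixels: Iterable[Pixel], mode: str = "center") -> Pixel:
    """Pick the queue-tail pixel from the pixel set of the tail vehicle.
    mode: 'center' - pixel closest to the centroid of the set,
          'upper'  - pixel closest to the upper image boundary (smallest v),
          'lower'  - pixel closest to the lower image boundary (largest v)."""
    pts = list(pixels)
    if mode == "upper":
        return min(pts, key=lambda p: p[1])
    if mode == "lower":
        return max(pts, key=lambda p: p[1])
    cu = sum(p[0] for p in pts) / len(pts)
    cv = sum(p[1] for p in pts) / len(pts)
    return min(pts, key=lambda p: (p[0] - cu) ** 2 + (p[1] - cv) ** 2)

# For image A the 'upper' mode corresponds to pixel b, for image B the 'lower' mode
# corresponds to pixel d, matching the two camera placements described above.
```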
In a realizable scheme, the position of the queue head is fixed. Correspondingly, step 304 of determining the queuing length according to the distance in the distance map that has a mapping relationship with the characteristic pixel points of the queuing team may specifically include the following steps:
acquiring a first distance corresponding to the position of a preset queuing head;
acquiring a second distance which has a mapping relation with the queue tail pixel point from the distance map;
and determining the queuing length according to the first distance and the second distance.
The first distance is the distance from the head of the queue to a reference point (e.g., the position of point O shown in fig. 4), which may be a known quantity. When the second distance is raw distance data (a range value and direction angle), the horizontal distance between the queue tail pixel point and the position of point O can be calculated as described above with reference to fig. 4, and the difference between the first distance and this calculated horizontal distance is the queuing length. If the second distance is already a horizontal distance, the difference between the first distance and the second distance is calculated directly to obtain the queuing length. The calculation of the horizontal distance is described in the corresponding content above and is not repeated here.
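For the fixed-head case the computation reduces to a single subtraction, as in the minimal sketch below; the map layout and the example numbers are assumptions.

```python
def queue_length_fixed_head(distance_map, tail_pixel, head_distance_m):
    """Queue length when the queue head position (e.g., a stop line) is fixed.
    `head_distance_m` is the preconfigured first distance to the reference point;
    the second distance is read from the distance map (assumed to already hold
    horizontal distances) at the queue-tail pixel point."""
    second_distance = distance_map[tail_pixel]
    return abs(head_distance_m - second_distance)

# Assumed example: stop line 8.0 m from the reference point, tail vehicle mapped
# to a horizontal distance of 65.0 m, giving a queue length of 57.0 m.
print(queue_length_fixed_head({(412, 183): 65.0}, (412, 183), 8.0))
```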
In addition to the queue tail pixel points, the characteristic pixel points of the queuing team may also include the queue head pixel points. Correspondingly, step 304 of the above embodiment, "determining the queuing length according to distance data in the distance map that has a mapping relation with the characteristic pixel points of the queuing team", may specifically include the following steps:
3031. acquiring a queuing head pixel point;
3032. acquiring a first distance having a mapping relation with the pixels at the head of the queue and a second distance having a mapping relation with the pixels at the tail of the queue from the distance map;
3033. and calculating the queuing length according to the first distance and the second distance.
In 3031, the queuing head pixel point may be obtained by one or more of the following methods:
the first method is as follows: and acquiring a pre-configured queuing head pixel point.
For example, in a scenario, such as the scenario shown in fig. 6a, a certain pixel on the stop line 500 may be pre-configured as the pixel at the head of the queue. And storing the pixel point at the head of the queue as a fixed value, and calling and taking out the pixel point from the corresponding storage area when needed.
The second method comprises the following steps: and identifying the monitoring object arranged at the head of the queue in the image, and taking a pixel point on the monitoring object arranged at the head of the queue as the pixel point at the head of the queue.
For example, in the image a shown in fig. 5a, the vehicle 03 aligned in the image a can be identified by using an image identification technology, and similarly, the pixel point set corresponding to the vehicle 03 in the image a is obtained, and one pixel point in the pixel point set corresponding to the vehicle 03 is used as the head pixel point of the queue.
3032, since the distance map includes the mapping relationship between the image pixel point and the distance, after the queuing head pixel point and the queuing tail pixel point are determined, the first distance having the mapping relationship with the queuing head pixel point and the second distance having the mapping relationship with the queuing tail pixel point can be directly obtained from the distance map through the mapping relationship.
3033 "calculating the queue length according to the first distance and the second distance" will be described in detail with reference to an application scenario shown in fig. 6 a. As shown in fig. 6a, a pixel point F on the monitoring object arranged at the head of the queue is used as the queuing head pixel point; and taking a pixel point E closest to the upper boundary of the image in the pixel point set at the tail of the queuing queue as a pixel point at the tail of the queuing queue. The first distance which is obtained from the distance map and has a mapping relation with the queuing head pixel point F is a first distance value R2 and a first direction angle (including an elevation angle alpha 2 and an azimuth angle beta 2); a second distance which is obtained from the distance map and has a mapping relation with the queuing queue tail pixel point E is a second distance value R1 and a second direction angle (including an elevation angle alpha 1 and an azimuth angle beta 1); according to the first distance measurement value R2, the first direction angle, the second distance measurement value R1 and the second direction angle, the queuing length of the vehicle can be calculated
L = R1·cos(α1)·cos(β1) - R2·cos(α2)·cos(β2), i.e., the difference between the horizontal projections of the two range values along the driving direction.
In another implementation, the radar data are processed during the generation of the distance map so that the distances associated with image pixel points in the distance map are already horizontal distances; in that case, the difference between the first distance and the second distance may be taken directly as the queuing length.
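A small sketch of this calculation, following the reconstruction above; the function names and the example numbers are illustrative assumptions.

```python
import math

def horizontal_projection(range_m: float, elevation_deg: float, azimuth_deg: float) -> float:
    """Project a range value onto the driving direction: R * cos(alpha) * cos(beta)."""
    return (range_m
            * math.cos(math.radians(elevation_deg))
            * math.cos(math.radians(azimuth_deg)))

def queue_length(r_tail, alpha_tail, beta_tail, r_head, alpha_head, beta_head):
    """Queue length from the distance data mapped to the queue-tail pixel E
    (R1, alpha1, beta1) and the queue-head pixel F (R2, alpha2, beta2)."""
    return abs(horizontal_projection(r_tail, alpha_tail, beta_tail)
               - horizontal_projection(r_head, alpha_head, beta_head))

# Assumed example values for R1/alpha1/beta1 and R2/alpha2/beta2:
print(round(queue_length(80.0, 4.0, 6.0, 20.0, 16.0, 20.0), 2))
```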
The image sensor can acquire the region multiple times within an acquisition period, and the acquired video stream contains multiple frames of images. Whether the monitored objects are in a queuing condition is judged based on consecutive frames of images. That is, the method provided in this embodiment may further include:
304a, performing behavior tracking on a monitored object appearing in the area based on the multi-frame image acquired by the image sensor aiming at the area to obtain tracking data;
304b, determining whether the queuing of the monitoring objects occurs in the area or not according to the tracking data.
In 304a, after real-time analysis of the multiple frames of images acquired by the image sensor for the area yields the image regions where the monitored objects are located, correlation analysis may be performed between adjacent frames, or across several adjacent frames, for the monitored objects appearing in the area by using tracking methods such as particle filtering and Kalman filtering, so as to achieve continuous tracking of the monitored objects. Specifically, step 304a of "performing behavior tracking on the monitored objects appearing in the region based on the multiple frames of images collected by the image sensor for the region to obtain tracking data" may be implemented by the following steps:
304a1, detecting a monitoring object appearing in at least some frames of the multi-frame image;
304a2, determining the motion parameter of the monitored object based on the detected position of the monitored object in the corresponding frame image; wherein the motion parameter comprises at least one of: the moving speed and the moving distance.
In 304a1, for the process of detecting and identifying the monitoring object appearing in at least some frames of the multi-frame image, reference may be made to corresponding contents of the image identification in the foregoing embodiments, which is not described herein again.
304a2, after detecting the position of the monitoring object in the corresponding frame image, based on the position of the same monitoring object in the adjacent frame image, the moving distance of the monitoring object in the adjacent frame time interval can be determined, and the moving speed of the monitoring object can be obtained according to the moving distance and the time interval.
304b, the trace data may include, but is not limited to: at least one of the monitoring objects moves at a speed and/or distance. When the motion parameters corresponding to the monitoring objects with the number exceeding the preset number in the at least one monitoring object meet the preset conditions, the situation that the monitoring objects queue in the area can be determined. For example: and determining the monitoring object with the motion parameter meeting the preset condition according to the monitoring object with the motion distance smaller than the preset distance threshold or the motion speed smaller than the preset speed threshold.
In practice, in the multiple frames of images acquired by the image sensor, monitored objects are inevitably occluded under the influence of various factors, so there are missed detections when the monitored objects in the images are detected and identified using existing detection technology. For example, in a queued fleet, the vehicle at the front of the queue is a large truck and the vehicle behind it is a small car; in the image acquired by the image sensor, only a small part of the car's tail is visible. When image recognition is performed, the car is very likely not to be recognized because its feature information is limited. In order to improve the accuracy of monitored-object identification, radar data can be fused, in addition to the image recognition technology, to identify detection objects that are occluded in the image, thereby improving detection precision. Specifically, the method may further include:
305a, detecting an occluded monitored object which is not detected in at least partial frames of the multi-frame image based on radar data measured by the radar for the area during the multi-frame image acquisition;
305b, under the condition that the shielded monitoring object exists, combining the multi-frame images and radar data measured by the radar for the area during the acquisition of the multi-frame images, and determining the position of the shielded monitoring object in the corresponding frame image.
Taking an actual application example, the radar data comprises position information of the monitored object, when the shielded monitored object is detected, the radar data can be projected onto an image corresponding to the radar data actually based on a mapping relation between the radar data and image pixel points, and a first target area where the monitored object exists and obtained by utilizing radar detection is determined; performing target identification on an image corresponding to the radar data to obtain a second target area where the monitored object exists, wherein the second target area is obtained by using an image identification method; based on the degree of association between the first target region and the second target region, a first monitored object (i.e., a detected object detected by both radar and image), a second monitored object (i.e., a monitored object detected by image but not detected by radar), and a third monitored object (i.e., a monitored object detected by radar but not detected by image) are determined, respectively. For the second monitored object, performing feature similarity measurement by using a second target region corresponding to the second monitored object and the extracted first target region (or second target region) corresponding to the first monitored object, and determining whether the second monitored object is an occluded monitored object; for the third monitored object, a basic probability assignment of the third monitored object can be obtained by using the velocity of the monitored object contained in the radar data, and the basic probability assignment can be obtained based on the following equations (1) and (2):
[Equations (1) and (2), which define the basic probability assignment m_v from the radar-measured target velocities, appear only as images in the original publication.]
where V_x(k), V_y(k), and V_r(k) in equations (1) and (2) are, respectively, the x velocity, y velocity, and radial velocity of the k-th target detected by the radar; n is the number of monitored objects detected by the radar; and m_v is the basic probability assignment.
The basic probability assignment is then compared with the average basic probability assignment corresponding to the first monitored objects to determine whether the third monitored object is an occluded monitored object, where the average basic probability assignment can also be obtained from equations (1) and (2). For example, if the basic probability assignment is greater than the average basic probability assignment and less than 1, the third monitored object is determined to be an occluded monitored object.
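A brief sketch of the association step described above is given below. Using intersection-over-union (IoU) as the degree of association between the first and second target regions, and the chosen threshold, are assumptions for illustration; the basic probability assignment of equations (1) and (2) is not reproduced here, since the formulas appear only as images in the original publication.

```python
# Illustrative split of detections into the three categories described above,
# by associating radar-projected regions with image-detected regions.

def iou(a, b):
    """Boxes given as (x1, y1, x2, y2) in image coordinates."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def split_detections(radar_boxes, image_boxes, thresh=0.3):
    both, radar_only = [], []
    matched_img = set()
    for r in radar_boxes:
        best = max(range(len(image_boxes)),
                   key=lambda i: iou(r, image_boxes[i]), default=None)
        if best is not None and iou(r, image_boxes[best]) >= thresh:
            both.append((r, image_boxes[best]))   # first monitored objects
            matched_img.add(best)
        else:
            radar_only.append(r)                  # third monitored objects
    image_only = [b for i, b in enumerate(image_boxes)
                  if i not in matched_img]        # second monitored objects
    return both, image_only, radar_only
```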
Further, the method may further include:
306a, in the case that the monitored objects are queuing in the area, counting the number of queued monitored objects and outputting the count.
Further, the method may further include:
307. sending the queuing length to a client, so that the queuing length corresponding to the area and/or a highlighted element corresponding to the queuing length is displayed on a map shown by the client; wherein the highlighted element comprises at least one of: a highlighted color and a dynamic effect.
For example, the highlighted color may be a relatively prominent color such as yellow or red; the dynamic effect may be a breathing (pulsing) effect or an alternating bright-and-dark flashing effect.
In summary, the data processing method provided in this embodiment can be summarized as the process shown in fig. 6b. That is, the data acquisition region of the image sensor 200 and that of the radar 300 are the same region 100. A distance map can be obtained by fusing the image acquired by the image sensor 200 and the radar data acquired by the radar 300; a monitored object (such as a vehicle) is tracked based on the images acquired by the image sensor 200, or based on the images together with the radar data; when it is determined from the tracking data that the monitored objects are queuing in the region 100, the image acquired in the queuing state is recognized to obtain the characteristic pixel points of the queue; the queuing length is then determined by using the distance map and the characteristic pixel points of the queue.
Fig. 7 is a flowchart illustrating a data processing method according to another embodiment of the present application, where the data processing method is used to generate the distance map. As shown in fig. 7, the data processing method includes:
401. acquiring image data and radar data of an object area;
402. obtaining a distance map based on the image and the radar data to provide data support when distance data needs to be determined based on image data;
the distance map contains a mapping relation between image pixel points and distance data, and the distance data is determined by the radar data.
For specific implementation of the above steps 401 and 402, reference may be made to corresponding contents of "obtaining a distance map based on the image and the radar data" in the above embodiments, which are not described herein again.
Further, the method may further include:
403. periodically fusing the images acquired by the image sensor and the radar data measured by the radar, and updating the distance map.
According to this technical solution, the image data and the radar data of the object area are fused to construct a high-precision distance map containing the mapping relation between image pixel points and distance data, which provides a basis for querying the distance data corresponding to each pixel point in the image and improves query efficiency. Meanwhile, the distance map is updated periodically, ensuring the timeliness and accuracy of the data.
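As a concrete illustration of the fusion and periodic-update steps, a minimal sketch follows. It assumes a 3x3 homography H, obtained from a prior joint calibration, that maps radar ground-plane coordinates onto image pixels, and it stores the pixel-to-distance association in a plain dictionary; both choices are illustrative rather than prescribed by this embodiment.

```python
import numpy as np

def build_distance_map(radar_points, H):
    """radar_points: iterable of (x, y, distance); H: 3x3 radar-plane-to-image homography."""
    distance_map = {}
    for x, y, dist in radar_points:
        u, v, w = H @ np.array([x, y, 1.0])           # project into image coordinates
        pixel = (int(round(u / w)), int(round(v / w)))
        distance_map[pixel] = dist                    # pixel -> distance association
    return distance_map

def update_distance_map(distance_map, new_radar_points, H):
    """Periodic refresh: fuse fresh radar measurements into the existing map."""
    distance_map.update(build_distance_map(new_radar_points, H))
    return distance_map
```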
Here, it should be noted that: the content of each step in the method provided by the embodiment of the present application, which is not described in detail in the foregoing embodiments, may be referred to corresponding content in the foregoing embodiments, and is not described in detail herein. In addition, the method provided in the embodiment of the present application may further include, in addition to the above steps, other parts or all of the steps in the above embodiments, and specific reference may be made to corresponding contents in the above embodiments, which is not described herein again.
Referring to fig. 8, the technical solution provided by this embodiment can be briefly described as the following process:
Radar data measured by the radar for the area is acquired periodically (for example, at equal time intervals), and the movement distance and movement speed corresponding to each monitored object are calculated; the image data acquired by the image sensor during the radar measurement (i.e., the visual scene graph in fig. 8) is then fused with the radar data to obtain the distance map.
Here, it should be noted that: besides spatial fusion, the radar data and the image data also need to be synchronized in time to achieve temporal fusion. For example, the sampling period of the radar is 50 ms, i.e., the sampling frame rate is 20 frames/second, while the sampling frame rate of the image sensor (e.g., a camera) is 25 frames/second. To ensure data reliability, the sampling rate of the camera can be taken as the reference: each time the camera acquires a frame, the radar frame closest to it in time is selected, so that the radar data and the camera image are synchronized in time.
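This nearest-frame alignment can be sketched as follows. Representing frames by timestamps in seconds and keeping them in sorted lists are assumptions for illustration; the sampling rates follow the example above.

```python
import bisect

def align_radar_to_camera(camera_timestamps, radar_timestamps):
    """Both arguments are sorted lists of timestamps (seconds).
    Returns, for each camera frame, the index of the radar frame nearest in time."""
    matches = []
    for t in camera_timestamps:
        i = bisect.bisect_left(radar_timestamps, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(radar_timestamps)]
        matches.append(min(candidates, key=lambda j: abs(radar_timestamps[j] - t)))
    return matches
```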
Referring to fig. 9, the technical solution of determining the queuing length based on the distance map obtained in fig. 8 can be briefly described as the following process:
The detection and tracking of targets is completed based on the collected video and radar data, and whether the targets are queuing is judged; when a queuing condition is determined to exist, the distance map is used to calculate the length of the vehicle queue.
Fig. 10 is a flowchart illustrating a data processing method according to another embodiment of the present application. As shown in fig. 10, the data processing method includes:
501. acquiring an image acquired by an image sensor aiming at a region and radar data containing a distance measured by a radar aiming at the region;
502. fusing the image and the radar data to obtain a distance map; the distance map contains a mapping relation between image pixel points and distances, and the distances are determined by the radar data;
503. when an acquisition event for distance-related information exists, determining response information for the acquisition event according to the pixel points, in the image, corresponding to the monitored object indicated in the acquisition event and according to the distance map.
For the specific implementation of steps 501 and 502, reference may be made to corresponding contents in the foregoing embodiments, which are not described herein again.
In 503, the acquisition event may be a request sent by a user through a client to obtain the distance between the user and the radar, or a request to obtain the distance between the user and a certain vehicle. For example, for a traffic control department, the radar data and video images corresponding to the monitored area, together with the distance map obtained by fusing them, are stored at the department's server. When a user triggers, through a client, a request to obtain the distance to a certain vehicle, the server, after receiving the request, can determine the corresponding response information according to the pixel points of the user in the image and the distance map, and feed the response information back to the user.
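A hypothetical sketch of serving such an acquisition event is shown below; the fallback to the nearest mapped pixel is an illustrative choice rather than part of this embodiment.

```python
def respond_to_acquisition_event(pixel, distance_map):
    """pixel: (x, y) of the monitored object indicated in the acquisition event."""
    if pixel in distance_map:
        return distance_map[pixel]
    # Fall back to the nearest pixel that has a mapping relation in the distance map.
    nearest = min(distance_map,
                  key=lambda p: (p[0] - pixel[0]) ** 2 + (p[1] - pixel[1]) ** 2)
    return distance_map[nearest]
```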
Further, the method can also comprise the following steps:
504. when a distance acquisition request sent by a client aiming at a monitoring object in the image is received, triggering a distance acquisition event; or
505. triggering a queuing length acquisition event in the case that the monitored objects are queuing in the area.
According to the technical solution provided in this embodiment of the present application, when an acquisition event for distance-related information is received, the response information is determined according to the pixel points, in the image, corresponding to the monitored object indicated in the acquisition event and according to the distance map, which reduces errors and improves the efficiency of feeding back the response information.
Here, it should be noted that: the content of each step in the method provided by the embodiment of the present application, which is not described in detail in the foregoing embodiments, may be referred to corresponding content in the foregoing embodiments, and is not described in detail herein. In addition, the method provided in the embodiment of the present application may further include, in addition to the above steps, other parts or all of the steps in the above embodiments, and specific reference may be made to corresponding contents in the above embodiments, which is not described herein again.
Fig. 11 shows a block diagram of a monitoring system according to an embodiment of the present application. As shown in fig. 11, the monitoring system includes: radar 601, image sensor 602, and processing means 603, wherein:
the radar 601 is used for measuring an object area to obtain radar data;
the image sensor 602 is configured to acquire image data of the object region;
the processing device 603 is configured to obtain a distance map according to the image data and the radar data; the distance map contains a mapping relation between image pixel points and distance data, and the distance data is determined by the radar data; acquiring characteristic pixel points of the content to be identified based on the image data; and acquiring distance data with a mapping relation with the characteristic pixel points according to the distance map.
In the above, the radar 601 and the image sensor 602 may be installed on a roadside portal frame, a street lamp, or another rod-shaped facility, and jointly calibrated so that the radar scanning range and the camera shooting range cover the same region. The processing device 603 may be arranged on the image sensor or at the server side.
Further, the monitoring system may further include: a client 604, the client 604 being communicatively connected to the processing device for providing an interactive interface for a user; receiving an object area input by a user and/or the content to be identified based on the interactive interface; and acquiring the distance data from the processing device, and displaying the distance data on a display interface. In specific implementation, the client may be a desktop computer, a smart phone, a display, an intelligent wearable device, and the like.
In the technical solution provided by this embodiment, a distance map is obtained by fusing the image and the radar data of the monitored area; in the case that the monitored objects are queuing in the area, image recognition is performed on the image to determine the characteristic pixel points of the queue; the queuing length is then calculated according to the distances mapped to those characteristic pixel points in the distance map. The scheme is simple, easy to implement, and has high computational efficiency and precision.
Yet another embodiment of the present application provides an electronic device. The electronic device may be a drone (as shown in the example of fig. 12), an unmanned vehicle or robot, or the like. The electronic device includes: the device comprises a device body, an image sensor, a radar and a processor. The device body can move to a corresponding position under the control of an external instruction (such as a remote control instruction of a user or a control instruction of a server) or according to autonomous navigation information, so that the image sensor acquires image data of the object area, and the radar can measure radar data of the object area. Specifically, the image sensor is arranged on the device body and used for acquiring image data of the object area; the radar is arranged on the equipment body and used for measuring the object area to obtain radar data; the processor is arranged in the equipment body and used for acquiring the image data and the radar data; obtaining a distance map based on the image data and the radar data, wherein the distance map contains a mapping relation between image pixel points and the distance data, and the distance data is determined by the radar data; acquiring characteristic pixel points of the content to be identified based on the image data; and acquiring distance data with a mapping relation with the characteristic pixel points according to the distance map.
Specifically, in a vehicle queuing scenario, the processor may be configured to: identifying characteristic pixel points of a queuing team in the image data; then, distance data corresponding to the characteristic pixel points are obtained by using a distance map; the queue length may then be determined based on the obtained distance data.
Taking the unmanned aerial vehicle as an example, referring to fig. 12, an image sensor 200 and a radar (not shown in the figure) are provided on the body 1 (i.e., the apparatus body) of the unmanned aerial vehicle. The unmanned aerial vehicle moves to a target position, then controls the image sensor 200 to acquire image data within the field of view below it, and controls the radar to measure the same field-of-view area to obtain radar data. The processor of the unmanned aerial vehicle obtains a distance map from the image data and the radar data; the content to be recognized in the image data (for example, the vehicle/person at the head of the queue and the vehicle/person at the tail of the queue) is then recognized, and the characteristic pixel points of the content to be recognized are obtained. Assuming the unmanned aerial vehicle needs to obtain the vehicle queuing length, the content to be recognized comprises two target objects, so two characteristic pixel points are obtained: a first characteristic pixel point corresponding to the vehicle/person at the head of the queue and a second characteristic pixel point corresponding to the vehicle/person at the tail of the queue. Using the distance map, first distance data corresponding to the first characteristic pixel point and second distance data corresponding to the second characteristic pixel point are obtained respectively. After the two distance data are obtained, the length of the queue can be calculated from the first distance data and the second distance data. The drone may send the length of the queue to the client 400 (e.g., the user's mobile phone) for display.
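The final length calculation can be sketched as follows, under the assumption that the distance map stores, for each characteristic pixel point, a horizontal distance measured along the queuing direction, so that the queue length reduces to the difference of the two looked-up distances.

```python
def queue_length(first_pixel, second_pixel, distance_map):
    """first_pixel / second_pixel: characteristic pixels of the queue head and tail."""
    first_distance = distance_map[first_pixel]     # distance data for the queue head
    second_distance = distance_map[second_pixel]   # distance data for the queue tail
    return abs(second_distance - first_distance)
```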
Referring to yet another embodiment shown in fig. 13, a data processing method is provided. The present embodiment focuses on the human-computer interaction aspect. Specifically, the method provided by this embodiment includes:
s01, displaying an interactive interface;
s02, responding to an object area designated by a user through an interactive interface, and acquiring image data and radar data of the object area;
s03, determining distance data corresponding to the content to be identified in the image data according to the image data and the radar data; wherein the range data is determined from the radar data;
and S04, displaying the distance data.
In the above S01, the interactive interface may be the interface denoted by reference numeral 41 in fig. 2b, or an interface in another form; this embodiment does not specifically limit the interface design. The interactive interface displays page elements prompting the user to type, select, or speak. Under the prompt of the page elements, the user can enter the name or label of the object area with a keyboard, select the object area by clicking a drop-down list box, or input the object area by voice.
In the above S03, the "determining the distance data corresponding to the content to be identified in the image data according to the image data and the radar data" may specifically include:
obtaining a distance map according to the image data and the radar data; the distance map comprises a mapping relation between image pixel points and distance data;
acquiring characteristic pixel points of the content to be identified based on the image data;
and acquiring distance data with a mapping relation with the characteristic pixel points according to the distance map.
For a more detailed description of the above steps, reference may be made to the corresponding contents in the above embodiments, which are not repeated herein.
As mentioned above for S04, the distance data may be displayed in the display interface 42 shown in fig. 2b, and may be presented together with the image data. Alternatively, the distance data may also be displayed in the interactive interface 41 shown in fig. 2b, which is not specifically limited in this embodiment.
In this embodiment, the content to be identified may be preset default content or content configured by a user in advance, such as a vehicle, a person, a robot, a logistics vehicle, and the like. For this case, the user does not need to specify the content to be recognized any more. In other embodiments, the user needs to specify the content to be identified; in order to meet the requirement, the method provided by this embodiment may further include:
s05, responding to the content to be identified input by the user through the interactive interface, and triggering the step of determining the distance data corresponding to the content to be identified in the image data according to the image data and the radar data.
As shown in the interactive interface 41 in fig. 2b, the user is provided with an entry for specifying the content to be recognized, such as an input box, a drop-down list box, or a voice input control, etc. in the interactive interface 41. The user can input the content to be identified through the input mode provided by the interactive interface. Wherein, the content to be identified may be: a person, a vehicle, a robot, etc.
The technical solutions provided in the embodiments of the present application can be applied to various scenarios, for example, people-related monitoring scenarios such as monitoring the flow of people at tourist attractions, promotional queuing at malls/supermarkets, traffic flow, and the flow of people at airports and railway stations. They can also be applied to machine monitoring scenarios, such as logistics robot queuing or the queuing of transfer vehicles (such as AGVs) in intelligent factories/warehouses. In addition, they can be applied to intelligent product storage: for example, based on the image data and radar data of a product display area in a warehouse, the distance data corresponding to the outer-contour feature pixel points of the display area can be obtained, from which size information (such as length, width, and height) of the display area can be derived, and the product storage amount in the display area can then be estimated from this size information.
The above embodiments have been described correspondingly in conjunction with the queuing of transportation vehicles. The technical solution provided in the embodiment of the present application is described below with reference to two specific application scenarios.
Scene 1, airport security check queuing
The camera and the radar arranged around each security inspection port can acquire image data and radar data at that port. Taking one security inspection port as an example, a distance map can be obtained from its image data and radar data; the person queue is then identified in the image data, and characteristic pixel points of the queue are extracted from it, such as the pixel points of the person at the tail of the queue; distance data having a mapping relation with these characteristic pixel points is then acquired using the distance map. Because the position of the queue head at the security inspection port is fixed, the length of the queue can be obtained once the distance data corresponding to the person at the tail of the queue is known.
Airport staff can check the queue length at each security inspection port at any time through client devices (such as a computer in the airport master control room or a handheld device). People can then be guided, by broadcast or on-site guidance, to move to the security inspection ports with shorter queues. In addition, the flow of people through security check can be determined from the queue length at each port, so that corresponding adjustments can be made at any time, for example opening additional security inspection ports or adding more staff.
Besides airport security inspection ports, baggage check-in queues, airport immigration queues, and the like can also be monitored using the scheme provided by this embodiment.
Scene 2, queuing of logistics robot
The logistics robot can be applied to the scenes of warehouses, sorting centers, transportation and the like, and can be used for carrying out goods transfer, carrying and other operations; such as AGV robots, palletizing robots, sorting robots, etc. In the following, a warehouse AGV robot is taken as an example, and the AGV robot is used for transferring goods in the warehouse. Cameras and radars are arranged at a plurality of positions in the warehouse. Each pair of camera and radar collects data of the same area. Based on the image data of the area acquired by the camera and the radar data of the area acquired by the radar, a distance map can be obtained; then, identifying the AGV robots in the image data, and when the queuing condition of the AGV in the area is identified, obtaining characteristic pixel points of the queuing team of the AGV robots according to the image data, such as a first pixel point of the AGV robot at the head of the queuing team and a second pixel point of the AGV robot at the tail of the queuing team; then, by using the distance map, first distance data having a mapping relation with the first pixel point and second distance data having a mapping relation with the second pixel point are obtained; and obtaining the queuing length of the AGV robot according to the first distance data and the second distance data.
The warehouse manager can check the operation conditions of the AGV robots in each area in the warehouse through client devices, such as computers or handheld devices in a control room. When the AGV robot queues in a certain area, the length of the queue can be obtained in time.
Fig. 14 shows a block diagram of a data processing apparatus according to an embodiment of the present application. As shown in fig. 14, the data processing apparatus includes: a first obtaining module 131, a generating module 132, a second obtaining module 133 and a third obtaining module 134. The first obtaining module 131 is configured to obtain image data and radar data of an object region. The generating module 132 is configured to obtain a distance map according to the image data and the radar data. The distance map contains a mapping relation between image pixel points and distance data, and the distance data is determined by the radar data. The second obtaining module 133 is configured to obtain a feature pixel point of the content to be identified based on the image data. The third obtaining module 134 is configured to obtain distance data having a mapping relationship with the feature pixel according to the distance map.
Further, the data processing apparatus provided in this embodiment may further include a receiving module and a display module. The receiving module is used for receiving the object area input by a user through an interactive interface; and the display module is used for displaying the distance data on a display interface.
Still further, the receiving module may be further configured to: and receiving the object area and the content to be identified input by a user through an interactive interface.
Further, the generating module 132 is specifically configured to: converting distance data corresponding to any monitoring object contained in the radar data into an image coordinate system in which the image data is located through projection to obtain pixel points of the monitoring object mapped in the image coordinate system; associating the pixel points with the distance data for addition to the distance map.
Further, the second obtaining module 133 is specifically configured to: performing image recognition on the image data to identify target content; and extracting characteristic pixel points of the target content from the image data.
Here, it should be noted that: the data processing apparatus provided in the foregoing embodiments may implement the technical solutions described in the foregoing method embodiments, and the specific implementation principle of each module or unit may refer to the corresponding content in the foregoing method embodiments, which is not described herein again.
Fig. 15 shows a block diagram of a data processing apparatus according to an embodiment of the present application. As shown in fig. 15, the data processing apparatus includes: a first obtaining module 141, a generating module 142, a second obtaining module 143, and a determining module 144. The first acquisition module is used for acquiring image data and radar data of an object area; the generating module is used for obtaining a distance map based on the image data and the radar data, wherein the distance map contains a mapping relation between image pixel points and the distance data, and the distance data is determined by the radar data; the second acquisition module is used for acquiring characteristic pixel points of a queuing team based on the image data; the determining module is used for determining the queuing length according to the distance data in the distance map, wherein the distance data has a mapping relation with the characteristic pixel points of the queuing team.
In the technical scheme provided by this embodiment, a distance map is obtained by fusing the image and the radar data in the monitored area, and under the condition that the monitored objects are queued in the area, the image is subjected to image recognition to determine characteristic pixel points of a queue; further, according to the distance of the characteristic pixel points of the queuing team in the distance map, the calculation of the queuing length is completed; the scheme is simple and easy to realize, and has higher computational efficiency and precision.
Further, the generating module 142 is specifically configured to: converting distance data corresponding to any monitoring object contained in the radar data into an image coordinate system where the image is located through projection to obtain pixel points of the monitoring object mapped in the image coordinate system; associating the pixel points with the distance data for addition to the distance map.
Further, the distance data includes: a distance measurement value and a direction angle. Correspondingly, when the generating module 142 associates the pixel point with the distance data to add to the distance map, it is specifically configured to: according to the distance measurement value and the direction angle, calculating the horizontal distance of the monitored object in a set direction parallel to the horizontal plane; associating the pixel point with the horizontal distance for addition to the distance map.
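As an illustration, if the direction angle is taken to be the angle between the radar's measurement ray and the set direction parallel to the horizontal plane (an assumption made only for this sketch), the horizontal distance reduces to a single cosine projection:

```python
import math

def horizontal_distance(distance_measurement, direction_angle_deg):
    """Project the measured range onto the set direction parallel to the horizontal plane."""
    return distance_measurement * math.cos(math.radians(direction_angle_deg))
```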
Further, the characteristic pixel points of the queuing team include: the queue tail pixel point. Correspondingly, the second obtaining module 143 is specifically configured to: identify a monitored object in a queuing state in the image; determine, based on the recognition result, the monitored object at the tail of the queue as the target object; and take a pixel point of the target object as the queue tail pixel point.
Still further, when the second obtaining module 143 takes a pixel point of the target object as the queue tail pixel point, it is specifically configured to: acquiring a pixel point set belonging to the target object in the image; and taking the pixel point in the center of the pixel point set as the queue tail pixel point, or taking a pixel point in the pixel point set closest to the upper boundary of the image as the queue tail pixel point, or taking a pixel point in the pixel point set closest to the lower boundary of the image as the queue tail pixel point.
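The three selection strategies just described can be sketched as follows; the strategy names and the convention that a smaller y coordinate means closer to the upper image boundary are assumptions for illustration.

```python
def pick_tail_pixel(pixels, strategy="center"):
    """pixels: list of (x, y) image coordinates belonging to the target object."""
    if strategy == "center":
        cx = sum(p[0] for p in pixels) / len(pixels)
        cy = sum(p[1] for p in pixels) / len(pixels)
        # the pixel nearest the centroid stands in for "the pixel at the center of the set"
        return min(pixels, key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)
    if strategy == "top":      # closest to the upper image boundary (smaller y)
        return min(pixels, key=lambda p: p[1])
    if strategy == "bottom":   # closest to the lower image boundary (larger y)
        return max(pixels, key=lambda p: p[1])
    raise ValueError("unknown strategy")
```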
Further, the characteristic pixel points of the queuing team further include: queuing head pixel points; accordingly, the determining module 144 is specifically configured to: respectively acquiring a first distance having a mapping relation with the pixels at the head of the queue and a second distance having a mapping relation with the pixels at the tail of the queue from the distance map; and calculating the queuing length according to the first distance and the second distance.
Wherein, the acquisition mode of the head of line pixel of lining up includes: acquiring a pre-configured queuing head pixel point; or identifying the monitoring object arranged at the head of the queue in the image, and taking a pixel point on the monitoring object arranged at the head of the queue as the pixel point at the head of the queue.
Further, the data processing apparatus provided in this embodiment further includes an update module. The updating module is used for periodically fusing image data acquired by the image sensor and radar data measured by the radar so as to update the distance map.
Further, the data processing apparatus may further include: a tracking module and a detection module, wherein: the tracking module is used for performing behavior tracking on a monitored object appearing in the object area based on a plurality of frames of images acquired by the image sensor aiming at the object area to obtain tracking data; the detection module is used for detecting whether the queuing of the monitoring objects occurs in the area or not according to the tracking data; and under the condition that the monitored object queuing is detected to occur in the object area, triggering a step of acquiring characteristic pixel points of the queuing team based on the image data.
Further, the tracking module is specifically configured to, when performing behavior tracking on the monitored object appearing in the object region based on the multi-frame image acquired by the image sensor for the object region to obtain tracking data: detecting a monitoring object appearing in at least a part of frames of the multi-frame image; determining a motion parameter of the monitoring object based on the detected position of the monitoring object in the corresponding frame image; wherein the motion parameter comprises at least one of: the moving speed and the moving distance.
Further, the detection module in this embodiment is further configured to: detecting occluded monitored objects that are not detected in at least some frames of the multi-frame image based on radar data measured by the radar for the object region during the acquisition of the multi-frame image; and under the condition that the shielded monitoring object exists, combining the multi-frame images and radar data measured by the radar for the object area during the acquisition of the multi-frame images, and determining the position of the shielded monitoring object in the corresponding frame image.
Further, the tracking data contains the movement speed and/or the movement distance of at least one monitoring object. Correspondingly, when determining whether the monitored object is queued in the area according to the tracking data, the tracking module is specifically configured to: and when the motion parameters corresponding to the monitoring objects with the number exceeding the preset number in the at least one monitoring object meet the preset condition, determining that the queuing of the monitoring objects occurs in the area.
Further, the data processing apparatus provided in this embodiment may further include an output module, where: the output module is configured to, in the case that the monitored objects are queuing in the area, count the number of queued monitored objects and output the count to a display device for display. Still further, the output module is further configured to send the queuing length to a client, so that the queuing length corresponding to the area and/or a highlighted element corresponding to the queuing length is displayed on a map shown by the client; wherein the highlighted element comprises at least one of: a highlighted color and a dynamic effect.
Here, it should be noted that: the data processing apparatus provided in the foregoing embodiments may implement the technical solutions described in the foregoing method embodiments, and the specific implementation principle of each module or unit may refer to the corresponding content in the foregoing method embodiments, which is not described herein again.
Fig. 16 is a block diagram illustrating a data processing apparatus according to another embodiment of the present application. As shown in fig. 16, the apparatus includes: an acquisition module 151 and a generation module 152. Wherein: the acquisition module 151 acquires image data and radar data of an object area; the generating module 152 is configured to obtain a distance map based on the image and the radar data, so as to provide data support when distance data needs to be determined based on the image data, where the distance map includes a mapping relationship between image pixel points and the distance data, and the distance data is determined by the radar data.
According to this technical solution, the image data and the radar data are fused to construct a high-precision distance map containing the mapping relation between image pixel points and distances, which provides a basis for querying the distance corresponding to each pixel point in the image and facilitates efficient queries.
Further, the generating module 152 is specifically configured to, when obtaining the distance map based on the image data and the radar data: convert the distance data corresponding to any monitored object contained in the radar data, through projection, into the image coordinate system in which the image is located, to obtain the pixel points of the monitored object mapped in that coordinate system; and associate the pixel points with the distance data for addition to the distance map.
Further, the distance data includes: a distance measurement value and a direction angle. Correspondingly, the generating module 152 is specifically configured to, when associating the pixel point with the distance data to add to the distance map: according to the distance measurement value and the direction angle, calculating the horizontal distance of the monitored object in a set direction parallel to the horizontal plane; associating the pixel point with the horizontal distance for addition to the distance map.
Further, the data processing apparatus provided in this embodiment may further include an update module, where the update module is configured to periodically fuse image data acquired by the image sensor with respect to the object area and radar data measured by the radar with respect to the object area, so as to update the distance map.
Here, it should be noted that: the data processing apparatus provided in the foregoing embodiments may implement the technical solutions described in the foregoing method embodiments, and the specific implementation principle of each module or unit may refer to the corresponding content in the foregoing method embodiments, which is not described herein again.
Fig. 17 shows a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 17, the electronic device includes: a memory 801 and a processor 802. The memory 801 may be configured to store various data to support operations on the electronic device. Examples of such data include instructions for any application or method operating on the electronic device. The memory 801 may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or magnetic or optical disks.
The processor 802, coupled with the memory 801, is configured to execute the program stored in the memory 801 to:
acquiring image data and radar data of an object area;
obtaining a distance map based on the image data and the radar data, wherein the distance map contains a mapping relation between image pixel points and the distance data, and the distance data is determined by the radar data;
acquiring characteristic pixel points of the content to be identified based on the image data;
and acquiring distance data with a mapping relation with the characteristic pixel points according to the distance map.
When the processor 802 executes the program in the memory 801, other functions may be implemented in addition to the above functions, which may be specifically referred to in the description of the foregoing embodiments.
Further, as shown in fig. 17, the electronic apparatus further includes: communication component 803, display 804, power component 805, and the like. Only some of the components are schematically shown in fig. 17, and the electronic device is not meant to include only the components shown in fig. 17.
Another embodiment of the present application provides an electronic device, which has the same structure as fig. 17. Specifically, the electronic device includes a memory and a processor. The memory may be configured to store other various data to support operations on the electronic device. The processor, coupled with the memory, to execute the program stored in the memory to:
acquiring image data and radar data of an object area;
obtaining a distance map based on the image data and the radar data, wherein the distance map contains a mapping relation between image pixel points and the distance data, and the distance data is determined by the radar data;
acquiring characteristic pixel points of a queuing team based on the image data;
and determining the queuing length according to the distance data in the distance map, which has a mapping relation with the characteristic pixel points of the queuing team.
When the processor executes the program in the memory, the processor may implement other functions in addition to the above functions, which may be specifically referred to the description of the foregoing embodiments.
Another embodiment of the present application provides an electronic device, which has the same structure as fig. 17. Specifically, the electronic device includes a memory and a processor. The memory may be configured to store other various data to support operations on the electronic device. The processor, coupled with the memory, to execute the program stored in the memory to:
acquiring image data and radar data of an object area;
obtaining a distance map based on the image and the radar data to provide data support when distance data needs to be determined based on image data;
the distance map contains a mapping relation between image pixel points and distance data, and the distance data is determined by the radar data.
Further, when the processor executes the program in the memory, other functions may be implemented in addition to the above functions, which may be specifically referred to in the description of the foregoing embodiments.
Another embodiment of the present application provides a display device having the same structure as that of fig. 17. The display device includes: a memory, a processor and a display; wherein the memory is used for storing programs; the processor, coupled with the memory, to execute the program stored in the memory to:
controlling the display to display an interactive interface;
responding to an object area designated by a user through an interactive interface, and acquiring image data and radar data of the object area;
determining distance data corresponding to contents to be identified in the image data according to the image data and the radar data; wherein the range data is determined from the radar data;
and controlling the display to display the distance data.
Further, when the processor executes the program in the memory, other functions may be implemented in addition to the above functions, which may be specifically referred to in the description of the foregoing embodiments.
Accordingly, the present application further provides a computer-readable storage medium storing a computer program, where the computer program can implement the steps or functions of the data processing method provided in the foregoing embodiments when executed by a computer.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that the embodiments may be implemented by software plus a necessary general hardware platform, and may also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (33)

1. A data processing method, comprising:
periodically fusing image data acquired by an image sensor aiming at an object area and radar data measured by a radar aiming at the object area to update a distance map, wherein the distance map contains a mapping relation between image pixel points and distance data, and the distance data is determined by the radar data;
acquiring characteristic pixel points of the content to be identified based on image data acquired by the image sensor aiming at the object region;
acquiring distance data with a mapping relation with the characteristic pixel points according to the distance map;
if it is determined that a third monitored object detected by a radar but not detected by an image exists based on the image data and the radar data, obtaining a basic probability assignment corresponding to the third monitored object by using the speed of the monitored object contained in the radar data; comparing the basic probability assignment corresponding to the third monitored object with the average basic probability assignment corresponding to the first monitored object, and judging the third monitored object as an occluded monitored object when the basic probability assignment is greater than the average basic probability assignment and less than 1; the first monitoring object is a monitoring object which is determined based on the image data and the radar data and is detected by a radar and an image at the same time;
and when it is judged that an occluded object exists, determining the position of the occluded monitored object in the corresponding frame image by combining the multi-frame image collected by the image sensor and the radar data measured by the radar during the multi-frame image collection.
2. The data processing method of claim 1, further comprising:
receiving the object region input by a user based on an interactive interface; and
and displaying the distance data on a display interface.
3. The data processing method of claim 1, further comprising:
receiving the object area and the content to be identified input by a user based on an interactive interface; and
and displaying the distance data on a display interface.
4. The method of any one of claims 1 to 3, wherein obtaining a range map based on the image data and radar data comprises:
converting distance data corresponding to any monitoring object contained in the radar data into an image coordinate system in which the image data is located through projection to obtain pixel points of the monitoring object mapped in the image coordinate system;
associating the pixel points with the distance data for addition to the distance map.
5. The method according to any one of claims 1 to 3, wherein obtaining feature pixel points of the content to be identified based on the image data comprises:
performing image recognition on the image data to recognize target content;
extracting characteristic pixel points of the target content from the image data;
wherein the target content comprises at least one of: people, vehicles, objects, people queue, vehicles queue, objects display queue.
6. A data processing method, comprising:
periodically fusing image data acquired by an image sensor aiming at an object area and radar data measured by a radar aiming at the object area to update a distance map, wherein the distance map contains a mapping relation between image pixel points and distance data, and the distance data is determined by the radar data;
acquiring characteristic pixel points of a queuing team based on the image data acquired by the image sensor aiming at the object area;
determining a queuing length according to distance data in the distance map, wherein the distance data has a mapping relation with the characteristic pixel points of the queuing team;
if it is determined that a third monitored object detected by a radar but not detected by an image exists based on the image data and the radar data, obtaining a basic probability assignment corresponding to the third monitored object by using the speed of the monitored object contained in the radar data; comparing the basic probability assignment corresponding to the third monitored object with the average basic probability assignment corresponding to the first monitored object, and judging the third monitored object as an occluded monitored object when the basic probability assignment is greater than the average basic probability assignment and less than 1; the first monitoring object is a monitoring object which is determined based on the image data and the radar data and is detected by a radar and an image at the same time;
and when it is judged that an occluded object exists, determining the position of the occluded monitored object in the corresponding frame image by combining the multi-frame image collected by the image sensor and the radar data measured by the radar during the multi-frame image collection.
7. The method of claim 6, wherein obtaining a range map based on the image data and the radar data comprises:
converting the distance data corresponding to any monitoring object contained in the radar data into an image coordinate system where the image is located through projection to obtain pixel points of the monitoring object mapped in the image coordinate system;
associating the pixel points with the distance data for addition to the distance map.
8. The method of claim 7, wherein the distance data comprises: a distance measurement value and a direction angle; and
associating the pixel points with the distance data for addition to the distance map, including:
according to the distance measurement value and the direction angle, calculating the horizontal distance of the monitored object in a set direction parallel to the horizontal plane;
associating the pixel point with the horizontal distance for addition to the distance map.
9. The method of any one of claims 6 to 8, wherein the characteristic pixel points of the queuing team comprise: queuing queue tail pixel points; and
based on the image data, acquiring characteristic pixel points of a queuing team, comprising:
identifying a monitoring object in a queuing state in the image;
determining a monitoring object arranged at the tail of the queue as a target object based on the recognition result;
and taking a pixel point of the target object as a tail pixel point of the queuing queue.
10. The method of claim 9, wherein using a pixel of the target object as the queue tail pixel comprises:
acquiring a pixel point set belonging to the target object in the image;
and taking the pixel point which is positioned at the center in the pixel point set as the queue tail pixel point, or taking a pixel point which is positioned at the center of the pixel point set and is closest to the upper boundary of the image as the queue tail pixel point, or taking a pixel point which is positioned at the center of the pixel point set and is closest to the lower boundary of the image as the queue tail pixel point.
11. The method of claim 9, wherein the queuing team's characteristic pixel points further comprise: queuing head pixel points; and
determining the queuing length according to the distance data in the distance map, which has a mapping relation with the characteristic pixel points of the queuing team, and the determining comprises the following steps:
respectively acquiring a first distance having a mapping relation with the pixels at the head of the queue and a second distance having a mapping relation with the pixels at the tail of the queue from the distance map;
and calculating the queuing length according to the first distance and the second distance.
12. The method according to claim 11, wherein the obtaining of the head-of-line pixel comprises:
acquiring a pre-configured queuing head pixel point; or
And identifying the monitoring object arranged at the head of the queue in the image, and taking a pixel point of the monitoring object arranged at the head of the queue as the pixel point at the head of the queue.
13. The method of any of claims 6 to 8, further comprising:
performing behavior tracking on a monitored object appearing in the object area based on a multi-frame image acquired by the image sensor aiming at the object area to obtain tracking data;
and determining whether the monitored object queues in the object area or not according to the tracking data.
14. The method of claim 13, wherein performing behavior tracking on the monitored object appearing in the object region based on a plurality of frames of images collected by the image sensor for the object region to obtain tracking data comprises:
detecting a monitoring object appearing in at least a part of frames of the multi-frame image;
determining a motion parameter of the monitoring object based on the detected position of the monitoring object in the corresponding frame image;
wherein the motion parameter comprises at least one of: the moving speed and the moving distance.
15. The method of claim 14, further comprising:
detecting occluded monitored objects that are not detected in at least some frames of the multi-frame image based on radar data measured by the radar for the object region during the acquisition of the multi-frame image;
and under the condition that the shielded monitoring object exists, determining the position of the shielded monitoring object in the corresponding frame image by combining the multi-frame image and radar data measured by the radar for the object area during the acquisition of the multi-frame image.
16. The method according to claim 14, wherein the tracking data comprises a moving speed and/or a moving distance of at least one monitored object; and
according to the tracking data, determining whether the queuing of the monitoring objects occurs in the area or not, wherein the determining comprises the following steps:
and when the motion parameters corresponding to the monitoring objects with the number exceeding the preset number in the at least one monitoring object meet the preset condition, determining that the queuing of the monitoring objects occurs in the area.
17. The method of claim 16, further comprising:
and under the condition that the monitoring objects are queued in the region, counting the number of the queued monitoring objects and outputting the counted number.
18. The method of any of claims 6 to 8, further comprising:
sending the queuing length to a client so as to display the queuing length corresponding to the area and/or a highlighted element corresponding to the queuing length on a map displayed by the client;
wherein the salient elements comprise at least one of: the color and the dynamic effect are highlighted.
19. A data processing method, comprising:
acquiring image data and radar data of an object area;
obtaining a distance map based on the image and the radar data;
periodically fusing image data acquired by an image sensor for the object area and radar data measured by a radar for the object area to update the range map to provide data support when range data needs to be determined based on the image data acquired by the image sensor for the object area; the distance map contains a mapping relation between image pixel points and distance data, and the distance data is determined by the radar data;
if it is determined that a third monitored object detected by a radar but not detected by an image exists based on the image data and the radar data, obtaining a basic probability assignment corresponding to the third monitored object by using the speed of the monitored object contained in the radar data; comparing the basic probability assignment corresponding to the third monitored object with the average basic probability assignment corresponding to the first monitored object, and judging the third monitored object as an occluded monitored object when the basic probability assignment is greater than the average basic probability assignment and less than 1; the first monitoring object is a monitoring object which is determined based on the image data and the radar data and is detected by a radar and an image at the same time;
and when it is judged that an occluded object exists, determining the position of the occluded monitored object in the corresponding frame image by combining the multi-frame image collected by the image sensor and the radar data measured by the radar during the multi-frame image collection.
20. The method of claim 19, wherein obtaining a distance map based on the image data and the radar data comprises:
projecting distance data corresponding to any monitored object contained in the radar data into the image coordinate system of the image, to obtain pixel points to which the monitored object is mapped in the image coordinate system;
and associating the pixel points with the distance data and adding the association to the distance map.
21. The method of claim 20, wherein the distance data comprises: a distance measurement value and a direction angle; and
associating the pixel points with the distance data and adding the association to the distance map comprises:
calculating, according to the distance measurement value and the direction angle, a horizontal distance of the monitored object in a set direction parallel to the horizontal plane;
and associating the pixel points with the horizontal distance and adding the association to the distance map.
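The projection and association steps of claims 20 and 21 might look roughly like the sketch below. The 3x4 camera projection matrix (assumed to come from joint radar-camera calibration), the ground-plane assumption for radar detections, and the reading of the horizontal distance as range times the cosine of the direction angle are all assumptions, not details fixed by the claims.

```python
# Hedged sketch of claims 20-21: project one radar detection into the image
# and record the pixel-to-distance association. The projection matrix, the
# ground-plane assumption, and the horizontal-distance formula are assumptions.
import numpy as np


def radar_point_to_pixel(range_m: float, azimuth_rad: float,
                         projection: np.ndarray) -> tuple[int, int]:
    """Map a planar radar detection (range, direction angle) to an image pixel."""
    x = range_m * np.cos(azimuth_rad)   # component along the assumed set direction
    y = range_m * np.sin(azimuth_rad)   # lateral component
    u, v, w = projection @ np.array([x, y, 0.0, 1.0])  # homogeneous ground point
    return int(round(u / w)), int(round(v / w))


def add_detection_to_distance_map(distance_map: dict, range_m: float,
                                  azimuth_rad: float,
                                  projection: np.ndarray) -> dict:
    """Associate the projected pixel point with the horizontal distance."""
    pixel = radar_point_to_pixel(range_m, azimuth_rad, projection)
    distance_map[pixel] = range_m * np.cos(azimuth_rad)
    return distance_map
```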
22. A monitoring system, comprising:
a radar for measuring an object area to obtain radar data;
an image sensor for acquiring image data of the object area;
a processing device for: obtaining a distance map according to the radar data and the image data, wherein the distance map contains a mapping relation between image pixel points and distance data, and the distance data is determined from the radar data; periodically fusing image data collected by the image sensor for the object area and radar data measured by the radar for the object area to update the distance map; acquiring characteristic pixel points of content to be identified based on the image data collected by the image sensor for the object area; acquiring, according to the distance map, distance data having a mapping relation with the characteristic pixel points; if it is determined, based on the image data and the radar data, that there is a third monitored object that is detected by the radar but not detected in the image, obtaining a basic probability assignment corresponding to the third monitored object by using the speed of the monitored object contained in the radar data; comparing the basic probability assignment corresponding to the third monitored object with an average basic probability assignment corresponding to first monitored objects, and judging the third monitored object to be an occluded monitored object when its basic probability assignment is greater than the average basic probability assignment and less than 1, wherein the first monitored objects are monitored objects, determined based on the image data and the radar data, that are detected by both the radar and the image; and when it is judged that the occluded monitored object exists, determining the position of the occluded monitored object in the corresponding frame image by combining the multi-frame image collected by the image sensor with the radar data measured by the radar during collection of the multi-frame image.
23. The system according to claim 22, wherein the processing device is arranged on the image sensor or on the server side.
24. The system of claim 22, further comprising:
a client, communicatively connected with the processing device, for: providing an interactive interface for a user; receiving, based on the interactive interface, an object area and/or the content to be identified input by the user; and acquiring the distance data from the processing device and displaying the distance data on a display interface.
25. The system of claim 22, wherein
the radar and the image sensor are mounted at the same height on a roadside gantry, a street lamp, or a pole-shaped facility, and are jointly calibrated so that the scanning range of the radar and the shooting range of the camera cover the same area.
26. A data processing method, comprising:
displaying an interactive interface;
in response to an object area designated by a user through the interactive interface, acquiring image data and radar data of the object area;
obtaining a distance map according to the image data and the radar data, wherein the distance map contains a mapping relation between image pixel points and distance data, and the distance data is determined from the radar data; the distance map is updated by periodically fusing image data acquired by the image sensor for the object area and radar data measured by the radar for the object area;
acquiring characteristic pixel points of the content to be identified based on the image data;
acquiring, according to the distance map, distance data having a mapping relation with the characteristic pixel points;
displaying the distance data;
if it is determined, based on the image data and the radar data, that there is a third monitored object that is detected by the radar but not detected in the image, obtaining a basic probability assignment corresponding to the third monitored object by using the speed of the monitored object contained in the radar data; comparing the basic probability assignment corresponding to the third monitored object with an average basic probability assignment corresponding to first monitored objects, and judging the third monitored object to be an occluded monitored object when its basic probability assignment is greater than the average basic probability assignment and less than 1, wherein the first monitored objects are monitored objects, determined based on the image data and the radar data, that are detected by both the radar and the image;
and when it is judged that the occluded monitored object exists, determining the position of the occluded monitored object in the corresponding frame image by combining the multi-frame image collected by the image sensor with the radar data measured by the radar during collection of the multi-frame image.
27. The method of claim 26, further comprising:
and in response to the content to be identified input by the user through the interactive interface, triggering the step of determining, according to the image data and the radar data, the distance data corresponding to the content to be identified in the image data.
28. An electronic device, comprising:
a device body;
an image sensor, arranged on the device body, for acquiring image data of an object area;
a radar, arranged on the device body, for measuring the object area to obtain radar data;
a processor, arranged in the device body, for: periodically fusing image data acquired by the image sensor for the object area and radar data measured by the radar for the object area to update a distance map, wherein the distance map contains a mapping relation between image pixel points and distance data, and the distance data is determined from the radar data; acquiring characteristic pixel points of content to be identified based on the image data acquired by the image sensor for the object area; acquiring, according to the distance map, distance data having a mapping relation with the characteristic pixel points; if it is determined, based on the image data and the radar data, that there is a third monitored object that is detected by the radar but not detected in the image, obtaining a basic probability assignment corresponding to the third monitored object by using the speed of the monitored object contained in the radar data; comparing the basic probability assignment corresponding to the third monitored object with an average basic probability assignment corresponding to first monitored objects, and judging the third monitored object to be an occluded monitored object when its basic probability assignment is greater than the average basic probability assignment and less than 1, wherein the first monitored objects are monitored objects, determined based on the image data and the radar data, that are detected by both the radar and the image; and when it is judged that the occluded monitored object exists, determining the position of the occluded monitored object in the corresponding frame image by combining the multi-frame image collected by the image sensor with the radar data measured by the radar during collection of the multi-frame image.
29. The electronic device of claim 28, wherein the electronic device is an unmanned aerial vehicle or an autonomous mobile robot.
30. An electronic device, comprising: a memory and a processor, wherein,
the memory is used for storing programs;
the processor, coupled to the memory, to execute the program stored in the memory to:
periodically fusing image data acquired by an image sensor for an object area and radar data measured by a radar for the object area to update a distance map, wherein the distance map contains a mapping relation between image pixel points and distance data, and the distance data is determined from the radar data;
acquiring characteristic pixel points of content to be identified based on the image data acquired by the image sensor for the object area;
acquiring, according to the distance map, distance data having a mapping relation with the characteristic pixel points;
if it is determined, based on the image data and the radar data, that there is a third monitored object that is detected by the radar but not detected in the image, obtaining a basic probability assignment corresponding to the third monitored object by using the speed of the monitored object contained in the radar data; comparing the basic probability assignment corresponding to the third monitored object with an average basic probability assignment corresponding to first monitored objects, and judging the third monitored object to be an occluded monitored object when its basic probability assignment is greater than the average basic probability assignment and less than 1, wherein the first monitored objects are monitored objects, determined based on the image data and the radar data, that are detected by both the radar and the image;
and when it is judged that the occluded monitored object exists, determining the position of the occluded monitored object in the corresponding frame image by combining the multi-frame image collected by the image sensor with the radar data measured by the radar during collection of the multi-frame image.
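As an illustration of the pixel-to-distance lookup recited in claims 28 and 30, the sketch below reads the distances mapped to a set of characteristic pixel points. The nearest-mapped-pixel fallback used when a pixel has no entry in the distance map is an assumption added for robustness; the claims do not specify it.

```python
# Hedged sketch of the lookup step. The nearest-mapped-pixel fallback is an
# illustrative assumption; the claims only require reading distance data that
# has a mapping relation with the characteristic pixel points.
def lookup_distances(distance_map: dict[tuple[int, int], float],
                     feature_pixels: list[tuple[int, int]]) -> list[float]:
    """Return the distances mapped to the given characteristic pixel points."""
    mapped_pixels = list(distance_map.keys())
    distances = []
    for px in feature_pixels:
        if px in distance_map:
            distances.append(distance_map[px])
        elif mapped_pixels:
            # fall back to the nearest pixel that does carry a mapped distance
            nearest = min(mapped_pixels,
                          key=lambda q: (q[0] - px[0]) ** 2 + (q[1] - px[1]) ** 2)
            distances.append(distance_map[nearest])
    return distances


if __name__ == "__main__":
    demo_map = {(100, 200): 12.5, (105, 205): 13.1, (300, 220): 40.2}
    print(lookup_distances(demo_map, [(100, 200), (101, 201)]))  # [12.5, 12.5]
```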
31. An electronic device, comprising: a memory and a processor, wherein,
the memory is used for storing programs;
the processor, coupled with the memory, to execute the program stored in the memory to:
periodically fusing image data acquired by an image sensor for an object area and radar data measured by a radar for the object area to update a distance map, wherein the distance map contains a mapping relation between image pixel points and distance data, and the distance data is determined from the radar data;
acquiring characteristic pixel points of a queue based on the image data acquired by the image sensor for the object area;
determining a queuing length according to distance data in the distance map that has a mapping relation with the characteristic pixel points of the queue;
if it is determined, based on the image data and the radar data, that there is a third monitored object that is detected by the radar but not detected in the image, obtaining a basic probability assignment corresponding to the third monitored object by using the speed of the monitored object contained in the radar data; comparing the basic probability assignment corresponding to the third monitored object with an average basic probability assignment corresponding to first monitored objects, and judging the third monitored object to be an occluded monitored object when its basic probability assignment is greater than the average basic probability assignment and less than 1, wherein the first monitored objects are monitored objects, determined based on the image data and the radar data, that are detected by both the radar and the image;
and when it is judged that the occluded monitored object exists, determining the position of the occluded monitored object in the corresponding frame image by combining the multi-frame image collected by the image sensor with the radar data measured by the radar during collection of the multi-frame image.
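A minimal sketch of the queue-length step in claim 31 follows. It assumes the characteristic pixel points of the queue include its head and tail and that the queuing length is the span between the smallest and largest mapped horizontal distances; the claim itself only requires that the length be determined from the distance data mapped to those pixel points.

```python
# Hedged sketch of the queue-length step in claim 31. Taking the span between
# the smallest and largest mapped horizontal distances of the queue's
# characteristic pixel points is an assumed reading of the claim.
def queue_length_from_distance_map(distance_map: dict[tuple[int, int], float],
                                   queue_pixels: list[tuple[int, int]]) -> float:
    """Return the queuing length implied by the distances mapped to the
    queue's characteristic pixel points (0.0 if fewer than two are mapped)."""
    distances = [distance_map[p] for p in queue_pixels if p in distance_map]
    if len(distances) < 2:
        return 0.0
    return max(distances) - min(distances)


if __name__ == "__main__":
    demo_map = {(640, 420): 18.0, (660, 300): 55.5}   # head and tail of the queue
    print(queue_length_from_distance_map(demo_map, [(640, 420), (660, 300)]))  # 37.5
```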
32. An electronic device, comprising: a memory and a processor, wherein,
the memory is used for storing programs;
the processor, coupled with the memory, to execute the program stored in the memory to:
acquiring image data and radar data of an object area;
obtaining a distance map based on the image data and the radar data;
periodically fusing image data acquired by an image sensor for the object area and radar data measured by a radar for the object area to update the distance map, so as to provide data support when distance data needs to be determined based on the image data acquired by the image sensor for the object area; wherein the distance map contains a mapping relation between image pixel points and distance data, and the distance data is determined from the radar data;
if it is determined, based on the image data and the radar data, that there is a third monitored object that is detected by the radar but not detected in the image, obtaining a basic probability assignment corresponding to the third monitored object by using the speed of the monitored object contained in the radar data; comparing the basic probability assignment corresponding to the third monitored object with an average basic probability assignment corresponding to first monitored objects, and judging the third monitored object to be an occluded monitored object when its basic probability assignment is greater than the average basic probability assignment and less than 1, wherein the first monitored objects are monitored objects, determined based on the image data and the radar data, that are detected by both the radar and the image;
and when it is judged that the occluded monitored object exists, determining the position of the occluded monitored object in the corresponding frame image by combining the multi-frame image collected by the image sensor with the radar data measured by the radar during collection of the multi-frame image.
33. A display device, comprising: a memory, a processor and a display; wherein,
the memory is used for storing programs;
the processor, coupled with the memory, to execute the program stored in the memory to:
controlling the display to display an interactive interface;
in response to an object area designated by a user through the interactive interface, acquiring image data and radar data of the object area;
obtaining a distance map according to the image data and the radar data, wherein the distance map contains a mapping relation between image pixel points and distance data, and the distance data is determined from the radar data; the distance map is updated by periodically fusing image data acquired by the image sensor for the object area and radar data measured by the radar for the object area;
acquiring characteristic pixel points of the content to be identified based on the image data;
acquiring, according to the distance map, distance data having a mapping relation with the characteristic pixel points;
controlling the display to display the distance data;
if it is determined, based on the image data and the radar data, that there is a third monitored object that is detected by the radar but not detected in the image, obtaining a basic probability assignment corresponding to the third monitored object by using the speed of the monitored object contained in the radar data; comparing the basic probability assignment corresponding to the third monitored object with an average basic probability assignment corresponding to first monitored objects, and judging the third monitored object to be an occluded monitored object when its basic probability assignment is greater than the average basic probability assignment and less than 1, wherein the first monitored objects are monitored objects, determined based on the image data and the radar data, that are detected by both the radar and the image;
and when it is judged that the occluded monitored object exists, determining the position of the occluded monitored object in the corresponding frame image by combining the multi-frame image collected by the image sensor with the radar data measured by the radar during collection of the multi-frame image.
CN202010250778.3A 2020-04-01 2020-04-01 Data processing method, monitoring system, electronic equipment and display equipment Active CN113496514B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010250778.3A CN113496514B (en) 2020-04-01 2020-04-01 Data processing method, monitoring system, electronic equipment and display equipment

Publications (2)

Publication Number Publication Date
CN113496514A CN113496514A (en) 2021-10-12
CN113496514B true CN113496514B (en) 2022-09-20

Family

ID=77993880

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010250778.3A Active CN113496514B (en) 2020-04-01 2020-04-01 Data processing method, monitoring system, electronic equipment and display equipment

Country Status (1)

Country Link
CN (1) CN113496514B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202472944U (en) * 2011-12-28 2012-10-03 百年金海安防科技有限公司 Urban traffic information acquisition and processing system based on integration of data of multiple sensors
CN107564285A (en) * 2017-08-29 2018-01-09 南京慧尔视智能科技有限公司 Vehicle queue length detection method and system based on microwave
CN207367369U (en) * 2017-08-29 2018-05-15 南京慧尔视智能科技有限公司 Vehicle queue length detecting system based on microwave
CN108445496A (en) * 2018-01-02 2018-08-24 北京汽车集团有限公司 Ranging caliberating device and method, distance-measuring equipment and distance measuring method
CN109117691A (en) * 2017-06-23 2019-01-01 百度在线网络技术(北京)有限公司 Drivable region detection method, device, equipment and storage medium
CN109146929A (en) * 2018-07-05 2019-01-04 中山大学 A kind of object identification and method for registering based under event triggering camera and three-dimensional laser radar emerging system
CN209729032U (en) * 2018-12-25 2019-12-03 深圳市新创中天信息科技发展有限公司 A kind of fusion vehicle detecting system based on binocular video and radar
CN110942449A (en) * 2019-10-30 2020-03-31 华南理工大学 Vehicle detection method based on laser and vision fusion

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9221461B2 (en) * 2012-09-05 2015-12-29 Google Inc. Construction zone detection using a plurality of information sources

Also Published As

Publication number Publication date
CN113496514A (en) 2021-10-12

Similar Documents

Publication Publication Date Title
CN111554088B (en) Multifunctional V2X intelligent roadside base station system
EP4109331A1 (en) Obstacle detection method and apparatus, computer device, and storage medium
KR101534056B1 (en) Traffic signal mapping and detection
CN105793669B (en) Vehicle position estimation system, device, method, and camera device
CN103941746B (en) Image processing system and method is patrolled and examined without man-machine
CN111045000A (en) Monitoring system and method
JP6950832B2 (en) Position coordinate estimation device, position coordinate estimation method and program
CN110738251B (en) Image processing method, image processing apparatus, electronic device, and storage medium
US20210133495A1 (en) Model providing system, method and program
US20220044558A1 (en) Method and device for generating a digital representation of traffic on a road
CN112949782A (en) Target detection method, device, equipment and storage medium
EP3961155A1 (en) Pose calculation method, device and program
US20210403053A1 (en) Method for calling a vehicle to user's current location
CN115965655A (en) Traffic target tracking method based on radar-vision integration
Wang et al. A roadside camera-radar sensing fusion system for intelligent transportation
US20220234588A1 (en) Data Recording for Advanced Driving Assistance System Testing and Validation
US20210232862A1 (en) Data providing system and data collection system
EP3940666A1 (en) Digital reconstruction method, apparatus, and system for traffic road
CN113496514B (en) Data processing method, monitoring system, electronic equipment and display equipment
CN109708659B (en) Distributed intelligent photoelectric low-altitude protection system
CN112613668A (en) Scenic spot dangerous area management and control method based on artificial intelligence
Marques et al. An evaluation of machine learning methods for speed-bump detection on a GoPro dataset
CN114565906A (en) Obstacle detection method, obstacle detection device, electronic device, and storage medium
WO2022041212A1 (en) Fire source location indication method, and related device and apparatus
CN115035490A (en) Target detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant