CN112738394A - Linkage method and device of radar and camera equipment and storage medium - Google Patents
- Publication number: CN112738394A (application CN202011567665.2A)
- Authority: CN (China)
- Prior art keywords: information, target object, radar, acquisition device, video acquisition
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H04N23/695—Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
- G01S17/86—Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
- H04N23/90—Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
Abstract
The invention discloses a linkage method and device for a radar and a camera device, and a storage medium. The method includes: acquiring first radar data obtained by a first laser radar collecting a first area; sending a first control instruction to a video acquisition device when a target object is detected according to the first radar data, wherein the video acquisition device adopts a dual pan-tilt design and comprises a camera for panoramic monitoring and a camera for detail monitoring; and the video acquisition device shooting the first area and the target object according to the control instruction. The first laser radar and the video acquisition device thus complement each other, and data processing drives linked control of the video acquisition device, enabling targets both in front of and behind the laser radar and the video acquisition device to be tracked and marked. This enriches the monitoring information and solves the technical problem in the prior art that the monitoring information of a target object comes from a single source.
Description
Technical Field
The invention relates to the field of video monitoring, in particular to a linkage method and a linkage device of radar and camera equipment and a storage medium.
Background
In a scene with a wide field of view, the viewing angle of a single camera lens is limited and the monitoring range of a single radar is limited, yet pairing multiple radars with multiple lenses is very costly.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the invention provide a linkage method and device for a radar and a camera device, and a storage medium, which at least solve the technical problem in the prior art that the monitoring information of a target object comes from a single source.
According to an aspect of the embodiments of the present invention, there is provided a linkage method of a radar and a video capture device, including: acquiring first radar data obtained by a first laser radar collecting a first area; sending a first control instruction to a video acquisition device when a target object is detected according to the first radar data, wherein the video acquisition device adopts a dual pan-tilt design and comprises a camera for panoramic monitoring and a camera for detail monitoring; and the video acquisition device shooting the first area and the target object according to the control instruction.
Optionally, after the video capturing device captures the first area and the target object according to the control instruction, the method further includes: acquiring second radar data acquired by a second laser radar acquiring a second area, wherein the second laser radar and the first laser radar are positioned at two sides of the video acquisition device, and the second area and the first area are different areas; sending a second control instruction to a video acquisition device under the condition that the target object is detected according to the second radar data; and the video acquisition device shoots the second area and the target object according to the second control instruction.
Optionally, after the video capturing device captures the first area and the target object according to the control instruction, the method further includes: obtaining first relevant information of the target object according to the first radar data; acquiring first video data acquired by the video acquisition device in the first area, and analyzing the first video data to acquire second related information of the target object; and determining first target information of the target object according to the first relevant information and the second relevant information.
Optionally, the method further includes: obtaining contour information of the vehicle according to the first radar data under the condition that the target object is the vehicle, wherein the first relevant information comprises the contour information; obtaining color information and license plate number information of the vehicle according to the first video data, wherein the second relevant information comprises the color information and the license plate number information; and determining the contour information, the color information and the license plate number information as target information of the vehicle.
Optionally, after the video capturing device captures the second area and the target object according to the control instruction, the method further includes: obtaining third relevant information of the target object according to the second radar data; acquiring second video data acquired by the video acquisition device through acquiring the second area, and analyzing the second video data to acquire fourth related information of the target object; and determining second target information of the target object according to the third relevant information and the fourth relevant information.
Optionally, after determining the target information of the target object according to the third relevant information and the fourth relevant information, the method further includes: and tracking the target object according to the first target information and the second target information under the condition that the first target information and the second target information are matched.
According to another aspect of the embodiments of the present invention, there is also provided a linkage device of a radar and a video capture device, used in the above linkage method, and including: the first laser radar, located on one side of the video acquisition device and used for collecting a first area to obtain first radar data; the second laser radar, located on the other side of the video acquisition device and used for collecting a second area to obtain second radar data; the processing unit, used for sending a first control instruction to the video acquisition device when a target object is detected according to the first radar data, and further used for sending a second control instruction to the video acquisition device when the target object is detected according to the second radar data; and the video acquisition device, located between the first laser radar and the second laser radar and used for shooting the first area according to the first control instruction or shooting the second area according to the second control instruction.
Optionally, the video acquisition device adopts a dual pan-tilt design, comprising a camera for panoramic monitoring and a camera for detail monitoring.
According to another aspect of the embodiments of the present invention, there is also provided a linkage apparatus of a radar and a video capture device, including: a first acquisition unit, used for acquiring first radar data obtained by a first laser radar collecting a first area; a first sending unit, used for sending a first control instruction to a video acquisition device when a target object is detected according to the first radar data, wherein the video acquisition device adopts a dual pan-tilt design and comprises a camera for panoramic monitoring and a camera for detail monitoring; and a first control unit, used for the video acquisition device to shoot the first area and the target object according to the control instruction.
Optionally, the apparatus further comprises: the second acquisition unit is used for acquiring second radar data acquired by a second laser radar acquiring a second area, wherein the second laser radar and the first laser radar are positioned at two sides of the video acquisition device, and the second area and the first area are different areas; the second sending unit is used for sending a second control instruction to the video acquisition device under the condition that the target object is detected according to the second radar data; and the second control unit is used for shooting the second area and the target object by the video acquisition device according to the second control instruction.
Optionally, the apparatus further comprises: the first obtaining unit is used for obtaining first related information of the target object according to the first radar data after the video acquisition device shoots the first area and the target object according to the control instruction; the third acquisition unit is used for acquiring first video data acquired by the video acquisition device in the first area and analyzing the first video data to acquire second related information of the target object; a first determining unit, configured to determine first target information of the target object according to the first relevant information and the second relevant information.
Optionally, the apparatus includes: a second obtaining unit, configured to obtain, when the target object is a vehicle, profile information of the vehicle according to the first radar data, where the first related information includes the profile information; a third obtaining unit, configured to obtain color information and license plate number information of the vehicle according to the first video data, where the second related information includes the color information and the license plate number information; a second determination unit configured to determine the contour information, the color information, and the license plate number information as target information of the vehicle.
Optionally, the apparatus further comprises: a fourth obtaining unit, configured to obtain third relevant information of the target object according to the second radar data after the video capture device captures the second area and the target object according to the control instruction; a fifth obtaining unit, configured to obtain second video data obtained by acquiring the second area by the video acquisition device, and analyze the second video data to obtain fourth related information of the target object; a third determining unit, configured to determine second target information of the target object according to the third relevant information and the fourth relevant information.
Optionally, the apparatus further comprises: and the tracking unit is used for tracking the target object according to the first target information and the second target information under the condition that the first target information is matched with the second target information after the target information of the target object is determined according to the third relevant information and the fourth relevant information.
According to another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to execute the above-mentioned linkage method of the radar and the video capture device when the computer program is executed.
According to another aspect of the embodiments of the present invention, there is also provided an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the method for linking the radar and the video capture device through the computer program.
In the embodiments of the invention, first radar data obtained by a first laser radar collecting a first area is acquired; a first control instruction is sent to a video acquisition device when a target object is detected according to the first radar data, wherein the video acquisition device adopts a dual pan-tilt design and comprises a camera for panoramic monitoring and a camera for detail monitoring; and the video acquisition device shoots the first area and the target object according to the control instruction. The first laser radar and the video acquisition device thus complement each other, data processing drives linked control of the video acquisition device, and targets both in front of and behind the laser radar and the video acquisition device can be tracked and marked. This enriches the monitoring information and solves the technical problem in the prior art that the monitoring information of a target object comes from a single source.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow chart of an alternative method of linking a radar and a video capture device according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an alternative linkage device of a dual radar and camera device in accordance with an embodiment of the present invention;
FIG. 3 is a block diagram of an alternative dual radar and camera device coordinated target object tracking system in accordance with an embodiment of the present invention;
FIG. 4 is a flow chart of an alternative process in which the camera processes data for the front while the rear radar monitors a target, according to an embodiment of the present invention;
FIG. 5 is a flow chart of an alternative process in which the camera processes data for the rear while the front radar monitors a target, according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an alternative linkage of a radar and video capture device in accordance with an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an alternative electronic device according to an embodiment of the invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to an aspect of the embodiments of the present invention, there is provided a method for linking a radar and a video capture device, as shown in fig. 1, the method for linking a radar and a video capture device includes:
step S102, first radar data obtained by a first laser radar collecting a first area are obtained.
And S104, sending a first control instruction to a video acquisition device when the target object is detected according to the first radar data, wherein the video acquisition device adopts a dual pan-tilt design and comprises a camera for panoramic monitoring and a camera for detail monitoring.
And S106, shooting the first area and the target object by the video acquisition device according to the control instruction.
Optionally, the linkage method of the radar and the video capture device in this embodiment may be applied to, but is not limited to, roads, garage entrances, parking lots, and other scenes with a wide bidirectional field of view, where the target object may be a moving vehicle, a pedestrian, or the like.
It should be noted that the above video capture device may include, but is not limited to, a video camera, which may be a PTZ camera. The PTZ camera adopts a dual pan-tilt design in which both the upper and lower channels can rotate: the upper channel carries a fixed-focus or zoom lens for panoramic monitoring, and the lower channel carries a zoom lens for detail monitoring. The panoramic camera decides whether to monitor the front or the rear according to the radar detection result, and the detail camera captures details of the target detected by the panoramic camera.
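The linkage flow of steps S102 to S106 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the class and function names (`ControlInstruction`, `detect_target`, `linkage_step`), the input format, and the 50 m detection threshold are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class ControlInstruction:
    area: str    # which area the camera should cover, e.g. "front" or "rear"
    pan: float   # pan angle in degrees toward the detected target
    tilt: float  # tilt angle in degrees

def detect_target(radar_points):
    """Return the first radar return that looks like a target, or None.

    Here a 'target' is simply any return closer than 50 m; a real system
    would cluster the point data and filter by size and speed.
    """
    hits = [p for p in radar_points if p["distance_m"] < 50.0]
    return hits[0] if hits else None

def linkage_step(radar_points, area="front"):
    """One linkage cycle: radar data in, control instruction out (or None)."""
    target = detect_target(radar_points)
    if target is None:
        return None  # no target: the camera is not linked
    # Point the camera at the bearing reported by the radar.
    return ControlInstruction(area=area, pan=target["angle_deg"], tilt=0.0)
```

For example, `linkage_step([{"distance_m": 32.0, "angle_deg": 14.5}])` yields an instruction panning the camera to 14.5 degrees, while an empty scan yields `None`.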
According to the embodiment provided by the application, first radar data obtained by a first laser radar collecting a first area is acquired; a first control instruction is sent to a video acquisition device when a target object is detected according to the first radar data, wherein the video acquisition device adopts a dual pan-tilt design and comprises a camera for panoramic monitoring and a camera for detail monitoring; and the video acquisition device shoots the first area and the target object according to the control instruction. In this way, the first laser radar and the video acquisition device effectively complement each other, data processing drives linked control of the video acquisition device, and targets both in front of and behind the laser radar and the video acquisition device are tracked and marked, which enriches the monitoring information and solves the technical problem in the prior art that the monitoring information of a target object comes from a single source.
Optionally, in this embodiment, after the video capture device captures the first area and the target object according to the control instruction, the method may further include: acquiring second radar data acquired by a second laser radar acquiring a second area, wherein the second laser radar and the first laser radar are positioned at two sides of the video acquisition device, and the second area and the first area are different areas; under the condition that the target object is detected according to the second radar data, sending a second control instruction to the video acquisition device; and the video acquisition device shoots the second area and the target object according to the second control instruction.
In this embodiment, linkage between the two radars and the video acquisition device is realized: the video acquisition device is controlled by the first control instruction to collect video data in the first area, and by the second control instruction to collect video data in the second area, which enriches the monitoring information collected for the target object.
Optionally, in this embodiment, after the video capture device captures the first area and the target object according to the control instruction, the method further includes: obtaining first relevant information of a target object according to the first radar data; acquiring first video data acquired by a video acquisition device in a first area, and analyzing the first video data to acquire second related information of a target object; first target information of the target object is determined according to the first relevant information and the second relevant information.
Optionally, in this embodiment, the method may further include: under the condition that the target object is a vehicle, obtaining the contour information of the vehicle according to the first radar data, wherein the first relevant information comprises the contour information; obtaining color information and license plate number information of the vehicle according to the first video data, wherein the second relevant information comprises the color information and the license plate number information; and determining the contour information, the color information and the license plate number information as the target information of the vehicle.
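The fusion of radar-derived and video-derived vehicle attributes described above can be sketched as follows; the field names (`contour`, `color`, `plate`) are illustrative assumptions, not terms fixed by the patent.

```python
def fuse_vehicle_info(radar_info, video_info):
    """Combine the first related information (from the lidar) with the second
    related information (from video analysis) into the vehicle's target info."""
    return {
        "contour": radar_info["contour"],  # outline/size from the radar data
        "color": video_info["color"],      # body color from the video data
        "plate": video_info["plate"],      # license plate number from the video data
    }

# Hypothetical example values for one detected vehicle.
target_info = fuse_vehicle_info(
    {"contour": {"length_m": 4.6, "width_m": 1.8}},
    {"color": "white", "plate": "ABC-1234"},
)
```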
When the target object is a pedestrian, the first related information may include the contour information of the pedestrian, and the second related information may include the clothing and the identity information of the pedestrian.
Optionally, after the video capture device captures the second area and the target object according to the control instruction, the method further includes: obtaining third relevant information of the target object according to the second radar data; acquiring second video data acquired by the video acquisition device in the second area, and analyzing the second video data to acquire fourth related information of the target object; and determining second target information of the target object according to the third relevant information and the fourth relevant information.
Optionally, in this embodiment, after determining the target information of the target object according to the third relevant information and the fourth relevant information, the method may further include: and tracking the target object according to the first target information and the second target information under the condition that the first target information and the second target information are matched.
In this embodiment, the target object can be tracked jointly by the first laser radar, the second laser radar, and the video capture device: when the same target object travels from the first area to the second area, it can be tracked continuously by matching the first target information with the second target information.
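Matching the first and second target information, as described above, might look like the following sketch. The matching criterion (plate equality when both plates are known, otherwise contour similarity within a tolerance) is an assumption; the patent only requires that the two records match.

```python
def contours_match(c1, c2, tol_m=0.3):
    """Contours match when length and width agree within a tolerance (meters)."""
    return (abs(c1["length_m"] - c2["length_m"]) <= tol_m
            and abs(c1["width_m"] - c2["width_m"]) <= tol_m)

def same_target(info1, info2):
    """Decide whether two target-info records describe the same object."""
    if info1.get("plate") and info2.get("plate"):
        return info1["plate"] == info2["plate"]
    return contours_match(info1["contour"], info2["contour"])
```

When the two records match, the tracker can hand the target over from the first area to the second and keep a single track identity.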
As an optional embodiment, the application further provides a target object tracking method based on the linkage of dual radars and a camera device. Fig. 2 shows such a linkage device: two radar modules are deployed front and rear and integrated with a camera to form an integrated device.
It should be noted that the device adopts a dual pan-tilt design in which both the upper and lower channels can rotate: the upper channel carries a fixed-focus or zoom lens for panoramic monitoring, and the lower channel carries a zoom lens for detail monitoring. The panoramic camera decides whether to monitor the front or the rear according to the radar detection result, and the detail camera captures details of the target detected by the panoramic camera.
Fig. 3 is a block diagram of a system in which the dual radars and the camera device cooperate to track a target object. The system comprises the following units:
Front radar module (equivalent to the first laser radar): a radar module arranged in front of the camera device, used for collecting front radar data;
rear radar module (equivalent to the second laser radar): a radar module arranged behind the camera device, used for collecting rear radar data;
radar data processing unit: processes the radar point data collected by the front or rear radar to obtain information such as target size, speed, distance, and angle (equivalent to the first related information or the third related information);
video data processing unit: processes the camera video and extracts target information such as the classification of motor vehicles, non-motor vehicles, and persons (equivalent to the second related information or the fourth related information);
data analysis and processing unit: comprehensively processes the outputs of the front radar data processing, rear radar data processing, and video data processing units, and selects the PTZ information for camera linkage according to the user-configured priority of linked camera control;
camera control unit: receives data from the data analysis and processing unit, and is used for controlling the camera to track in a linked manner and to mark the target.
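The selection step performed by the data analysis and processing unit can be sketched as follows, assuming detections arrive keyed by source and the user priority is an ordered tuple; both conventions are assumptions made for this example.

```python
def select_linkage(detections, priority=("front", "rear")):
    """Pick which radar detection the camera should follow.

    detections: dict mapping a source name ("front"/"rear") to a detection
    dict, or to None when that radar currently sees nothing.
    Returns (source, detection) for the camera control unit, or None when
    neither radar reports a target.
    """
    for source in priority:
        det = detections.get(source)
        if det is not None:
            return source, det
    return None
```

Swapping the `priority` tuple to `("rear", "front")` models a user who configures the rear direction to outrank the front.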
As shown in fig. 4, the flow when the camera processes data for the front while the rear radar monitors a target is as follows.
Step S41: start; the camera captures and marks a front target, while the rear laser radar (equivalent to the second laser radar) monitors for targets;
step S42: the camera processes data for the front;
step S43: determine whether the rear laser radar has detected the target object; if so, execute step S44; if not, execute step S47;
step S44: determine whether the front data processing of the camera is completed; if so, execute step S45; if not, execute step S46;
step S45: the camera links to the rear to process data;
step S46: determine whether the front data processing priority is higher than the rear; if not, execute step S45; if so, execute step S47;
step S47: the camera does not link and continues the current data processing;
step S48: end.
That is, when the camera is capturing and marking a front target and the rear radar detects a target, it is first judged whether the front capturing and marking data processing is finished. If it is finished, the camera links to the rear to capture and mark there. If it is not finished, the data processing priorities are compared: when the user-configured front priority is higher than the rear priority, the camera does not link and continues processing the front data; when the front priority is lower than the rear priority, the camera links to the rear for data processing.
When the camera is capturing, marking, and otherwise collecting data on a front target and the rear radar detects no target, the camera does not link and continues processing data for the front.
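The decision flow of steps S41 to S48 reduces to three conditions. The sketch below is an illustration of that flow with made-up return labels; it is not code from the patent.

```python
def front_linkage_decision(rear_target_detected,
                           front_processing_done,
                           front_priority_higher):
    """Decide whether the camera, busy with a front target, links to the rear.

    Mirrors steps S43-S47: link only if the rear radar sees a target AND
    (front processing is finished OR the rear priority outranks the front).
    """
    if not rear_target_detected:
        return "no_linkage"      # S47: keep processing front data
    if front_processing_done:
        return "link_to_rear"    # S44 -> S45: camera turns to the rear
    if front_priority_higher:
        return "no_linkage"      # S46 -> S47: front outranks rear
    return "link_to_rear"        # S46 -> S45: rear outranks front
```

The rear-processing flow of fig. 5 (steps S51 to S58) is the mirror image, with the front and rear roles swapped.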
As shown in fig. 5, the camera processes data in the rear and links with the front radar monitoring target.
Step S51, starting to monitor the target by the front radar when the camera captures, marks and other data acquisition of the rear target;
step S52, the camera processes the data in the rear;
a step S53 of determining whether the front laser radar has detected the target object, and if so, executing a step S54, and if not, executing a step S57;
a step S54 of determining whether the rear camera data processing is completed, and if so, executing a step S55, and if not, executing a step S56;
step S55, the camera is linked to the front for data processing;
a step S56 of determining whether the data processing priority is higher than the rear, and if so, executing a step S55, and if not, executing a step S57;
step S57, the cameras are not linked, and data processing is continued;
step S58 ends.
That is, when the camera captures and marks data of a rear target, the front radar monitors the target and needs to first judge whether data processing such as rear capture and marking is finished, and if the data processing is finished, the camera is linked to the front to perform capture and marking data processing; if the data processing is not finished, the data processing priority needs to be judged, when the rear data processing priority set by the user is greater than the front data processing priority, the camera does not perform linkage and continues to process the rear data, and when the front data processing priority set by the user is greater than the rear data processing priority, the camera is linked to the front for data processing.
If the front radar detects no target while the camera is capturing and marking the rear target, the camera is not linked and data processing continues at the rear.
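The decision flows for the two directions are mirror images of each other. As a minimal sketch of the priority arbitration in steps S53 to S57 (the function and parameter names are illustrative, not taken from the patent, which specifies only the flow), the logic could be written as:

```python
def decide_linkage(other_radar_detected, current_processing_done,
                   current_priority, other_priority):
    """Arbitrate whether the camera should swing to the other side.

    Mirrors steps S53-S57: link to the other side only if that side's
    radar sees a target AND (the current side's processing is finished,
    or the other side's user-set priority is higher).
    """
    if not other_radar_detected:
        return "stay"   # S57: no target on the other side, keep working
    if current_processing_done:
        return "link"   # S55: current work finished, swing over
    if other_priority > current_priority:
        return "link"   # S56 -> S55: the other side outranks this one
    return "stay"       # S56 -> S57: keep processing the current side

# Example: camera busy at the front, rear radar fires, rear outranks front
assert decide_linkage(True, False, current_priority=1, other_priority=2) == "link"
```

The same function serves both figures: for fig. 4 the "current" side is the front, for fig. 5 it is the rear.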
In the embodiment provided by the application, the front radar, the rear radar and the camera adopt an integrated design. Compared with a traditional radar-camera linkage device, the camera is controlled in linkage through front and rear radar detection, intelligent data analysis and linkage, so that both the panorama and the details can be acquired and the monitoring information is enriched; compared with a multi-radar, multi-lens linkage scheme, the cost is greatly reduced.
Because radar offers high detection precision and strong environmental adaptability, it effectively complements the camera: dual-radar detection with data processing drives linkage control of the dome camera, achieving front and rear target tracking and marking by the radar and the camera.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
According to another aspect of the embodiments of the present invention, there is also provided an apparatus for implementing the above-described linkage of a radar and a video capture device, the apparatus comprising:
The first laser radar is located on one side of the video acquisition device and is configured to collect first radar data from a first area.
The second laser radar is located on the other side of the video acquisition device and is configured to collect second radar data from a second area.
The processing unit is configured to send a first control instruction to the video acquisition device when a target object is detected from the first radar data, and is further configured to send a second control instruction to the video acquisition device when the target object is detected from the second radar data.
The video acquisition device is located between the first laser radar and the second laser radar and is configured to photograph the first area according to the first control instruction or the second area according to the second control instruction.
Optionally, in this embodiment, the video capture device adopts a dual pan-tilt design including a camera for panoramic monitoring and a camera for detail monitoring.
In this dual pan-tilt design, one channel uses a fixed-focus or zoom lens for panoramic monitoring, and the other channel uses a zoom lens for detail monitoring.
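As a sketch only (the class and field names below are illustrative, not terms from the patent), the dual-channel layout just described could be modelled as:

```python
from dataclasses import dataclass

@dataclass
class Channel:
    lens: str   # "fixed-focus" or "zoom"
    role: str   # "panorama" or "detail"

@dataclass
class DualPanTiltCamera:
    """Two channels on one pan-tilt head: one wide view, one telephoto."""
    panorama: Channel
    detail: Channel

camera = DualPanTiltCamera(
    panorama=Channel(lens="fixed-focus", role="panorama"),  # whole scene
    detail=Channel(lens="zoom", role="detail"),             # zoom onto targets
)
```

The split matters for linkage: the panorama channel keeps recording context while the detail channel is the one steered toward whichever side's radar fired.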
According to another aspect of the embodiment of the invention, a linkage device of the radar and the video acquisition device for implementing the above linkage method of the radar and the video acquisition device is further provided. As shown in fig. 6, the linkage device of the radar and the video capture device includes: a first acquisition unit 61, a first sending unit 63 and a first control unit 65.
The first obtaining unit 61 is configured to obtain first radar data obtained by acquiring the first area by the first laser radar.
And a first sending unit 63, configured to send a first control instruction to the video capture device when a target object is detected from the first radar data, wherein the video capture device adopts a dual pan-tilt design and includes a camera for panoramic monitoring and a camera for detail monitoring.
And a first control unit 65, configured to control the video acquisition device to photograph the first area and the target object according to the first control instruction.
In the embodiment provided by the application, the first obtaining unit 61 obtains first radar data collected by the first laser radar from the first area; the first sending unit 63 sends a first control instruction to the video acquisition device when a target object is detected from the first radar data, the video acquisition device adopting a dual pan-tilt design and comprising a camera for panoramic monitoring and a camera for detail monitoring; and the first control unit 65 controls the video acquisition device to photograph the first area and the target object according to the control instruction. In this way the first laser radar and the video acquisition device complement each other, data processing drives linkage control of the video acquisition device, front and rear target tracking and marking by the laser radar and the video acquisition device are achieved, the monitoring information is enriched, and the technical problem in the prior art that the monitoring information of a target object is single is solved.
Optionally, in this embodiment, the apparatus may further include: the second acquisition unit is used for acquiring second radar data acquired by a second laser radar acquiring a second area, wherein the second laser radar and the first laser radar are positioned at two sides of the video acquisition device, and the second area and the first area are different areas; the second sending unit is used for sending a second control instruction to the video acquisition device under the condition that the target object is detected according to the second radar data; and the second control unit is used for shooting the second area and the target object by the video acquisition device according to the second control instruction.
Optionally, the apparatus may further include: the first obtaining unit is used for obtaining first related information of the target object according to the first radar data after the video acquisition device shoots the first area and the target object according to the control instruction; the third acquisition unit is used for acquiring first video data acquired by the video acquisition device in the first area and analyzing the first video data to acquire second related information of the target object; a first determination unit for determining first target information of the target object according to the first correlation information and the second correlation information.
Optionally, the apparatus may further include: a second obtaining unit, configured to obtain, when the target object is a vehicle, profile information of the vehicle according to the first radar data, where the first related information includes the profile information; the third obtaining unit is used for obtaining color information and license plate number information of the vehicle according to the first video data, and the second relevant information comprises the color information and the license plate number information; and a second determination unit for determining the contour information, the color information, and the license plate number information as target information of the vehicle.
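The vehicle case above fuses radar-derived and video-derived attributes into one record. A minimal sketch of that merge (the helper name and dictionary keys are hypothetical; the patent does not prescribe a data format) might be:

```python
def fuse_vehicle_info(radar_info, video_info):
    """Merge the first related information (radar-derived contour) with
    the second related information (video-derived color and license plate
    number) into the vehicle's target information."""
    target = dict(radar_info)   # e.g. {"contour": ...}
    target.update(video_info)   # e.g. {"color": ..., "plate": ...}
    return target

vehicle = fuse_vehicle_info({"contour": "sedan outline"},
                            {"color": "white", "plate": "ABC123"})
```

The point of the merge is that neither sensor alone yields all three attributes: radar gives the contour, video gives color and plate number.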
Optionally, the apparatus may further include: the fourth obtaining unit is used for obtaining third relevant information of the target object according to the second radar data after the video acquisition device shoots the second area and the target object according to the control instruction; the fifth obtaining unit is used for obtaining second video data obtained by the video acquisition device acquiring the second area and analyzing the second video data to obtain fourth related information of the target object; a third determining unit for determining second target information of the target object according to the third related information and the fourth related information.
Optionally, the apparatus may further include: and the tracking unit is used for tracking the target object according to the first target information and the second target information under the condition that the first target information is matched with the second target information after determining the target information of the target object according to the third relevant information and the fourth relevant information.
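Cross-region tracking hinges on deciding whether the first and second target information describe the same object. The patent leaves the matching criterion unspecified; the sketch below assumes the license plate is the stable key, which is an illustrative choice only:

```python
def same_target(first_info, second_info, key="plate"):
    """Hypothetical match test: the two detections describe one object
    when the chosen attribute is present in both and agrees."""
    return (key in first_info and key in second_info
            and first_info[key] == second_info[key])

def track_decision(first_info, second_info):
    # Continue tracking across the two regions only on a match
    return "track" if same_target(first_info, second_info) else "separate"
```

For non-vehicle targets a different key (e.g. a contour signature) would be substituted.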
According to a further aspect of the embodiments of the present invention, there is also provided an electronic device for implementing the method for linking a radar and a video capture device, as shown in fig. 7, the electronic device includes a memory 702 and a processor 704, the memory 702 stores a computer program, and the processor 704 is configured to execute the steps in any one of the method embodiments through the computer program.
Optionally, in this embodiment, the electronic apparatus may be located in at least one network device of a plurality of network devices of a computer network.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, acquiring first radar data acquired by a first laser radar in a first area;
s2, sending a first control instruction to a video acquisition device when a target object is detected from the first radar data, wherein the video acquisition device adopts a dual pan-tilt design and comprises a camera for panoramic monitoring and a camera for detail monitoring;
and S3, the video acquisition device shoots the first area and the target object according to the control instruction.
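Steps S1 to S3 can be sketched end to end. The radar and camera interfaces below are stand-ins with hypothetical names, since the patent defines the flow rather than an API:

```python
def detect_target(radar_data):
    # Stand-in detector: any non-empty set of radar returns counts as a target
    return {"points": radar_data} if radar_data else None

class StubCamera:
    """Records the shots it is instructed to take."""
    def __init__(self):
        self.shots = []
    def shoot(self, area, target):
        self.shots.append((area, target))

def linkage_step(radar_data, camera):
    target = detect_target(radar_data)       # S1: first radar data from the first area
    if target is not None:
        camera.shoot("first area", target)   # S2 + S3: instruct the camera and photograph
    return target is not None

cam = StubCamera()
fired = linkage_step([(1.0, 2.0)], cam)  # one radar return triggers the linkage
```

With empty radar data the function returns without instructing the camera, matching the "no target detected" branch of the method.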
Alternatively, it can be understood by those skilled in the art that the structure shown in fig. 7 is only an illustration and does not limit the structure of the electronic device. For example, the electronic device may also be a terminal device such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palmtop computer or a Mobile Internet Device (MID), and may include more or fewer components (e.g., network interfaces, etc.) than shown in fig. 7, or have a configuration different from that shown in fig. 7.
The memory 702 may be used to store software programs and modules, such as program instructions/modules corresponding to the linkage method and device of the radar and the video capture device in the embodiment of the present invention; the processor 704 executes various functional applications and data processing by running the software programs and modules stored in the memory 702, thereby implementing the linkage method of the radar and the video capture device. The memory 702 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 702 may further include memory located remotely from the processor 704, which may be connected to the terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof. The memory 702 may be, but is not limited to being, configured to store information such as the first radar data and the video data collected by the video capture device. As an example, as shown in fig. 7, the memory 702 may include, but is not limited to, the first acquiring unit 61, the first sending unit 63 and the first control unit 65 of the linkage device of the radar and the video capture device. In addition, the memory may further include, but is not limited to, other module units of the linkage device of the radar and the video capture device, which are not described in detail in this example.
Optionally, the transmission device 706 is used to receive or send data via a network. Examples of the network may include wired and wireless networks. In one example, the transmission device 706 includes a Network Interface Controller (NIC), which can be connected to a router and other network devices via a network cable so as to communicate with the internet or a local area network. In another example, the transmission device 706 is a Radio Frequency (RF) module, which is used to communicate with the internet wirelessly.
In addition, the electronic device further includes: a display 708 for displaying the target object; and a connection bus 710 for connecting the respective module parts in the above-described electronic apparatus.
According to a further aspect of an embodiment of the present invention, there is also provided a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
Alternatively, in the present embodiment, the above-mentioned computer-readable storage medium may be configured to store a computer program for executing the steps of:
s1, acquiring first radar data acquired by a first laser radar in a first area;
s2, sending a first control instruction to a video acquisition device when a target object is detected from the first radar data, wherein the video acquisition device adopts a dual pan-tilt design and comprises a camera for panoramic monitoring and a camera for detail monitoring;
and S3, the video acquisition device shoots the first area and the target object according to the control instruction.
Alternatively, in this embodiment, a person skilled in the art may understand that all or part of the steps in the methods of the foregoing embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be personal computers, servers, network devices, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and decorations can be made without departing from the principle of the present invention, and these modifications and decorations should also be regarded as the protection scope of the present invention.
Claims (15)
1. A linkage method of a radar and a video acquisition device is characterized by comprising the following steps:
acquiring first radar data acquired by a first laser radar in a first area;
sending a first control instruction to a video acquisition device under the condition that a target object is detected according to the first radar data, wherein the video acquisition device adopts a dual pan-tilt design and comprises a camera for panoramic monitoring and a camera for detail monitoring;
and the video acquisition device shoots the first area and the target object according to the first control instruction.
2. The method of claim 1, wherein after the video capture device captures the first region and the target object according to the control instruction, the method further comprises:
acquiring second radar data acquired by a second laser radar acquiring a second area, wherein the second laser radar and the first laser radar are positioned at two sides of the video acquisition device, and the second area and the first area are different areas;
sending a second control instruction to a video acquisition device under the condition that the target object is detected according to the second radar data;
and the video acquisition device shoots the second area and the target object according to the second control instruction.
3. The method of claim 1, wherein after the video capture device captures the first region and the target object according to the control instruction, the method further comprises:
obtaining first relevant information of the target object according to the first radar data;
acquiring first video data acquired by the video acquisition device in the first area, and analyzing the first video data to acquire second related information of the target object;
and determining first target information of the target object according to the first relevant information and the second relevant information.
4. The method of claim 3, comprising:
obtaining contour information of the vehicle according to the first radar data under the condition that the target object is the vehicle, wherein the first relevant information comprises the contour information;
obtaining color information and license plate number information of the vehicle according to the first video data, wherein the second relevant information comprises the color information and the license plate number information;
and determining the contour information, the color information and the license plate number information as target information of the vehicle.
5. The method of claim 2, wherein after the video capture device captures the second region and the target object according to the control instruction, the method further comprises:
obtaining third relevant information of the target object according to the second radar data;
acquiring second video data acquired by the video acquisition device through acquiring the second area, and analyzing the second video data to acquire fourth related information of the target object;
and determining second target information of the target object according to the third relevant information and the fourth relevant information.
6. The method of claim 5, wherein after determining the target information of the target object according to the third relevant information and the fourth relevant information, the method further comprises:
and tracking the target object according to the first target information and the second target information under the condition that the first target information and the second target information are matched.
7. Linkage equipment of a radar and a video acquisition device, for use in the linkage method of a radar and a video acquisition device, characterized by comprising:
the first laser radar, located on one side of the video acquisition device and configured to collect first radar data from a first area;
the second laser radar, located on the other side of the video acquisition device and configured to collect second radar data from a second area;
the processing unit, configured to send a first control instruction to the video acquisition device under the condition that a target object is detected according to the first radar data, and further configured to send a second control instruction to the video acquisition device under the condition that the target object is detected according to the second radar data;
and the video acquisition device, located between the first laser radar and the second laser radar and configured to photograph the first area according to the first control instruction or the second area according to the second control instruction.
8. The apparatus of claim 7, wherein the video capture device comprises a camera for panoramic monitoring and a camera for detail monitoring in a dual pan-tilt design.
9. A linkage of a radar and a video acquisition device, comprising:
the first acquisition unit is used for acquiring first radar data acquired by a first laser radar in a first area;
the first sending unit is used for sending a first control instruction to a video acquisition device under the condition that a target object is detected according to the first radar data, wherein the video acquisition device adopts a dual pan-tilt design and comprises a camera for panoramic monitoring and a camera for detail monitoring;
and the first control unit is used for controlling the video acquisition device to shoot the first area and the target object according to the first control instruction.
10. The apparatus of claim 9, further comprising:
the second acquisition unit is used for acquiring second radar data acquired by a second laser radar acquiring a second area, wherein the second laser radar and the first laser radar are positioned at two sides of the video acquisition device, and the second area and the first area are different areas;
the second sending unit is used for sending a second control instruction to the video acquisition device under the condition that the target object is detected according to the second radar data;
and the second control unit is used for shooting the second area and the target object by the video acquisition device according to the second control instruction.
11. The apparatus of claim 9, further comprising:
the first obtaining unit is used for obtaining first related information of the target object according to the first radar data after the video acquisition device shoots the first area and the target object according to the control instruction;
the third acquisition unit is used for acquiring first video data acquired by the video acquisition device in the first area and analyzing the first video data to acquire second related information of the target object;
a first determining unit, configured to determine first target information of the target object according to the first relevant information and the second relevant information.
12. The apparatus of claim 11, wherein the apparatus comprises:
a second obtaining unit, configured to obtain, when the target object is a vehicle, profile information of the vehicle according to the first radar data, where the first related information includes the profile information;
a third obtaining unit, configured to obtain color information and license plate number information of the vehicle according to the first video data, where the second related information includes the color information and the license plate number information;
a second determination unit configured to determine the contour information, the color information, and the license plate number information as target information of the vehicle.
13. The apparatus of claim 10, further comprising:
a fourth obtaining unit, configured to obtain third relevant information of the target object according to the second radar data after the video capture device captures the second area and the target object according to the control instruction;
a fifth obtaining unit, configured to obtain second video data obtained by acquiring the second area by the video acquisition device, and analyze the second video data to obtain fourth related information of the target object;
a third determining unit, configured to determine second target information of the target object according to the third relevant information and the fourth relevant information.
14. The apparatus of claim 13, further comprising:
and the tracking unit is used for tracking the target object according to the first target information and the second target information under the condition that the first target information is matched with the second target information after the target information of the target object is determined according to the third relevant information and the fourth relevant information.
15. A computer-readable storage medium, comprising a stored program, wherein the program is operable to perform the method of any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011567665.2A CN112738394B (en) | 2020-12-25 | 2020-12-25 | Linkage method and device of radar and camera equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112738394A true CN112738394A (en) | 2021-04-30 |
CN112738394B CN112738394B (en) | 2023-04-18 |
Family
ID=75616722
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011567665.2A Active CN112738394B (en) | 2020-12-25 | 2020-12-25 | Linkage method and device of radar and camera equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112738394B (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014006188A (en) * | 2012-06-26 | 2014-01-16 | Toshiba Denpa Products Kk | Radar monitoring system, image acquiring method, image acquiring program |
CN104796612A (en) * | 2015-04-20 | 2015-07-22 | 河南弘金电子科技有限公司 | High-definition radar linkage tracking control camera shooting system and linkage tracking method |
CN105611121A (en) * | 2015-12-17 | 2016-05-25 | 天津中安视通科技有限公司 | Novel binocular pan-tilt camera |
CN108965809A (en) * | 2018-07-20 | 2018-12-07 | 长安大学 | The video linkage monitoring system and control method of radar vectoring |
CN208754441U (en) * | 2018-06-15 | 2019-04-16 | 常州市维多视频科技有限公司 | Monitoring integral type radar holder |
CN110097726A (en) * | 2018-01-30 | 2019-08-06 | 保定市天河电子技术有限公司 | A kind of prevention regional aim monitoring method and system |
CN110719442A (en) * | 2019-10-12 | 2020-01-21 | 深圳市镭神智能系统有限公司 | Security monitoring system |
CN110765823A (en) * | 2018-07-27 | 2020-02-07 | 杭州海康威视系统技术有限公司 | Target identification method and device |
CN111582256A (en) * | 2020-04-26 | 2020-08-25 | 智慧互通科技有限公司 | Parking management method and device based on radar and visual information |
CN111679271A (en) * | 2020-06-15 | 2020-09-18 | 杭州海康威视数字技术股份有限公司 | Target tracking method, target tracking device, monitoring system and storage medium |
CN211982024U (en) * | 2020-06-05 | 2020-11-20 | 山东飞天光电科技股份有限公司 | Integral type radar ball machine is used in control |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022217809A1 (en) * | 2021-04-13 | 2022-10-20 | 华为技术有限公司 | Camera, and photographing method, system and apparatus |
CN115174767A (en) * | 2022-05-27 | 2022-10-11 | 青岛海尔科技有限公司 | Video recording method, edge device, monitoring system and storage medium |
CN115174767B (en) * | 2022-05-27 | 2024-03-26 | 青岛海尔科技有限公司 | Video recording method, edge equipment, monitoring system and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN112738394B (en) | 2023-04-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7018462B2 (en) | Target object monitoring methods, devices and systems | |
CN109040709B (en) | Video monitoring method and device, monitoring server and video monitoring system | |
CN110830756B (en) | Monitoring method and device | |
CN104200671B (en) | A kind of virtual bayonet socket management method based on large data platform and system | |
EP2903261B1 (en) | Apparatus and method for detecting event from plurality of photographed images | |
CN112738394B (en) | Linkage method and device of radar and camera equipment and storage medium | |
JP6088541B2 (en) | Cloud-based video surveillance management system | |
CN103686131A (en) | Monitoring apparatus and system using 3d information of images and monitoring method using the same | |
CN106559645B (en) | Monitoring method, system and device based on camera | |
CN108399782A (en) | Method, apparatus, system, equipment and the storage medium of outdoor reversed guide-car | |
JP2007158421A (en) | Monitoring camera system and face image tracing recording method | |
KR102183473B1 (en) | Method for monitoring images and apparatus for the same | |
CN111815672B (en) | Dynamic tracking control method, device and control equipment | |
KR102297217B1 (en) | Method and apparatus for identifying object and object location equality between images | |
KR101832274B1 (en) | System for crime prevention of intelligent type by video photographing and method for acting thereof | |
KR102242694B1 (en) | Monitoring method and apparatus using video wall | |
CN111340016A (en) | Image exposure method and apparatus, storage medium, and electronic apparatus | |
CN110909691A (en) | Motion detection method and device, computer readable storage medium and computer equipment | |
CN111860431B (en) | Method and device for identifying object in image, storage medium and electronic device | |
CN114120165A (en) | Gun and ball linked target tracking method and device, electronic device and storage medium | |
CN110855947B (en) | Image snapshot processing method and device | |
CN110930437B (en) | Target tracking method and device | |
CN112102623A (en) | Traffic violation identification method and device and intelligent wearable device | |
CN108540759B (en) | Video monitoring method, device and system | |
CN116246200A (en) | Screen display information candid photographing detection method and system based on visual identification |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||