CN113379830A - Anti-collision method and device, storage medium and electronic equipment


Info

Publication number
CN113379830A
CN113379830A (application CN202110654517.2A)
Authority
CN
China
Prior art keywords
target
detected
image
collision
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110654517.2A
Other languages
Chinese (zh)
Inventor
朱莎
吴国栋
邓海燕
谭龙田
陈彦宇
马雅奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gree Electric Appliances Inc of Zhuhai
Zhuhai Lianyun Technology Co Ltd
Original Assignee
Gree Electric Appliances Inc of Zhuhai
Zhuhai Lianyun Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gree Electric Appliances Inc of Zhuhai and Zhuhai Lianyun Technology Co Ltd
Priority to CN202110654517.2A
Publication of CN113379830A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30248: Vehicle exterior or interior
    • G06T2207/30252: Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application relates to the technical field of safety early warning, and in particular to an anti-collision method, an anti-collision device and electronic equipment, wherein the method comprises the following steps: acquiring image data of a target area; performing target detection on the image data to accurately acquire information of a first target to be detected and information of a second target to be detected, wherein the information of the first target to be detected and the information of the second target to be detected each comprise at least a target type and a target position of the corresponding target to be detected; judging whether a collision risk exists between the first target to be detected and the second target to be detected according to the target position of the first target to be detected and the target position of the second target to be detected; and if it is determined that a collision risk exists between the first target to be detected and the second target to be detected, performing anti-collision early warning according to the target type of the second target to be detected, thereby ensuring that effective anti-collision early warning can be realized.

Description

Anti-collision method and device, storage medium and electronic equipment
Technical Field
The present application relates to the field of safety early warning technologies, and in particular, to an anti-collision method, an anti-collision device, a storage medium, and an electronic device.
Background
In current industrial production scenes, and even in daily life scenes, more and more unmanned trolleys based on driverless technology are used to carry articles, and these unmanned trolleys generally move along a route planned in advance by an operator.
At present, to avoid collisions while travelling, a common unmanned trolley continuously emits a prompt tone so that nearby pedestrians notice that an unmanned trolley is approaching. However, avoiding collisions by constantly emitting sound causes problems such as noise pollution in this scene, and the noise pollution greatly disturbs the working conditions of production personnel and thus reduces production efficiency.
Disclosure of Invention
In order to solve the above problems, the present application provides an anti-collision method, an anti-collision device, a storage medium, and an electronic apparatus.
In a first aspect, the present application provides a collision avoidance method, including:
acquiring image data of a target area;
performing target detection on the image data to acquire information of a first target to be detected and information of at least one second target to be detected, wherein the information of the first target to be detected and the information of the second target to be detected at least comprise a target type and a target position of the target to be detected;
judging whether a collision risk exists between the first target to be detected and the second target to be detected according to the target position of the first target to be detected and the target position of the second target to be detected;
and if the collision risk between the first target to be detected and the second target to be detected is determined, performing anti-collision early warning according to the target type of the second target to be detected.
In the above embodiment, the target detection is performed on the image data of the target area, the information of the first target to be detected and the information of the second target to be detected can be accurately obtained, and whether a collision risk exists between the first target to be detected and the second target to be detected can be determined according to the target position in the information. When collision risks exist between the first target to be detected and the second target to be detected, collision prevention early warning is carried out according to the target type of the second target to be detected, and therefore more effective collision prevention early warning is achieved.
According to an embodiment of the present application, optionally, in the anti-collision method, acquiring image data of the target area includes:
acquiring different images shot by at least two camera devices on a target area; wherein the positions of the at least two image capturing apparatuses are different.
According to an embodiment of the present application, optionally, in the above anti-collision method, performing target detection on the image data to obtain information of a first target to be detected and information of at least one second target to be detected includes:
acquiring two images shot by any two different camera devices in the image data;
and respectively carrying out target detection on the two images, and respectively acquiring the information of a first target to be detected and the information of at least one second target to be detected in the two images.
In the above embodiment, two images captured by two different image capturing apparatuses are shot from different viewpoints, so the captured images describe the target area from different angles; performing target detection on the two different images therefore allows the target to be detected to be determined more accurately.
According to an embodiment of the present application, optionally, in the above anti-collision method, performing target detection on the two images respectively, and acquiring information of a first target to be detected and information of at least one second target to be detected in the two images respectively includes:
acquiring a transformation matrix between the two images, wherein the two images comprise a first image and a second image;
acquiring the target position and the target type of the first target to be detected and the target position and the target type of the second target to be detected in the first image; acquiring the target position and the target type of the first target to be detected and the target position and the target type of the second target to be detected in the second image;
transforming the target position of the first target to be detected and the target position of the second target to be detected in the first image according to the transformation matrix to obtain a first expected target position and a second expected target position of the first target to be detected and the second target to be detected in the second image;
comparing the first expected target position and the target type of the first target to be detected in the first image with the target position and the target type of the first target to be detected in the second image to judge whether the first targets to be detected in the two images are the same target to be detected;
And comparing the second expected target position and the target type of the second target to be detected in the first image with the target position and the target type of the second target to be detected in the second image to judge whether the second target to be detected in the two images is the same target to be detected.
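The matching step described above can be sketched as follows. This is a minimal illustration, assuming pixel coordinates and a precomputed 3x3 homography between the two views; the pixel tolerance is an assumed parameter, not a value from the patent.

```python
import math

def apply_homography(H, point):
    """Map a pixel (x, y) from the first image into the second image
    using a 3x3 homography matrix H (a nested list)."""
    x, y = point
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xh / w, yh / w)

def same_target(H, det1, det2, max_pixel_error=20.0):
    """det1/det2 are (target_type, (x, y)) detections from the first and
    second image. They are treated as the same physical target when the
    types match and the expected position of det1 in the second image
    lies close to det2 (closeness threshold is illustrative)."""
    type1, pos1 = det1
    type2, pos2 = det2
    if type1 != type2:
        return False
    ex, ey = apply_homography(H, pos1)
    return math.hypot(ex - pos2[0], ey - pos2[1]) <= max_pixel_error
```

Comparing both the expected position and the target type, as the claim describes, prevents two different nearby objects from being fused into one.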
According to an embodiment of the present application, optionally, in the above anti-collision method, the target position of the first target to be detected and the target position of the second target to be detected in the first image are obtained; acquiring the target position of the first target to be detected and the target position of the second target to be detected in the second image, including:
performing directional surrounding frame marking on each target to be detected identified in the two images to obtain a surrounding frame corresponding to each target to be detected;
and acquiring coordinate information of the bounding box corresponding to each target to be detected as the target position of each target to be detected.
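A minimal sketch of reducing a marked bounding box to a single target position: here the centroid of the four corner points is used, which is one reasonable choice that the patent text does not prescribe.

```python
def box_position(corners):
    """Reduce an oriented bounding box, given as four (x, y) corner
    points, to a single target position (the box centroid)."""
    xs = [p[0] for p in corners]
    ys = [p[1] for p in corners]
    return (sum(xs) / 4.0, sum(ys) / 4.0)
```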
According to an embodiment of the present application, optionally, in the above anti-collision method, determining whether there is a collision risk between the first target to be detected and the second target to be detected according to the target position of the first target to be detected and the target position of the second target to be detected includes:
determining a first distance according to the target position of the first target to be detected and the target position of the second target to be detected in the first image;
judging whether the first distance is smaller than a first preset threshold value, if so, determining a second distance according to the target position of the first target to be detected in the second image and the target position of the second target to be detected, wherein the first target to be detected in the second image and the first target to be detected in the first image are the same target to be detected, and the second target to be detected in the second image and the second target to be detected in the first image are the same target to be detected;
and judging whether the second distance is smaller than a second preset threshold value, if so, determining that the first target to be detected and the second target to be detected have a collision risk.
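The two-stage distance check above can be sketched as follows; the threshold values are hypothetical placeholders. The second distance is only consulted when the first test passes, mirroring the claimed order, which filters out target pairs that merely appear close along one camera's line of sight.

```python
def collision_risk(dist_image1, dist_image2, threshold1=120.0, threshold2=120.0):
    """Return True only when the two targets appear close in BOTH camera
    views. Distances and thresholds are in pixels; the default threshold
    values are illustrative assumptions."""
    if dist_image1 >= threshold1:
        # First view already shows enough separation: no risk.
        return False
    # Confirm in the second view before declaring a collision risk.
    return dist_image2 < threshold2
```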
According to an embodiment of the application, optionally, in the above anti-collision method, the target type of the first target to be detected includes an inanimate moving target, and the target type of the second target to be detected includes an inanimate moving target, an animate target, and an inanimate fixed target.
According to an embodiment of the present application, optionally, in the above anti-collision method, performing anti-collision early warning according to the target type of the second target to be detected includes:
if the target type of the second target to be detected is a life target, controlling the first target to be detected to decelerate and sending a collision early warning prompt;
and if the target type of the second target to be detected is a non-living body moving target or a non-living body fixed target, controlling the first target to be detected to stop running, and sending an anti-collision processing prompt to a management end.
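The dispatch on target type described above can be sketched like this; the type labels and return values are illustrative names, not identifiers from the patent.

```python
def collision_warning(second_target_type):
    """Choose the anti-collision response from the type of the second
    target. Returns a (vehicle_command, notification) pair."""
    if second_target_type == "living":
        # A living target can react, so decelerating plus an audible
        # prompt is enough for it to step aside.
        return ("decelerate", "audible_warning")
    # Inanimate moving or fixed targets cannot react; stop the vehicle
    # and ask an operator at the management end to intervene.
    return ("stop", "notify_management")
```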
According to an embodiment of the present application, optionally, in the above anti-collision method, after performing the anti-collision warning according to the target type of the second target to be detected, the method further includes:
storing the record data corresponding to each anti-collision early warning;
and analyzing the recorded data to generate a management strategy.
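One plausible reading of "analyzing the recorded data to generate a management strategy" is locating where warnings cluster; the record schema and the hotspot heuristic below are assumptions for illustration only.

```python
from collections import Counter

def analyze_records(records):
    """records is a list of dicts, each with at least a 'location' key
    (an assumed schema). Counting where early warnings cluster suggests
    where routes or storage layout should be adjusted."""
    hotspots = Counter(r["location"] for r in records)
    return hotspots.most_common(3)
```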
In a second aspect, the present application further provides a collision avoidance device, including:
the image data acquisition module is used for acquiring image data of the target area;
the target detection module is used for carrying out target detection on the image data and acquiring information of a first target to be detected and information of a second target to be detected, wherein the information of the first target to be detected and the information of the second target to be detected at least comprise a target type and a target position of the target to be detected;
the risk judgment module is used for judging whether collision risk exists between the first target to be detected and the second target to be detected according to the target position of the first target to be detected and the target position of the second target to be detected;
and the anti-collision early warning module is used for carrying out anti-collision early warning according to the target type of the second target to be detected if the collision risk between the first target to be detected and the second target to be detected is determined.
According to an embodiment of the application, optionally, in the above anti-collision device, the image data obtaining module includes:
the image acquisition unit is used for acquiring different images of a target area shot by at least two pieces of camera equipment; wherein the positions of the at least two image capturing apparatuses are different.
According to an embodiment of the present application, optionally, in the above anti-collision device, the object detection module includes:
the image acquisition unit is used for acquiring two images shot by any two different camera devices in the image data;
and the target to be detected determining unit is used for respectively carrying out target detection on the two images and respectively acquiring the information of the first target to be detected and the information of at least one second target to be detected in the two images.
According to an embodiment of the present application, optionally, in the above anti-collision device, the to-be-detected object determining unit includes:
a transformation matrix obtaining subunit, configured to obtain a transformation matrix between the two images, where the two images include a first image and a second image;
the information acquisition subunit is configured to acquire a target position and a target type of the first target to be detected and a target position and a target type of the second target to be detected in the first image; acquiring the target position and the target type of the first target to be detected and the target position and the target type of the second target to be detected in the second image;
a position change subunit, configured to transform, according to the transformation matrix, a target position of the first target to be detected and a target position of the second target to be detected in the first image, so as to obtain a first expected target position and a second expected target position of the first target to be detected and the second target to be detected in the second image;
a first target information to be detected acquiring subunit, configured to compare the first expected target position and the target type of the first target to be detected in the first image with the target position and the target type of the first target to be detected in the second image, so as to determine whether the first targets to be detected in the two images are the same target to be detected;
and the second target information acquisition subunit is configured to compare the second expected target position and the target type of the second target to be detected in the first image with the target position and the target type of the second target to be detected in the second image, so as to determine whether the second target to be detected in the two images is the same target to be detected.
According to an embodiment of the application, optionally, in the above anti-collision device, the information obtaining subunit includes:
the surrounding frame marking subunit is used for performing directional surrounding frame marking on each target to be detected identified in the two images to obtain a surrounding frame corresponding to each target to be detected;
and the target position determining subunit is used for acquiring the coordinate information of the bounding box corresponding to each target to be detected as the target position of each target to be detected.
According to an embodiment of the application, optionally, in the above anti-collision device, the risk determining module includes:
a first distance determining unit, configured to determine a first distance according to a target position of the first target to be detected in the first image and a target position of the second target to be detected;
a second distance determining unit, configured to determine whether the first distance is smaller than a first preset threshold, and if so, determine a second distance according to a target position of the first target to be detected in the second image and a target position of the second target to be detected, where the first target to be detected in the second image and the first target to be detected in the first image are the same target to be detected, and the second target to be detected in the second image and the second target to be detected in the first image are the same target to be detected;
and the collision risk determining unit is used for judging whether the second distance is smaller than a second preset threshold value, and if so, determining that the collision risk exists between the first target to be detected and the second target to be detected.
According to an embodiment of the application, optionally, in the above anti-collision device, the target type of the first target to be detected includes an inanimate moving target, and the target type of the second target to be detected includes an inanimate moving target, an animate target, and an inanimate fixed target.
According to an embodiment of the present application, optionally, in the above anti-collision device, the anti-collision warning module includes:
the first early warning unit is used for controlling the first target to be detected to decelerate and sending a collision early warning prompt if the target type of the second target to be detected is a life target;
and the second early warning unit is used for controlling the first target to be detected to stop running and sending an anti-collision processing prompt to a management end if the target type of the second target to be detected is a non-living body moving target or a non-living body fixed target.
According to an embodiment of the application, optionally, in the above anti-collision device, the device further includes:
the recording module is used for storing the recording data corresponding to each anti-collision early warning;
and the analysis module is used for analyzing the recorded data and generating a management strategy.
In a third aspect, the present application provides a storage medium storing a computer program, which is executable by one or more processors, and is operable to implement the collision avoidance method as described above.
In a fourth aspect, the present application provides an electronic device, including a memory and a processor, where the memory stores a computer program, and the computer program is executed by the processor to perform the above-mentioned collision avoidance method.
Compared with the prior art, one or more embodiments in the above scheme can have the following advantages or beneficial effects:
the application provides an anti-collision method, an anti-collision device, a storage medium and electronic equipment, wherein the method comprises the steps of obtaining image data of a target area; performing target detection on the image data to acquire information of a first target to be detected and information of at least a second target to be detected, wherein the information of the first target to be detected and the information of the at least second target to be detected both at least comprise a target type and a target position of the target to be detected; judging whether a collision risk exists between the first target to be detected and the at least one second target to be detected according to the target position of the first target to be detected and the target position of the at least one second target to be detected; and if the collision risk between the first target to be detected and the second target to be detected is determined, performing anti-collision early warning according to the target type of the second target to be detected. The target detection is carried out on the image data of the target area, the information of the first target to be detected and the information of the second target to be detected can be accurately obtained, and whether collision risks exist between the first target to be detected and the second target to be detected can be judged according to the target positions in the information. When collision risks exist between the first target to be detected and the second target to be detected, collision prevention early warning is carried out according to the target type of the second target to be detected, and therefore more effective collision prevention early warning is achieved.
Drawings
The present application will be described in more detail below on the basis of embodiments and with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of an anti-collision method according to an embodiment of the present disclosure.
Fig. 2 is a block diagram of a collision avoidance device according to a fifth embodiment of the present application.
Fig. 3 is a connection block diagram of an electronic device according to a seventh embodiment of the present application.
In the drawings, like parts are designated with like reference numerals, and the drawings are not drawn to scale.
Detailed Description
The following detailed description will be provided with reference to the accompanying drawings and embodiments, so that how to apply the technical means to solve the technical problems and achieve the corresponding technical effects can be fully understood and implemented. The embodiments and various features in the embodiments of the present application can be combined with each other without conflict, and the formed technical solutions are all within the scope of protection of the present application.
Example one
The invention provides an anti-collision method, please refer to fig. 1, which includes the following steps:
step S110: image data of a target area is acquired.
When acquiring image data of a target area, different images of the target area shot by at least two pieces of camera equipment can be acquired; wherein the positions of the at least two image capturing apparatuses are different.
Wherein one of the at least two image pickup apparatuses can shoot in one direction of the target area, and the other image pickup apparatus can shoot in another direction of the target area, the two shooting directions being different. The specific shooting direction of each image pickup apparatus may be determined according to the situation of the actual target area. For example, one image pickup apparatus may be disposed on the due-east side of the target area and shoot toward due west, and the other image pickup apparatus may be disposed on the due-south side of the target area and shoot toward due north.
Step S120: and performing target detection on the image data to acquire information of a first target to be detected and information of at least one second target to be detected, wherein the information of the first target to be detected and the information of the second target to be detected at least comprise a target type and a target position of the target to be detected.
When performing target detection on the image data, target recognition can be performed on each image in the image data based on deep learning, so as to recognize all targets to be detected in each image and determine the target position and target type corresponding to each target to be detected. For example, the obtained image data may be processed by a YOLOv3 deep-learning model for target detection, so that unmanned trolleys, pedestrians, temporary materials and the like in the target area are distinguished, while fixed materials in the target area can be treated as background. It is to be understood that other manners may be adopted when performing object recognition on each image in the image data, which is not limited herein.
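A minimal sketch of how detection results might be represented downstream of the detector. The class labels and the rule of dropping fixed materials as background follow the description above, while the field names are illustrative assumptions; this is not YOLOv3 itself.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    target_type: str   # e.g. "pedestrian", "unmanned_vehicle", "temporary_material"
    position: tuple    # (x, y) pixel coordinates of the detection

def filter_foreground(detections, background_types=("fixed_material",)):
    """Fixed materials in the target area are treated as background and
    dropped from the collision analysis."""
    return [d for d in detections if d.target_type not in background_types]
```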
The target type of the first target to be detected comprises an inanimate moving target, and the target type of the second target to be detected comprises an inanimate moving target, a living target and an inanimate fixed target. An inanimate moving target is a target that has no life and moves according to some rule or command. The inanimate moving target may differ between scenes: for example, in a warehouse where an unmanned trolley is used to carry goods, the inanimate moving target is the unmanned trolley, while in an area where an automatic cleaning robot is used to clean the floor, the inanimate moving target is the automatic cleaning robot. In addition, it is understood that several different inanimate moving targets may appear in the same scene; for example, in a warehouse where goods are carried by an unmanned trolley, an automatic cleaning robot may also clean the floor, in which case the inanimate moving targets are the unmanned trolley and the automatic cleaning robot. An inanimate fixed target is a target that has no life and cannot move according to any rule or command; for example, in a warehouse, an inanimate fixed target may be cargo waiting to be transported, cargo dropped during transportation, an article temporarily placed in the warehouse, or the like. A living target may be an individual that has a life form and can respond to external stimuli.
Step S130: and judging whether a collision risk exists between the first target to be detected and the second target to be detected according to the target position of the first target to be detected and the target position of the second target to be detected.
Step S140: and if the collision risk between the first target to be detected and the second target to be detected is determined, performing anti-collision early warning according to the target type of the second target to be detected.
After the image data is subjected to target detection, all targets to be detected and related information in the image data can be acquired, wherein all the targets to be detected comprise a first target to be detected and a second target to be detected, and the related information of the targets to be detected comprises a target type and a target position. After the target types and the target positions of the first target to be detected and the second target to be detected are identified, whether collision risks exist between the first target to be detected and the second target to be detected can be judged according to the target position of the first target to be detected and the target position of the second target to be detected. Specifically, the distance between the first target to be detected and the second target to be detected may be calculated according to the target position of the first target to be detected and the target position of the second target to be detected, and then it is determined whether there is a collision risk between the first target to be detected and the second target to be detected according to the distance.
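The distance computation described above can be as simple as a Euclidean distance between the two target positions, assuming each position is a 2-D pixel coordinate:

```python
import math

def target_distance(pos1, pos2):
    """Euclidean pixel distance between two target positions (x, y)."""
    return math.hypot(pos1[0] - pos2[0], pos1[1] - pos2[1])
```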
When it is determined that a collision risk exists between the first target to be detected and the second target to be detected, anti-collision early warning can be performed according to the target type of the second target to be detected. If the target type of the second target to be detected is a living target, the first target to be detected is controlled to decelerate and a collision early warning prompt is sent out; the living target can move out of the way when hearing the collision early warning prompt sent out by the first target to be detected, so that a collision is avoided. For example, in a certain application scenario, if the first target to be detected is an unmanned trolley and the second target to be detected is a pedestrian, then after it is determined that there is a risk of collision between the unmanned trolley and the pedestrian, since the target type of the second target to be detected is a living target, a deceleration signal can be sent to the unmanned trolley, and the unmanned trolley can be made to emit a warning sound such as "please notice" to remind the pedestrian to stay safe. Travelling at a reduced speed also prevents a collision in case the second target to be detected cannot move aside in time. If the target type of the second target to be detected is an inanimate moving target or an inanimate fixed target, the first target to be detected is controlled to stop running, and an anti-collision processing prompt is sent to a management end.
An inanimate moving target or an inanimate fixed target cannot react to external stimulation, so the first target to be detected can be directly controlled to stop running to avoid a collision, and at the same time an anti-collision processing prompt is sent to the management end to prompt the corresponding workers to move the second target to be detected away as soon as possible, or to re-plan the route of the first target to be detected. For example, in one application scenario, if the first target to be detected is an unmanned trolley and the second target to be detected is another unmanned trolley, then when it is determined that there is a risk of collision between the two trolleys, an instruction to stop running may be sent to the first unmanned trolley and a system administrator notified, and the administrator may re-plan the route of either trolley according to the actual situation. If, in another application scenario, the first target to be detected is an unmanned trolley and the second target to be detected is material temporarily placed by a worker, then after it is judged that the unmanned trolley is in danger of colliding with the temporarily placed material, an instruction to stop running can be sent to the unmanned trolley and a system administrator notified, and the administrator can re-plan the route of the unmanned trolley according to the actual situation or move the temporarily placed material away.
In summary, the present application provides an anti-collision method, including obtaining image data of a target area; performing target detection on the image data to acquire information of a first target to be detected and information of at least one second target to be detected, wherein the information of the first target to be detected and the information of the second target to be detected at least comprise a target type and a target position of the target to be detected; judging whether a collision risk exists between the first target to be detected and the second target to be detected according to the target position of the first target to be detected and the target position of the second target to be detected; and if the collision risk between the first target to be detected and the second target to be detected is determined, performing anti-collision early warning according to the target type of the second target to be detected. The target detection is carried out on the image data of the target area, the information of the first target to be detected and the information of the second target to be detected can be accurately obtained, and whether collision risks exist between the first target to be detected and the second target to be detected can be judged according to the target positions in the information. When collision risks exist between the first target to be detected and the second target to be detected, collision prevention early warning is carried out according to the target type of the second target to be detected, and therefore more effective collision prevention early warning is achieved.
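The four claimed steps summarized above can be laid out as a single control-loop pass. All four callables below are placeholders for components described in the embodiments; none of the names come from the patent itself.

```python
def anti_collision_step(acquire_images, detect, has_risk, warn):
    """One pass of the claimed method: acquire image data, detect targets,
    judge risk from positions, and warn according to the second target's type."""
    frames = acquire_images()                 # step 1: image data of the target area
    first, seconds = detect(frames)           # step 2: first target + second targets
    for second in seconds:
        if has_risk(first["position"], second["position"]):   # step 3: position-based risk
            warn(first, second["type"])       # step 4: type-dependent early warning
```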
Example two
On the basis of the first embodiment, the present embodiment explains the method in the first embodiment through a specific implementation case.
When performing target detection on the image data and acquiring information of a first target to be detected and information of at least one second target to be detected, two images shot by any two different camera devices in the image data can be acquired first; and then, respectively carrying out target detection on the two images, and respectively acquiring the information of a first target to be detected and the information of at least one second target to be detected in the two images.
In the above embodiment, the two images are captured by different image capturing apparatuses from different shooting points, so they describe the target area from different angles; performing target detection on both images therefore allows the target to be detected to be determined more accurately.
Specifically, when the two images are respectively subjected to target detection, and information of a first target to be detected and information of at least one second target to be detected in the two images are respectively acquired, the following process is included. Firstly, acquiring a transformation matrix between the two images, wherein the two images comprise a first image and a second image; then, acquiring the target position and the target type of the first target to be detected and the target position and the target type of the second target to be detected in the first image; and acquiring the target position and the target type of the first target to be detected and the target position and the target type of the second target to be detected in the second image. And then, transforming the target position of the first target to be detected and the target position of the second target to be detected in the first image according to the transformation matrix to obtain a first expected target position and a second expected target position of the first target to be detected and the second target to be detected in the second image. And comparing the first expected target position and the target type of the first target to be detected in the first image with the target position and the target type of the first target to be detected in the second image to judge whether the first targets to be detected in the two images are the same target to be detected. And comparing the second expected target position and the target type of the second target to be detected in the first image with the target position and the target type of the second target to be detected in the second image to judge whether the second target to be detected in the two images is the same target to be detected.
In the above embodiment, the target position to be detected and the target type of the target to be detected in one image of the target area may be identified, and then the target position to be detected is converted into the desired target position according to the transformation matrix between two different images. And then comparing the expected target position with the actual target position of the target to be detected in another image to judge that the target to be detected in the images shot by the two different camera devices is the same target to be detected, thereby ensuring that the target to be detected is accurately identified.
The transformation matrix may be obtained by analyzing how the positions of a known target change between the two images. For example, suppose an unmanned vehicle, a pedestrian, and an obstacle exist in the actual scene, one camera is C1, and the other camera is C2. Camera C1 detects the unmanned vehicle at position A1, the pedestrian at position B1, and the obstacle at position D1; camera C2 detects the unmanned vehicle at position A2, the pedestrian at position B2, and the obstacle at position D2. The transformation matrix determines the correspondence between the images captured by camera C1 and camera C2. If the coordinates of A1 are {a11, a12, a13, a14}, then according to the known transformation matrix H (which can be found in advance from the position coordinates of a marker in the C1 image and the position coordinates of the same marker in the C2 image), the expected position coordinates {a15, a16, a17, a18} of A1 in the C2 image can be obtained. Since A1 and A2 both represent the same unmanned vehicle, the transformed coordinates {a15, a16, a17, a18} and the coordinates {a21, a22, a23, a24} of A2 should mostly overlap. The degree of overlap between the expected position obtained by transforming the unmanned vehicle's position in the C1 image and its actual position in the C2 image can therefore be used to determine whether the targets detected in the two images are the same target to be detected.
When the degree of overlap between the expected position of the unmanned vehicle in the image captured by C2 (obtained by transforming its position in the image captured by C1) and its actual target position in the image captured by C2 is greater than the preset threshold, it can further be determined whether the target type of the unmanned vehicle in the image captured by C1 is consistent with its target type in the image captured by C2. Only when both the expected target position and the target type from the C1 image match the actual target position and target type in the C2 image can it be determined that the targets in the images captured by the two different camera devices are the same target to be detected.
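The projection and matching step can be sketched as follows, assuming H is a 3×3 planar homography (the patent only states that H is found in advance from marker coordinates in both views). The embodiment compares full bounding-box overlap; the center-distance test here is a deliberate simplification for illustration.

```python
def apply_homography(H, point):
    """Project a point (x, y) from the C1 image plane into the C2 image plane."""
    x, y = point
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def same_target(H, center_c1, center_c2, type_c1, type_c2, eps=10.0):
    """Match test: project the C1 position into C2, then compare position and type."""
    px, py = apply_homography(H, center_c1)
    dist = ((px - center_c2[0]) ** 2 + (py - center_c2[1]) ** 2) ** 0.5
    return type_c1 == type_c2 and dist <= eps
```

In practice H would be estimated from marker correspondences (e.g. with a least-squares homography fit) rather than written down by hand.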
EXAMPLE III
On the basis of the second embodiment, the present embodiment explains the method in the first embodiment through a specific implementation case.
When the target position of the first target to be detected and the target position of the second target to be detected in the first image are obtained, or when the target position of the first target to be detected and the target position of the second target to be detected in the second image are obtained, the directional bounding box marking may be performed on each target to be detected identified in the two images to obtain a bounding box corresponding to each target to be detected, and then the coordinate information of the bounding box corresponding to each target to be detected is obtained as the target position of each target to be detected.
Suppose an unmanned vehicle, a pedestrian, and an obstacle exist in the actual scene. Each target detected by deep learning in the two camera views is marked with an oriented bounding box, that is, a bounding box that is always kept as the minimum circumscribed rectangle of the target region. For the unmanned vehicle detected by camera C1, the coordinates of its oriented bounding box are the target position A1 of the unmanned vehicle; for the pedestrian detected by camera C1, the coordinates of its bounding box are the target position B1; and for the obstacle detected by camera C1, the coordinates of its bounding box are the target position D1. Likewise, in the view of camera C2, the bounding-box coordinates of the unmanned vehicle, the pedestrian, and the obstacle are the target positions A2, B2, and D2, respectively.
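One common way to record the coordinate information of such an oriented bounding box is a (center, size, angle) parameterisation from which the four corners can be recovered; the patent does not fix a format, so this representation is an assumption.

```python
import math

def obb_corners(cx, cy, w, h, theta):
    """Corner coordinates of an oriented bounding box described by its center
    (cx, cy), width w, height h, and rotation angle theta in radians."""
    c, s = math.cos(theta), math.sin(theta)
    half = [(-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)]
    # rotate each corner offset about the center, then translate by the center
    return [(cx + x * c - y * s, cy + x * s + y * c) for x, y in half]
```

Because the box is the minimum circumscribed rectangle, it rotates with the target; rotation changes the corner coordinates but preserves each corner's distance to the center.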
The step of determining whether there is a risk of collision between the first target to be detected and the second target to be detected according to the target position of the first target to be detected and the target position of the second target to be detected may include the following processes. Firstly, determining a first distance according to the target position of the first target to be detected and the target position of the second target to be detected in the first image; and judging whether the first distance is smaller than a first preset threshold value, if so, determining a second distance according to the target position of the first target to be detected in the second image and the target position of the second target to be detected, wherein the first target to be detected in the second image and the first target to be detected in the first image are the same target to be detected, and the second target to be detected in the second image and the second target to be detected in the first image are the same target to be detected. And judging whether the second distance is smaller than a second preset threshold value, if so, determining that the first target to be detected and the second target to be detected have a collision risk.
When the first distance between the first target to be detected and the second target to be detected in the first image is smaller than the first preset threshold, the first image indicates a possible collision risk between them. A second distance is then determined according to the target positions of the same two targets in the second image; if the second distance is also smaller than the second preset threshold, the collision risk between the first target to be detected and the second target to be detected is confirmed. If instead the second distance is greater than the second preset threshold, the apparent proximity of the two targets in the first image is merely an artifact of that viewing angle, and there is substantially no risk of collision. It can be understood that the first preset threshold and the second preset threshold may be the same or different, and the specific values need to be set according to the actual situation.
Specifically, when two objects are about to collide, their oriented bounding boxes overlap. For example, if the bounding box at the unmanned vehicle position A1 and the bounding box at the pedestrian position B1 overlap in the image obtained by camera C1, one possibility is that the unmanned vehicle and the pedestrian are colliding; another is that, from the viewpoint of C1, one target merely occludes the other and there is no actual contact. To further determine whether the two objects are in contact, it can be checked whether the bounding boxes corresponding to the unmanned vehicle position A2 and the pedestrian position B2 in the image captured by camera C2 also overlap. If bounding box A2 also overlaps bounding box B2, it can be judged that the unmanned vehicle and the pedestrian are in contact or about to collide; if not, it can be judged that one target is merely occluding the other and there is no actual contact. Thus, whether there is a risk of collision between two targets to be detected can be judged from their distances in both the image captured by camera C1 and the image captured by camera C2: when the two targets are very close in both images, it can be judged that they are in danger of making contact, and an early warning can be issued.
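The dual-view distance check can be sketched as below. Axis-aligned boxes and center-to-center distance stand in for the oriented boxes and overlap test of the embodiment; the threshold values are placeholders.

```python
import math

def box_center(box):
    """Center of an axis-aligned box (x1, y1, x2, y2) — a simplification of
    the oriented boxes used in the embodiment."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def collision_risk(box1_v1, box2_v1, box1_v2, box2_v2, t1=50.0, t2=50.0):
    """Flag a pair only if the two targets are close in BOTH camera views,
    filtering out apparent proximity caused by occlusion in a single view."""
    if math.dist(box_center(box1_v1), box_center(box2_v1)) >= t1:
        return False  # not close even in the first view
    return math.dist(box_center(box1_v2), box_center(box2_v2)) < t2
```

Checking the first view before the second mirrors the order of the embodiment: the second distance is only computed once the first falls below its threshold.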
Example four
On the basis of the first embodiment, the present embodiment explains the method in the first embodiment through a specific implementation case.
After anti-collision early warning is carried out according to the target type of the second target to be detected, the record data corresponding to each anti-collision early warning can be stored; and analyzing the recorded data to generate a management strategy.
The data corresponding to each anti-collision early warning is recorded, stored, and then analyzed. For example, the nature, time, and location of the event behind each warning can be analyzed, revealing in which time periods and at which locations pedestrians are numerous, and which locations are frequently occupied by obstacles. The management strategy generated from these analysis results can effectively help a system administrator manage the target area, for example by re-planning the routes of the unmanned vehicles, or by lowering the speed of the unmanned vehicles during time periods with many pedestrians, thereby improving transportation safety. In addition, management personnel can conveniently customize corresponding management strategies according to accident conditions. It can be understood that the application scenarios of the collision avoidance method may include AGV production systems, conventional logistics transportation, transportation management, and the like.
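A minimal sketch of this kind of hotspot analysis over stored warning records is given below. The record fields (`hour`, `location`) are assumptions; the patent does not specify a schema.

```python
from collections import Counter

def busiest_slots(records, top=3):
    """Rank (hour, location) pairs by warning count — the kind of analysis
    the embodiment suggests for re-planning unmanned-vehicle routes."""
    counts = Counter((r["hour"], r["location"]) for r in records)
    return counts.most_common(top)
```

An administrator could feed these top slots into route planning, e.g. slowing vehicles near the busiest location during its busiest hour.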
EXAMPLE five
Referring to fig. 2, the present application provides a collision preventing device 200, which includes:
an image data acquiring module 210, configured to acquire image data of a target area;
the target detection module 220 is configured to perform target detection on the image data, and acquire information of a first target to be detected and information of a second target to be detected, where the information of the first target to be detected and the information of the second target to be detected both at least include a target type and a target position of the target to be detected;
a risk determining module 230, configured to determine whether there is a collision risk between the first target to be detected and the second target to be detected according to the target position of the first target to be detected and the target position of the second target to be detected;
and the anti-collision early warning module 240 is configured to perform anti-collision early warning according to the target type of the second target to be detected if it is determined that there is a collision risk between the first target to be detected and the second target to be detected.
According to an embodiment of the application, optionally, in the above anti-collision device, the image data obtaining module 210 includes:
the image acquisition unit is used for acquiring different images of a target area shot by at least two pieces of camera equipment; wherein the positions of the at least two image capturing apparatuses are different.
According to an embodiment of the present application, optionally, in the above-mentioned collision avoidance device, the object detection module 220 includes:
the image acquisition unit is used for acquiring two images shot by any two different camera devices in the image data;
and the target to be detected determining unit is used for respectively carrying out target detection on the two images and respectively acquiring the information of the first target to be detected and the information of at least one second target to be detected in the two images.
According to an embodiment of the present application, optionally, in the above anti-collision device, the to-be-detected object determining unit includes:
a transformation matrix obtaining subunit, configured to obtain a transformation matrix between the two images, where the two images include a first image and a second image;
the information acquisition subunit is configured to acquire a target position and a target type of the first target to be detected and a target position and a target type of the second target to be detected in the first image; acquiring the target position and the target type of the first target to be detected and the target position and the target type of the second target to be detected in the second image;
a position change subunit, configured to transform, according to the transformation matrix, a target position of the first target to be detected and a target position of the second target to be detected in the first image, so as to obtain a first expected target position and a second expected target position of the first target to be detected and the second target to be detected in the second image;
a first target information to be detected acquiring subunit, configured to compare the first expected target position and the target type of the first target to be detected in the first image with the target position and the target type of the first target to be detected in the second image, so as to determine whether the first targets to be detected in the two images are the same target to be detected;
and the second target information acquisition subunit is configured to compare the second expected target position and the target type of the second target to be detected in the first image with the target position and the target type of the second target to be detected in the second image, so as to determine whether the second target to be detected in the two images is the same target to be detected.
According to an embodiment of the application, optionally, in the above anti-collision device, the information obtaining subunit includes:
the surrounding frame marking subunit is used for performing directional surrounding frame marking on each target to be detected identified in the two images to obtain a surrounding frame corresponding to each target to be detected;
and the target position determining subunit is used for acquiring the coordinate information of the bounding box corresponding to each target to be detected as the target position of each target to be detected.
According to an embodiment of the present application, optionally, in the above anti-collision device, the risk determining module 230 includes:
a first distance determining unit, configured to determine a first distance according to a target position of the first target to be detected in the first image and a target position of the second target to be detected;
a second distance determining unit, configured to determine whether the first distance is smaller than a first preset threshold, and if so, determine a second distance according to a target position of the first target to be detected in the second image and a target position of the second target to be detected, where the first target to be detected in the second image and the first target to be detected in the first image are the same target to be detected, and the second target to be detected in the second image and the second target to be detected in the first image are the same target to be detected;
and the collision risk determining unit is used for judging whether the second distance is smaller than a second preset threshold value, and if so, determining that the collision risk exists between the first target to be detected and the second target to be detected.
According to an embodiment of the application, optionally, in the above anti-collision device, the target type of the first target to be detected includes an inanimate moving target, and the target type of the second target to be detected includes an inanimate moving target, an animate target, and an inanimate fixed target.
According to an embodiment of the present application, optionally, in the above anti-collision device, the anti-collision warning module 240 includes:
the first early warning unit is used for controlling the first target to be detected to decelerate and sending a collision early warning prompt if the target type of the second target to be detected is a life target;
and the second early warning unit is used for controlling the first target to be detected to stop running and sending an anti-collision processing prompt to a management end if the target type of the second target to be detected is a non-living body moving target or a non-living body fixed target.
According to an embodiment of the application, optionally, in the above anti-collision device, the device further includes:
the recording module is used for storing the recording data corresponding to each anti-collision early warning;
and the analysis module is used for analyzing the recorded data and generating a management strategy.
To sum up, the present application provides an anti-collision device, including: an image data acquiring module 210, configured to acquire image data of a target area; the target detection module 220 is configured to perform target detection on the image data, and acquire information of a first target to be detected and information of a second target to be detected, where the information of the first target to be detected and the information of the second target to be detected both at least include a target type and a target position of the target to be detected; a risk determining module 230, configured to determine whether there is a collision risk between the first target to be detected and the second target to be detected according to the target position of the first target to be detected and the target position of the second target to be detected; and the anti-collision early warning module 240 is configured to perform anti-collision early warning according to the target type of the second target to be detected if it is determined that there is a collision risk between the first target to be detected and the second target to be detected. The target detection is carried out on the image data of the target area, the information of the first target to be detected and the information of the second target to be detected can be accurately obtained, and whether collision risks exist between the first target to be detected and the second target to be detected can be judged according to the target positions in the information. When collision risks exist between the first target to be detected and the second target to be detected, collision prevention early warning is carried out according to the target type of the second target to be detected, and therefore more effective collision prevention early warning is achieved.
EXAMPLE six
The present embodiment further provides a computer-readable storage medium, such as a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, a server, an App application mall, etc., on which a computer program is stored, wherein the computer program, when executed by a processor, may implement the method steps in the above embodiments, and the description of the embodiment is omitted herein.
EXAMPLE seven
The embodiment of the present application provides an electronic device, which may be a mobile phone, a computer, or a tablet computer, and the like, and includes a memory and a processor, where the memory stores a computer program, and the computer program, when executed by the processor, implements the anti-collision method as described in the first embodiment. It is understood that, as shown in fig. 3, the electronic device 300 may further include: a processor 301, a memory 302, a multimedia component 303, an input/output (I/O) interface 304, and a communication component 305.
The processor 301 is configured to execute all or part of the steps in the collision avoidance method according to the first embodiment. The memory 302 is used to store various types of data, which may include, for example, instructions for any application or method in the electronic device, as well as application-related data.
The Processor 301 may be implemented by an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a controller, a microcontroller, a microprocessor, or other electronic components, and is configured to perform the anti-collision method in the first embodiment.
The Memory 302 may be implemented by any type of volatile or non-volatile Memory device or combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic Memory, flash Memory, magnetic disk or optical disk.
The multimedia component 303 may include a screen, which may be a touch screen, and an audio component for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals. The received audio signal may further be stored in a memory or transmitted through a communication component. The audio assembly also includes at least one speaker for outputting audio signals.
The I/O interface 304 provides an interface between the processor 301 and other interface modules, such as a keyboard, mouse, buttons, etc. These buttons may be virtual buttons or physical buttons.
The communication component 305 is used for wired or wireless communication between the electronic device 300 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, or 4G, or a combination of one or more of them; accordingly, the communication component 305 may include a Wi-Fi module, a Bluetooth module, and an NFC module.
In summary, the present application provides an anti-collision method, an anti-collision device, a storage medium, and an electronic device, where the method includes acquiring image data of a target area; performing target detection on the image data to acquire information of a first target to be detected and information of a second target to be detected, wherein the information of the first target to be detected and the information of the second target to be detected at least comprise a target type and a target position of the target to be detected; judging whether a collision risk exists between the first target to be detected and the second target to be detected according to the target position of the first target to be detected and the target position of the second target to be detected; and if the collision risk between the first target to be detected and the second target to be detected is determined, performing anti-collision early warning according to the target type of the second target to be detected. The target detection is carried out on the image data of the target area, the information of the first target to be detected and the information of the second target to be detected can be accurately obtained, and whether collision risks exist between the first target to be detected and the second target to be detected can be judged according to the target positions in the information. When collision risks exist between the first target to be detected and the second target to be detected, collision prevention early warning is carried out according to the target type of the second target to be detected, and therefore more effective collision prevention early warning is achieved.
In the several embodiments provided in the embodiments of the present application, it should be understood that the disclosed system and method may be implemented in other ways. The system and method embodiments described above are merely illustrative.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Although the embodiments disclosed in the present application are described above, the descriptions are only for the convenience of understanding the present application, and are not intended to limit the present application. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims.

Claims (12)

1. A method of collision avoidance, the method comprising:
acquiring image data of a target area;
performing target detection on the image data to acquire information of a first target to be detected and information of at least one second target to be detected, wherein the information of the first target to be detected and the information of the second target to be detected at least comprise a target type and a target position of the target to be detected;
judging whether a collision risk exists between the first target to be detected and the second target to be detected according to the target position of the first target to be detected and the target position of the second target to be detected;
and if the collision risk between the first target to be detected and the second target to be detected is determined, performing anti-collision early warning according to the target type of the second target to be detected.
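The method of claim 1 can be illustrated with a minimal Python sketch. All class names, type labels, and the distance threshold below are illustrative assumptions made for the example, not terms defined by the patent; the risk test here is a simple Euclidean-distance check, while claims 4 and 6 refine it across two camera views.

```python
# Illustrative sketch of the claimed flow: detect targets, judge collision
# risk from their positions, then warn according to the second target's type.
from dataclasses import dataclass
import math

@dataclass
class Detection:
    target_type: str   # e.g. "inanimate_moving", "animate", "inanimate_fixed"
    position: tuple    # (x, y) position of the target in the image

def collision_risk(first: Detection, second: Detection, threshold: float) -> bool:
    """Judge risk from the two target positions (Euclidean distance)."""
    dx = first.position[0] - second.position[0]
    dy = first.position[1] - second.position[1]
    return math.hypot(dx, dy) < threshold

def anti_collision_warning(first: Detection, second: Detection,
                           threshold: float) -> str:
    """Warn according to the second target's type when a risk is found."""
    if not collision_risk(first, second, threshold):
        return "no_risk"
    if second.target_type == "animate":
        return "decelerate_and_warn"      # slow the moving target, alert people
    return "stop_and_notify_management"   # inanimate obstacle: halt and notify

forklift = Detection("inanimate_moving", (10.0, 10.0))
worker = Detection("animate", (12.0, 11.0))
print(anti_collision_warning(forklift, worker, threshold=5.0))
```

The branching in `anti_collision_warning` mirrors claim 8: an animate second target triggers deceleration plus a warning, while an inanimate one triggers a stop and a notification to the management end.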
2. The method of claim 1, wherein acquiring image data of a target region comprises:
acquiring different images of the target area captured by at least two camera devices, wherein the at least two camera devices are located at different positions.
3. The method according to claim 1, wherein performing object detection on the image data to obtain information of a first object to be detected and information of at least one second object to be detected comprises:
acquiring, from the image data, two images captured by any two different camera devices;
and respectively carrying out target detection on the two images, and respectively acquiring the information of a first target to be detected and the information of at least one second target to be detected in the two images.
4. The method according to claim 3, wherein the performing the object detection on the two images respectively to obtain information of a first object to be detected and information of at least one second object to be detected in the two images respectively comprises:
acquiring a transformation matrix between the two images, wherein the two images comprise a first image and a second image;
acquiring the target position and the target type of the first target to be detected and the target position and the target type of the second target to be detected in the first image; acquiring the target position and the target type of the first target to be detected and the target position and the target type of the second target to be detected in the second image;
transforming the target position of the first target to be detected and the target position of the second target to be detected in the first image according to the transformation matrix to obtain a first expected target position and a second expected target position of the first target to be detected and the second target to be detected in the second image;
comparing the first expected target position and the target type of the first target to be detected in the first image with the target position and the target type of the first target to be detected in the second image to judge whether the first targets to be detected in the two images are the same target to be detected;
and comparing the second expected target position and the target type of the second target to be detected in the first image with the target position and the target type of the second target to be detected in the second image to judge whether the second target to be detected in the two images is the same target to be detected.
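The cross-view matching of claim 4 can be sketched as follows: a 3×3 transformation (homography) matrix maps a target position in the first image into the second image's coordinates, and two detections are treated as the same target when the projected position lands near a detection of the same type in the second image. The matrix `H`, the tolerance, and the tuple layout are assumed values for illustration only.

```python
# Project a point from image 1 into image 2 via a homography, then compare
# expected position and target type to decide whether two detections match.

def project(H, point):
    """Apply a 3x3 homography H (nested lists) to an (x, y) point."""
    x, y = point
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def same_target(H, det_a, det_b, tol=5.0):
    """det_a from image 1, det_b from image 2: (type, (x, y)) tuples."""
    type_a, pos_a = det_a
    type_b, pos_b = det_b
    if type_a != type_b:
        return False
    ex, ey = project(H, pos_a)          # expected position in image 2
    return abs(ex - pos_b[0]) <= tol and abs(ey - pos_b[1]) <= tol

H = [[1.0, 0.0, 30.0],    # pure-translation homography, for illustration
     [0.0, 1.0, -12.0],
     [0.0, 0.0, 1.0]]
print(same_target(H, ("forklift", (100.0, 50.0)), ("forklift", (131.0, 39.0))))
```

In practice the transformation matrix between the two fixed cameras would be estimated once during calibration (e.g. from matched reference points) and then reused for every frame.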
5. The method according to claim 4, wherein acquiring the target position of the first target to be detected and the target position of the second target to be detected in the first image, and acquiring the target position of the first target to be detected and the target position of the second target to be detected in the second image, comprises:
marking each target to be detected identified in the two images with an oriented bounding box to obtain a bounding box corresponding to each target to be detected;
and acquiring coordinate information of the bounding box corresponding to each target to be detected as the target position of each target to be detected.
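The oriented ("directional") bounding box of claim 5 is commonly stored as a center, a size, and a rotation angle, with the corner coordinates derived from them; those coordinates then serve as the target position. The parameterization below is one conventional choice, assumed for illustration rather than specified by the patent.

```python
# Derive the four corner coordinates of an oriented bounding box given its
# center (cx, cy), width w, height h, and rotation angle in degrees.
import math

def obb_corners(cx, cy, w, h, angle_deg):
    """Corner coordinates of an oriented bounding box, counter-clockwise."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    corners = []
    for dx, dy in ((-w/2, -h/2), (w/2, -h/2), (w/2, h/2), (-w/2, h/2)):
        # rotate each half-extent offset, then translate by the center
        corners.append((cx + dx * cos_a - dy * sin_a,
                        cy + dx * sin_a + dy * cos_a))
    return corners

print(obb_corners(0.0, 0.0, 4.0, 2.0, 90.0))
```

An oriented box fits elongated moving targets such as forklifts more tightly than an axis-aligned one, so distances measured between box coordinates track the actual separation more closely.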
6. The method according to claim 4, wherein judging whether there is a collision risk between the first target to be detected and the second target to be detected according to the target position of the first target to be detected and the target position of the second target to be detected comprises:
determining a first distance according to the target position of the first target to be detected and the target position of the second target to be detected in the first image;
judging whether the first distance is smaller than a first preset threshold value, if so, determining a second distance according to the target position of the first target to be detected in the second image and the target position of the second target to be detected, wherein the first target to be detected in the second image and the first target to be detected in the first image are the same target to be detected, and the second target to be detected in the second image and the second target to be detected in the first image are the same target to be detected;
and judging whether the second distance is smaller than a second preset threshold value, if so, determining that the first target to be detected and the second target to be detected have a collision risk.
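Claim 6's two-stage check can be sketched directly: a collision risk is flagged only when the distance between the two targets falls below a threshold in both camera views, which suppresses false alarms where targets merely appear close in one view because of perspective or occlusion. The pixel thresholds below are assumed values.

```python
# Flag a collision risk only if the two targets are close in BOTH images.
import math

def below(p1, p2, threshold):
    return math.dist(p1, p2) < threshold

def risk_in_both_views(pos1_img1, pos2_img1, pos1_img2, pos2_img2,
                       first_threshold=50.0, second_threshold=50.0):
    if not below(pos1_img1, pos2_img1, first_threshold):
        return False              # far apart in the first image: no risk
    return below(pos1_img2, pos2_img2, second_threshold)

# Close in image 1 but far in image 2 (perspective effect): no risk flagged.
print(risk_in_both_views((0, 0), (10, 0), (0, 0), (200, 0)))
```

Because the second distance is only computed when the first threshold is breached, the cheaper single-view test also acts as an early exit for the common no-risk case.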
7. The method according to claim 1, wherein the target type of the first target to be detected comprises an inanimate moving target, and the target type of the second target to be detected comprises an inanimate moving target, an animate target, and an inanimate fixed target.
8. The method according to claim 1, wherein performing anti-collision warning according to the target type of the second target to be detected comprises:
if the target type of the second target to be detected is an animate target, controlling the first target to be detected to decelerate and sending a collision early warning prompt;
and if the target type of the second target to be detected is an inanimate moving target or an inanimate fixed target, controlling the first target to be detected to stop operating and sending an anti-collision handling prompt to a management end.
9. The method according to claim 1, wherein after performing the anti-collision warning according to the target type of the second target to be detected, the method further comprises:
storing the record data corresponding to each anti-collision early warning;
and analyzing the recorded data to generate a management strategy.
10. A collision avoidance apparatus, characterized in that the apparatus comprises:
the image data acquisition module is used for acquiring image data of the target area;
the target detection module is used for carrying out target detection on the image data and acquiring information of a first target to be detected and information of at least one second target to be detected, wherein the information of the first target to be detected and the information of the second target to be detected both at least comprise a target type and a target position of the target to be detected;
the risk judgment module is used for judging whether collision risk exists between the first target to be detected and the second target to be detected according to the target position of the first target to be detected and the target position of the second target to be detected;
and the anti-collision early warning module is used for carrying out anti-collision early warning according to the target type of the second target to be detected if the collision risk between the first target to be detected and the second target to be detected is determined.
11. A storage medium storing a computer program which, when executed by one or more processors, is adapted to carry out the method of any one of claims 1 to 9.
12. An electronic device, comprising a memory and a processor, wherein the memory stores a computer program which, when executed by the processor, performs the method of any one of claims 1 to 9.
CN202110654517.2A 2021-06-11 2021-06-11 Anti-collision method and device, storage medium and electronic equipment Pending CN113379830A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110654517.2A CN113379830A (en) 2021-06-11 2021-06-11 Anti-collision method and device, storage medium and electronic equipment


Publications (1)

Publication Number Publication Date
CN113379830A true CN113379830A (en) 2021-09-10

Family

ID=77574009

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110654517.2A Pending CN113379830A (en) 2021-06-11 2021-06-11 Anti-collision method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN113379830A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023221443A1 (en) * 2022-05-20 2023-11-23 劢微机器人科技(深圳)有限公司 2d camera-based safety early warning method, apparatus and device, and storage medium


Similar Documents

Publication Publication Date Title
EP3315268B1 (en) Monitoring device and monitoring method
KR102347015B1 (en) Vehicle tracking in a warehouse environment
CN111674817B (en) Storage robot control method, device, equipment and readable storage medium
US10512941B2 (en) Projection instruction device, parcel sorting system, and projection instruction method
JP4066168B2 (en) Intruder monitoring device
US10860855B2 (en) Instruction projecting device, package sorting system and instruction projecting method
US10675659B2 (en) Instruction projecting device, package sorting system and instruction projecting method
JP6833354B2 (en) Information processing equipment, information processing methods and programs
US10471474B2 (en) Projection indicator, cargo assortment system, and projection indicating method
CN113379830A (en) Anti-collision method and device, storage medium and electronic equipment
CN115147587A (en) Obstacle detection method and device and electronic equipment
CN115565058A (en) Robot, obstacle avoidance method, device and storage medium
CN114648233A (en) Dynamic station cargo carrying method and system
US11783597B2 (en) Image semantic segmentation for parking space detection
JP5674933B2 (en) Method and apparatus for locating an object in a warehouse
US10589319B2 (en) Projection instruction device, parcel sorting system, and projection instruction method
US20190026566A1 (en) Method and system for detecting a free area inside a parking lot
JP6670351B2 (en) Map updating device and map updating method
CN117593703B (en) Control management method and system for parking lot barrier gate
CN116740890A (en) Monitoring and early warning method and device for port unmanned operation area
US20240005247A1 (en) System for detecting accident risk in working place
US10635869B2 (en) Projection instruction device, parcel sorting system, and projection instruction method
CN116834012A (en) Robot control method, apparatus, electronic device, and storage medium
CN114604199A (en) Vehicle protection system and method
TW202147271A (en) An electronic fence system and an electronic fence monitoring device adaptive for a building

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination