Detailed Description
Embodiments of the present application will be further described below with reference to the accompanying drawings. The same or similar reference numbers in the drawings identify the same or similar elements or elements having the same or similar functionality throughout. In addition, the embodiments of the present application described below in conjunction with the accompanying drawings are exemplary and are only for the purpose of explaining the embodiments of the present application, and are not to be construed as limiting the present application.
Referring to fig. 1 to 3, a control method according to an embodiment of the present disclosure includes the following steps:
011: acquiring a first position, a first moving direction and a first moving speed of a target object;
012: determining whether the current vehicle collides with the target object according to the first position, the first moving direction and the first moving speed, and the second position, the second moving direction and the second moving speed of the current vehicle; and
013: if so, adjusting the second moving speed so that the current vehicle and the target object do not collide with each other.
The control device 10 of the embodiment of the present application includes an obtaining module 11, a determining module 12, and an adjusting module 13, configured to perform step 011, step 012, and step 013, respectively. That is, the obtaining module 11 is configured to obtain a first position, a first moving direction, and a first moving speed of the target object; the determining module 12 is configured to determine whether the current vehicle and the target object will collide according to the first position, the first moving direction, and the first moving speed, and the second position, the second moving direction, and the second moving speed of the current vehicle; and the adjusting module 13 is configured to adjust the second moving speed when it is determined that the current vehicle would collide with the target object, so that the current vehicle and the target object do not collide.
The vehicle 100 of the embodiment of the present application includes a laser radar 20 and a processor 30. The laser radar 20 is configured to acquire a first position, a first moving direction, and a first moving speed of a target object. The processor 30 is configured to determine whether the current vehicle 100 and the target object will collide according to the first position, the first moving direction, and the first moving speed, and the second position, the second moving direction, and the second moving speed of the current vehicle 100; and, when it is determined that they would collide, to adjust the second moving speed so that the current vehicle 100 and the target object do not collide. That is, the laser radar 20 is configured to perform step 011, and the processor 30 is configured to perform step 012 and step 013.
Specifically, the laser radar 20 of the vehicle 100 is disposed at the vehicle head 40. When the vehicle 100 travels, the laser radar 20 located at the vehicle head 40 may scan the scene in front of the vehicle 100 in real time to identify movable objects in the scene (such as pedestrians, electric bicycles, automobiles, and the like). For example, the laser radar 20 may emit laser light within its field of view, so as to acquire the position, moving speed, moving direction, and the like of each object within the field of view. In one embodiment, the laser radar 20 may emit laser light at a preset frequency to obtain multiple frames of point cloud information (or depth images), and may determine the position, moving direction, and moving speed of the target object according to the position change of the target object between frames and the time interval between the frames (the time interval may be determined from the preset frequency and the difference in frame numbers).
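As an illustrative sketch only (the embodiment does not prescribe a particular computation), the position change between frames and the preset scan frequency described above might be combined as follows; the function name, the 2-D simplification, and the use of per-frame centroids are assumptions for illustration:

```python
import math

def estimate_motion(pos_frame_a, pos_frame_b, frame_index_a, frame_index_b, scan_hz):
    """pos_* are (x, y) centroids of the same target in two point-cloud frames."""
    # Time interval = frame-count difference / preset emission frequency.
    dt = (frame_index_b - frame_index_a) / scan_hz
    dx = pos_frame_b[0] - pos_frame_a[0]
    dy = pos_frame_b[1] - pos_frame_a[1]
    speed = math.hypot(dx, dy) / dt      # first moving speed
    direction = math.atan2(dy, dx)       # first moving direction (radians)
    return pos_frame_b, direction, speed # first position, direction, speed

# 10 frames apart at a 10 Hz scan rate -> dt = 1 s; the target moved 1 m
# along +x, so the estimated speed is 1 m/s with heading 0 rad.
pos, heading, v = estimate_motion((0.0, 10.0), (1.0, 10.0), 0, 10, 10.0)
```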
The laser radar 20 may have a large field angle, generally greater than or equal to 120 degrees (°). When the laser radar 20 is disposed at the vehicle head 40 (e.g., at the middle of the vehicle head 40), it obtains a larger field of view than a user seated in the vehicle, so that a target object that may collide with the vehicle is detected in advance and the probability of a safety accident is reduced.
Referring to fig. 4, in the present embodiment, the two laser radars 20 are respectively installed at two ends of the vehicle head 40, so as to cover a larger field of view in front of the vehicle 100, improve the overall field of view of the laser radars 20, further improve the probability of detecting a target object in advance, and reduce the probability of safety accidents.
In one example, the field angles of the two laser radars 20 are both 120 degrees and overlap, so that together the two laser radars 20 cover the entire field range in front of the vehicle 100; e.g., the combined field angle of the two laser radars 20 is 180 degrees, i.e., their field ranges together cover the 180-degree field range in front of the vehicle 100. It is understood that the closer to the central angle of view within the field of view of the laser radar 20, the higher the detection accuracy. For example, the field of view of the laser radar 20 is divided into a region of interest (ROI in fig. 4) and two regions of non-interest (non-ROI in fig. 4); generally, the region of interest lies within ±40 degrees of the central field angle, the other two regions are regions of non-interest, and the detection accuracy of the region of interest is higher than that of the regions of non-interest. When the vehicle 100 is provided with only a single laser radar 20, the region of interest covers the region directly in front of the vehicle 100, so as to preferentially ensure the detection accuracy for objects directly ahead. In the present application, the vehicle head 40 is provided with the two laser radars 20, and one region of non-interest of each laser radar 20 is covered by the region of interest of the other laser radar 20. Consequently, only two small regions of non-interest, of 0 to 20 degrees and 160 to 180 degrees, exist in the combined field range of the two laser radars 20, and the range of 20 to 160 degrees is entirely a region of interest, thereby improving the detection accuracy of the two laser radars 20.
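The angular bookkeeping above can be checked with a short sketch. The 0-to-180-degree scale across the front of the vehicle (straight ahead at 90 degrees) and the interval representation are assumptions for illustration; the FOV, ROI, and offset values are those stated in the example:

```python
def coverage(center_deg, half_angle_deg):
    # Angular interval covered around a central field angle.
    return (center_deg - half_angle_deg, center_deg + half_angle_deg)

FOV_HALF, ROI_HALF, OFFSET = 60, 40, 30          # 120-deg FOV, +-40-deg ROI, 30-deg offset
left_center, right_center = 90 - OFFSET, 90 + OFFSET  # lidar axes at 60 and 120 degrees

fov_left = coverage(left_center, FOV_HALF)    # (0, 120)
fov_right = coverage(right_center, FOV_HALF)  # (60, 180)
roi_left = coverage(left_center, ROI_HALF)    # (20, 100)
roi_right = coverage(right_center, ROI_HALF)  # (80, 160)

# The ROIs overlap (80 < 100), so the merged ROI is one contiguous span.
combined_fov = (min(fov_left[0], fov_right[0]), max(fov_left[1], fov_right[1]))
combined_roi = (min(roi_left[0], roi_right[0]), max(roi_left[1], roi_right[1]))
# combined_fov == (0, 180); combined_roi == (20, 160), leaving only the small
# 0-20 and 160-180 degree regions of non-interest, as described.
```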
Because the two laser radars 20 are respectively arranged at the two ends of the vehicle head 40, their field ranges leave a certain blind area A in front of the vehicle head 40. The offset angle of the two laser radars 20 cannot be set too large: the larger the offset angle, the larger the blind area A, so that the vehicle 100 cannot detect a nearby target object within the blind area A. Here, the offset angle of a laser radar 20 is the included angle between the direction corresponding to its central field angle and the direction directly in front of the vehicle 100 (such as 30° in fig. 4). In the embodiment of the present application, the offset angles of the two laser radars 20 are both 30 degrees, so that the combined field angle of the two laser radars 20 is exactly 180 degrees. Generally speaking, target objects that are likely to collide are basically within the 180-degree field range of the vehicle 100, and objects outside this range generally travel in the same direction as the vehicle 100, so the probability of collision is low. Therefore, on the basis of ensuring a large combined field angle, the offset angle of the two laser radars 20 is kept small, so that one region of non-interest of each laser radar 20 can be covered by the region of interest of the other laser radar 20, the proportion of the region of interest in the combined field range is maximized, and the detection accuracy of the two laser radars 20 is improved.
In other embodiments, if the detection accuracy of the regions of non-interest of the laser radars 20 also meets the requirement, a larger offset angle may be set; for example, with an offset angle of 45 degrees, a 210-degree field of view in front of the vehicle 100 is covered, maximizing the field range that the laser radars 20 can detect. In another embodiment, the offset angle of the laser radars 20 may be adjusted according to the scenario. For example, when the vehicle 100 is traveling on a highway, the distance between vehicles is generally large and there are generally no pedestrians; the influence of the blind area A is then small, and the offset angle may be set large so as to maximize the detectable field range. When the vehicle 100 travels on an urban road, the distance between vehicles is generally small and pedestrians are generally present; the influence of the blind area A is then large, and the offset angle may be set small, so that the blind area A is reduced and the vehicle 100 is prevented from colliding with a target object that cannot be detected within the blind area A.
Referring to fig. 5, after the laser radar 20 detects the first position W1, the first moving direction S1 and the first moving speed V1 of the target object, the processor 30 determines whether the current vehicle 100 collides with the target object according to the first position W1, the first moving direction S1 and the first moving speed V1, and the second position W2, the second moving direction S2 and the second moving speed V2 of the current vehicle 100; for example, the processor 30 can determine the position W3 of the collision point between the current vehicle 100 and the target object from the first position W1, the second position W2, the first moving direction S1 and the second moving direction S2, and then determine the time when the two reach the collision point from the second moving speed V2 of the current vehicle 100 and the first moving speed V1 of the target object, and if the two reach the collision point at the same time, determine that the two will collide.
When it is determined that a collision may occur, the processor 30 adjusts the moving speed of the current vehicle 100. For example, the second moving speed of the current vehicle 100 may be increased so that the current vehicle 100 accelerates through the collision point first, or the second moving speed may be reduced so that the target object passes through the collision point first; either way, a collision with the target object is prevented in advance.
With the control method, control device, vehicle 100, and non-volatile computer-readable storage medium of the embodiments of the present application, by detecting the positions, moving directions, and moving speeds of the target object and the current vehicle 100, it can be determined whether the two will collide. For example, the collision point can be determined from the positions and moving directions of the two, and the time at which each reaches the collision point can then be determined from their moving speeds; if the two reach the collision point at the same time, they will collide. The moving speed of the current vehicle 100 is therefore adjusted when it is determined that a collision may occur, preventing the collision in advance. This avoids the situation in which a user inside the vehicle 100, whose field of view is limited by the vehicle windows, fails to notice a distant but rapidly approaching vehicle, or notices it too late to react, which could lead to a safety accident.
In addition, the vehicle 100 is provided with the two laser radars 20, so that a larger field range in front of the vehicle 100 is covered, the overall field range of the laser radars 20 is increased, the probability of detecting a potentially colliding target object in advance is further improved, and the probability of safety accidents is reduced. Moreover, the offset angles of the two laser radars 20 are both 30 degrees, so that the combined field angle of the two laser radars 20 is exactly 180 degrees and one region of non-interest of each laser radar 20 is covered by the region of interest of the other laser radar 20; on the basis of ensuring a large combined field angle, the offset angles remain small, the proportion of the region of interest in the combined field range is maximized, and the detection accuracy of the two laser radars 20 is improved.
Referring to fig. 2, 3 and 6, in some embodiments, step 011 includes the steps of:
0111: controlling the current vehicle 100 to emit laser; and
0112: and receiving laser reflected by the target object to acquire a first position, a first moving direction and a first moving speed.
In some embodiments, the obtaining module 11 is further configured to perform step 0111 and step 0112. Namely, the obtaining module 11 is further configured to control the current vehicle 100 to emit laser light; and receiving the laser reflected by the target object to acquire a first position, a first moving direction and a first moving speed.
In some embodiments, the processor 30 is also used to control the current vehicle 100 to emit laser light; and receiving the laser reflected by the target object to acquire a first position, a first moving direction and a first moving speed. That is, step 0111 and step 0112 may be implemented by processor 30.
Specifically, in acquiring the position, moving direction, and moving speed of the target object, the vehicle 100 may be controlled to project laser light toward the field of view of the laser radar 20; objects within the field of view then reflect the laser light, which is received by the laser radar 20 to acquire the first position, the first moving direction, and the first moving speed. For example, the laser radar 20 may emit a laser point cloud; after the laser point cloud is reflected by the target object, the laser radar 20 may determine the point cloud information of the target object (i.e., the first position, such as a three-dimensional position coordinate) from the received laser point cloud, and may obtain the position change of the target object (i.e., the change in the three-dimensional position coordinate) from the point cloud information at different times, so as to determine the first moving direction and the first moving speed of the target object. Alternatively, the laser radar 20 determines the distance between the target object and the laser radar 20 from the emission time and the time of receiving the laser reflected by the target object, and can obtain a complete depth image of the field of view from the laser reflected by the scene; it determines the first position of the target object in the current scene, acquires depth images of multiple consecutive frames, and determines the first moving direction and the first moving speed of the target object from the position change of the target object across the consecutive depth images and the time interval between frames. In this way, the position, moving speed, and moving direction of the target object can be acquired rapidly, improving the efficiency of collision determination.
Referring to fig. 2, 3 and 7, in some embodiments, the control method further includes the following steps:
014: when the distance between the target object and the current vehicle 100 is equal to the blind area distance, controlling the current vehicle 100 to stop, wherein the blind area distance is determined according to the blind area A; and
015: and when the distance is greater than the safe distance, controlling the current vehicle 100 to start, wherein the safe distance is greater than or equal to the blind area distance.
In certain embodiments, the control apparatus 10 further includes a first control module 14 and a second control module 15, the first control module 14 configured to perform step 014 and the second control module 15 configured to perform step 015. That is, the first control module 14 is configured to control the current vehicle 100 to stop when the distance between the target object and the current vehicle 100 is equal to a blind area distance, which is determined according to the blind area a; and the second control module 15 is used for controlling the current vehicle 100 to start when the distance is greater than a safe distance, wherein the safe distance is greater than or equal to the blind area distance.
In some embodiments, the processor 30 is further configured to control the current vehicle 100 to stop when the distance between the target object and the current vehicle 100 is equal to a blind zone distance, the blind zone distance being determined according to the blind zone a; and when the distance is greater than the safe distance, controlling the current vehicle 100 to start, wherein the safe distance is greater than or equal to the blind area distance. That is, steps 014 and 015 may be implemented by the processor 30.
Specifically, referring to fig. 8, since the two laser radars 20 of the present embodiment are respectively disposed at the two ends of the vehicle head 40, the distance between them causes their field ranges to leave a blind area A of a predetermined range in front of the vehicle head 40; the larger the offset angle of the laser radars 20, the larger the predetermined range. The laser radars 20 cannot detect a target object B within the blind area A, and when the target object B is partially located within the blind area A, the measured distance between the target object B and the current vehicle 100 may be inaccurate. For example, the distance D1 between the target object B and the current vehicle 100 (i.e., the distance detected by the laser radar 20, hereinafter referred to as the detection distance D1) may be calculated from the first position of the target object B detected by the laser radar 20 and the second position of the current vehicle 100; when the target object B is partially located within the blind area A, the actual distance D2 between the target object B and the current vehicle 100 is less than or equal to the detection distance D1. Therefore, when the detection distance D1 between the target object B and the current vehicle 100 is equal to the blind area distance D3, the target object B may be partially located within the blind area A, and the vehicle 100 is controlled to stop, preventing a collision with the target object B. The blind area distance D3 is determined according to the blind area A, for example according to the maximum distance from the center of the vehicle head 40 to the edge of the blind area A. When the detection distance D1 is greater than the blind area distance D3, it may be determined that the target object B is not located within the blind area A, and the vehicle 100 may then start normally without risk of colliding with a target object B within the blind area A.
Of course, to further ensure safety, a safe distance greater than or equal to the blind area distance D3 may be set, and the vehicle 100 is allowed to start only when the detection distance is greater than the safe distance, thereby ensuring driving safety. As one example, the blind area distance D3 is 1 m, and the safe distance may be set to 2 m.
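The stop/start logic of steps 014 and 015 can be sketched as follows, using the example values above (1 m blind area distance, 2 m safe distance); the function name and the `"hold"` state for "no change" are assumptions:

```python
BLIND_ZONE_M = 1.0   # blind area distance D3
SAFE_M = 2.0         # safe distance, must be >= BLIND_ZONE_M

def vehicle_command(detected_distance_m, moving):
    # Stop when the detection distance shrinks to the blind area distance:
    # the target object may already be partly inside the blind area A.
    if detected_distance_m <= BLIND_ZONE_M:
        return "stop"
    # Restart only once the target is clearly outside the blind area.
    if not moving and detected_distance_m > SAFE_M:
        return "start"
    return "hold"
```

Requiring the larger safe distance before restarting, rather than the blind area distance itself, gives hysteresis: the vehicle does not oscillate between stop and start when the target lingers near the edge of the blind area.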
Referring again to fig. 2, 3 and 9, in some embodiments, step 012 includes:
0121: when the first moving direction and the second moving direction are intersected, determining the position of the collision point according to the first position, the second position, the first moving direction and the second moving direction;
0122: calculating a first distance between the collision point and the target object and a second distance between the collision point and the current vehicle 100 according to the first position, the second position and the position of the collision point;
0123: calculating a first time length according to the first distance and the first moving speed, and calculating a second time length according to the second distance and the second moving speed;
0124: when the time length difference between the first time length and the second time length is less than or equal to a preset threshold, determining that the current vehicle 100 collides with the target object; and
0125: when the time length difference between the first time length and the second time length is greater than the preset threshold, determining that the current vehicle 100 and the target object do not collide.
In certain embodiments, the determining module 12 is further configured to perform step 0121, step 0122, step 0123, step 0124, and step 0125. That is, the determining module 12 is further configured to determine the position of the collision point according to the first position, the second position, the first moving direction, and the second moving direction when the first moving direction and the second moving direction intersect; calculate a first distance between the collision point and the target object and a second distance between the collision point and the current vehicle 100 according to the first position, the second position, and the position of the collision point; calculate a first time length according to the first distance and the first moving speed, and a second time length according to the second distance and the second moving speed; determine that the current vehicle 100 collides with the target object when the time length difference between the first time length and the second time length is less than or equal to a preset threshold; and determine that the current vehicle 100 and the target object do not collide when the time length difference is greater than the preset threshold.
In some embodiments, the processor 30 is further configured to determine the position of the collision point according to the first position, the second position, the first moving direction, and the second moving direction when the first moving direction and the second moving direction intersect; calculate a first distance between the collision point and the target object and a second distance between the collision point and the current vehicle 100 according to the first position, the second position, and the position of the collision point; calculate a first time length according to the first distance and the first moving speed, and a second time length according to the second distance and the second moving speed; determine that the current vehicle 100 collides with the target object when the time length difference between the first time length and the second time length is less than or equal to a preset threshold; and determine that the current vehicle 100 and the target object do not collide when the time length difference is greater than the preset threshold. That is, step 0121, step 0122, step 0123, step 0124, and step 0125 may be implemented by the processor 30.
Referring to fig. 5 again, specifically, when determining whether the target object and the current vehicle 100 will collide, it is first determined whether their moving directions intersect. For example, the first moving direction S1 of the target object starts from the first position W1, and the second moving direction S2 of the current vehicle 100 starts from the second position W2; it is then determined whether the first moving direction S1 and the second moving direction S2 converge or diverge, i.e., whether they intersect. As shown in fig. 9, the first moving direction S1 and the second moving direction S2 converge, i.e., it is determined that they intersect. More specifically, the processor 30 may determine whether the first moving direction S1 and the second moving direction S2 intersect according to the trend of the distance between the first position W1 and the second position W2 at a plurality of consecutive times: when this distance gradually increases, the first moving direction S1 and the second moving direction S2 do not intersect, and when it gradually decreases, they intersect.
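The distance-trend test above can be sketched in a few lines. This is one possible reading of the heuristic (strictly decreasing gaps over the observed instants mean "converging"); the function name and 2-D positions are assumptions:

```python
import math

def directions_intersect(history_w1, history_w2):
    """history_w1/history_w2: (x, y) positions of the target object and the
    current vehicle at the same consecutive times."""
    gaps = [math.dist(p, q) for p, q in zip(history_w1, history_w2)]
    # Gradually decreasing distance -> converging, i.e. directions intersect.
    return all(b < a for a, b in zip(gaps, gaps[1:]))
```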
When it is determined that the first moving direction S1 and the second moving direction S2 do not intersect with each other, it may be determined that the target object and the current vehicle 100 do not collide with each other, and thus, the second moving speed of the current vehicle 100 may not need to be adjusted.
When it is determined that the first moving direction and the second moving direction intersect, the target object and the current vehicle 100 may collide. At this time, the processor 30 may accurately determine the position W3 of the collision point according to the first position W1, the second position W2, the first moving direction S1, and the second moving direction S2. The processor 30 then calculates a first distance d1 between the collision point and the target object (i.e., the distance between the position W3 of the collision point and the first position W1) and a second distance d2 between the collision point and the current vehicle 100 (i.e., the distance between the position W3 of the collision point and the second position W2). Next, the processor 30 calculates a first time length required for the target object to move to the collision point based on the first distance d1 and the first moving speed V1, and a second time length required for the current vehicle 100 to move to the collision point based on the second distance d2 and the second moving speed V2. It can be understood that when the first time length and the second time length are equal, the target object and the current vehicle 100 arrive at the collision point at the same time and will collide; when they are not equal, the two do not arrive at the collision point at the same time and do not collide.
Of course, since the target object and the current vehicle 100 each have a certain length, they may collide even if the first time length and the second time length are not exactly equal. Therefore, a preset threshold may be set: the processor 30 determines whether the time length difference between the first time length and the second time length is greater than the preset threshold, determines that the current vehicle 100 and the target object do not collide if the difference is greater than the preset threshold, and determines that they collide if the difference is less than or equal to the preset threshold. The preset threshold may be determined according to the first moving speed, the second moving speed, and the length of the current vehicle 100, for example according to the greater of the first moving speed and the second moving speed and the length of the current vehicle 100, so that a more accurate preset threshold is determined in real time and the accuracy of collision detection is ensured.
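Steps 0121 to 0125 can be sketched end to end. The embodiment does not specify the geometry, so a plain 2-D ray-intersection model with unit direction vectors is assumed here, along with the function name:

```python
def will_collide(w1, s1, v1, w2, s2, v2, threshold_s):
    """w1/w2: (x, y) positions; s1/s2: unit direction vectors; v1/v2: speeds
    (m/s); threshold_s: preset threshold on the arrival-time difference (s)."""
    # Step 0121: solve w1 + a*s1 = w2 + b*s2 for the ray parameters a, b.
    det = s1[0] * (-s2[1]) - (-s2[0]) * s1[1]
    if abs(det) < 1e-9:
        return False                       # parallel paths: no collision point
    rx, ry = w2[0] - w1[0], w2[1] - w1[1]
    a = (rx * (-s2[1]) - (-s2[0]) * ry) / det  # first distance d1 along s1
    b = (s1[0] * ry - rx * s1[1]) / det        # second distance d2 along s2
    if a < 0 or b < 0:
        return False                       # intersection lies behind one of them
    # Steps 0122-0123: distances to the collision point, then time lengths.
    t1 = a / v1                            # first time length
    t2 = b / v2                            # second time length
    # Steps 0124-0125: collide iff the time difference is within the threshold.
    return abs(t1 - t2) <= threshold_s

# Target at the origin heading +x at 1 m/s; vehicle at (5, -5) heading +y at
# 1 m/s: both reach (5, 0) at t = 5 s, so a collision is predicted.
hit = will_collide((0.0, 0.0), (1.0, 0.0), 1.0, (5.0, -5.0), (0.0, 1.0), 1.0, 0.5)
```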
Referring to fig. 10, an embodiment of the present disclosure further provides a non-volatile computer-readable storage medium 300 storing a computer program 302. When the computer program 302 is executed by one or more processors 30, the processors 30 may execute the control method of any of the above embodiments.
For example, referring to fig. 1, the computer program 302, when executed by the one or more processors 30, causes the processors 30 to perform the steps of:
011: acquiring a first position, a first moving direction and a first moving speed of a target object;
012: determining whether the current vehicle collides with the target object according to the first position, the first moving direction and the first moving speed, and the second position, the second moving direction and the second moving speed of the current vehicle; and
013: if so, adjusting the second moving speed so that the current vehicle and the target object do not collide with each other.
For another example, referring to fig. 6, when the computer program 302 is executed by the one or more processors 30, the processors 30 may further perform the following steps:
0111: controlling the current vehicle 100 to emit laser; and
0112: and receiving laser reflected by the target object to acquire a first position, a first moving direction and a first moving speed.
In the description herein, references to the terms "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example," "some examples," and the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, the various embodiments or examples and the features of different embodiments or examples described in this specification can be combined by one skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more program modules for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes additional implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
Although embodiments of the present application have been shown and described above, it is to be understood that the above embodiments are exemplary and not to be construed as limiting the present application, and that changes, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.