CN116331218A - Road information acquisition method, device, equipment and storage medium - Google Patents

Road information acquisition method, device, equipment and storage medium

Info

Publication number
CN116331218A
CN116331218A (application CN202111581063.7A)
Authority
CN
China
Prior art keywords
gradient
value
camera
angle
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111581063.7A
Other languages
Chinese (zh)
Inventor
郭子豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pateo Connect Nanjing Co Ltd
Original Assignee
Pateo Connect Nanjing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pateo Connect Nanjing Co Ltd filed Critical Pateo Connect Nanjing Co Ltd
Priority to CN202111581063.7A priority Critical patent/CN116331218A/en
Publication of CN116331218A publication Critical patent/CN116331218A/en
Pending legal-status Critical Current

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02 Estimation or calculation of non-directly measurable driving parameters related to ambient conditions
    • B60W40/06 Road conditions
    • B60W40/076 Slope angle of the road
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08 Interaction between the driver and the control system
    • B60W50/14 Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/146 Display means

Abstract

The embodiments of the application provide a method, an apparatus, a device, and a storage medium for acquiring road information. The method acquires first gradient data and second gradient data; adjusts the viewing angle of a camera according to the difference between the first gradient data and the second gradient data; and displays the road information captured by the camera to a user. With this method, the viewing angle of the camera can be adjusted sensibly on roads with complex conditions, especially on mountain roads with many slopes, according to the gradient of the road surface currently being driven and the gradient of road surfaces driven previously, so that the view within the driver's blind spot can be provided to the driver efficiently and reasonably.

Description

Road information acquisition method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of vehicle security, and in particular, to a method, an apparatus, a device, and a storage medium for obtaining road information.
Background
A blind spot is an area the driver cannot observe while driving because the line of sight is obstructed. Blind spots are a major hidden hazard to driving safety, one of the main causes of traffic accidents, and a serious threat to the driver's life. They are unavoidable while a vehicle is moving. In particular, when the vehicle travels on a slope, the steeper the grade, the larger the driver's blind spot. Moreover, if the driver mishandles the vehicle on a ramp, the car can roll back down the slope under gravity, making accidents even more likely.
At present, some vehicles can capture the scene within the driver's blind spot through cameras, avoiding the safety hazard the blind spot creates. However, on roads with complex conditions, especially mountain roads with many slopes, the size and extent of the driver's blind spot change constantly. In this case, how to provide the driver with the view inside the blind spot efficiently and reasonably is a problem to be solved by those skilled in the art.
Disclosure of Invention
An object of the present invention is to provide a method, an apparatus, a device, and a storage medium for obtaining road information that can reasonably adjust the viewing angle of a camera, on roads with complex conditions and especially on mountain roads with many slopes, according to the gradient of the road surface currently being driven and the gradient of road surfaces driven previously, so as to efficiently and reasonably provide the driver with the view inside the blind spot.
Another object of the present invention is to provide a method, an apparatus, a device, and a storage medium for obtaining road information that can determine whether the gradient of the current road satisfies a preset condition and accurately analyze the road environment in which the vehicle is located, so as to choose more intelligently when to turn the vehicle's front camera on or off, saving system resources and improving driving safety.
To achieve the above objects, in a first aspect, an embodiment of the present application provides a road information obtaining method, including the steps of: acquiring first gradient data and second gradient data, wherein the first gradient data comprises a first gradient value, the second gradient data comprises a second gradient value, the first gradient value is the gradient of the road surface on which the vehicle travels at a first moment, and the second gradient value is the gradient of the road surface on which the vehicle travels at a second moment; adjusting the viewing angle of the camera according to the difference between the first gradient data and the second gradient data; and displaying the road condition information captured by the camera to a user.
According to this method, after the two sets of gradient data are analyzed together, the viewing angle of the camera is adjusted based on the analysis result, and the image captured by the camera is displayed to the user in real time, so that the scene within the driver's blind spot can be provided to the driver efficiently and reasonably, improving driving safety.
In addition, it should be appreciated that although the gradient sensor responds quickly, typically on the order of milliseconds, the camera responds relatively slowly: operations such as turning on, turning off, or changing the viewing angle typically take several seconds to complete. Therefore, if the gradient changes too frequently, the camera cannot make a corresponding adjustment even though the gradient sensor detects the change in time. Moreover, since the user needs some reaction time between receiving information and acting on it, adjusting the camera's viewing angle too frequently would also change the displayed picture frequently, likely imposing a heavy visual processing burden on the user.
Therefore, in this embodiment, the viewing angle of the camera is changed only when the difference between the first gradient value and the second gradient value is greater than a first threshold and the interval between the first timestamp and the second timestamp is greater than a second threshold. This saves system resources, avoids frequently re-aiming the camera and frequently replacing the displayed picture, and improves driving safety.
It will be appreciated that when the gradient is below a certain angle, a significant blind spot in front of the vehicle is unlikely and probably does not affect safe driving. Therefore, in this embodiment, the camera is started, begins shooting, and presents its picture to the user only when the gradient of the road surface on which the vehicle is traveling is greater than a certain threshold (i.e., the third threshold). In this way, system resources can be further saved.
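The gating conditions described above can be sketched as follows. The concrete threshold values, names, and the degree/second units are illustrative assumptions; the patent itself only names a first, second, and third threshold.

```python
from dataclasses import dataclass

GRADIENT_DIFF_THRESHOLD = 3.0   # "first threshold", degrees (assumed value)
TIME_GAP_THRESHOLD = 5.0        # "second threshold", seconds (assumed value)
CAMERA_ON_THRESHOLD = 10.0      # "third threshold", degrees (assumed value)

@dataclass
class GradientSample:
    value: float      # road grade in degrees
    timestamp: float  # seconds

def camera_should_be_on(current: GradientSample) -> bool:
    # Only run the camera when the slope is steep enough to create
    # a meaningful blind spot in front of the vehicle.
    return current.value > CAMERA_ON_THRESHOLD

def should_adjust_view(first: GradientSample, second: GradientSample) -> bool:
    # Re-aim the camera only when the grade changed noticeably AND enough
    # time has passed, avoiding frequent re-framing of the displayed picture.
    grade_changed = abs(first.value - second.value) > GRADIENT_DIFF_THRESHOLD
    interval_ok = abs(first.timestamp - second.timestamp) > TIME_GAP_THRESHOLD
    return grade_changed and interval_ok
```

A 30° to 31° change within a second would fail both checks and leave the camera untouched, which is exactly the behavior the embodiment aims for.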
In a second aspect, an embodiment of the present application provides a road information obtaining apparatus comprising a gradient sensor, a camera, a processor, a display, and a memory. The gradient sensor is configured to detect the gradient of the road surface on which the vehicle travels; the camera is configured to capture road condition information in front of the vehicle; the memory is configured to store the gradient values detected by the gradient sensor together with the timestamp at which each value was detected; the processor is configured to determine the viewing angle of the camera from a gradient value and its corresponding timestamp and to control the camera to move to that viewing angle; and the display is configured to output the road condition information captured by the camera.
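A minimal structural sketch of how the apparatus components in the second aspect could be wired together. The class name, method names, and the one-to-one grade-to-angle update are assumptions for illustration, not the patent's implementation.

```python
from typing import Callable, List, Tuple

class RoadInfoDevice:
    """Hypothetical sketch: sensor -> memory -> processor -> camera angle."""

    def __init__(self, read_gradient: Callable[[], Tuple[float, float]]):
        self.read_gradient = read_gradient            # gradient sensor: returns (value, timestamp)
        self.history: List[Tuple[float, float]] = []  # memory: stored samples with timestamps
        self.view_angle = 0.0                         # current camera framing angle, degrees

    def step(self) -> float:
        # Processor: read the sensor, compare with the stored sample,
        # and re-aim the camera by the change in grade.
        value, ts = self.read_gradient()
        if self.history:
            prev_value, _ = self.history[-1]
            self.view_angle += value - prev_value
        self.history.append((value, ts))
        return self.view_angle  # a display would render the camera frame here
```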
In a third aspect, an embodiment of the present application provides an electronic device, including: a memory for storing a program; a processor for executing the program stored by the memory, the processor being for performing the method as in the first aspect and any optional implementation of the first aspect when the program is executed.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having a computer program stored therein, which when run on one or more processors, performs a method as in the first aspect and any of the alternative embodiments of the first aspect.
Drawings
To describe the technical solutions in the embodiments or the background of the present application more clearly, the drawings required by the embodiments or the background are briefly introduced below.
Fig. 1 is a schematic diagram of the correspondence between focal length and field angle according to an embodiment of the present application;
Fig. 2 is a schematic diagram of a user's view from inside a vehicle according to an embodiment of the present application;
Fig. 3 is a schematic diagram of a vehicle's blind spots on flat ground and on a ramp according to an embodiment of the present application;
Fig. 4 is a flowchart of a road information obtaining method according to an embodiment of the present application;
Fig. 5 is a flowchart of a road information obtaining method according to an embodiment of the present application;
Fig. 6 is a schematic diagram of a vehicle traveling on different road surfaces according to an embodiment of the present application;
Fig. 7 is a schematic diagram of a camera viewing-angle conversion process according to an embodiment of the present application;
Fig. 8 is a flowchart of a terminal device controlling the camera according to an embodiment of the present application;
Fig. 9 is a flowchart of a method for adjusting the viewing angle of a camera according to an embodiment of the present application;
Fig. 10 is a schematic plan view of a camera's viewing direction according to an embodiment of the present application;
Fig. 11 is a schematic structural diagram of a road information obtaining apparatus according to an embodiment of the present application;
Fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the present application will be further described with reference to the accompanying drawings.
The terms "first" and "second" and the like in the description, claims, and drawings of the present application are used to distinguish different objects, not to describe a particular order. Furthermore, the terms "comprising," "including," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion: a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to those steps or elements but may include other steps or elements not expressly listed or inherent to it.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly understand that the embodiments described herein may be combined with other embodiments.
In the present application, "at least one (item)" means one or more, "a plurality" means two or more, and "at least two (items)" means two or more. "And/or" describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: only A, only B, or both A and B, where A and B may be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relationship. "At least one of" the following items, or a similar expression, means any combination of those items; for example, "at least one of a, b, or c" may mean: a, b, c, "a and b," "a and c," "b and c," or "a and b and c."
The embodiments of the present application provide a method, a system, a device, and a storage medium for acquiring road information. To describe the solution more clearly, some background knowledge related to these embodiments is introduced below.
(1) Slope and slope measurement
Grade expresses the steepness of a slope and is commonly used to mark the steepness of hills, roofs, and road surfaces. With the rise in automobile hardware configurations, the vehicle's own computing power can already cover many complex scenarios. When the vehicle travels on a slope, the current grade can be calculated in real time from the inertial acceleration sensor and the filtered difference between the front- and rear-wheel accelerations.
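The idea behind this estimate can be sketched with a textbook simplification: on a grade, the longitudinal accelerometer reads the vehicle's acceleration plus a gravity component g·sin(θ), while differentiating wheel speed gives the vehicle's acceleration alone, so their difference reveals the grade. This is a hedged illustration, not the patent's exact filter.

```python
import math

G = 9.81  # standard gravity, m/s^2

def estimate_grade_deg(imu_accel: float, wheel_accel: float) -> float:
    """Estimated road grade in degrees from the two longitudinal readings.

    imu_accel:   accelerometer reading = vehicle accel + G * sin(grade)
    wheel_accel: acceleration derived from wheel speed = vehicle accel
    """
    # Clamp to [-1, 1] so sensor noise cannot push asin out of domain.
    ratio = max(-1.0, min(1.0, (imu_accel - wheel_accel) / G))
    return math.degrees(math.asin(ratio))
```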
(2) Focal length and angle of view: the focal length is the distance from the optical center of the lens to the sharp image formed at the focal plane, and measures how strongly an optical system converges or diverges light. The focal length determines the angle of view: a small focal length gives a large angle of view and a wide observable range, while a large focal length gives a small angle of view and a narrow range. Depending on whether the focal length is adjustable, lenses are divided into fixed-focus and zoom lenses. When the same subject is photographed from the same distance, a long focal length produces a large image and a short focal length a small one. In an optical instrument, the angle formed at the lens by the two edges of the largest range over which the subject's image can pass through the lens is called the angle of view. The angle of view determines the instrument's field of view: the larger the angle of view, the larger the field of view and the smaller the optical magnification. Colloquially, a target outside this angle will not be captured by the lens. Focal length is inversely related to angle of view: the larger the focal length, the smaller the angle of view, and vice versa. Taking Fig. 1 as an example, the camera can adjust its focal length during shooting through six steps: wide angle, 2×, 3×, 4×, 5×, and 6×. At the wide-angle setting, the angle the camera can capture is largest, i.e., 180° directly in front of the camera. At 2×, as shown in Fig. 1, the angle of view becomes 84°; if the focal length is adjusted to 6×, as shown in Fig. 1, only 30° directly ahead remains.
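The inverse relation between focal length and angle of view stated above is commonly written as FOV = 2·atan(w / 2f), where w is the sensor width and f the focal length. The 24 mm sensor width below is an assumption for illustration; the 84°/30° figures in the patent come from its own Fig. 1, not from this formula.

```python
import math

def field_of_view_deg(focal_length_mm: float, sensor_width_mm: float = 24.0) -> float:
    """Horizontal field of view for a simple rectilinear lens model."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))
```

Doubling the focal length roughly halves the viewing angle at long focal lengths, matching the narrative that a larger focal length yields a smaller angle of view.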
(3) Blind zone of visual field
A blind spot is the area outside the vehicle that the driver, seated in the cab, cannot see directly because the line of sight is blocked. It has four parts, front, rear, left, and right, whose sizes differ between vehicle types. Obstacles within the blind spot, whether stationary or moving, are invisible to the driver.
The blind spot is an area the driver cannot observe while driving because the line of sight is affected. It is a major hidden hazard to driving safety, one of the main causes of traffic accidents, and a serious threat to the driver's life.
Blind spots are unavoidable while a vehicle is moving. Fig. 2 is a schematic diagram of a user's view from inside a vehicle according to an embodiment of the present application. As shown in Fig. 2, the driver or a passenger can see the area in front of the vehicle through the window 201 while the vehicle is moving; in this view there is a vehicle 212 traveling ahead, a pedestrian 211 in front, and a pedestrian 213 crossing the road. However, as Fig. 2 shows, because of the occlusion by the left pillar 203, the right pillar 204, and the front end of the vehicle, the driver and passengers have blind spots of a certain extent when viewing the area ahead. Due to the occlusion by the right pillar 204 and the front end of the vehicle, the driver may be unable to see pedestrians 211 and 213 fully while driving, which seriously threatens the safety of the vehicle and of others on the road.
In addition, when the vehicle travels on a slope, the driver's blind spot is larger, and its threat to driving safety is more obvious. Fig. 3 is a schematic diagram of a vehicle's blind spots on flat ground and on a ramp according to an embodiment of the present application. As shown in Fig. 3(A), the driver's line of sight is horizontal while driving (driver's line of sight 301a). On flat ground, the front end 302a of the vehicle is below the driver's line of sight, and because of the occlusion by the front end 302a, the driver's blind spot when the vehicle travels on flat ground is the blind spot 303a shown in Fig. 3(A). In Fig. 3(B), however, the driver is driving on a slope with the line of sight still horizontal (driver's line of sight 301b); because of the vehicle's inclination on the slope, the front end of the vehicle is above the driver's line of sight. Thus, on a slope, the blind spot that the front end of the vehicle causes for the driver is larger than on flat ground. As shown in Fig. 3(B), due to the occlusion by the front end 302b of the vehicle, the driver's blind spot when the vehicle travels on a slope is the blind spot 303b shown in Fig. 3(B). Furthermore, when the vehicle is about to leave the ramp, the driver on the ramp may be unable to see the flat ground above the ramp because of the gap between the fields of view on the ramp and on the flat ground. As shown in Fig. 3(C), when the vehicle is about to crest the slope, the driver's blind spot due to the occlusion by the front end 302c of the vehicle is the blind spot 303c shown in Fig. 3(C). Moreover, even if other vehicles are traveling on the flat ground above the ramp, the vehicles cannot see each other, and traffic accidents become extremely likely.
With the rise in automobile hardware configurations, the vehicle's own computing power can already cover many complex scenes. In view of the above problems, a camera can be mounted at the front end of the vehicle. When the vehicle travels on a slope, the on-board device can calculate the current grade in real time from the inertial acceleration sensor and the filtered difference between the front- and rear-wheel accelerations, then adjust the camera's viewing angle according to the current grade, using the camera to capture the scene within the driver's blind spot and display it to the driver, thereby eliminating the safety hazard the blind spot creates.
However, on roads with complex conditions, especially mountain roads with many slopes, the size and extent of the driver's blind spot change constantly. In this case, adjusting the camera's viewing angle according to the gradient value alone is likely to require frequent adjustment of the camera. Yet when the change in grade is very small, for example from 30° to 31°, although the gradient sensor can detect the 1° change in the road grade, the blind spot corresponding to that 1° is unlikely to affect safe driving. Moreover, although the gradient sensor responds quickly, the camera responds slowly: turning it on, turning it off, or changing its viewing angle takes several seconds. Therefore, if the grade changes too frequently, the camera cannot make a corresponding adjustment even though the gradient sensor detects the change in time. In addition, since the user needs some reaction time between receiving information and acting on it, adjusting the camera's viewing angle too frequently would also change the displayed picture frequently, likely imposing a heavy visual processing burden on the user.
To address these shortcomings, an embodiment of the present application provides a method for controlling the camera. On roads with complex conditions, especially mountain roads with many slopes, the method adjusts the camera's viewing angle reasonably according to the gradient of the road surface currently being driven and the gradient of road surfaces driven previously, so that the scene within the driver's blind spot is provided to the driver efficiently and reasonably, driving safety is better ensured, and system resources are saved, as shown in Fig. 4.
Fig. 4 is a flowchart of a road information obtaining method according to an embodiment of the present application. As shown in fig. 4, the method comprises the steps of:
401. first gradient data and second gradient data are acquired.
The terminal device acquires the first gradient data and the second gradient data.
Specifically, the terminal device may be a mobile phone, an on-board unit (OBU), a tablet computer, a computer with data transmit/receive capability (such as a notebook or handheld computer), a mobile internet device, a terminal in industrial control, a wireless terminal in autonomous driving, a terminal in transportation safety, a terminal in a smart city, a terminal in a smart home, a terminal device in a 5G network, or a terminal device in a future-evolution public land mobile network, etc. One or more cameras may be mounted on the terminal device; alternatively, the terminal device may be communicatively connected to one or more devices with an image-capturing function, and the terminal device may acquire the images captured by the camera or by such a device. In addition, to obtain a larger field of view, the cameras in this and subsequent embodiments may be cameras with a large field of view, such as common wide-angle cameras.
Alternatively, when the terminal device is an in-vehicle device in a vehicle, the vehicle may be a general vehicle, a special vehicle (including but not limited to a police car, a tractor, etc.) or a rescue vehicle (including but not limited to an ambulance, a fire truck, a rescue car, etc.).
In addition, the terminal device may also be a device in an internet of things (IoT) system. IoT is an important component of future information technology; its main technical characteristic is connecting objects to a network through communication technology, realizing human-machine interconnection and an intelligent network of things. Optionally, IoT technology can achieve massive connectivity, deep coverage, and terminal power saving through, for example, narrowband technology. It is understood that the present application does not limit the specific form of the terminal device; any device capable of communicating with roadside devices, vehicles, vehicle management platforms, and the like falls within the scope of the terminal device.
The first gradient data comprises a first gradient value and the second gradient data comprises a second gradient value; the first gradient value is the gradient of the road surface on which the vehicle travels at a first moment, and the second gradient value is the gradient at a second moment. That is, the first gradient data is historically collected gradient data, which may be stored in the memory of the terminal device. The second gradient data is gradient data collected by the terminal device in real time; for example, the integrated components of the terminal device may include a gradient sensor, or the terminal device may be communicatively connected to a gradient sensor. While the vehicle is moving, the gradient sensor can detect the gradient of the current road surface in real time and send it to the terminal device. It will be appreciated that after collecting the second gradient data, the terminal device may also store it in the memory, so that it can later be retrieved as historical data.
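The storage step above can be sketched as a small timestamped ring buffer: each new sample becomes "second gradient data" when collected and "first gradient data" for later comparisons. The class and method names, and the bounded capacity, are assumptions for illustration.

```python
from collections import deque
from typing import Optional, Tuple

class GradientStore:
    """Hypothetical bounded store of (gradient value, timestamp) samples."""

    def __init__(self, max_samples: int = 1000):
        # deque with maxlen discards the oldest sample once full,
        # keeping memory use bounded on long drives.
        self._samples: deque = deque(maxlen=max_samples)

    def record(self, value: float, timestamp: float) -> None:
        self._samples.append((value, timestamp))

    def latest(self) -> Optional[Tuple[float, float]]:
        # Most recent historical sample ("first gradient data"), or None
        # if nothing has been recorded yet.
        return self._samples[-1] if self._samples else None
```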
402. And adjusting the view finding angle of the camera according to the difference value of the first gradient data and the second gradient data.
The camera may be disposed at the front end of the vehicle, for example, under the license plate of the vehicle or at the front axle of the vehicle, or elsewhere on the vehicle, as embodiments are not limited in this respect.
It can be appreciated that on roads with complex conditions, especially mountain roads with many slopes, the grade of the road surface changes constantly, so the values the gradient sensor collects in real time may change rapidly. In general, a change in the gradient value means the size and extent of the driver's blind spot may also have changed, and the camera's viewing angle should be adjusted in time so that it covers the angle corresponding to the blind spot. However, when the change in grade is very small, for example from 30° to 31°, although the gradient sensor can detect the 1° change, the blind spot corresponding to that 1° is unlikely to affect safe driving; that is, even if the camera keeps its current viewing angle, the hazard from the blind spot on the 31° slope is still eliminated.
Therefore, in the method, when the difference between the first gradient value and the second gradient value is greater than a first threshold value, the terminal device may control the view angle of the camera to be adjusted from a first angle to a second angle, where the first angle is the view angle of the camera at the first moment, and the second angle is the view angle corresponding to the view blind area in front of the vehicle at the second moment.
In an alternative embodiment, the difference between the first gradient value and the second gradient value is equal to the difference between the first angle and the second angle. In this way, the adjustment amplitude of the camera is minimized, system resources are saved, and the frequency with which the picture shot by the camera changes is reduced.
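The adjustment rule of step 402 can be sketched as follows. This is a minimal illustration, not the application's implementation; the threshold value and function names are assumptions (the application leaves the first threshold unspecified, giving 2° and 3° only as examples later):

```python
FIRST_THRESHOLD_DEG = 2.0  # example value; the application does not fix a specific number

def next_view_angle(first_grade, second_grade, current_angle):
    """Return the camera viewing angle after comparing two gradient samples.

    Per the alternative embodiment above, the angle is changed by exactly
    the gradient difference, keeping the adjustment amplitude minimal.
    """
    delta = second_grade - first_grade
    if abs(delta) > FIRST_THRESHOLD_DEG:
        return current_angle + delta  # second angle = first angle + gradient change
    return current_angle  # change too small to create a meaningful blind area
```

Under these assumptions, a 1° gradient change (30° to 31°) leaves the viewing angle untouched, while a 5° change tilts the camera by the same 5°.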
403. And displaying the road information shot by the camera to a user.
After acquiring the picture of the camera, the terminal device may display the picture containing the road information to a user.
Specifically, when the terminal device is a vehicle-mounted device, the device for displaying the road information may be a screen with a display function, such as an instrument screen, a central control screen, and a HUD, in the vehicle, which is not limited in this embodiment of the present application.
In order to further explain the above method, the embodiments of the present application provide a flowchart of a more detailed road information obtaining method and a schematic diagram of a vehicle driving on different driving roads, and refer to fig. 5 and 6.
Fig. 5 is a flowchart of a road information obtaining method according to an embodiment of the present application. As shown in fig. 5, the method comprises the steps of:
501. grade data is detected.
The terminal device detects gradient data. The terminal device may be the terminal device described above with reference to fig. 2, and its specific form is not repeated here. The terminal device may be provided with a camera, or may be communicatively connected to a camera; in either case, the terminal device can acquire the images captured by the camera (or by the device with the camera function) and can control the opening or closing of the camera.
The gradient data may include the gradient value of the road surface on which the vehicle is traveling. Specifically, when the terminal device is a vehicle-mounted terminal, a gradient sensor may be one of its integrated components, or the terminal device may be communicatively coupled to a gradient sensor. While the vehicle is running, the gradient sensor can detect the gradient of the current road surface in real time from the filtered difference between the reading of the inertial acceleration sensor and the acceleration derived from the front and rear wheels, and send the gradient value to the terminal device.
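The sensing principle described above can be sketched in Python. This is a hedged illustration of the general accelerometer/wheel-speed technique (on a slope, the inertial sensor reads the true longitudinal acceleration plus the gravity component g·sin θ, while wheel speeds reflect only the true acceleration); the function name and filtering details are assumptions, not taken from the application:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def estimate_grade_deg(imu_longitudinal_accel, wheel_derived_accel):
    """Estimate the road grade (degrees) from the difference between the
    inertial (IMU) longitudinal acceleration and the acceleration derived
    from the front/rear wheel speeds.

    The (already filtered) difference isolates g*sin(theta), from which
    the slope angle theta is recovered.
    """
    ratio = (imu_longitudinal_accel - wheel_derived_accel) / G
    ratio = max(-1.0, min(1.0, ratio))  # guard against noise pushing outside [-1, 1]
    return math.degrees(math.asin(ratio))
```

In practice the raw difference would be low-pass filtered before this computation, as the application indicates.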
502. And judging whether the gradient value is larger than a third threshold value.
The terminal device judges whether the gradient value in the detected gradient data is greater than a third threshold. It will be appreciated that when the gradient is below a certain angle, a blind area in the field of view in front of the vehicle is unlikely to exist and is unlikely to affect the driver's safe driving. Therefore, in this embodiment, only when the gradient of the road surface on which the vehicle is traveling exceeds the third threshold is the camera started, does it begin shooting, and is the shot picture presented to the user. In this way, system resources can be saved.
In particular, the third threshold may be set to a specific angle value, for example, 15 °, 20 ° or other degrees, and the specific numerical value is not limited in this application. The terminal equipment can control the opening or closing of the camera according to whether the gradient value is larger than a third threshold value. When the gradient value is greater than a third threshold value, executing step 503, namely controlling the camera to be started, and displaying road information shot by the camera to a user; when the grade value is less than or equal to the third threshold value, then step 501 is repeated, i.e., the grade data is re-detected.
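The on/off decision of steps 502-503 reduces to a simple comparison; a minimal sketch, assuming an example threshold of 15° (the application explicitly leaves the specific value open):

```python
THIRD_THRESHOLD_DEG = 15.0  # example; 15, 20 or other values are equally possible

def camera_should_be_on(grade_deg):
    """Step 502: keep the camera closed on (near-)flat ground to save
    system resources; open it once the grade exceeds the third threshold."""
    return grade_deg > THIRD_THRESHOLD_DEG
```

With this rule, the flat-ground reading of 0° in fig. 6 (A) keeps the camera closed, while the ramp reading θ1 > θ in fig. 6 (B) opens it.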
Taking the scenario shown in fig. 6 as an example, the vehicle-mounted terminal on the vehicle 601 in fig. 6 may be the above-described terminal device. Here, it is assumed that the third threshold value is θ, and 0 ° < θ < θ1 < θ2 < θ3 < θ4.
As shown in fig. 6 (a), at time t0, the vehicle 601 travels on a flat ground, and the camera is in a closed state. It will be appreciated that at time t0, the terminal device detects fifth grade data, and the grade value detected by the grade sensor on vehicle 601 is 0. Since 0 DEG < θ, that is, the gradient value acquired at time t0 is smaller than the third threshold value. At this time, the camera remains in the closed state. It will be appreciated that on flat ground, the camera is always in the closed state.
As shown in fig. 6 (B), at time t3, the vehicle 601 travels from the flat ground 602 onto the ramp 603, before which the camera is in the closed state. At time t3, the terminal device detects the third gradient data, and the gradient value detected by the gradient sensor on the vehicle 601 is θ1. Since θ < θ1, that is, the gradient value acquired at time t3 is greater than the third threshold. At this time, the camera is turned on. And then, the terminal equipment displays the road information shot by the camera to a user.
503. And opening the camera, and displaying the road information shot by the camera to a user.
Specifically, the device for displaying the road information on the vehicle 601 may be a screen with a display function, such as an instrument screen, a central control screen, and a HUD, which is not limited in the embodiment of the present application.
Optionally, before executing step 503, the terminal device may further determine whether the vehicle is in a reversing state. If the vehicle is reversing, the camera remains in the closed state; if not, the camera is opened and the road information shot by the camera is displayed to the user.
Optionally, after executing step 503, the terminal device may determine whether the camera was turned on successfully. If not, the terminal device can output prompt information reminding the user that the equipment is faulty.
504. First gradient data and second gradient data are acquired.
The first gradient data is gradient data collected by the terminal device on a road surface the vehicle travelled historically, and the second gradient data is gradient data collected on the road surface the vehicle is currently travelling. The first gradient data comprises a first gradient value and the second gradient data comprises a second gradient value, where the first gradient value is the gradient of the road surface at a first moment and the second gradient value is the gradient at a second moment. It will be appreciated that the two gradient data differ in acquisition time and gradient value, which to some extent also reflects a change in the size and range of the driver's blind area during vehicle travel. Therefore, by analyzing the two gradient data together, adjusting the viewing angle of the camera based on the analysis result, and displaying the image shot by the camera to the user in real time, the scene in the driver's blind area can be provided to the driver efficiently and reasonably, enhancing the driving safety of the vehicle. Note that the first gradient data should be the most recent of the historically stored gradient data.
Again taking the scene shown in fig. 6 as an example, as shown in (C) in fig. 6, at time t1 the vehicle 601 travels from the ramp 603 onto the ramp 604, the camera having already been turned on before this. At time t1, the gradient data detected by the terminal device on the ramp 604 may be used as the second gradient data, where the gradient value is θ2 (i.e., the second gradient value); the third gradient data detected at time t3 may be used as the first gradient data, where the gradient value is θ1 (i.e., the first gradient value).
Similarly, as shown in fig. 6 (D), at time t2 the vehicle 601 travels from the ramp 605 onto the ramp 606, the camera having remained on throughout the period from t1 to t2. At time t2, the gradient data detected by the terminal device on the ramp 606 may be used as the second gradient data, where the gradient value is θ4 (i.e., the second gradient value); the gradient data previously detected on the ramp 605 may be used as the first gradient data, where the gradient value is θ3 (i.e., the first gradient value).
505. And judging whether the difference value between the first gradient value and the second gradient value is larger than a first threshold value.
It can be appreciated that the gradient of the road surface on which the vehicle is traveling changes constantly on roads with complex conditions, particularly in hilly terrain with many slopes, so the gradient values collected by the gradient sensor in real time may change rapidly. In general, a change in the gradient value means that the size and range of the driver's blind area may also have changed, and the viewing angle of the camera should be adjusted in time so that it covers the angle corresponding to the driver's blind area. However, when the gradient change is extremely small, for example from 30° to 31°, the gradient sensor can detect the 1° change, but the additional blind area corresponding to that 1° is unlikely to affect safe driving; that is, even if the camera keeps its current viewing angle, the safety risk from the blind area on the 31° slope is still eliminated. Therefore, the method changes the viewing angle of the camera only when the change in the gradient value exceeds a certain threshold (namely the first threshold).
Specifically, the first threshold may be set to a specific angle value, for example 2°, 3° or other degrees; the specific numerical value is not limited in this application. When the difference between the first gradient value and the second gradient value is greater than the first threshold, step 506 is executed; when the difference is less than or equal to the first threshold, step 504 is repeated, i.e., the change of the gradient data with time is detected again.
In connection with the foregoing description and the scenario shown in fig. 6, assume here that the difference between θ2 and θ1 is smaller than the first threshold, and the difference between θ4 and θ3 is larger than the first threshold. At time t1 the difference between the first and second gradient values is smaller than the first threshold, so the viewing angle of the camera remains unchanged and the terminal device repeats step 504. At time t2 the difference is larger than the first threshold, so the terminal device performs step 506.
506. And judging whether the interval between the first time stamp and the second time stamp is larger than a second threshold value.
In particular, the second threshold may be set to a time interval value, for example, 100ms, 500ms, or other time interval values, which is not limited in this application.
That is, the first gradient data further includes a first timestamp, the second gradient data further includes a second timestamp, the first timestamp is a timestamp corresponding to the first time, and the second timestamp is a timestamp corresponding to the second time.
It should be appreciated that although the gradient sensor responds quite quickly, typically on the order of milliseconds, the camera responds relatively slowly: operations such as opening, closing, or changing the viewing angle typically take several seconds to complete. Therefore, if the gradient changes too frequently, the camera cannot make a corresponding adjustment even though the gradient sensor detects every change in time. In addition, since a user needs a certain reaction time between receiving information and acting on it, adjusting the viewing angle too frequently would cause the picture displayed to the user to change just as frequently, likely imposing a larger visual-processing burden on the user.
In connection with the foregoing description and the scenario shown in fig. 6, as shown in (D) of fig. 6, when the difference between θ4 and θ3 is greater than the first threshold, it is further determined whether the time interval between t1 and t2 is greater than the second threshold. If the difference between θ4 and θ3 is greater than the first threshold and the interval between t1 and t2 is greater than the second threshold, the terminal device executes step 507, i.e., changes the viewing angle of the camera. Otherwise, the terminal device repeats step 504, i.e., detects the change of the gradient data with time again.
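The combined condition of steps 505 and 506 (a large enough gradient change *and* a long enough interval between timestamps) can be sketched as one predicate. The default values here are the illustrative figures given above (2° and 500 ms) and the tuple layout is an assumption:

```python
def should_adjust_view(first, second, first_threshold_deg=2.0, min_interval_ms=500):
    """Steps 505-506: adjust the viewing angle only when the gradient change
    exceeds the first threshold AND the interval between the two timestamps
    exceeds the second threshold, so that the (slow) camera is not re-aimed
    on every millisecond-level sensor update.

    `first` and `second` are (grade_deg, timestamp_ms) tuples, i.e. the
    first/second gradient value paired with the first/second timestamp.
    """
    (g1, t1), (g2, t2) = first, second
    return abs(g2 - g1) > first_threshold_deg and (t2 - t1) > min_interval_ms
```

This debouncing doubles as a guard against overloading the user with rapidly changing pictures, per the reasoning above.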
507. And adjusting the view finding angle of the camera from the first angle to the second angle.
The first angle is a view angle of the camera at the first moment, and the second angle is a view angle corresponding to a view blind area in front of the vehicle at the second moment.
Optionally, the difference between the first slope value and the second slope value is equal to the difference between the first angle value and the second angle value.
508. Grade data is detected.
And then, the terminal equipment continues to detect gradient data in real time so as to determine whether the view angle of the camera needs to be adjusted or the camera needs to be closed.
509. And judging whether the gradient value is smaller than or equal to the third threshold value.
The gradient of the road surface changes continuously as the vehicle keeps running. It can be appreciated that, with the camera turned on, the terminal device determines in real time whether each newly detected gradient value is greater than the third threshold. If so, the terminal device further determines whether the difference between the new gradient value and the historically detected gradient value is greater than the first threshold, and whether the interval between the timestamp of the new gradient value and that of the historical gradient value is greater than the second threshold; if both conditions hold, the terminal device adjusts the viewing angle of the camera. If the new gradient value is not greater than the third threshold, the terminal device performs step 510, i.e., turns off the camera.
510. And closing the camera.
When, at a certain moment (for example, a fourth moment), the gradient of the road surface on which the vehicle runs is smaller than the third threshold, the terminal device determines that the newly detected gradient value is below the third threshold and can control the camera to be closed.
As shown in fig. 6 (E), at time t4, the vehicle 601 travels from the ramp 606 to the flat ground 607, before the camera is in the on state. At time t4, the terminal device detects fourth gradient data, and the gradient value detected by the gradient sensor on the vehicle 601 is 0 °. Since 0 ° < θ, that is, the gradient value acquired at time t4 is smaller than the third threshold value. At this time, the terminal device controls the camera to be turned off.
Optionally, after the camera is turned off, the terminal device may control the view angle of the camera to return to a horizontal view, so that the camera may be normally used in a subsequent driving process.
Based on the method provided in fig. 5 and the driving scene shown in fig. 6, the embodiment of the application provides a schematic diagram of a camera view angle conversion process. As shown in fig. 7, the camera 701 shown in fig. 7 may be the camera in the foregoing description.
As shown in fig. 7 (a), the camera is in a view angle of the camera in a closed state, corresponding to fig. 6, when the vehicle 601 runs on the flat ground 602, that is, when the gradient value of the road surface on which the vehicle 601 runs is smaller than the third threshold value. At this time, the view angle of the camera is a horizontal forward view, i.e. the lower tilt angle of the lens is 0 °.
As shown in fig. 7 (B), this corresponds to the viewing angle of the camera in fig. 6 when the vehicle 601 runs on the ramp 603, that is, when the gradient of the road surface on which the vehicle 601 runs is θ1. At this time, the downward tilt angle of the lens is α1. Specifically, α1 is equal to θ1.
Here again, it is assumed that the difference between θ2 and θ1 in fig. 6 is smaller than the first threshold value, and the difference between θ4 and θ3 is larger than the first threshold value.
Corresponding to time t1 in fig. 6, since the difference between θ2 and θ1 is smaller than the first threshold, the view angle of the camera does not need to be adjusted. Therefore, the view angle of the camera at this time remains α1 as shown in (B) of fig. 7.
Corresponding to time t2 in fig. 6, since the difference between θ4 and θ3 is greater than the first threshold, the viewing angle of the camera needs to be adjusted, and it is changed to α2 as shown in (C) of fig. 7. Specifically, the value of (α2 − α3) is equal to (θ4 − θ3), where α3 is the viewing angle before the adjustment (not shown in fig. 7).
Corresponding to time t4 in fig. 6, the vehicle 601 is driving from the ramp 606 to the flat ground 607, where the gradient value of the ground is 0 °, i.e. the gradient value of the road surface on which the vehicle 601 is driving is smaller than the third threshold value, and the camera is turned off. Therefore, the view angle of the camera at this time is adjusted to 0 ° as shown in (D) of fig. 7.
Optionally, an x-y coordinate system may be established with the center point of the camera as the origin of coordinates, the four coordinate quadrants determining the four basic orientations in which the camera can rotate: up, right, down and left. The four orientations are marked with hexadecimal constants, and the viewing-angle direction of the camera can support direction synthesis, such as upper-right or lower-left.
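The hexadecimal direction encoding above (values given explicitly in step 902 later: Top 0x1, Right 0x10, Bottom 0x100, Left 0x1000) composes naturally with bitwise OR; a small sketch, with constant names chosen for illustration:

```python
# Hexadecimal direction constants; one hex digit per orientation means
# composite values never collide with the base values.
TOP    = 0x1
RIGHT  = 0x10
BOTTOM = 0x100
LEFT   = 0x1000

# Direction synthesis via bitwise OR, e.g. upper-right and lower-left:
TOP_RIGHT   = TOP | RIGHT      # 0x11
BOTTOM_LEFT = BOTTOM | LEFT    # 0x1100
```

Because each orientation occupies its own hex digit, a composite value can also be decomposed again by testing individual bits (e.g. `value & TOP`).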
Based on the foregoing description, the embodiment of the application provides a workflow diagram for controlling the operation of a camera by a terminal device. As shown in fig. 8, the flow includes the steps of:
801. request a data frame and parse the data frame.
The terminal device starts to parse the data after receiving the request instruction data frame, and the terminal device may be a terminal device in the foregoing description.
In particular, the data frame may include an action code and additional data. Wherein: the action code may be used to indicate what action the camera needs to perform, e.g., action code 001 may represent on, action code 002 represent off, etc. The additional data includes a series of identifications that can be used to determine the acquirer and the conveyor of the data generated in the process.
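A minimal Python sketch of the parsing in step 801. The frame layout and field names are assumptions — the application only states that a frame carries an action code plus additional identifying data — and the code "003" for other actions (e.g. changing the viewing angle) is hypothetical:

```python
ACTION_OPEN, ACTION_CLOSE, ACTION_ADJUST = "001", "002", "003"
KNOWN_ACTIONS = {ACTION_OPEN, ACTION_CLOSE, ACTION_ADJUST}

def parse_frame(frame):
    """Split a request data frame (assumed dict-like) into its action code
    and the additional identifying data; raise on an unrecognizable code,
    which corresponds to the invalid-frame branch of step 802."""
    action = frame.get("action_code")
    if action not in KNOWN_ACTIONS:
        raise ValueError("unrecognizable action code: %r" % (action,))
    extra = {k: v for k, v in frame.items() if k != "action_code"}
    return action, extra
```

The additional data would carry identifiers such as the requesting client and the target display device, per the description above.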
802. It is determined whether the data frame is valid.
If the data frame is invalid, for example because an unrecognizable action code appears or the additional data (such as the requesting client identifier or the display device identifier) cannot be parsed, the process ends; otherwise, the terminal device executes step 803 to judge the action code.
803. And judging whether the action code is closed or not.
If the action code is closed, the terminal device continues to execute a subsequent step 804; otherwise the terminal device continues to execute the subsequent step 810.
804. Judging whether the camera is closed or not.
Under the condition that the action code is closed, the terminal equipment continues to judge the current running state of the camera; if the camera is in the off state, step 809 is executed, and the flow ends. Otherwise, the terminal device performs the next step 805.
805. And closing the camera.
And under the condition that the camera is not closed, the terminal equipment controls the camera to be closed.
806. And judging whether the closing is abnormal or not.
If an abnormality occurs when the camera is closed, step 808 is executed, i.e., the terminal device records and feeds back the abnormality, and the flow ends; otherwise, the terminal device executes step 807, i.e., according to the display device identifier parsed from the data frame (such as the HUD, instrument screen or central control screen), it sends a transmission interrupt request to the corresponding display device module through the MCU, and the flow ends.
807. And sending a transmission interrupt signal to the display module.
And the terminal equipment sends a transmission interrupt request to the corresponding display equipment module through the MCU according to the display equipment identifier analyzed by the data frame, such as the HUD, the instrument screen, the central control screen and the like, and the process is ended.
808. The anomalies are recorded and fed back.
And under the condition that the abnormality occurs when the camera is closed, the terminal equipment records and feeds back the abnormality, and the process is ended.
809. And executing a redundant instruction processing flow.
And under the condition that the action code is closed and the camera is closed, the terminal equipment executes a redundant instruction processing flow, and the flow is ended.
810. And judging whether the action code is on or not.
If the action code is on, step 811 is executed, i.e., the terminal device judges whether the camera is on; otherwise, step 815 is executed, i.e., the terminal device executes the default instruction flow.
811. Judging whether the camera is started or not.
And under the condition that the action code is on, the terminal equipment judges whether the camera is in an on state or not. If the camera is turned off, the terminal device executes step 812, i.e. turns on the camera; otherwise, the terminal device executes step 809, namely executes the redundant instruction processing flow, and the flow ends.
812. And starting the camera.
And under the condition that the action code is on and the camera is off, the terminal equipment controls the camera to be turned on.
813. And judging whether the opening is abnormal or not.
If an abnormality occurs when the camera is started, the terminal device executes step 808, namely the terminal device records and feeds back the abnormality, and the process is ended; otherwise, the terminal device executes step 814, that is, the terminal device sends a connection request to the corresponding display device module through the MCU according to the display device identifier, such as the HUD, the instrument panel, the central control panel, etc., which is parsed by the data frame, and the process ends.
814. And sending a connection signal to the display module.
And the terminal equipment sends a connection request to the corresponding display equipment module through the MCU according to the display equipment identification analyzed by the data frame, such as the HUD, the instrument screen, the central control screen and the like, and the process is ended.
815. And executing a default instruction flow.
And under the condition that the action code is not opened or closed, for example, the action code can change the view angle of the camera, and the terminal equipment controls the camera to execute the instruction flow corresponding to the action code.
It will be appreciated that the complete steps performed by the workflow of the terminal device in sequence from start to end may specifically include the following 7 cases, namely: step 801, step 802, step 803, step 804, step 805, step 806, and step 807, and the flow ends; or step 801, step 802, step 803, step 804, step 805, step 806, and step 808, and the flow ends; or step 801, step 802, step 803, step 804, step 809, the flow ends; or step 801, step 802, step 803, step 810, step 811, step 812, step 813, and step 808, the flow ends; or step 801, step 802, step 803, step 810, step 811, step 812, step 813, and step 814, the flow ends; or step 801, step 802, step 803, step 810, step 811, and step 809, the flow ends; or step 801, step 802, step 803, step 810, and step 815, the flow ends.
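The branching of steps 803-815 can be condensed into a single decision function; a sketch, with string labels standing in for the actual action codes and branch outcomes (both are illustrative names, not from the application):

```python
def handle_frame(action_code, camera_on):
    """Condensed sketch of the fig. 8 workflow: given the parsed action code
    and the current camera state, name the branch the terminal device takes.
    """
    if action_code == "close":                       # step 803
        # step 804: closing an already-closed camera is redundant (step 809)
        return "close_camera" if camera_on else "redundant_instruction"
    if action_code == "open":                        # step 810
        # step 811: opening an already-open camera is redundant (step 809)
        return "redundant_instruction" if camera_on else "open_camera"
    return "default_instruction"                     # step 815, e.g. change viewing angle
```

The close/open branches would then proceed to the abnormality check (steps 806/813) and to notifying the display module (steps 807/814), which are omitted here.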
In order to further explain the specific data and operations received by the camera in the foregoing description, the present embodiment provides a method flowchart for adjusting the view angle of the camera. The method may be applied to a road information acquisition system, as shown in fig. 9, which may include a camera control system 90, a driver 91, and a hardware controller 92, and alternatively, the control system 90, the driver 91, and the hardware controller 92 may be integrated on the same electronic device, which may be a terminal device in the foregoing description. The process comprises the following steps:
901: the camera control system 90 acquires instruction data.
Specifically, the instruction data includes two parameters, namely, the rotation direction and rotation angle of the camera. The camera control system 90 receives the instruction data and starts to check the instruction data.
In addition, the instruction data may also include an identification of an opening action, an identification of a ramp system request, an identification of a display device, and so forth. The identification of the opening action is used for indicating the system to open the camera, the identification of the ramp system request is used for indicating the object calling the system, and the identification of the display device is used for indicating which display device the shot image is displayed on.
902: the camera control system 90 checks the instruction data.
The camera control system 90 checks the command data. As shown in fig. 10, an x-y coordinate system is established with the center point of the camera as the origin of coordinates, and the four base orientations of the camera are determined: up (Top: 0x1), right (Right: 0x10), down (Bottom: 0x100) and left (Left: 0x1000). The four orientations are marked with hexadecimal constants, and the rotation direction of the camera can be synthesized from the orientation parameters, e.g., upper-right. Note that the orientation parameters support only the synthesis of adjacent directions, not of opposite ("interval") directions; up and down, for example, cannot be combined. The rotation angle is the angle by which the camera needs to tilt in the corresponding direction, expressed in degrees. The currently supported direction parameters can therefore be listed by enumeration: up, down, left, right, upper-right, lower-right, upper-left, lower-left, and the initial orientation (0x0). If the direction parameter is not one of these values, it is determined to be illegal, the check fails, and the flow ends. The valid angle range is [0, preset value A], where the preset value A is provided by the camera supplier and is the maximum angle through which the camera can rotate in a given direction. For example, if the preset value A is 60°, an angle parameter exceeding 60° or smaller than 0° is illegal, the check fails, and the flow ends. Note that when the direction passed in is 0 and the angle is 0°, the camera is restored to its initial position. Otherwise, the instruction data passes the check.
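The check in step 902 can be sketched as follows. The constants follow the values stated above; the maximum angle of 60° stands in for the supplier-provided preset value A, and the function name is illustrative:

```python
TOP, RIGHT, BOTTOM, LEFT = 0x1, 0x10, 0x100, 0x1000

# Legal direction parameters: the four base orientations, the four adjacent
# combinations, and the initial orientation 0x0. Opposite ("interval")
# combinations such as TOP|BOTTOM are deliberately absent.
LEGAL_DIRECTIONS = {
    0x0, TOP, RIGHT, BOTTOM, LEFT,
    TOP | RIGHT, BOTTOM | RIGHT, TOP | LEFT, BOTTOM | LEFT,
}

def check_instruction(direction, angle_deg, max_angle_deg=60.0):
    """Step 902: verify the rotation direction and rotation angle.

    Because LEGAL_DIRECTIONS is an enumeration, illegal synthesis
    (e.g. up with down) is rejected automatically; the angle must fall
    in [0, preset value A].
    """
    return direction in LEGAL_DIRECTIONS and 0.0 <= angle_deg <= max_angle_deg
```

Enumerating the legal composites, rather than checking bit adjacency, mirrors the "listed by enumeration" wording above and keeps the validation trivially auditable.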
903: in the case where the instruction data passes the verification, a control instruction is sent to the driver 91.
In the case that the command data passes the verification, the camera control system 90 sends a control command to the camera software driver through the MCU, where the control command includes the rotational direction and the rotational angle of the camera.
904: the drive 91 converts the control instruction into an electrical signal.
After the camera software driver receives the control request, the camera software driver immediately starts to analyze parameters such as the rotation direction and rotation angle of the camera, and converts the parameters into electrical signals which can be identified by a camera hardware controller according to a hardware protocol.
905: the driver 91 sends an electrical signal to the hardware controller 92.
906: the hardware controller 92 controls the spindle to start operating.
After receiving the electric signal sent by the driver 91, the hardware controller 92 starts to control the rotation shaft to work according to the electric signal, so as to adjust the view angle of the camera.
907: in the event of an abnormality, the hardware controller 92 sends a failure code to the drive 91.
After the hardware controller 92 fails to control the rotation shaft to adjust the view angle of the camera, the hardware controller 92 may send a fault code to the driver 91.
908: the drive 91 records and feeds back abnormality information to the upper layer.
The driver 91 receives the fault code sent by the hardware controller 92, records the fault and feeds back the abnormal information to the upper layer, and the flow ends.
Next, a schematic structure of a road information obtaining apparatus according to an embodiment of the present application is described, and please refer to fig. 11. The road information obtaining apparatus in fig. 11 may perform the flow of the road information obtaining method in fig. 4 or 5, as shown in fig. 11, and may include: a gradient sensor 1101, a camera 1102, a processor 1103, a display 1104 and a memory 1105, wherein the gradient sensor 1101 is used for detecting a gradient value of a road surface on which the vehicle is running; the camera 1102 is used for shooting road condition information in front of the vehicle; the memory 1105 is configured to store the gradient value detected by the gradient sensor 1101 and a timestamp corresponding to the detected gradient value; the processor 1103 is configured to determine a view angle of the camera according to the gradient value and a timestamp corresponding to the gradient value, and control the camera 1102 to adjust to the view angle; the display 1104 is used for outputting road condition information shot by the camera 1102.
It should be understood that the above division of the road information acquisition apparatus into units is merely a division by logical function; in actual implementation the units may be wholly or partly integrated into one physical entity, or may be physically separate. For example, each of the above units may be a separately established processing element, or they may be integrated into the same chip; alternatively, they may be stored in a memory element of the controller in the form of program code, and a processing element of the processor may call and execute their functions. In addition, the units may be integrated together or implemented independently. The processing element here may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method or units may be completed by integrated logic circuits of hardware in the processor element or by instructions in the form of software. The processing element may be a general-purpose processor, such as a CPU, or may be one or more integrated circuits configured to implement the above method, for example one or more ASICs (application-specific integrated circuits), one or more DSPs (digital signal processors), or one or more FPGAs (field-programmable gate arrays), etc.
Fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 12, the electronic device 120 includes a processor 1201, a memory 1202, a communication interface 1203, and a display 1204; the processor 1201, the memory 1202, the communication interface 1203, and the display 1204 are connected to each other via a bus. The electronic device may be the road information acquiring apparatus described above.
The memory 1202 includes, but is not limited to, RAM (random access memory), ROM (read-only memory), EPROM (erasable programmable read-only memory), or CD-ROM (compact disc read-only memory), and is used for storing related instructions and data. The communication interface 1203 is used to receive and transmit data.
The processor 1201 may be one or more CPUs (central processing units); in the case where the processor 1201 is one CPU, the CPU may be a single-core CPU or a multi-core CPU. The steps performed by the road information acquiring apparatus in the above embodiment may be based on the structure of the electronic device shown in fig. 12. In particular, the processor 1201 may implement the functions of the processor 1103 in fig. 11.
The display 1204 may be a screen with a display function in a vehicle, such as an instrument cluster screen, a center control screen, or a head-up display (HUD). In particular, the display 1204 may implement the functions of the display 1104 in fig. 11.
The processor 1201 in the electronic device 120 is configured to read the program code stored in the memory 1202 and execute the road information acquiring method in the foregoing embodiment.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the following steps: acquiring first gradient data and second gradient data, wherein the first gradient data comprises a first gradient value, the second gradient data comprises a second gradient value, the first gradient value is a gradient value of a road surface on which the vehicle runs at a first moment, and the second gradient value is a gradient value of the road surface on which the vehicle runs at a second moment; adjusting the view angle of the camera according to the difference between the first gradient data and the second gradient data; and displaying the road information shot by the camera to a user.
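The camera on/off behaviour described in the embodiments (the camera is opened when the detected gradient exceeds a threshold and closed once the gradient falls back, cf. claims 6-7) can be illustrated with a small state holder. The class name and the default threshold are hypothetical; this is a sketch of the control flow under those assumptions, not the claimed implementation.

```python
class GradientCameraController:
    """Tracks whether the front camera should be on, based on the
    gradient value reported by the gradient sensor."""

    def __init__(self, on_threshold=8.0):
        # Gradient (in degrees) above which the camera is opened;
        # an illustrative value, not taken from the application.
        self.on_threshold = on_threshold
        self.camera_on = False

    def update(self, gradient_value):
        """Feed one gradient reading; return True while the camera is on."""
        if not self.camera_on and gradient_value > self.on_threshold:
            self.camera_on = True   # steep road ahead: open the camera
        elif self.camera_on and gradient_value <= self.on_threshold:
            self.camera_on = False  # gradient back to normal: close it
        return self.camera_on
```

Feeding successive gradient readings into `update` turns the camera on as the vehicle approaches a crest and off again once the road flattens, matching the detect-then-close sequence of the third and fourth gradient data in the claims.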
The present application also provides a computer program product containing instructions, which when run on a computer, cause the computer to perform the road information acquisition method provided in the foregoing embodiment.
It will be apparent to those skilled in the art that embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described in terms of flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the invention has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and equivalent substitutions may be made without departing from the scope of the invention. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A road information acquisition method characterized by comprising the steps of:
acquiring first gradient data and second gradient data, wherein the first gradient data comprises a first gradient value, the second gradient data comprises a second gradient value, the first gradient value is a gradient value of a road surface on which a vehicle runs at a first moment, and the second gradient value is a gradient value of the road surface on which the vehicle runs at a second moment;
adjusting the view angle of the camera according to the difference between the first gradient data and the second gradient data;
and displaying the road information shot by the camera to a user.
2. The method of claim 1, wherein adjusting the view angle of the camera according to the difference between the first gradient data and the second gradient data comprises:
under the condition that the difference between the first gradient value and the second gradient value is greater than a first threshold value, adjusting the view angle of the camera according to the difference between the first gradient data and the second gradient data.
3. The method of claim 2, wherein adjusting the view angle of the camera according to the difference between the first gradient data and the second gradient data in the case where the difference between the first gradient value and the second gradient value is greater than the first threshold value comprises:
under the condition that the difference between the first gradient value and the second gradient value is greater than the first threshold value, adjusting the view angle of the camera from a first angle to a second angle, wherein the first angle is the view angle of the camera at the first moment, and the second angle is a view angle corresponding to the view blind area in front of the vehicle at the second moment.
4. The method of claim 3, wherein a difference between the first gradient value and the second gradient value is equal to a difference between the first angle and the second angle.
5. The method according to any one of claims 2-4, wherein the first gradient data further includes a first timestamp, the second gradient data further includes a second timestamp, the first timestamp is a timestamp corresponding to the first moment, the second timestamp is a timestamp corresponding to the second moment, and adjusting the view angle of the camera according to the difference between the first gradient data and the second gradient data in the case where the difference between the first gradient value and the second gradient value is greater than the first threshold value comprises:
when the difference between the first gradient value and the second gradient value is greater than the first threshold value and the interval between the first timestamp and the second timestamp is greater than a second threshold value, adjusting the view angle of the camera from the first angle to the second angle.
6. The method of any of claims 1-4, further comprising, prior to acquiring the first gradient data and the second gradient data:
detecting third gradient data at a third moment, wherein the third gradient data comprises a third gradient value, the third gradient value is a gradient value of the road surface on which the vehicle runs at the third moment, and the third moment is earlier than the first moment;
and under the condition that the third gradient value is larger than a third threshold value, opening the camera, and displaying the road information shot by the camera to a user.
7. The method of claim 6, wherein after adjusting the view angle of the camera according to the difference between the first gradient data and the second gradient data, the method further comprises:
detecting fourth gradient data at a fourth moment, wherein the fourth gradient data comprises a fourth gradient value, the fourth gradient value is a gradient value of the road surface on which the vehicle runs at the fourth moment, and the fourth moment is later than the second moment;
and closing the camera under the condition that the fourth gradient value is smaller than or equal to the third threshold value.
8. A road information acquisition apparatus, characterized by comprising a gradient sensor, a camera, a processor, a display and a memory, wherein
the gradient sensor is used for detecting a gradient value of a road surface on which the vehicle runs;
the camera is used for shooting road condition information in front of the vehicle;
the memory is used for storing the gradient value detected by the gradient sensor and a timestamp corresponding to the moment the gradient value is detected;
the processor is used for determining the view angle of the camera according to the gradient value and the timestamp corresponding to the gradient value, and for controlling the camera to adjust to the view angle;
the display is used for outputting the road condition information shot by the camera.
9. An electronic device, comprising: a memory for storing a program; a processor for executing the program stored by the memory, the processor being for performing the method of any one of claims 1 to 7 when the program is executed.
10. A computer-readable storage medium, in which a computer program is stored which, when executed by one or more processors, performs the method of any one of claims 1 to 7.
CN202111581063.7A 2021-12-22 2021-12-22 Road information acquisition method, device, equipment and storage medium Pending CN116331218A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111581063.7A CN116331218A (en) 2021-12-22 2021-12-22 Road information acquisition method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116331218A true CN116331218A (en) 2023-06-27

Family

ID=86877579



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination