CN115965938A - Riding safety detection method and device based on intelligent helmet and storage medium - Google Patents


Info

Publication number
CN115965938A
CN115965938A (application number CN202211730029.6A)
Authority
CN
China
Prior art keywords
information
rider
road type
motor vehicle
road
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211730029.6A
Other languages
Chinese (zh)
Inventor
沈子羡
俞炳荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN202211730029.6A
Publication of CN115965938A
Legal status: Pending

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The invention discloses a riding safety detection method, device and storage medium based on an intelligent helmet. The method comprises the following steps: acquiring position information of a rider, image information around the rider and sound information around the rider; judging the type of road on which the rider is riding according to the position information, the image information and the sound information; judging whether to start collision detection according to the road type and, if collision detection is started, obtaining a collision detection result; and judging whether to display rear road information according to the road type and/or the collision detection result. In the embodiments of the invention, whether image information behind the rider is presented in the AR glasses is decided according to the road type or the collision detection result. This avoids the AR glasses displaying the rear image information throughout the ride, which reduces the power consumption of the intelligent helmet and at the same time improves riding safety and the comfort of wearing the helmet.

Description

Riding safety detection method and device based on intelligent helmet and storage medium
Technical Field
The invention relates to the field of wearable devices, and in particular to a riding safety detection method and device based on an intelligent helmet, and a storage medium.
Background
More and more users ride bicycles or electric bicycles; for example, take-away riders generally use electric bicycles for food delivery, and cycling enthusiasts ride bicycles for physical exercise. Because bicycles are not fitted with rear-view mirrors and have no corresponding active safety devices, safety incidents easily occur while these users are riding.
While riding, a rider cannot observe the image information behind him in real time. At present, the information behind the rider is generally displayed in real time through AR glasses. However, having the AR glasses display the rear information throughout the ride interferes with riding and thereby increases the riding risk; at the same time, keeping the AR glasses always on increases their power consumption and shortens their working time.
Disclosure of Invention
The main purpose of the invention is to provide a riding safety detection method, device and storage medium based on an intelligent helmet, in order to solve the problems in the prior art that displaying the rear information through AR glasses throughout the ride interferes with riding, thereby increasing the riding risk and shortening the working time of the AR glasses.
In order to achieve the above purpose, the invention provides a riding safety detection method based on an intelligent helmet, which comprises the following steps:
S1: acquiring position information of a rider, and judging a first road type on which the rider is riding according to the position information; and/or acquiring image information around the rider, and judging a second road type on which the rider is riding according to the image information; and/or acquiring sound information around the rider, and judging a third road type on which the rider is riding according to the sound information;
S2: judging whether to start collision detection according to the first road type and/or the second road type and/or the third road type, and if collision detection is started, obtaining a collision detection result;
S3: judging whether to display rear road information according to the first road type and/or the second road type and/or the third road type and/or the collision detection result.
Optionally, the acquiring position information of the rider and judging the first road type on which the rider is riding according to the position information comprises:
acquiring the position information of the rider by using a position sensor;
querying map data using the position information to obtain the first road type on which the rider is riding, wherein the first road type comprises at least one of: non-motor vehicle lane, intersection.
Optionally, the acquiring image information around the rider and judging the second road type on which the rider is riding according to the image information comprises the following steps:
acquiring first image information by using a first image acquisition unit of the intelligent helmet, wherein the first image information is image information in front of the intelligent helmet;
acquiring second image information by using a second image acquisition unit of the intelligent helmet, wherein the second image information is image information behind the intelligent helmet;
processing the first image information and/or the second image information by using a first preset network model to obtain the second road type on which the rider is riding, wherein the second road type comprises at least one of: non-motor vehicle lane, intersection.
Optionally, the acquiring sound information around the rider and judging the third road type on which the rider is riding according to the sound information comprises:
acquiring the sound information around the rider by using a sound acquisition device;
classifying the sound information to obtain sound types, wherein the sound types comprise at least one of: human voice, motor vehicle driving sound, non-motor vehicle driving sound;
when the sound types contain only motor vehicle driving sound, the third road type on which the rider is riding is a motor vehicle lane;
when the sound types contain only non-motor vehicle driving sound, the third road type on which the rider is riding is a non-motor vehicle lane;
when the sound types comprise non-motor vehicle driving sound and human voice, the third road type on which the rider is riding is an intersection.
Optionally, step S2 comprises the following steps:
judging whether the first road type and/or the second road type and/or the third road type is a motor vehicle lane or an intersection;
if the first road type and/or the second road type and/or the third road type is a motor vehicle lane or an intersection, starting collision detection to obtain a collision detection result;
if the first road type and/or the second road type and/or the third road type is not a motor vehicle lane or an intersection, not starting collision detection.
Optionally, the starting collision detection to obtain a collision detection result comprises the following steps:
acquiring third image information by using a third image acquisition unit of the intelligent helmet, wherein the third image acquisition unit is a monocular camera or a binocular camera;
processing the third image information by using a second preset network model to obtain motor vehicle information and/or non-motor vehicle information and/or pedestrian information behind the rider;
wherein the motor vehicle information comprises at least one of: motor vehicle type, motor vehicle speed, motor vehicle position, distance from the motor vehicle to the rider;
the non-motor vehicle information comprises at least one of: non-motor vehicle type, non-motor vehicle speed, non-motor vehicle position, distance from the non-motor vehicle to the rider;
the pedestrian information comprises at least one of: pedestrian speed, pedestrian position, distance from the pedestrian to the rider;
judging whether the rider will collide with the motor vehicle according to the motor vehicle information and the riding information, and if so, calculating a first collision time;
judging whether the rider will collide with the non-motor vehicle according to the non-motor vehicle information and the riding information, and if so, calculating a second collision time;
judging whether the rider will collide with the pedestrian according to the pedestrian information and the riding information, and if so, calculating a third collision time.
Optionally, step S3 comprises the following steps:
if the first road type and/or the second road type and/or the third road type is a motor vehicle lane or an intersection, displaying the rear road information; or,
if the collision detection result is that the rider will collide with the motor vehicle, displaying the rear road information and the first collision time; or,
if the collision detection result is that the rider will collide with the non-motor vehicle, displaying the rear road information and the second collision time; or,
if the collision detection result is that the rider will collide with the pedestrian, displaying the rear road information and the third collision time.
Optionally, the method further comprises the following step:
after riding steering information is received, displaying the rear road information.
In addition, in order to achieve the above object, the present invention further provides a riding safety detecting device based on an intelligent helmet, the device comprising: the system comprises a helmet (2), an optical display unit (1), a control unit (3), a first image acquisition unit (4), a second image acquisition unit (5), a third image acquisition unit (6), a positioning unit (7) and a steering control unit (8);
the optical display unit (1) is rotatably connected with the helmet (2) so that the optical display unit (1) can be located at a first position or a second position; when the optical display unit (1) is at the first position, the optical display unit (1) is located in front of the eyes of a user; when the optical display unit (1) is at the second position, the optical display unit (1) is located above the eyes of the user;
the first image acquisition unit (4) is installed at the front of the helmet (2), the second image acquisition unit (5) and the third image acquisition unit (6) are installed at the rear of the helmet (2), the positioning unit (7) is installed on top of the helmet (2), and the control unit (3) is installed inside the helmet (2);
the first image acquisition unit (4) is used for acquiring first image information in front of the helmet (2);
the second image acquisition unit (5) is used for acquiring second image information behind the helmet (2);
the third image acquisition unit (6) is used for acquiring third image information behind the helmet (2);
the positioning unit (7) is used for acquiring the position information of the rider;
the optical display unit (1) is used for displaying rear road information;
the steering control unit (8) is used for receiving riding steering information initiated by a user and then sending the riding steering information to the control unit (3);
the control unit (3) is used for judging the first road type on which the rider is riding according to the position information; is also used for judging the second road type on which the rider is riding according to the first image information and/or the second image information; is also used for judging whether to start collision detection according to the first road type and/or the second road type, and obtaining a collision detection result if collision detection is started; is also used for judging whether to display the rear road information according to the first road type and/or the second road type and/or the collision detection result; and is also used for controlling the optical display unit (1) to display the rear road information after the riding steering information is received.
In addition, in order to achieve the above object, the present invention further provides a computer readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the intelligent helmet-based riding safety detection method as described above.
According to the embodiments of the invention, whether image information behind the rider is presented in the AR glasses is decided according to the road type or the collision detection result. This avoids the AR glasses displaying the rear image information throughout the ride, which reduces the power consumption of the intelligent helmet and at the same time improves riding safety and the comfort of wearing the helmet.
Drawings
Fig. 1 is a schematic flow diagram of a riding safety detection method based on an intelligent helmet provided by the invention.
Fig. 2 is a schematic flow chart of acquiring a first road type according to the present invention.
Fig. 3 is a schematic flow chart of acquiring a second road type according to the present invention.
Fig. 4 is a schematic flow chart of obtaining a third road type according to the present invention.
Fig. 5 is a schematic flow chart of whether to start collision detection according to the present invention.
Fig. 6 is a schematic flow chart of collision detection provided by the present invention.
Fig. 7 is a schematic flow chart of displaying the rear information according to the present invention.
Fig. 8 is another schematic flow chart of the intelligent helmet-based riding safety detection method provided by the invention.
Fig. 9 is a structural block diagram of an embodiment of a riding safety detection device based on an intelligent helmet provided by the invention.
Fig. 10 is a schematic structural diagram of a hardware operating environment according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
In order to make the technical problems, technical solutions and advantageous effects of the present invention more clear and obvious, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the following description, suffixes such as "module", "component" or "unit" used to denote elements are adopted only to facilitate the description of the invention and have no specific meaning in themselves. Therefore, "module", "component" and "unit" may be used interchangeably.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
In one embodiment, as shown in fig. 1, the invention provides a riding safety detection method based on an intelligent helmet, which includes:
s1, acquiring position information of a rider, and judging a first road type ridden by the rider according to the position information; and/or acquiring image information around the rider, and judging a second road type ridden by the rider according to the image information; and/or acquiring sound information around the rider, and judging a third road type ridden by the rider according to the sound information.
A positioning device (such as a GPS positioning device or a BeiDou positioning device) is installed in the intelligent helmet, and the position information of the rider wearing the intelligent helmet is acquired in real time. The corresponding road type is then obtained according to the position information of the rider; for the specific process, refer to the flow described in fig. 2.
Step S101 acquires position information of the rider using the position sensor.
A positioning device (such as a GPS positioning device or a BeiDou positioning device) is installed in the intelligent helmet, and the position information of the rider wearing the intelligent helmet is acquired in real time. In order to obtain high-precision position information, differential positioning is performed on the GPS or BeiDou position information to obtain high-precision position information of the rider.
Step S102, querying map data using the position information to obtain the first road type on which the rider is riding; the road type comprises at least one of: non-motor vehicle lane, intersection.
According to the position information of the rider, the map data is queried locally on the intelligent helmet or from a server to obtain the type of road at the current position. The road types at the current position include: non-motor vehicle lane, intersection, etc., and may also include other road types such as squares and parks. The specific road types can be set according to actual requirements and are not limited by this technical scheme.
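As an illustration only, the local query step can be sketched as a nearest-segment lookup. The map structure, coordinates, type strings and function names below are hypothetical placeholders, not the actual map data format used by the helmet or server.

```python
from math import hypot

# Hypothetical local map data: road-segment centre coordinates mapped to a
# road type. A real implementation would query an offline map database or a
# map server; these entries are illustrative placeholders only.
LOCAL_MAP_SEGMENTS = [
    ((121.470, 31.230), "non_motor_lane"),
    ((121.480, 31.230), "intersection"),
    ((121.490, 31.240), "motor_lane"),
]

def lookup_road_type(position, segments=LOCAL_MAP_SEGMENTS):
    """Return the road type of the map segment nearest to the rider's position."""
    px, py = position
    nearest = min(segments, key=lambda seg: hypot(seg[0][0] - px, seg[0][1] - py))
    return nearest[1]
```

The same lookup can run on the server side when the helmet uploads its position over the wireless network.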
When the query is performed by the server, the intelligent helmet uploads its current position information to the server through a wireless network (such as a 4G or 5G network); after receiving the position information, the server queries the map data to obtain the corresponding road type and returns it to the intelligent helmet.
After the road type at the rider's position is obtained through the position query, the road type can be used directly for subsequent processing. The information around the rider can also be further acquired through the front and rear cameras of the intelligent helmet, and the road type can then be obtained through image recognition, thereby improving the accuracy of the road type determination. See in particular the flow described in fig. 3.
Step S201, a first image acquisition unit of the intelligent helmet is used for acquiring first image information, wherein the first image information is image information in front of the intelligent helmet.
A camera A is installed in front of the intelligent helmet, and image information in front of a rider is obtained in real time through the camera A to obtain the image information A.
Step S202, a second image acquisition unit of the intelligent helmet is used for acquiring second image information, wherein the second image information is image information behind the intelligent helmet.
A camera B is installed at the rear of the intelligent helmet, and image information behind the rider is acquired in real time through camera B to obtain image information B.
Step S203, processing the first image information and/or the second image information by using a first preset network model to obtain a second road type for a rider to ride; the second road type includes at least one of: non-motorized lanes, intersections.
A first preset network model is deployed on the intelligent helmet or on the server; the first preset network model can be any of various network models capable of extracting image features. For example, the first preset network model may be a U-Net model, a PSPNet (Pyramid Scene Parsing Network) model, a DenseNet (Densely Connected Convolutional Network) model, a ResNet (Residual Network) model, or a MobileNet model. The first preset network model may have initial parameters, which may be parameters pre-trained on the ImageNet dataset. In the first iterative training process, the first preset network model is trained on the basis of these initial parameters.
In the embodiments of the present application, a supervised model training method is adopted, so that each sample image has a label annotating its true recognition result. In the embodiments of the present application, the recognition result of a sample image comprises a classification result, and classification labels are annotated accordingly.
The various roads are classified, and the classification results are: non-motor vehicle lane, intersection. Other road types can also be included; the specific road types can be set according to actual requirements and are not limited by this technical scheme.
Pictures of the roads corresponding to each classification label are collected, and each picture is annotated with the type it belongs to. The first preset network model is then trained with the annotated pictures to obtain the trained first preset network model.
Several pictures of the road surface taken by the front camera A or the rear camera B of the intelligent helmet are sent to the trained first preset network model for processing to obtain the type of each picture. If the pictures taken by the front camera A or the rear camera B are classified into several different types, the type containing the largest number of pictures prevails. The classification results are shown in the following table:
Picture                          Road type
Front camera A_1 picture.jpeg    Non-motor vehicle lane
Front camera A_2 picture.jpeg    Non-motor vehicle lane
Front camera A_3 picture.jpeg    Non-motor vehicle lane
Rear camera B_1 picture.jpeg     Motor vehicle lane
Rear camera B_2 picture.jpeg     Non-motor vehicle lane
Rear camera B_3 picture.jpeg     Motor vehicle lane
It is then determined that the type of road on which the rider is currently located is a non-motor vehicle lane.
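The majority-vote rule described above (the type containing the largest number of pictures prevails) can be sketched as follows; the function name and label strings are illustrative, not from the patent.

```python
from collections import Counter

def vote_road_type(frame_labels):
    """Majority vote over per-picture road-type predictions; returns the
    label assigned to the most pictures (ties broken by first occurrence)."""
    if not frame_labels:
        return None
    return Counter(frame_labels).most_common(1)[0][0]

# Labels corresponding to the table above: three front-camera pictures and
# three rear-camera pictures.
labels = [
    "non_motor_lane", "non_motor_lane", "non_motor_lane",  # camera A
    "motor_lane", "non_motor_lane", "motor_lane",          # camera B
]
```

With the labels above, four of the six pictures are classified as a non-motor vehicle lane, so the vote returns that type.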
If the first preset network model is deployed on the server, the intelligent helmet uploads the picture or video information taken by its front and rear cameras to the server through a wireless network, such as a 4G or 5G network; the server processes the picture or video information with the first preset network model to obtain the road type at the rider's current position, and then returns the road type to the intelligent helmet through the wireless network.
A sound acquisition device (such as a microphone) is installed in the intelligent helmet, and the sound information around the rider wearing the intelligent helmet is acquired in real time. The corresponding road type is then obtained according to the sound information around the rider; for the specific process, refer to the flow described in fig. 4.
Step S301: acquire the sound information around the rider by using the sound acquisition device.
Step S302: classify the sound information to obtain sound types, wherein the sound types comprise at least one of: human voice, motor vehicle driving sound, non-motor vehicle driving sound.
The sound information around the rider wearing the intelligent helmet is collected in real time by the sound acquisition device (such as a microphone), and sound recognition analysis is then performed on the sound information. If the recording is too long, it can be split into multiple segments, and the sound analysis is then performed segment by segment.
Sound recognition analysis is performed on the recorded information; specifically, the sound information is classified to obtain the sound types, which comprise human voice, motor vehicle driving sound and non-motor vehicle driving sound.
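The segment-by-segment analysis of a long recording can be sketched as fixed-length windowing; the window length, sample representation and function name are illustrative assumptions.

```python
def split_recording(samples, sample_rate_hz, window_s=5.0):
    """Split a long recording into fixed-length windows so that each window
    can be classified separately; the final window may be shorter."""
    step = max(1, int(sample_rate_hz * window_s))
    return [samples[i:i + step] for i in range(0, len(samples), step)]
```

Each window is then passed to the sound classifier independently, and the per-window sound types are collected for the rules of steps S303 to S305.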
Step S303: when the sound types contain only motor vehicle driving sound, the third road type on which the rider is riding is a motor vehicle lane.
Step S304: when the sound types contain only non-motor vehicle driving sound, the third road type on which the rider is riding is a non-motor vehicle lane.
Step S305: when the sound types comprise non-motor vehicle driving sound and human voice, the third road type on which the rider is riding is an intersection.
After the sound types are obtained through sound analysis, the analysis is performed on the sound information within a certain time range (such as 60 seconds); the obtained sound types may be of several kinds, such as all three of human voice, motor vehicle driving sound and non-motor vehicle driving sound, or only one of them.
If the sound types contain only motor vehicle driving sound, the third road type on which the rider is riding is a motor vehicle lane; if the sound types contain only non-motor vehicle driving sound, the third road type is a non-motor vehicle lane; if the sound types comprise non-motor vehicle driving sound and human voice, the third road type is an intersection.
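The three rules above map the detected set of sound classes to a road type; a minimal sketch, with label strings chosen for illustration only:

```python
def road_type_from_sounds(sound_types):
    """Apply the rules of steps S303-S305: sound_types is the set of sound
    classes detected in the analysis window, drawn from
    {"human_voice", "motor_vehicle", "non_motor_vehicle"}.
    Returns None for combinations the rules do not cover."""
    s = set(sound_types)
    if s == {"motor_vehicle"}:
        return "motor_lane"
    if s == {"non_motor_vehicle"}:
        return "non_motor_lane"
    if {"non_motor_vehicle", "human_voice"} <= s:
        return "intersection"
    return None
```

Returning None for uncovered combinations lets the helmet fall back on the position-based or image-based road type.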
Judging the road type from the sound information around the rider avoids the problem that the road type cannot be obtained when the intelligent helmet cannot acquire position information or image information. For example, when riding at night, the road type cannot be judged from images captured by the camera; in an underground passage, position information cannot be acquired through GPS to judge the road type.
S2, judging whether to start collision detection according to the first road type and/or the second road type and/or the third road type, and if collision detection is started, obtaining a collision detection result.
The road type at the current position of the rider (wearing the intelligent helmet) is obtained, and whether to start collision detection is judged according to the road type. Refer specifically to the flow shown in fig. 5:
Step S401: judge whether the first road type and/or the second road type and/or the third road type is a motor vehicle lane or an intersection.
Step S402: if the first road type and/or the second road type and/or the third road type is a motor vehicle lane or an intersection, start collision detection to obtain a collision detection result.
After the image information is acquired through the intelligent helmet, the road type at the rider's current position is judged. If the road type is an intersection or a motor vehicle lane, collision detection needs to be started to judge whether the rider will collide with other vehicles or pedestrians.
When a bicycle or electric bicycle is ridden at an intersection or on a motor vehicle lane, the road conditions are complex (for example, there are many vehicles and pedestrians), so the rider needs to be reminded during riding of whether there is a danger of collision with surrounding vehicles or pedestrians. For the collision detection process, refer to the flow chart in fig. 6.
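The decision of whether to start collision detection (steps S401 to S403) reduces to a simple predicate over whichever road-type estimates are available; a sketch, with names assumed for illustration:

```python
COLLISION_ROAD_TYPES = {"motor_lane", "intersection"}

def should_enable_collision_detection(*road_types):
    """Return True if any available estimate (position-, image- or
    sound-based; None when unavailable) is a motor vehicle lane or an
    intersection; otherwise collision detection stays off."""
    return any(rt in COLLISION_ROAD_TYPES for rt in road_types if rt is not None)
```

Keeping detection off on a plain non-motor vehicle lane is what reduces the helmet's computational load and battery drain.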
Step S501: acquire third image information by using the third image acquisition unit of the intelligent helmet, wherein the third image acquisition unit is a monocular camera or a binocular camera.
A monocular or binocular camera is installed at the rear of the intelligent helmet, and image information behind the rider is acquired in real time through the rear camera. The image information captured by the binocular camera can be used to obtain depth-of-field information in the image, that is, the distance between objects behind the rider and the rider.
The position information of an object (such as a vehicle) relative to the rider can also be obtained from image information captured by the monocular camera. This belongs to the prior art, for example patent application 201710225288.6, and is not described in detail in the present invention.
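The depth-of-field recovery from a binocular camera mentioned above follows the standard pinhole stereo relation Z = f·B/d; a sketch, where the parameter values in the test are illustrative, not the helmet's actual calibration:

```python
def stereo_depth_m(focal_px, baseline_m, disparity_px):
    """Depth of a point from a rectified stereo pair: Z = f * B / d, where
    f is the focal length in pixels, B the distance between the two cameras
    in metres, and d the horizontal pixel disparity of the same point in
    the two views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px
```

Closer objects produce larger disparities, so halving the distance doubles the disparity; this is how the distance between a following vehicle and the rider can be estimated.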
Step S502: process the third image information by using the second preset network model to obtain motor vehicle information and/or non-motor vehicle information and/or pedestrian information behind the rider.
The rear camera of the intelligent helmet (such as a binocular camera) acquires image information behind the rider in real time, and the acquired images are then processed with the second preset network model to obtain the motor vehicle information, non-motor vehicle information and pedestrian information behind the rider.
The motor vehicle information comprises at least one of: motor vehicle type, motor vehicle speed, motor vehicle position, distance from the motor vehicle to the rider. The non-motor vehicle information comprises at least one of: non-motor vehicle type, non-motor vehicle speed, non-motor vehicle position, distance from the non-motor vehicle to the rider. The pedestrian information comprises at least one of: pedestrian speed, pedestrian position, distance from the pedestrian to the rider.
Which specific algorithm the second preset network model uses is not limited by this technical scheme; a suitable algorithm can be selected according to actual requirements. Various technologies already exist in the prior art for the collision warning systems of Advanced Driver Assistance Systems (ADAS) to detect whether a vehicle is about to collide.
The technical principle of vehicle collision detection is as follows:
Signal acquisition system: the speed of the host vehicle, the speed of the vehicle ahead and the distance between the two vehicles are measured automatically using technologies such as cameras;
Data processing system: a computer chip processes the distance between the two vehicles and their instantaneous relative speed, and then judges the safe following distance; if the actual distance is smaller than the safe distance, the data processing system issues an instruction. Alternatively, the computer chip calculates the Time To Collision (TTC) of the two vehicles to evaluate the degree of danger, and then issues alarm and braking instructions.
Actuator: responsible for carrying out the instructions issued by the data processing system, for example raising an alarm and prompting the user to brake or take evasive action.
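As a hedged illustration only, the two-stage judgment described above (a safety-distance check plus a TTC-based danger estimate) can be sketched as follows; the function names, thresholds and units are assumptions of this sketch, not part of the disclosure.

```python
def time_to_collision(gap_m, rear_speed_mps, rider_speed_mps):
    """Seconds until the rear vehicle closes the gap; None if it is not closing."""
    closing = rear_speed_mps - rider_speed_mps
    if closing <= 0:
        return None  # rear vehicle is not catching up
    return gap_m / closing

def collision_decision(gap_m, rear_speed_mps, rider_speed_mps,
                       safe_gap_m=15.0, ttc_warn_s=3.0):
    """Return 'alarm', 'warn', or 'ok' (thresholds are illustrative only)."""
    if gap_m < safe_gap_m:
        return "alarm"  # closer than the safe following distance
    ttc = time_to_collision(gap_m, rear_speed_mps, rider_speed_mps)
    if ttc is not None and ttc < ttc_warn_s:
        return "warn"   # TTC below the warning threshold
    return "ok"
```

The first branch corresponds to the safe-distance judgment and the second to the TTC judgment; a real system would also smooth the measured speeds before applying either rule.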
The rider's speed can be obtained from the position information and time information of the positioning device. A pose acquisition device can also be arranged in the intelligent helmet, for example a six-axis attitude sensor that acquires the helmet's pose information; collision detection is then computed from the pose information together with the rear image information.
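As one possible illustration of deriving the rider's speed from the positioning device's position and time information, the sketch below divides the haversine distance between two timestamped GPS fixes by the elapsed time; the fix format `(t_seconds, lat, lon)` is an assumption of this sketch.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 fixes."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def rider_speed_mps(fix_a, fix_b):
    """Speed between two positioning samples; each fix is (t_seconds, lat, lon)."""
    (ta, la, lo_a), (tb, lb, lo_b) = fix_a, fix_b
    dt = tb - ta
    if dt <= 0:
        raise ValueError("non-increasing timestamps")
    return haversine_m(la, lo_a, lb, lo_b) / dt
```

In practice the raw GPS speed would be filtered (or fused with the six-axis sensor) before being used in the collision calculation.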
And S503, judging whether the rider collides with the motor vehicle or not according to the motor vehicle information and the riding information, and if so, calculating first collision time.
And step S504, judging whether the rider collides with the non-motor vehicle or not according to the non-motor vehicle information and the riding information, and calculating second collision time if the rider collides with the non-motor vehicle.
And S505, judging whether the rider collides with the pedestrian or not according to the pedestrian information and the riding information, and calculating third collision time if the rider collides with the pedestrian.
After processing the rear image information, the intelligent helmet judges the possibility that the rider will collide with a motor vehicle, non-motor vehicle or pedestrian behind, and then calculates the corresponding collision time, as shown in the following table:
[Table: reproduced only as images in the source publication; its content is not recoverable as text.]
step S403, if the first road type and/or the second road type and/or the third road type is not a motor lane or an intersection, not starting collision detection.
If the road type at the rider's position is not a motor vehicle lane or an intersection but a non-motor vehicle lane, the probability that the rider collides with other vehicles or pedestrians is lower, and any damage caused by a collision is small, so the intelligent helmet does not need to start collision detection; this reduces the helmet's computation load and battery power consumption.
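The gating rule of steps S401 to S403 — run collision detection only when some road-type estimate indicates a motor vehicle lane or an intersection — can be sketched as follows. The label strings and the "any estimate suffices" fusion of the three estimates are illustrative assumptions; the patent does not fix how the estimates are combined.

```python
MOTOR_LANE = "motor_lane"
NON_MOTOR_LANE = "non_motor_lane"
INTERSECTION = "intersection"

def should_run_collision_detection(road_types):
    """road_types: iterable of the available estimates (from position,
    image and/or sound). Detection is enabled only when at least one
    estimate says the rider is on a motor vehicle lane or at an
    intersection; otherwise it stays off to save computation and power."""
    return any(t in (MOTOR_LANE, INTERSECTION) for t in road_types)
```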
And S3, judging whether rear road information is displayed or not according to the first road type and/or the second road type and/or the third road type and/or the collision detection result.
According to the road type at the rider's position, or according to whether a collision is predicted, the intelligent helmet judges whether the rear image information and collision warning information need to be displayed through the AR glasses or HUD in front of the helmet. The rider can then perform corresponding operations, such as leaving the motor vehicle lane or accelerating, according to the rear road image information or the collision warning information. The specific process is shown in FIG. 7:
And step S601, if the first road type and/or the second road type and/or the third road type is a motor vehicle lane or an intersection, displaying rear road information.
Whether the road type at the rider's position is a motor vehicle lane or an intersection is judged; if so, the image information behind the intelligent helmet, such as the road information behind the rider, is displayed through the AR glasses in front of the helmet or through the display screen of the HUD.
AR glasses or a HUD are installed at the goggle position of the intelligent helmet, and the AR glasses or HUD can rotate together with the goggles. Alternatively, the AR glasses or HUD can be used directly as the goggles, so that separate goggles do not need to be installed on the helmet.
When riding at an intersection or on a motor vehicle lane, the rider can view the road information behind in real time through the AR glasses or HUD installed on the intelligent helmet, making it convenient to perform corresponding operations according to the rear road information, such as leaving the motor vehicle lane or accelerating.
And step S602, displaying rear road information and the first collision time when the collision detection result indicates that the rider collides with the motor vehicle.
And S603, displaying rear road information and the second collision time when the collision detection result indicates that the rider collides with the non-motor vehicle.
And step S604, displaying rear road information and the third collision time when the collision detection result indicates that the rider collides with the pedestrian.
From the image obtained by the rear camera, the intelligent helmet judges that there will be a collision with a motor vehicle, non-motor vehicle or pedestrian behind, obtains the collision time, and then outputs the road image information behind the rider together with the collision time to the AR glasses or HUD, presenting them to the rider in real time. The rider then takes corresponding action according to the rear road information and the collision time.
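Steps S601 to S604 above amount to a small display policy. The sketch below illustrates it under assumed names: `collision` is `None` or a `(target, ttc_seconds)` pair, and the returned dict stands in for whatever the AR glasses or HUD actually render.

```python
def hud_content(road_types, collision):
    """Decide what the AR glasses / HUD should show.

    road_types: the available road-type estimates for the rider's position.
    collision:  None, or (target, ttc_s) from the collision detector.
    """
    show_rear = any(t in ("motor_lane", "intersection") for t in road_types)
    if collision is not None:
        target, ttc_s = collision
        # S602-S604: rear view plus the computed collision time
        return {"rear_view": True, "warning": f"{target} in {ttc_s:.1f}s"}
    if show_rear:
        # S601: motor lane or intersection -> rear view only
        return {"rear_view": True, "warning": None}
    # non-motor lane, no collision: the optics act as a plain windshield
    return {"rear_view": False, "warning": None}
```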
When the road type at the rider's position is a non-motor vehicle lane, the rear road information does not need to be presented through the AR glasses or HUD of the intelligent helmet; the AR glasses or HUD then serve only as a windshield, which helps the rider concentrate on riding and improves the riding experience.
According to the embodiment of the invention, whether the image information behind the rider is presented in the AR glasses or the HUD is judged according to the road type or the collision detection result. This avoids the AR glasses or HUD constantly displaying the rear image information throughout the ride, reducing the power consumption of the intelligent helmet while improving riding safety and the comfort of wearing the helmet.
In addition, the present invention provides another embodiment, as shown in fig. 8, which further includes the following steps based on the embodiment shown in fig. 1:
And S4, displaying rear road information after the riding steering information is received.
A button is installed on each of the left and right handlebars of the bicycle or electric bicycle, and the buttons are connected to the intelligent helmet wirelessly, for example via Bluetooth. When the rider needs to steer, the rider triggers riding steering by pressing the button mounted on the handlebar. For example, if the rider needs to turn right, the rider presses the wireless button mounted on the right handlebar; after the wireless button receives the pressing operation, it sends riding steering information indicating a right turn to the intelligent helmet.
After receiving the riding steering information (for example, steering information indicating a right turn), the intelligent helmet acquires the road image information behind the rider in real time through the camera at the rear of the helmet, and then presents it to the rider in real time through the AR glasses or HUD, so that the rider can perform the steering operation according to the rear road information.
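The steering-triggered display described above can be sketched as a small controller: a handlebar-button event turns the rear view on for a hold period, after which the optics revert to windshield mode. The class name, hold duration and clock injection are assumptions of this sketch; the patent does not specify how long the rear view stays on.

```python
import time

class SteerDisplayController:
    """On a steering signal from a paired handlebar button, show the rear
    view for a short hold period (the 5 s default is illustrative only)."""

    def __init__(self, hold_s=5.0, clock=time.monotonic):
        self.hold_s = hold_s
        self.clock = clock      # injectable for testing
        self._until = 0.0       # monotonic deadline for the rear view
        self.direction = None   # last requested turn direction

    def on_steer_signal(self, direction):
        """direction: 'left' or 'right', sent by the Bluetooth button."""
        self.direction = direction
        self._until = self.clock() + self.hold_s

    def rear_view_active(self):
        return self.clock() < self._until
```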
According to the embodiment of the invention, whether the image information behind the rider is presented in the AR glasses or the HUD is judged according to the rider's steering intention. This avoids the AR glasses or HUD constantly displaying the rear image information throughout the ride, reducing the power consumption of the intelligent helmet while improving riding safety and the comfort of wearing the helmet.
In addition, an embodiment of the present invention further provides a riding safety detection device based on an intelligent helmet, and with reference to fig. 9, the riding safety detection device based on the intelligent helmet includes:
a helmet (2), an optical display unit (1), a control unit (3), a first image acquisition unit (4), a second image acquisition unit (5), a third image acquisition unit (6), a positioning unit (7) and a steering control unit (8);
the optical display unit (1) is rotatably connected to the helmet (2) so that the optical display unit (1) can be placed in a first position or a second position; when the optical display unit (1) is in the first position, it is located in front of the user's eyes; when the optical display unit (1) is in the second position, it is located above the user's eyes;
the first image acquisition unit (4) is installed in front of the helmet (2), the second image acquisition unit (5) and the third image acquisition unit (6) are installed in the rear of the helmet (2), the positioning unit (7) is installed above the helmet (2), and the control unit (3) is installed inside the helmet (2);
the first image acquisition unit (4) is used for acquiring first image information in front of the helmet (2);
the second image acquisition unit (5) is used for acquiring second image information behind the helmet (2);
the third image acquisition unit (6) is used for acquiring third image information behind the helmet (2);
the positioning unit (7) is used for acquiring the position information of the rider;
the optical display unit (1) is used for displaying rear road information;
the steering control unit (8) is used for receiving riding steering information initiated by a user and then sending the riding steering information to the control unit (3);
the control unit (3) is used for judging, according to the position information, the first road type on which the rider rides; for judging, according to the first image information and/or the second image information, the second road type on which the rider rides; for judging, according to the first road type and/or the second road type, whether to start collision detection, and obtaining a collision detection result if collision detection is started; for judging whether to display rear road information according to the first road type and/or the second road type and/or the collision detection result; and for controlling the optical display unit (1) to display rear road information after the riding steering information is received.
According to the embodiment of the invention, whether the image information behind the rider is presented in the AR glasses is judged according to the road type or the collision detection result. This avoids the AR glasses constantly displaying the rear image information throughout the ride, reducing the power consumption of the intelligent helmet while improving riding safety and the comfort of wearing the helmet.
Referring to fig. 10, fig. 10 is a schematic structural diagram of a hardware operating environment according to an embodiment of the present invention.
As shown in fig. 10, the hardware execution environment may include: a processor 1001 (e.g., a CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to enable connective communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may further include standard wired and wireless interfaces. The network interface 1004 may optionally include standard wired interfaces and wireless interfaces (e.g., Wi-Fi, 4G, 5G interfaces). The memory 1005 may be a high-speed RAM memory or a non-volatile memory, such as a magnetic disk memory. The memory 1005 may alternatively be a storage device separate from the aforementioned processor 1001.
Those skilled in the art will appreciate that the architecture shown in FIG. 10 does not constitute a limitation of the hardware operating environment and may include more or fewer components than those shown, or some components in combination, or a different arrangement of components.
As shown in fig. 10, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and a smart helmet-based riding safety detection program.
In the hardware operating environment shown in FIG. 10, the network interface 1004 is primarily used for data communication with external networks; the user interface 1003 is mainly used for receiving input instructions of a user; the hardware operating environment calls the intelligent helmet-based riding safety detection program stored in the memory 1005 through the processor 1001, and performs the following operations:
S1: acquiring position information of a rider, and judging a first road type of the rider according to the position information; and/or acquiring image information around the rider, and judging a second road type ridden by the rider according to the image information; and/or acquiring sound information around the rider, and judging a third road type ridden by the rider according to the sound information;
S2: judging whether to start collision detection according to the first road type and/or the second road type and/or the third road type, and if the collision detection is started, obtaining a collision detection result;
S3: judging whether to display rear road information according to the first road type and/or the second road type and/or the third road type and/or the collision detection result.
Optionally, the obtaining the position information of the rider and determining the first road type of the rider according to the position information includes:
acquiring position information of a rider by using a position sensor;
using the position information to inquire map data to obtain a first road type of the rider; the first road type comprises at least one of: non-motor vehicle lanes, crossroads.
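Looking up the first road type from the position information might look like the following sketch. The map interface and the road-class strings (e.g. OpenStreetMap-style `cycleway`/`primary`) are hypothetical; the patent does not name a map data source.

```python
def first_road_type(position, map_lookup):
    """position: (lat, lon) from the position sensor.
    map_lookup: callable returning the map provider's road class for that
    point (hypothetical interface). Returns the patent's road-type label,
    or None when the map class does not match a known category."""
    road_class = map_lookup(position)
    mapping = {
        "cycleway": "non_motor_lane",   # dedicated non-motor vehicle lane
        "crossing": "intersection",
        "primary": "motor_lane",
    }
    return mapping.get(road_class)
```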
Optionally, the acquiring image information of the surroundings of the rider and determining the second road type on which the rider rides according to the image information includes the following steps:
acquiring first image information by using a first image acquisition unit of an intelligent helmet, wherein the first image information is image information in front of the intelligent helmet;
acquiring second image information by using a second image acquisition unit of the intelligent helmet, wherein the second image information is image information behind the intelligent helmet;
processing the first image information and/or the second image information by using a first preset network model to obtain a second road type ridden by the rider; the second road type includes at least one of: non-motorized lanes, intersections.
Optionally, the acquiring sound information around the rider and determining a third road type on which the rider rides according to the sound information includes:
acquiring sound information around the rider by using a sound acquisition device;
performing sound classification on the sound information to obtain a sound type, wherein the sound type comprises at least one of the following: human voice, motor vehicle running sound, non-motor vehicle running sound;
when the sound type contains only motor vehicle running sound, the third road type on which the rider rides is a motor vehicle lane;
when the sound type contains only non-motor vehicle running sound, the third road type on which the rider rides is a non-motor vehicle lane;
when the sound type comprises non-motor vehicle running sound and human voice, the third road type on which the rider rides is an intersection.
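The three sound-to-road rules above can be expressed directly as code. This sketch implements only the combinations the text states; the class-label strings are assumptions, and any other combination returns `None` because the patent does not specify it.

```python
def road_type_from_sounds(sound_types):
    """Map the set of detected sound classes to a road-type guess."""
    s = set(sound_types)
    if s == {"motor_vehicle"}:
        return "motor_lane"          # only motor vehicle running sound
    if s == {"non_motor_vehicle"}:
        return "non_motor_lane"      # only non-motor vehicle running sound
    if {"non_motor_vehicle", "human_voice"} <= s:
        return "intersection"        # non-motor traffic plus human voices
    return None                      # combination not covered by the rules
```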
Optionally, the step S2 includes the following steps:
judging whether the first road type and/or the second road type and/or the third road type is a motor lane or an intersection;
if the first road type and/or the second road type and/or the third road type is a motor vehicle lane or a crossroad, starting collision detection to obtain a collision detection result;
not turning on collision detection if the first road type and/or the second road type and/or the third road type is not a motorway or an intersection.
Optionally, the starting of the collision detection to obtain a collision detection result includes the following steps:
acquiring third image information by using a third image acquisition unit of the intelligent helmet, wherein the third image acquisition unit is a monocular camera or a binocular camera;
processing the third image information by using a second preset network model to obtain the motor vehicle information and/or non-motor vehicle information and/or pedestrian information behind the rider;
the motor vehicle information includes at least one of: vehicle type, vehicle speed, vehicle position, vehicle to rider distance;
the non-motor vehicle information includes at least one of: non-motor vehicle type, non-motor vehicle speed, non-motor vehicle location, distance of non-motor vehicle from rider;
the pedestrian information includes at least one of: pedestrian speed, pedestrian position, distance of pedestrian to rider;
judging whether the rider collides with the motor vehicle or not according to the motor vehicle information and the riding information, and if so, calculating first collision time;
judging whether the rider collides with the non-motor vehicle or not according to the non-motor vehicle information and the riding information, and calculating second collision time if the rider collides with the non-motor vehicle;
and judging whether the rider collides with the pedestrian or not according to the pedestrian information and the riding information, and if so, calculating third collision time.
Optionally, the step S3 includes the steps of:
if the first road type and/or the second road type and/or the third road type is a motor vehicle lane or a crossroad, displaying rear road information; or,
if the collision detection result is that the rider will collide with the motor vehicle, displaying rear road information and the first collision time; or,
if the collision detection result is that the rider will collide with the non-motor vehicle, displaying rear road information and the second collision time; or,
if the collision detection result is that the rider will collide with the pedestrian, displaying rear road information and the third collision time.
Optionally, the method further comprises the steps of:
and after the riding steering information is received, displaying rear road information.
According to the embodiment of the invention, whether the image information behind the rider is presented in the AR glasses is judged according to the road type or the collision detection result. This avoids the AR glasses constantly displaying the rear image information throughout the ride, reducing the power consumption of the intelligent helmet while improving riding safety and the comfort of wearing the helmet.
In addition, an embodiment of the present invention further provides a computer-readable storage medium, where a riding safety detection program based on an intelligent helmet is stored on the computer-readable storage medium, and when executed by a processor, the riding safety detection program based on the intelligent helmet implements the following operations:
S1: acquiring position information of a rider, and judging a first road type of the rider according to the position information; and/or acquiring image information around the rider, and judging a second road type ridden by the rider according to the image information; and/or acquiring sound information around the rider, and judging a third road type ridden by the rider according to the sound information;
S2: judging whether to start collision detection according to the first road type and/or the second road type and/or the third road type, and if the collision detection is started, obtaining a collision detection result;
S3: judging whether to display rear road information according to the first road type and/or the second road type and/or the third road type and/or the collision detection result.
Optionally, the obtaining the position information of the rider and determining the first road type of the rider according to the position information includes:
acquiring position information of a rider by using a position sensor;
using the position information to inquire map data to obtain a first road type of the rider; the first road type comprises at least one of: non-motorized lanes, intersections.
Optionally, the acquiring image information of the surroundings of the rider and determining the second road type ridden by the rider according to the image information includes the following steps:
acquiring first image information by using a first image acquisition unit of an intelligent helmet, wherein the first image information is image information in front of the intelligent helmet;
acquiring second image information by using a second image acquisition unit of the intelligent helmet, wherein the second image information is image information behind the intelligent helmet;
processing the first image information and/or the second image information by using a first preset network model to obtain a second road type ridden by the rider; the second road type includes at least one of: non-motorized lanes, intersections.
Optionally, the acquiring sound information around the rider and determining a third road type on which the rider rides according to the sound information includes:
acquiring sound information around the rider by using a sound acquisition device;
performing sound classification on the sound information to obtain a sound type, wherein the sound type comprises at least one of the following: human voice, motor vehicle running sound, non-motor vehicle running sound;
when the sound type contains only motor vehicle running sound, the third road type on which the rider rides is a motor vehicle lane;
when the sound type contains only non-motor vehicle running sound, the third road type on which the rider rides is a non-motor vehicle lane;
when the sound type comprises non-motor vehicle running sound and human voice, the third road type on which the rider rides is an intersection.
Optionally, the step S2 includes the following steps:
judging whether the first road type and/or the second road type and/or the third road type is a motor lane or an intersection;
if the first road type and/or the second road type and/or the third road type is a motor vehicle lane or a crossroad, starting collision detection to obtain a collision detection result;
not turning on collision detection if the first road type and/or the second road type and/or the third road type is not a motorway or intersection.
Optionally, the starting collision detection to obtain a collision detection result includes the following steps:
acquiring third image information by using a third image acquisition unit of the intelligent helmet, wherein the third image acquisition unit is a monocular camera or a binocular camera;
processing the third image information by using a second preset network model to obtain the motor vehicle information and/or non-motor vehicle information and/or pedestrian information behind the rider;
the motor vehicle information includes at least one of: vehicle type, vehicle speed, vehicle position, vehicle to rider distance;
the non-motor vehicle information includes at least one of: non-motor vehicle type, non-motor vehicle speed, non-motor vehicle location, distance of non-motor vehicle from rider;
the pedestrian information includes at least one of: pedestrian speed, pedestrian position, distance of pedestrian to rider;
judging whether the rider collides with the motor vehicle or not according to the motor vehicle information and the riding information, and if so, calculating first collision time;
judging whether the rider collides with the non-motor vehicle or not according to the non-motor vehicle information and the riding information, and calculating second collision time if the rider collides with the non-motor vehicle;
and judging whether the rider collides with the pedestrian or not according to the pedestrian information and the riding information, and if so, calculating third collision time.
Optionally, the step S3 includes the following steps:
if the first road type and/or the second road type and/or the third road type is a motor vehicle lane or a crossroad, displaying rear road information; or,
if the collision detection result is that the rider will collide with the motor vehicle, displaying rear road information and the first collision time; or,
if the collision detection result is that the rider will collide with the non-motor vehicle, displaying rear road information and the second collision time; or,
if the collision detection result is that the rider will collide with the pedestrian, displaying rear road information and the third collision time.
Optionally, the method further comprises the steps of:
and after the riding steering information is received, displaying rear road information.
According to the embodiment of the invention, whether the image information behind the rider is presented in the AR glasses is judged according to the road type or the collision detection result. This avoids the AR glasses constantly displaying the rear image information throughout the ride, reducing the power consumption of the intelligent helmet while improving riding safety and the comfort of wearing the helmet.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the method of the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but in many cases, the former is a better implementation. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, a controller, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention, and all equivalent structures or equivalent processes performed by the present invention or directly or indirectly applied to other related technical fields are also included in the scope of the present invention.

Claims (10)

1. The riding safety detection method based on the intelligent helmet is characterized by comprising the following steps:
S1: acquiring position information of a rider, and judging a first road type of riding of the rider according to the position information; and/or acquiring image information around the rider, and judging a second road type ridden by the rider according to the image information; and/or acquiring sound information around the rider, and judging a third road type ridden by the rider according to the sound information;
S2: judging whether to start collision detection according to the first road type and/or the second road type and/or the third road type, and if the collision detection is started, obtaining a collision detection result;
S3: judging whether to display rear road information or not according to the first road type and/or the second road type and/or the third road type and/or the collision detection result.
2. The method according to claim 1, wherein the step of obtaining the position information of the rider and judging the first road type ridden by the rider according to the position information comprises the following steps:
acquiring position information of a rider by using a position sensor;
using the position information to inquire map data to obtain a first road type of the rider; the first road type comprises at least one of: non-motor vehicle lanes, crossroads.
3. The method according to claim 1, wherein said obtaining image information of the surroundings of the rider and determining the second road type ridden by the rider according to the image information comprises the following steps:
acquiring first image information by using a first image acquisition unit of an intelligent helmet, wherein the first image information is image information in front of the intelligent helmet;
acquiring second image information by using a second image acquisition unit of the intelligent helmet, wherein the second image information is image information behind the intelligent helmet;
processing the first image information and/or the second image information by using a first preset network model to obtain a second road type ridden by the rider; the second road type includes at least one of: non-motorized lanes, intersections.
4. The method according to claim 1, wherein the step of obtaining sound information around the rider and judging a third road type ridden by the rider according to the sound information comprises the following steps:
acquiring sound information around the rider by using a sound acquisition device;
performing sound classification on the sound information to obtain a sound type, wherein the sound type comprises at least one of the following: human voice, motor vehicle running sound, non-motor vehicle running sound;
when the sound type contains only motor vehicle running sound, the third road type on which the rider rides is a motor vehicle lane;
when the sound type contains only non-motor vehicle running sound, the third road type on which the rider rides is a non-motor vehicle lane;
when the sound type comprises non-motor vehicle running sound and human voice, the third road type on which the rider rides is an intersection.
5. The method according to claim 1, wherein the step S2 comprises the steps of:
judging whether the first road type and/or the second road type and/or the third road type is a motor lane or an intersection;
if the first road type and/or the second road type and/or the third road type is a motor vehicle lane or a crossroad, starting collision detection to obtain a collision detection result;
not turning on collision detection if the first road type and/or the second road type and/or the third road type is not a motorway or an intersection.
6. The method of claim 5, wherein the step of starting collision detection to obtain a collision detection result comprises the following steps:
acquiring third image information by using a third image acquisition unit of the intelligent helmet, wherein the third image acquisition unit is a monocular camera or a binocular camera;
processing the third image information by using a second preset network model to obtain the motor vehicle information and/or non-motor vehicle information and/or pedestrian information behind the rider;
the motor vehicle information includes at least one of: vehicle type, vehicle speed, vehicle position, vehicle to rider distance;
the non-motor vehicle information includes at least one of: non-motor vehicle type, non-motor vehicle speed, non-motor vehicle location, distance of non-motor vehicle from rider;
the pedestrian information includes at least one of: pedestrian speed, pedestrian position, distance of pedestrian to rider;
judging whether the rider collides with the motor vehicle or not according to the motor vehicle information and the riding information, and if so, calculating first collision time;
judging whether the rider collides with the non-motor vehicle or not according to the non-motor vehicle information and the riding information, and calculating second collision time if the rider collides with the non-motor vehicle;
and judging whether the rider collides with the pedestrian or not according to the pedestrian information and the riding information, and if so, calculating third collision time.
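The claim does not spell out how the first, second, and third collision times are computed. A common minimal formulation, shown here purely as an illustrative assumption (the patent may use a different model), is a closing-speed time-to-collision for an object approaching the rider from behind, with both parties assumed to move at constant speed along the same line:

```python
def time_to_collision(distance_m, object_speed_mps, rider_speed_mps):
    """Hypothetical collision-time sketch: seconds until an object
    approaching from behind reaches the rider, assuming constant
    speeds on a straight line. Returns None if the object is not
    actually closing on the rider (no collision predicted)."""
    closing_speed = object_speed_mps - rider_speed_mps
    if closing_speed <= 0:
        return None  # object is slower than or as fast as the rider
    return distance_m / closing_speed
```

For example, a vehicle 30 m behind at 15 m/s, with the rider at 5 m/s, closes at 10 m/s and would reach the rider in 3 s. The same function would serve for motor vehicles, non-motor vehicles, and pedestrians, fed with the distances and speeds extracted by the network model.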
7. The method according to claim 6, wherein step S3 comprises the following steps:
if the first road type and/or the second road type and/or the third road type is a motor vehicle lane or an intersection, displaying rear road information; or
if the collision detection result indicates that the rider will collide with the motor vehicle, displaying the rear road information and the first collision time; or
if the collision detection result indicates that the rider will collide with the non-motor vehicle, displaying the rear road information and the second collision time; or
if the collision detection result indicates that the rider will collide with the pedestrian, displaying the rear road information and the third collision time.
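The display rule of claim 7 can be sketched as one decision function. The data shapes here are assumptions: `road_types` is an iterable of determined road-type labels (with `None` for undetermined ones), and `collisions` is a dict from object class to predicted collision time, empty when no collision is predicted:

```python
def rear_info_display(road_types, collisions):
    """Claim 7 sketch: decide whether to show rear road information,
    and which predicted collision times to show alongside it.

    Returns (show_rear_info, collision_times_to_display)."""
    risky_road = any(
        road_type in ("motor_vehicle_lane", "intersection")
        for road_type in road_types
        if road_type is not None
    )
    if risky_road or collisions:
        # show rear road info, plus any predicted collision times
        return True, dict(collisions)
    return False, {}
```

Read this way, a risky road type alone triggers the rear-road display, and each predicted collision additionally attaches its collision time to that display.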
8. The method of claim 1, further comprising the following step:
displaying the rear road information after riding steering information is received.
9. A riding safety detection device based on an intelligent helmet, wherein the device comprises: a helmet (2), an optical display unit (1), a control unit (3), a first image acquisition unit (4), a second image acquisition unit (5), a third image acquisition unit (6), a positioning unit (7) and a steering control unit (8);
the optical display unit (1) is rotatably connected to the helmet (2) so that the optical display unit (1) can be placed in a first position and a second position; when the optical display unit (1) is in the first position, the optical display unit (1) is located in front of the eyes of the user; when the optical display unit (1) is in the second position, the optical display unit (1) is located above the eyes of the user;
the first image acquisition unit (4) is mounted at the front of the helmet (2), the second image acquisition unit (5) and the third image acquisition unit (6) are mounted at the rear of the helmet (2), the positioning unit (7) is mounted on top of the helmet (2), and the control unit (3) is mounted inside the helmet (2);
the first image acquisition unit (4) is used for acquiring first image information in front of the helmet (2);
the second image acquisition unit (5) is used for acquiring second image information behind the helmet (2);
the third image acquisition unit (6) is used for acquiring third image information behind the helmet (2);
the positioning unit (7) is used for acquiring position information of the rider;
the optical display unit (1) is used for displaying rear road information;
the steering control unit (8) is used for receiving riding steering information initiated by the user and then sending the riding steering information to the control unit (3);
the control unit (3) is used for judging, according to the position information, a first road type on which the rider is riding; is further used for judging, according to the first image information and/or the second image information, a second road type on which the rider is riding; is further used for judging, according to the first road type and/or the second road type, whether to enable collision detection, and obtaining a collision detection result if collision detection is enabled; is further used for judging, according to the first road type and/or the second road type and/or the collision detection result, whether to display the rear road information; and is further used for controlling the optical display unit (1) to display the rear road information after the riding steering information is received.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the intelligent-helmet-based riding safety detection method according to any one of claims 1 to 7.
CN202211730029.6A 2022-12-30 2022-12-30 Riding safety detection method and device based on intelligent helmet and storage medium Pending CN115965938A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211730029.6A CN115965938A (en) 2022-12-30 2022-12-30 Riding safety detection method and device based on intelligent helmet and storage medium

Publications (1)

Publication Number Publication Date
CN115965938A true CN115965938A (en) 2023-04-14

Family

ID=87363289

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211730029.6A Pending CN115965938A (en) 2022-12-30 2022-12-30 Riding safety detection method and device based on intelligent helmet and storage medium

Country Status (1)

Country Link
CN (1) CN115965938A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination