CN115995142A - Driving training reminding method based on wearable device and wearable device


Info

Publication number
CN115995142A
Authority
CN
China
Prior art keywords
lane
feature
image
vehicle
wearable device
Prior art date
Legal status
Pending
Application number
CN202211247781.5A
Other languages
Chinese (zh)
Inventor
李景
Current Assignee
Guangzhou Desai Xiwei Intelligent Transportation Technology Co., Ltd.
Original Assignee
Guangzhou Desai Xiwei Intelligent Transportation Technology Co., Ltd.
Priority date: 2022-10-12
Filing date: 2022-10-12
Publication date: 2023-04-21
Application filed by Guangzhou Desai Xiwei Intelligent Transportation Technology Co., Ltd.
Priority to CN202211247781.5A
Publication of CN115995142A

Landscapes

  • Image Analysis (AREA)

Abstract

The application discloses a driving training reminding method based on a wearable device, and the wearable device itself, wherein the method is applied to a wearable device worn by a user; the method comprises the following steps: acquiring a line-of-sight image through an image acquisition module; acquiring current item information, wherein the item information is related to the driving training item currently being performed; identifying preset features in the line-of-sight image according to the item information; and acquiring corresponding operation reminding information based on the identification result, and displaying the operation reminding information. The method and the device reduce the learner's dependence on a coach's instruction during driving training, and the operation reminders generated from what is currently in the learner's line of sight are more timely and accurate than traditional coaching, which improves the accuracy of the reminders and the learner's driving training efficiency.

Description

Driving training reminding method based on wearable device and wearable device
Technical Field
The application relates to the field of smart devices, and in particular to a driving training reminding method based on a wearable device, and to the wearable device.
Background
In the existing driving training mode, a coach usually has to ride along in the car to guide the learner: the learner operates from the driver's seat while the coach sits in the front passenger seat observing the learner's actions and the vehicle's movement. When the learner runs into a problem in a specific training item, the coach must improvise on the spot, combining the vehicle's current movement with the learner's actions to guide the learner to continue operating the vehicle and meet the item's requirements.
However, this approach requires a coach to accompany the learner at all times, and because the coach is not in the driver's seat, it is difficult for the coach to know what the learner currently sees; the coach can only rely on experience and on-the-spot judgment to guide the learner through the exercise, so the guidance the learner receives is relatively delayed. For example, when practicing reversing into a garage, a misoperation may bring the vehicle to the point of pressing a lane line, but the coach usually notices only after the line has already been crossed and can hardly perceive the learner's faulty action as it happens; at that point the coach can only guide the learner to straighten the vehicle and start the maneuver over. This wastes time, and the learner remains unsure of which step went wrong, so repeated training is needed.
Therefore, the learner gets no timely prompt after an operational error, the accuracy of the prompts available for different lines of sight is relatively poor, and teaching depends on a coach's accompaniment, which is costly; low prompt accuracy further degrades the learning effect, so driving training efficiency is difficult to improve.
Disclosure of Invention
The application provides a driving training reminding method based on a wearable device, and the wearable device itself, which can improve learners' driving training efficiency.
In a first aspect, the present application discloses a driving training reminding method based on a wearable device, the method being applied to a wearable device worn by a user;
the wearable device comprises an image acquisition module, and the image acquisition module faces the user's line-of-sight direction;
the method comprises the following steps:
acquiring a line-of-sight image through the image acquisition module;
acquiring current item information, wherein the item information is related to the driving training item currently being performed;
identifying preset features in the line-of-sight image according to the item information;
and acquiring corresponding operation reminding information based on the identification result, and displaying the operation reminding information.
Optionally, the preset features at least include vehicle features and lane features;
the identifying the preset features in the sight line image according to the item information comprises the following steps:
determining corresponding preset reminding conditions according to the item information, wherein the preset reminding conditions comprise the correlation of the preset relative position relationship between the vehicle characteristics and the lane characteristics;
identifying position information of vehicle features and lane features in the line-of-sight image;
and judging whether the relative position relation between the vehicle features in the sight line image and the lane features accords with a preset reminding condition or not.
Optionally, identifying the position information of the vehicle feature and the lane feature in the line-of-sight image includes:
identifying a first vehicle feature in the line-of-sight image;
determining a first region of interest relative to the location of the first vehicle feature;
identifying position information of the lane feature, or of the lane feature and a second vehicle feature, in the first region of interest;
and the judging whether the relative positional relationship between the vehicle feature and the lane feature in the line-of-sight image meets the preset reminding condition comprises:
judging whether the relative positional relationship between the lane feature and the first vehicle feature meets the preset reminding condition; or
judging whether the relative positional relationship between the lane feature and the second vehicle feature meets the preset reminding condition.
Optionally, the first vehicle feature is a rearview mirror, and the first region of interest is the mirror-surface region of the rearview mirror;
the identifying the position information of the vehicle feature and the lane feature in the sight line image includes:
enlarging the mirror-surface region;
identifying the vehicle feature and the lane feature in the enlarged image;
acquiring the pixel width of the lane feature and the pixel width between the vehicle feature and the lane feature;
and estimating the current relative distance and relative angle between the lane feature and the vehicle feature from the two pixel widths and the actual width of the lane line.
Optionally, the identifying the preset feature in the line-of-sight image according to the item information includes:
and identifying preset features in the sight line image by adopting a Mask R-CNN deep learning algorithm.
Optionally, the preset features at least include vehicle features and lane features;
the identifying the preset features in the line-of-sight image according to the item information comprises:
obtaining position information of a first lane feature from the line-of-sight image based on digital image detection;
identifying position information of a second lane feature based on the Mask R-CNN deep learning algorithm;
and obtaining position information corresponding to a target lane feature from the first lane feature and the second lane feature.
Optionally, the obtaining the location information of the first lane feature from the line-of-sight image based on digital image detection includes:
identifying a first vehicle feature in the line-of-sight image;
determining a first region of interest relative to a location of the first vehicle feature;
performing color space conversion on a target image of a first region of interest to convert the target image from an RGB color space to an HSV color space;
and processing the target image after the color space is converted to obtain a lane feature mask of the target image.
Optionally, the obtaining the position information corresponding to the target lane feature from the first lane feature and the second lane feature comprises:
acquiring a first lane region, second lane regions, and a third lane region;
performing similarity analysis on the first lane region, the second lane regions, and the third lane region to obtain the position information corresponding to the target lane feature;
wherein the first lane region is the common region of the first lane feature and the second lane feature; the second lane regions are the lane regions corresponding to the first lane feature and the second lane feature, respectively; and the third lane region is the lane region corresponding to lane features detected in historical line-of-sight images.
In a second aspect, the present application also discloses a wearable device comprising an image acquisition module facing the user's line-of-sight direction;
the wearable device further comprises a processor and a memory, wherein the processor is electrically connected with the memory;
the memory stores a computer program, and the processor executes the wearable device-based driving training reminding method according to any one of the embodiments above by calling the computer program stored in the memory.
Optionally, the wearable device is a camera device worn on a human head;
the image acquisition module is a camera, and the camera is disposed in the camera device at a position close to the human eyes.
Optionally, the camera device is a pair of smart glasses, and the camera is disposed between the two side frames of the smart glasses.
Optionally, the wearable device further includes an image output module, and the image output module is connected with the processor;
the processor is further configured to perform:
acquiring driving operation information of the current vehicle;
superimposing the driving operation information on the line-of-sight image;
and outputting the superimposed line-of-sight image to the image output module, so that the image is output through the image output module.
From the above, the wearable device worn by the user can identify the acquired line-of-sight image and issue a corresponding operation reminder. This reduces the learner's dependence on a coach's instruction during driving training; moreover, an operation reminder generated from what is currently in the learner's line of sight is more timely and accurate than traditional coaching, which improves the accuracy of the reminders and the learner's driving training efficiency.
Drawings
Fig. 1 is a schematic diagram of functional modules of a wearable device according to an embodiment of the present application.
Fig. 2 is a flowchart of implementation of a driving training reminding method based on a wearable device according to an embodiment of the present application.
Fig. 3 is a flowchart of an implementation of obtaining operation reminding information according to an embodiment of the present application.
Fig. 4 is an application scenario schematic diagram of a driving training reminding method based on a wearable device according to an embodiment of the present application.
Fig. 5 is a flowchart of an implementation of obtaining location information of a target lane feature according to an embodiment of the present application.
Fig. 6 is a schematic structural diagram of a wearable device according to an embodiment of the present application.
Detailed Description
The preferred embodiments of the present application are described in detail below with reference to the accompanying drawings, so that the advantages and features of the application can be more readily understood by those skilled in the art and the scope of the application is defined more clearly.
Example 1
Referring to fig. 1, a schematic diagram of functional modules of a wearable device according to an embodiment of the present application is shown.
As shown in fig. 1, the wearable device 10, which is provided to be worn by a user, may include an image acquisition module 11. The wearable device 10 may be worn on the head so that the image acquisition module 11 acquires images in the user's line-of-sight direction. In an embodiment, the image acquisition module 11 may be a camera that images external light with an image sensor such as a CCD or CMOS sensor, with image processing performed by a processing element in the image acquisition module 11 or elsewhere in the wearable device 10 to obtain a line-of-sight image of the user's line-of-sight direction.
Further, to obtain an image as close to the user's line of sight as possible, the camera may be disposed near the user's eyes, for example at the forehead or beside the ears. The wearable device 10 may take the form of, for example, a helmet, glasses, headphones, or other hardware secured to the head.
With the wearable device 10, a line-of-sight image as close as possible to the user's line-of-sight direction can be obtained, which improves the accuracy and reliability of the operation reminding information.
Referring to fig. 2, an implementation flow of the driving training reminding method based on the wearable device provided in the embodiment of the application is shown. The method is applied to a wearable device worn by a user, which may be the wearable device of the embodiment shown in fig. 1.
The method comprises the following implementation steps:
101. Acquire a line-of-sight image through the image acquisition module.
The line-of-sight image may be a continuous multi-frame sequence, and the specific resolution and frame rate may be set according to the capabilities of the image acquisition module.
In an embodiment, after the light signal is acquired by the image sensor of the image acquisition module, processing may be performed in the image acquisition module, or in other processing elements of the wearable device, to obtain a line-of-sight image.
It will be appreciated that the image information in the line-of-sight image varies with the user's head movement and with scene changes along the line-of-sight direction.
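By way of illustration only, the acquisition step might be sketched as follows (Python with OpenCV; the device index, resolution, and frame rate are assumed values, not taken from this application):

```python
import cv2

# Open the wearable camera; index 0 and 1280x720 @ 30 fps are assumptions.
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
cap.set(cv2.CAP_PROP_FPS, 30)

def next_line_of_sight_frame():
    """Read the latest line-of-sight frame; returns None if capture fails."""
    ok, frame = cap.read()
    return frame if ok else None
```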
102. Acquire current item information, wherein the item information is related to the driving training item currently being performed.
The item information may correspond to examination items such as reversing into a garage or side parking, and the type of the item information may be determined by the examination item.
The item information at least includes feature information related to the current driving training item. For example, reversing into a garage requires feature information corresponding to the relative positional relationship between the lane and the side of the vehicle body, while turning practice requires feature information corresponding to the relative positional relationship between the hood and the lane, and so on.
Of course, besides this, the item information may also include other information related to the driving training item, such as the item name and the driving information of the vehicle.
In some implementations, the item information may be preset on the wearable device and called directly; it may be preset on a terminal, such as driving-training equipment on the vehicle or the user's mobile phone, and obtained by the wearable device communicating with that terminal; or such a terminal may connect to the Internet so that the wearable device obtains the relevant item information from the Internet.
103. Identify preset features in the line-of-sight image according to the item information.
The preset features may be features related to the item information. For example, a feature related to reversing into a garage may be the rearview mirror: by identifying the position of the rearview mirror and the relative positions of the vehicle-body side and the lane line reflected in it, whether the current reversing action meets the preset condition can be judged. As another example, the hood and the lane line near it may be identified; once both are recognized, the positional relationship between the vehicle and the lane line can be determined from the positional relationship between the hood and the lane line.
It can be appreciated that different item information may have different corresponding preset features, and the preset features may be set in advance for each kind of item information, which is not limited in this application.
In an embodiment, recognition of the preset features in the line-of-sight image may be achieved with artificial-intelligence or deep-learning techniques: a recognition model is trained in advance on the preset features with a learning algorithm, and the model then performs feature recognition on the line-of-sight image, determining the position information of the preset features in the image. Specifically, a Mask R-CNN deep learning algorithm may be adopted to identify the preset features in the line-of-sight image. Mask R-CNN can perform several tasks simultaneously in one network, such as object classification, object detection, semantic segmentation, and instance segmentation.
A specific pass may proceed as follows. The input image first undergoes feature extraction through a CNN network layer; to speed up execution, the lightweight neural network ShuffleNet v1 is adopted as the convolutional backbone with corresponding optimizations. Passing through the CNN layer, the input image produces a feature map; a predetermined number of anchors is then placed at each point of this feature map, yielding many candidate ROIs. These candidate ROIs are fed into the RPN for binary (foreground/background) classification and bounding-box regression, which filters out a portion of them. The RoIAlign operation is performed on the remaining ROIs (i.e., pixels of the feature map are aligned with the original image, and the feature map is then mapped to fixed-size features). Finally, the ROIs are sent into two branches: one produces the object-detection result, i.e., the detected class (with its probability) and the target's bounding rectangle, through fully connected layers; the other produces the segmented image through deconvolution (an FCN operation).
The Mask R-CNN network thus performs both object detection and image segmentation: object detection can locate targets such as the hood and the rearview mirror, while image segmentation can separate out the positions of lane lines, parking-space lines, and boundary lines in the image.
The Mask R-CNN deep learning algorithm not only effectively improves the recognition efficiency and accuracy for the preset features, but also reduces the computational load on the wearable device.
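For illustration, an inference pass of this kind might look like the sketch below. A COCO-pretrained torchvision Mask R-CNN (torchvision >= 0.13) stands in for the custom-trained model: detecting hoods, rearview mirrors, and lane lines would require training on driving-school data, and the ShuffleNet backbone is likewise an optimization not reproduced here.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# COCO-pretrained Mask R-CNN as a stand-in for the custom-trained model.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

@torch.no_grad()
def detect_preset_features(bgr_frame, score_thresh=0.7):
    """Return boxes, labels, and instance masks above a confidence threshold."""
    rgb = bgr_frame[:, :, ::-1].copy()    # OpenCV frames are BGR
    pred = model([to_tensor(rgb)])[0]     # single-image "batch"
    keep = pred["scores"] > score_thresh
    return pred["boxes"][keep], pred["labels"][keep], pred["masks"][keep]
```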
104. Acquire corresponding operation reminding information based on the identification result, and display the operation reminding information.
In an embodiment, after the position information of the preset features in the line-of-sight image is identified, the current relative position of the vehicle in the training process can be judged from the positions of those features, and the vehicle control action to be executed next can be predicted, so that corresponding operation reminding information is obtained for display.
For example, while the vehicle is turning, if the relative position between the identified hood and the lane line changes such that the distance between them gradually decreases, a steering reminder may be sent to the user so that the user steers the vehicle accordingly and keeps it from crossing the lane line.
The operation reminding information may be presented by voice, by a displayed image, or by a combination of sound and light. It may be presented through hardware on the wearable device or through an external device connected to it; the specific presentation mode can be chosen as needed and is not limited here.
Through the operation reminding information presented by the wearable device or the external device, the user can learn the driving situation in the current line-of-sight image in time and execute the corresponding operation accordingly, without a coach giving guidance alongside.
From the above, the wearable device worn by the user can identify the acquired line-of-sight image and issue a corresponding operation reminder. This reduces the learner's dependence on a coach's instruction during driving training; moreover, an operation reminder generated from what is currently in the learner's line of sight is more timely and accurate than traditional coaching, which improves the accuracy of the reminders and the learner's driving training efficiency.
Example 2
Referring to fig. 3, a flow of obtaining operation reminding information according to an embodiment of the present application is shown.
In one embodiment, the predetermined characteristics include at least vehicle characteristics and lane characteristics.
As shown in fig. 3, the identifying the preset feature in the sight line image according to the item information includes:
201. Determine the corresponding preset reminding condition according to the item information, wherein the preset reminding condition is related to a preset relative positional relationship between the vehicle feature and the lane feature.
The vehicle feature may be any part of the vehicle body whose position is customarily used as a reference for completing driving training items, such as the rearview mirror, the hood, the A-pillar, or the B-pillar, and may also be another body structure.
The lane feature may be the position, color, or shape of a lane line, parking-space line, or boundary line (for example, a corner of the garage), or another feature of the lane.
It will be appreciated that the vehicle characteristics and the lane characteristics may be determined according to the requirements of the actual driving training, and the specific characteristics are not limited in this application.
In an embodiment, the corresponding preset reminding condition is determined according to the item information. Parameters in the preset reminding condition may include the distance, the relative angle, or the relative direction between the vehicle feature and the lane feature, and the condition may be triggered by detecting these quantities and judging whether a certain threshold condition is met.
For example, when the item information describes the reversing-into-garage item, the corresponding preset reminding condition may be whether the distance between the side of the vehicle body and the garage lane line is smaller than a value A, or whether the included angle between the side of the vehicle body and the garage lane line is smaller than a value B. When the threshold condition is met, the corresponding operation reminder can be issued.
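As a minimal sketch of how such a condition might be encoded (the threshold values below are placeholders, since the text only names A and B abstractly):

```python
from dataclasses import dataclass

@dataclass
class ReminderCondition:
    max_distance_m: float  # threshold "A": body side to garage lane line
    max_angle_deg: float   # threshold "B": body side vs. lane line

def should_remind(distance_m: float, angle_deg: float,
                  cond: ReminderCondition) -> bool:
    """Per the text, a reminder fires when either threshold is crossed."""
    return distance_m < cond.max_distance_m or angle_deg < cond.max_angle_deg

# Illustrative values for the reversing-into-garage item.
reverse_into_garage = ReminderCondition(max_distance_m=0.30, max_angle_deg=5.0)
```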
202. Identify position information of the vehicle feature and the lane feature in the line-of-sight image.
The positions of the vehicle feature and the lane feature in the line-of-sight image can be identified with a recognition model obtained by deep learning, yielding the position information corresponding to each.
In an embodiment, identifying the position information of the vehicle feature and the lane feature in the line-of-sight image may include steps 2021, 2022, and 2023.
2021. A first vehicle feature is identified in the line-of-sight image.
The first vehicle characteristic can be a vehicle characteristic which is preferentially identified according to actual training requirements.
For example, under the reversing-into-garage item, the rearview mirror is set as the first vehicle feature so that its position is located first. As another example, under the turning item, the hood is set as the first vehicle feature.
Of course, the first vehicle feature may be selected according to the actual requirements of the driving training item.
2022. A first region of interest is determined relative to a location of a first vehicle feature.
The first region of interest may be an ROI (Region of Interest) under the recognition model, i.e., an image region that needs to be locked onto and recognized preferentially according to the actual training.
For example, under the reversing-into-garage item, the mirror-surface region of the rearview mirror may be set as the first region of interest so as to better identify the vehicle's situation. As another example, under the turning item, the region above the hood may be set as the first region of interest so as to identify the lane line in front of the vehicle. The first region of interest may be a preset image region located at a specific orientation relative to the first vehicle feature.
Of course, the selection of the first region of interest may also depend on the actual requirements of the driving training item.
2023. The location information of the lane feature is identified in the first region of interest or the location information of the lane feature and the second vehicle feature is identified.
Here, a vehicle feature other than the first vehicle feature that appears in the first region of interest may be defined as the second vehicle feature. For example, under the reversing-into-garage item, the identified rearview mirror is the first vehicle feature, and the side of the vehicle body reflected in its mirror-surface region may be defined as the second vehicle feature; the corresponding operation reminder is then issued by judging the relative positional relationship between the vehicle-body side and the lane line.
Confirming the first region of interest from the first vehicle feature and identifying the position information of the lane feature and the second vehicle feature within it reduces the computing resources consumed by recognition, further improving the recognition efficiency and accuracy for the line-of-sight image.
In some embodiments, the image of the first region of interest may also be scaled in this step, thereby improving recognition efficiency and accuracy.
203. Judge whether the relative positional relationship between the vehicle feature and the lane feature in the line-of-sight image meets the preset reminding condition.
In one embodiment, the specific judging process may include: judging whether the relative positional relationship between the lane feature and the first vehicle feature meets the preset reminding condition; or judging whether the relative positional relationship between the lane feature and the second vehicle feature meets the preset reminding condition.
It follows that a corresponding preset reminding condition is set for each kind of item information, and the position of the vehicle relative to the lane line is then determined by judging the relative positional relationship between the lane feature and the first vehicle feature, or between the lane feature and the second vehicle feature, so that the operation to be executed next is judged more accurately according to the requirements of different items.
In some implementations, the first vehicle feature is a rearview mirror and the first region of interest is the mirror-surface region of the rearview mirror. Identifying the position information of the vehicle feature and the lane feature in the line-of-sight image then includes:
enlarging the mirror-surface region; identifying the vehicle feature and the lane feature in the enlarged image; acquiring the pixel width of the lane feature and the pixel width between the vehicle feature and the lane feature; and estimating the current relative distance and relative angle between the lane feature and the vehicle feature from the two pixel widths and the actual width of the lane line.
204. Acquire corresponding operation reminding information based on the identification result.
Referring to fig. 4, a schematic application scenario diagram of a driving training reminding method based on a wearable device provided in an embodiment of the present application is shown in the figure.
In a specific application, as shown in fig. 4, during reversing-into-garage practice the positions of the front-passenger door 21 and the rearview mirror 22 are first detected by the Mask R-CNN algorithm; the mirror-surface region 23 is then cropped out as the first region of interest (ROI region), within which the vehicle feature and the lane feature are detected; Mask R-CNN can detect the position of the vehicle feature 24 and the lane line 25 inside the rearview mirror 22. For the detected yellow lane line, the number of pixels spanned by the line's width in the image is counted; since a yellow lane line is generally 10 cm wide, the angle (i.e., whether the body is parallel to the line) and the distance between the current vehicle body and the lane line can be estimated from the counted pixel width and the actual line width (10 cm). Finally, a voice broadcast is made according to the angle and the distance, prompting the learner to straighten the steering wheel or perform other operations.
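The distance estimate above reduces to a simple proportion: the lane line's known physical width calibrates pixels to meters inside the mirror crop. A sketch, assuming the line and the body side lie at roughly the same depth in the mirror image:

```python
LANE_LINE_WIDTH_M = 0.10  # yellow training-field line, ~10 cm wide

def estimate_body_to_line_gap_m(line_px_width: float,
                                gap_px_width: float) -> float:
    """Scale the body-to-line pixel gap by the meters-per-pixel ratio."""
    meters_per_px = LANE_LINE_WIDTH_M / line_px_width
    return gap_px_width * meters_per_px

# e.g. a line 18 px wide and a 90 px gap give 0.10/18 * 90 = 0.5 m.
```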
Example 3
Referring to fig. 5, a flow of implementation of obtaining location information for a target lane feature is shown.
In an embodiment, as shown in fig. 5, the preset features at least include a vehicle feature and a lane feature, and identifying the preset features in the sight line image according to the item information includes:
301. Obtain position information of a first lane feature from the line-of-sight image based on digital image detection.
302. Identify position information of a second lane feature based on the Mask R-CNN deep learning algorithm.
303. Obtain position information corresponding to the target lane feature from the first lane feature and the second lane feature.
As shown in connection with fig. 4, in this embodiment the driving training reminding method is applied to a wearable device that acquires a line-of-sight image of the user's line-of-sight direction.
The driving training item may be the reversing-into-garage item. The user needs to perform the reversing operation by observing the rearview mirror 22, which comes into view when the user's line of sight is directed out of the window of the front-passenger door 21. At this moment, the camera on the wearable device acquires a line-of-sight image of the user's line-of-sight direction.
The processing unit of the wearable device may first acquire the current item information, which may include preset reminding conditions on the vehicle feature and the lane feature, and then identify the line-of-sight image according to the item information. Under the preset reminding condition, the image of the rearview mirror 22 is identified as the first vehicle feature, with the Mask R-CNN algorithm adopted in the identification process.
To improve the detection accuracy for line-like features such as the lane line 25, and still referring to fig. 4, the position of the vehicle's own rearview mirror 22 (the first vehicle feature) is first detected through the Mask R-CNN network, and the mirror-surface region 23 of the rearview mirror 22 in the image is cropped out as the ROI region (the first region of interest), which contains the vehicle feature 24 and the lane line 25. Taking the image of the ROI region 23 as the target image, the position of the lane line 25 in it is further analyzed with conventional digital-image techniques, specifically: the target image of the ROI region 23 is first converted from the RGB color space to the HSV color space; the H channel (hue) of the converted image is then extracted and binarized. Because the lane line 25 of a training field is generally yellow, two thresholds T1 and T2 (T1 < T2) are set: pixels of the H-channel image whose values lie between T1 and T2 are preliminarily taken as lane-line 25 positions, and the remaining pixels are treated as background. The binarized image is then filtered with the Hough transform to remove non-line-shaped yellow pixels. Finally, the lane-line 25 mask detected by digital image processing is generated.
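A sketch of this digital-image branch follows (OpenCV; the hue thresholds T1/T2 for yellow and the Hough parameters are assumed values, since the text only requires T1 < T2):

```python
import cv2
import numpy as np

T1, T2 = 20, 35  # OpenCV hue runs 0-179; roughly 20-35 covers yellow

def lane_line_mask(roi_bgr):
    """Digital-image lane detection: HSV hue gate, then Hough line filter."""
    hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0]
    binary = cv2.inRange(hue, T1, T2)   # T1 <= H <= T2 -> candidate pixels
    mask = np.zeros_like(binary)
    lines = cv2.HoughLinesP(binary, 1, np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=10)
    if lines is not None:               # keep only line-shaped pixels
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(mask, (x1, y1), (x2, y2), 255, thickness=3)
    return mask
```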
In an embodiment, to improve the confidence of the lane-line recognition result, the obtaining of the position information corresponding to the target lane feature from the first lane feature and the second lane feature includes:
acquiring a first lane region, second lane regions, and a third lane region;
performing similarity analysis on the first lane region, the second lane regions, and the third lane region to obtain the position information corresponding to the target lane feature;
wherein the first lane region is the common region of the first lane feature and the second lane feature; the second lane regions are the lane regions corresponding to the first lane feature and the second lane feature, respectively; and the third lane region is the lane region corresponding to lane features detected in historical line-of-sight images. That is, the lane lines segmented by Mask R-CNN and the lane lines detected from the digital image are logically ANDed to obtain the lane-line region common to the two different schemes; similarity analysis is then performed among the position of this commonly detected region, the positions of the lane lines detected independently by each of the two algorithms, and the positions of the lane lines detected in historical line-of-sight images; the lane-line positions with higher similarity are kept, and the position information of the target lane feature is finally generated.
In this way, the recognition result for the target lane feature is corrected using the characteristics of, and the differences among, the lane-line positions detected by the two different schemes and in the historical line-of-sight images, which improves the confidence of the recognition result.
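As an illustration of the fusion, the sketch below ANDs the two detectors' masks and scores each candidate by overlap with the common region and with the previous frame's result; using intersection-over-union as the similarity measure is an assumption, since the text does not name the metric:

```python
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union of two boolean masks."""
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum()) / union if union else 0.0

def fuse_lane_masks(mask_digital, mask_rcnn, mask_prev, keep_thresh=0.5):
    """Pick the candidate mask most consistent with the common region
    and the historical detection; return None if neither is trusted."""
    common = np.logical_and(mask_digital, mask_rcnn)   # first lane region
    best, best_score = None, 0.0
    for cand in (mask_digital, mask_rcnn):             # second lane regions
        score = 0.5 * (iou(cand, common) + iou(cand, mask_prev))
        if score > best_score:
            best, best_score = cand, score
    return best if best_score >= keep_thresh else None
```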
Example 4
Referring to fig. 6, the structure of the wearable device provided in the embodiment of the application is shown.
As shown in fig. 6, the wearable device 400 includes a processor 401 and a memory 402, where the processor 401 is electrically connected to the memory 402.
The wearable device 400, which is provided to be worn by a user, may include an image acquisition module 403. The wearable device can be worn on the head so that the image acquisition module acquires images in the user's line-of-sight direction. In an embodiment, the image acquisition module 403 may be a camera that images external light with an image sensor such as a CCD or CMOS sensor, with image processing performed by a processing element in the image acquisition module 403 or elsewhere in the wearable device 400 to obtain a line-of-sight image of the user's line-of-sight direction.
To obtain an image as close to the user's line-of-sight direction as possible, in one embodiment the image acquisition module is a camera disposed in the camera device at a position close to the human eyes, for example at the forehead or beside the ears. The wearable device 400 may be implemented, for example, as a helmet, glasses, headphones, or other head-mounted hardware. Further, the camera device may be a pair of smart glasses with the camera disposed between the two side frames; smart glasses are convenient to use and sit close to the user's eyes.
With the wearable device 400, a line-of-sight image as close as possible to the user's line-of-sight direction can be obtained, improving the accuracy and reliability of the operation reminding information.
The processor 401 is the control center of the wearable device 400: it connects the various parts of the whole device through various interfaces and lines, and performs the various functions of the wearable device 400 and/or processes data by running or invoking the software programs and/or modules stored in the memory 402 and calling the data stored in the memory 402. The processor 401 may consist of an integrated circuit (IC); for example, it may consist of a single packaged IC, or of several packaged ICs with the same or different functions connected together. For example, the processor 401 may include only a central processing unit (CPU). In a specific implementation of the present application, the CPU may have a single computing core or multiple computing cores.
The memory 402 may be used to store instructions executed by the processor 401, and may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
The memory 402 stores a computer program, and the processor 401 executes the following steps by calling the computer program stored in the memory 402:
acquiring a line-of-sight image through the image acquisition module; acquiring current item information, wherein the item information is related to the driving training item currently being performed; identifying preset features in the line-of-sight image according to the item information; and acquiring corresponding operation reminding information based on the identification result, and displaying the operation reminding information.
The image acquisition module 403 is electrically connected to the processor 401. In an embodiment, the image acquisition module 403 may include one, two, or more cameras to meet different image acquisition requirements.
The wearable apparatus 400 may also include a reminder device, which may be implemented, for example, as a display, speaker, or other acousto-optic device.
In an embodiment, the wearable device 400 further comprises an image output module, the image output module being connected to the processor 401;
the processor 401 is further configured to perform:
acquiring driving operation information of the current vehicle; superimposing the driving operation information on the line-of-sight image; and outputting the superimposed line-of-sight image to the image output module, so that the image is output through the image output module.
The driving operation information may include the vehicle's speed, braking state, gear, number of steering-wheel turns, and the like, so that the user's current driving operation is reflected by it. Superimposing the driving operation information on the line-of-sight image and outputting the result gives the learner and the coach video footage from the first-person view; with this footage the whole driving training process can conveniently be reviewed, and the learner can study and summarize independently after driving, improving learning efficiency and effect.
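A sketch of the superposition step (the fields of the info record are illustrative; the application only lists speed, braking, gear, and steering-wheel turns as examples):

```python
import cv2

def overlay_driving_info(frame, info):
    """Draw the driving-operation information onto the line-of-sight frame."""
    text = (f"speed {info['speed_kmh']} km/h  gear {info['gear']}  "
            f"steer {info['steering_turns']:+.2f} turns")
    cv2.putText(frame, text, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
                0.7, (0, 255, 0), 2, cv2.LINE_AA)
    return frame
```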
In an embodiment, the processor 401 is further configured to perform:
determining the corresponding preset reminding condition according to the item information, wherein the preset reminding condition is related to a preset relative positional relationship between the vehicle feature and the lane feature; identifying position information of the vehicle feature and the lane feature in the line-of-sight image; and judging whether the relative positional relationship between the vehicle feature and the lane feature in the line-of-sight image meets the preset reminding condition.
In an embodiment, the processor 401 is further configured to perform:
identifying a first vehicle feature in the line-of-sight image; determining a first region of interest relative to the location of the first vehicle feature; identifying position information of the lane feature, or of the lane feature and a second vehicle feature, in the first region of interest; and judging whether the relative positional relationship between the lane feature and the first vehicle feature, or between the lane feature and the second vehicle feature, meets the preset reminding condition.
When the first vehicle feature is a rearview mirror and the first region of interest is its mirror-surface region: enlarging the mirror-surface region; identifying the vehicle feature and the lane feature in the enlarged image; acquiring the pixel width of the lane feature and the pixel width between the vehicle feature and the lane feature; and estimating the current relative distance and relative angle between the lane feature and the vehicle feature from the two pixel widths and the actual width of the lane line.
In an embodiment, the processor 401 is further configured to perform:
and identifying preset features in the sight line image by adopting a Mask R-CNN deep learning algorithm.
In an embodiment, the processor 401 is further configured to perform:
obtaining position information of a first lane feature from the line-of-sight image based on digital image detection; identifying position information of a second lane feature based on the Mask R-CNN deep learning algorithm; and obtaining position information corresponding to the target lane feature from the first lane feature and the second lane feature.
In an embodiment, the processor 401 is further configured to perform:
identifying a first vehicle feature in the line-of-sight image; determining a first region of interest relative to the location of the first vehicle feature; performing color space conversion on a target image of the first region of interest to convert the target image from the RGB color space to the HSV color space; and processing the color-space-converted target image to obtain a lane feature mask of the target image.
In an embodiment, the processor 401 is further configured to perform:
acquiring a first lane region, second lane regions, and a third lane region; performing similarity analysis on the first lane region, the second lane regions, and the third lane region to obtain the position information corresponding to the target lane feature; wherein the first lane region is the common region of the first lane feature and the second lane feature; the second lane regions are the lane regions corresponding to the first lane feature and the second lane feature, respectively; and the third lane region is the lane region corresponding to lane features detected in historical line-of-sight images.
In the embodiments of the present application, the wearable device and the wearable device-based driving training reminding method of the foregoing embodiments belong to the same concept: any method step provided in the method embodiments may be executed on the wearable device, its specific implementation is detailed in those method embodiments, and the features may be combined arbitrarily to form optional embodiments of the present application, which are not repeated here.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be completed by a program instructing related hardware. The program may be stored in a computer-readable storage medium, and the storage medium may include: a read-only memory (ROM), a random-access memory (RAM), a magnetic disk, an optical disk, and the like.
The embodiments of the present application have been described in detail above with reference to the drawings, but the present application is not limited to the above embodiments, and various changes can be made within the knowledge of one of ordinary skill in the art without departing from the spirit of the present application.

Claims (12)

1. A driving training reminding method based on a wearable device, which is characterized in that the method is applied to the wearable device worn by a user;
the wearable device comprises an image acquisition module, and the image acquisition module faces the user's line-of-sight direction;
the method comprises the following steps:
acquiring a line-of-sight image through the image acquisition module;
acquiring current item information, wherein the item information is related to the driving training item currently being performed;
identifying preset features in the line-of-sight image according to the item information;
and acquiring corresponding operation reminding information based on the identification result, and displaying the operation reminding information.
2. The wearable device-based driving training reminding method according to claim 1, wherein the preset features at least include a vehicle feature and a lane feature;
the identifying the preset features in the line-of-sight image according to the item information comprises:
determining a corresponding preset reminding condition according to the item information, wherein the preset reminding condition is related to a preset relative positional relationship between the vehicle feature and the lane feature;
identifying position information of the vehicle feature and the lane feature in the line-of-sight image;
and judging whether the relative positional relationship between the vehicle feature and the lane feature in the line-of-sight image meets the preset reminding condition.
3. The wearable device-based driving training reminding method according to claim 2, wherein identifying the position information of the vehicle feature and the lane feature in the line-of-sight image comprises:
identifying a first vehicle feature in the line-of-sight image;
determining a first region of interest relative to the location of the first vehicle feature;
identifying position information of the lane feature, or of the lane feature and a second vehicle feature, in the first region of interest;
and the judging whether the relative positional relationship between the vehicle feature and the lane feature in the line-of-sight image meets the preset reminding condition comprises:
judging whether the relative positional relationship between the lane feature and the first vehicle feature meets the preset reminding condition; or
judging whether the relative positional relationship between the lane feature and the second vehicle feature meets the preset reminding condition.
4. The wearable device-based driving training reminding method according to claim 3, wherein the first vehicle feature is a rearview mirror and the first region of interest is the mirror-surface region of the rearview mirror;
the identifying the position information of the vehicle feature and the lane feature in the line-of-sight image comprises:
enlarging the mirror-surface region;
identifying the vehicle feature and the lane feature in the enlarged image;
acquiring the pixel width of the lane feature and the pixel width between the vehicle feature and the lane feature;
and estimating the current relative distance and relative angle between the lane feature and the vehicle feature from the two pixel widths and the actual width of the lane line.
5. The wearable device-based driving training reminding method according to any one of claims 1-4, wherein the identifying the preset feature in the line-of-sight image according to the item information comprises:
and identifying preset features in the sight line image by adopting a Mask R-CNN deep learning algorithm.
6. The wearable device-based driving training reminding method according to claim 5, wherein the preset features at least include vehicle features and lane features;
the identifying the preset features in the line-of-sight image according to the item information comprises:
obtaining position information of a first lane feature from the line-of-sight image based on digital image detection;
identifying position information of a second lane feature based on the Mask R-CNN deep learning algorithm;
and obtaining position information corresponding to a target lane feature from the first lane feature and the second lane feature.
7. The wearable device-based driving training alert method of claim 6, wherein the obtaining location information of a first lane feature from the line-of-sight image based on digital image detection comprises:
identifying a first vehicle feature in the line-of-sight image;
determining a first region of interest relative to a location of the first vehicle feature;
performing color space conversion on a target image of a first region of interest to convert the target image from an RGB color space to an HSV color space;
and processing the target image after the color space is converted to obtain a lane feature mask of the target image.
8. The wearable device-based driving training reminding method according to claim 6, wherein the obtaining the position information corresponding to the target lane feature from the first lane feature and the second lane feature comprises:
acquiring a first lane region, second lane regions, and a third lane region;
performing similarity analysis on the first lane region, the second lane regions, and the third lane region to obtain the position information corresponding to the target lane feature;
wherein the first lane region is the common region of the first lane feature and the second lane feature; the second lane regions are the lane regions corresponding to the first lane feature and the second lane feature, respectively; and the third lane region is the lane region corresponding to lane features detected in historical line-of-sight images.
9. A wearable device, comprising an image acquisition module oriented toward the user's line-of-sight direction;
the wearable device further comprises a processor and a memory, wherein the processor is electrically connected with the memory;
the memory stores a computer program, and the processor executes the wearable device-based driving training reminding method according to any one of claims 1 to 8 by calling the computer program stored in the memory.
10. The wearable device of claim 9, wherein the wearable device is a camera device for wearing on a human head;
the image acquisition module is a camera, and the camera is arranged at a position close to eyes of a human body in the camera device.
11. The wearable device of claim 10, wherein the camera device is a pair of smart glasses, and the camera is disposed between the two side frames of the smart glasses.
12. The wearable device of claim 9, further comprising an image output module, the image output module coupled to the processor;
the processor is further configured to perform:
acquiring driving operation information of the current vehicle;
superimposing the driving operation information on the line-of-sight image;
and outputting the superimposed line-of-sight image to the image output module, so that the image is output through the image output module.
CN202211247781.5A 2022-10-12 2022-10-12 Driving training reminding method based on wearable device and wearable device Pending CN115995142A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202211247781.5A | 2022-10-12 | 2022-10-12 | Driving training reminding method based on wearable device and wearable device

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202211247781.5A | 2022-10-12 | 2022-10-12 | Driving training reminding method based on wearable device and wearable device

Publications (1)

Publication Number | Publication Date
CN115995142A | 2023-04-21

Family

Family ID: 85990970

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202211247781.5A (pending, published as CN115995142A) | Driving training reminding method based on wearable device and wearable device | 2022-10-12 | 2022-10-12

Country Status (1)

Country Link
CN (1) CN115995142A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116729422A (en) * 2023-06-07 2023-09-12 广州市德赛西威智慧交通技术有限公司 Deviation correction method for vehicle track, vehicle driving assistance method and device
CN116729422B (en) * 2023-06-07 2024-03-08 广州市德赛西威智慧交通技术有限公司 Deviation correction method for vehicle track, vehicle driving assistance method and device

Similar Documents

Publication Publication Date Title
JP7332726B2 (en) Detecting Driver Attention Using Heatmaps
CN110543867B (en) Crowd density estimation system and method under condition of multiple cameras
US10769459B2 (en) Method and system for monitoring driving behaviors
Gilroy et al. Overcoming occlusion in the automotive environment—A review
CN111797657A (en) Vehicle peripheral obstacle detection method, device, storage medium, and electronic apparatus
US10650257B2 (en) Method and device for identifying the signaling state of at least one signaling device
CN109784150B (en) Video driver behavior identification method based on multitasking space-time convolutional neural network
CN109584507A (en) Driver behavior modeling method, apparatus, system, the vehicles and storage medium
US9626599B2 (en) Reconfigurable clear path detection system
US20220058407A1 (en) Neural Network For Head Pose And Gaze Estimation Using Photorealistic Synthetic Data
US20150286885A1 (en) Method for detecting driver cell phone usage from side-view images
EP3286056B1 (en) System and method for a full lane change aid system with augmented reality technology
CN111860352B (en) Multi-lens vehicle track full tracking system and method
CN112677977B (en) Driving state identification method and device, electronic equipment and steering lamp control method
CN111967396A (en) Processing method, device and equipment for obstacle detection and storage medium
Romera et al. A Real-Time Multi-scale Vehicle Detection and Tracking Approach for Smartphones.
CN113657409A (en) Vehicle loss detection method, device, electronic device and storage medium
US20160232415A1 (en) Detection detection of cell phone or mobile device use in motor vehicle
CN112654998B (en) Lane line detection method and device
CN115995142A (en) Driving training reminding method based on wearable device and wearable device
US20120189161A1 (en) Visual attention apparatus and control method based on mind awareness and display apparatus using the visual attention apparatus
CN116935361A (en) Deep learning-based driver distraction behavior detection method
CN110837760B (en) Target detection method, training method and device for target detection
CN111062311B (en) Pedestrian gesture recognition and interaction method based on depth-level separable convolution network
Rathnayake et al. Lane detection and prediction under hazy situations for autonomous vehicle navigation

Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination