WO2019114014A1 - Smart glasses-based driving safety warning method and smart glasses - Google Patents

Smart glasses-based driving safety warning method and smart glasses

Info

Publication number
WO2019114014A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
smart glasses
vehicle
scene
virtual scene
Prior art date
Application number
PCT/CN2017/117679
Other languages
English (en)
French (fr)
Inventor
蔡任轩
Original Assignee
广州德科投资咨询有限公司
Priority date
Filing date
Publication date
Application filed by 广州德科投资咨询有限公司
Publication of WO2019114014A1

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60K ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K28/00 Safety devices for propulsion-unit control, specially adapted for, or arranged in, vehicles, e.g. preventing fuel supply or ignition in the event of potentially dangerous conditions
    • B60K28/02 Safety devices for propulsion-unit control, specially adapted for, or arranged in, vehicles, e.g. preventing fuel supply or ignition in the event of potentially dangerous conditions, responsive to conditions relating to the driver
    • B60K28/06 Safety devices for propulsion-unit control, specially adapted for, or arranged in, vehicles, e.g. preventing fuel supply or ignition in the event of potentially dangerous conditions, responsive to incapacity of driver
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N33/00 Investigating or analysing materials by specific methods not covered by groups G01N1/00 - G01N31/00
    • G01N33/0004 Gaseous mixtures, e.g. polluted air
    • G01N33/0009 General constructional details of gas analysers, e.g. portable test equipment
    • G01N33/0027 General constructional details of gas analysers, e.g. portable test equipment, concerning the detector
    • G01N33/0036 Specially adapted to detect a particular component
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality

Definitions

  • the present invention relates to the field of electronic device technologies, and in particular to a smart glasses-based driving safety warning method and to smart glasses.
  • the embodiment of the invention specifically relates to a smart glasses-based driving safety warning method and smart glasses, which can detect drunk driving behavior in time through the smart glasses and alert the user to the painful consequences of drunk driving, which is beneficial to improving the user's awareness of prevention, reducing the occurrence of drunk driving accidents, and maintaining the safety of users' lives and property.
  • the first aspect of the embodiment of the present invention discloses a driving safety warning method based on smart glasses, and the method includes:
  • the smart glasses detect whether the user vehicle communicatively connected to the smart glasses receives an unlock command, and if it is received, acquire a first real scene captured by the camera on the smart glasses and the concentration of the alcohol component in the air where the smart glasses are located;
  • the smart glasses determine whether the concentration of the alcohol component is greater than a concentration threshold, and if it is greater than the concentration threshold, obtain, from the server according to the first real scene, a first virtual scene of the evolution of a drunk driving accident that matches the first real scene;
  • the smart glasses project the first virtual scene to the eye of the user, so that the user sees the mixed scene of the first real scene and the first virtual scene.
  • the method further includes:
  • the smart glasses detect whether the driving seat of the user vehicle is in a pressed state;
  • if the driving seat is in the pressed state, the smart glasses detect a current brain wave frequency of the user;
  • the smart glasses match the current brain wave frequency with a pre-stored brain wave frequency table to obtain a current mental fatigue index corresponding to the current brain wave frequency, the pre-stored brain wave frequency table including brain wave frequency ranges and the mental fatigue index corresponding to each brain wave frequency range;
  • the smart glasses determine whether the current mental fatigue index is greater than a safety index threshold, and if it is greater than the safety index threshold, detect whether the user vehicle is in a running state;
  • if the user vehicle is not in the running state, the smart glasses project an avatar indicating that driving is prohibited to the user's eyeball, and acquire a second real scene captured by the camera on the smart glasses;
  • the smart glasses identify whether there is a car windshield in the second real scene, and if so, determine the area of the car windshield as a second fusion area;
  • the smart glasses acquire, from the server according to the current mental fatigue index, a second virtual scene that matches the second fusion area;
  • the smart glasses project the second virtual scene to the user's eyeball, so that the user sees a mixed scene in which the second virtual scene is superimposed at the second fusion area.
  • the method further includes:
  • the smart glasses project a travel suggestion virtual scene to the user's eyeball, so that the user sees a mixed scene in which the travel suggestion virtual scene is superimposed at the second fusion area, the travel suggestion virtual scene including one or more of a navigation avatar, a phone book avatar, and a rest suggestion avatar;
  • the smart glasses detect an operation instruction triggered by the user for an avatar in the travel suggestion virtual scene, and identify the type of the avatar corresponding to the operation instruction, the type of the avatar being any one of a navigation avatar, a phone book avatar, and a rest suggestion avatar;
  • the smart glasses generate a target virtual scene that matches the second fusion area according to the type of the avatar, and project the target virtual scene to the user's eyeball, so that the user sees a mixed scene in which the target virtual scene is superimposed at the second fusion area.
  • the method further includes:
  • the smart glasses send a fault detection instruction to the user vehicle to cause the user vehicle to detect the brake system and the steering system of the user vehicle according to the fault detection instruction;
  • the smart glasses receive a detection report sent by the user vehicle, the detection report including a failure coefficient of a brake system of the user vehicle and a failure coefficient of a steering system of the user vehicle;
  • the smart glasses acquire a fault reminding virtual scene from the server according to the detection report, where the fault reminding virtual scene includes a brake system fault avatar that matches a fault coefficient of a brake system of the user vehicle, and a steering system fault avatar that matches the failure factor of the steering system of the user vehicle;
  • the smart glasses project the fault reminding virtual scene to the eye of the user.
  • a second aspect of the embodiments of the present invention discloses a smart glasses, including:
  • a first detecting unit configured to detect whether a user vehicle communicatively connected to the smart glasses receives an unlocking command
  • a first acquiring unit configured to acquire, when the first detecting unit detects that the user vehicle receives the unlocking command, the first real scene captured by the camera on the smart glasses and the concentration of the alcohol component in the air where the smart glasses are located;
  • a first determining unit configured to determine whether a concentration of the alcohol component acquired by the first acquiring unit is greater than a concentration threshold
  • a second acquiring unit configured to acquire, when the first determining unit determines that the concentration of the alcohol component is greater than the concentration threshold, from a server according to the first real scene acquired by the first acquiring unit, a first virtual scene of the evolution of a drunk driving accident that matches the first real scene;
  • a projection unit configured to project the first virtual scene acquired by the second acquiring unit to a user's eyeball, so that the user sees a mixed scene in which the first real scene is superimposed with the first virtual scene.
  • the smart glasses further include:
  • a second detecting unit configured to detect, when the first determining unit determines that the concentration of the alcohol component is less than or equal to the concentration threshold, whether the driving seat of the user vehicle is in a pressed state, and to detect the current brain wave frequency of the user when detecting that the driving seat of the user vehicle is in the pressed state;
  • a matching unit configured to match the current brain wave frequency detected by the second detecting unit with a pre-stored brain wave frequency table, to obtain a current mental fatigue index corresponding to the current brain wave frequency, the pre-stored brain wave frequency table including brain wave frequency ranges and the mental fatigue index corresponding to each brain wave frequency range;
  • a second determining unit configured to determine whether the current mental fatigue index obtained by the matching unit is greater than a safety index threshold
  • the second detecting unit is further configured to: when the second determining unit determines that the current mental fatigue index is greater than the safety index threshold, detecting whether the user vehicle is in a driving state;
  • the projection unit is further configured to: when the second detecting unit detects that the user vehicle is not in the driving state, project an avatar indicating that driving is prohibited to the user's eyeball;
  • the first acquiring unit is further configured to acquire a second real scene captured by the camera on the smart glasses;
  • a recognition unit configured to identify whether a car windshield exists in the second real scene acquired by the first acquiring unit
  • a determining unit configured to determine an area of the windshield of the automobile as a second fusion area when the identification unit recognizes that the automobile windshield exists in the second real scene
  • a third acquiring unit configured to acquire, according to the current mental fatigue index obtained by the matching unit, a second virtual scene that matches the second fusion area, where the second virtual scene includes a scenario in which a traffic accident is caused by driving under the current mental fatigue index;
  • the projection unit is further configured to project the second virtual scene acquired by the third acquiring unit to a user's eyeball, so that the user sees that the second virtual scene is superimposed at the second fusion region. Mixed scenes.
  • the projection unit is further configured to, when projecting the second virtual scene to the user's eyeball so that the user sees the mixed scene in which the second virtual scene is superimposed at the second fusion area, project a travel suggestion virtual scene matching the second fusion area to the user's eyeball, so that the user sees a mixed scene in which the travel suggestion virtual scene is superimposed at the second fusion area, the travel suggestion virtual scene including one or more of a navigation avatar, a phone book avatar, and a rest suggestion avatar;
  • the smart glasses further include:
  • a third detecting unit configured to detect an operation instruction triggered by the user for an avatar in the travel suggestion virtual scene, and identify the type of the avatar corresponding to the operation instruction, the type of the avatar being any one of a navigation avatar, a phone book avatar, and a rest suggestion avatar;
  • the projecting unit is further configured to generate a target virtual scene that matches the second fusion area according to the type of the avatar recognized by the third detecting unit, and project the target virtual scene to the user's eyeball, so that the user sees a mixed scene in which the target virtual scene is superimposed at the second fusion area.
  • the smart glasses further include:
  • a sending unit configured to send, when the second determining unit determines that the current mental fatigue index is less than or equal to the safety index threshold, a fault detection instruction to the user vehicle, so that the user vehicle detects, according to the fault detection instruction, the braking system of the user vehicle and the steering system of the user vehicle;
  • a receiving unit configured to receive a detection report sent by the user vehicle, where the detection report includes a failure coefficient of a braking system of the user vehicle and a failure coefficient of a steering system of the user vehicle;
  • the third obtaining unit is further configured to acquire a fault reminding virtual scene from the server according to the detection report received by the receiving unit, where the fault reminding virtual scene includes a brake system fault avatar that matches the fault coefficient of the braking system of the user vehicle and a steering system fault avatar that matches the fault coefficient of the steering system of the user vehicle;
  • the projection unit is further configured to project the fault reminding virtual scene acquired by the third acquiring unit to a user's eyeball.
  • a third aspect of the embodiments of the present invention discloses a smart glasses, including:
  • a memory storing executable program code, and a processor coupled to the memory;
  • the processor invokes the executable program code stored in the memory to perform some or all of the steps of any of the methods disclosed in the first aspect of the embodiments of the present invention.
  • a fourth aspect of the embodiments of the present invention discloses a computer readable storage medium storing program code, wherein the program code includes instructions for performing some or all of the steps of any one of the methods disclosed in the first aspect of the embodiments of the present invention.
  • a fifth aspect of the embodiments of the present invention discloses a computer program product, which when the computer program product is run on a computer, causes the computer to perform some or all of the steps of any one of the methods disclosed in the first aspect of the embodiments of the present invention.
  • a sixth aspect of the embodiments of the present invention discloses an application publishing platform, where the application publishing platform is configured to publish the computer program product, and when the computer program product runs on a computer, the computer is caused to perform some or all of the steps of any one of the methods disclosed in the first aspect of the embodiments of the present invention.
  • the embodiment of the invention has the following beneficial effects:
  • the smart glasses first detect whether the user vehicle communicatively connected to the smart glasses receives the unlocking command; if the unlocking command is received, indicating that the user is about to use the user vehicle to travel, the smart glasses acquire the first real scene captured by the camera on the smart glasses and the concentration of the alcohol component in the air where the smart glasses are located;
  • if the concentration of the alcohol component is greater than the concentration threshold, the smart glasses acquire, from the server according to the first real scene, the first virtual scene of the drunk driving accident evolution that matches the first real scene; further, the smart glasses project the first virtual scene to the user's eyeball, so that the user sees the mixed scene in which the first real scene is superimposed with the first virtual scene.
  • the embodiment of the present invention can detect the driving behavior of drunk driving in time, and can integrate the first virtual scene of the evolution of the drunk driving accident in the real scene through the smart glasses, so that the user can truly experience the painful consequences caused by drunk driving. It is conducive to improving the user's awareness of prevention, stopping the drunk driving behavior in time, reducing the occurrence of drunk driving accidents, and maintaining the safety of the user's life and property.
  • FIG. 1 is a schematic diagram of a model of a smart glasses disclosed in an embodiment of the present invention.
  • FIG. 2 is a schematic flow chart of a driving safety warning method based on smart glasses disclosed in an embodiment of the present invention.
  • FIG. 3 is a schematic flow chart of another driving safety warning method based on smart glasses disclosed in an embodiment of the present invention.
  • FIG. 4 is a schematic flow chart of another driving safety warning method based on smart glasses disclosed in an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of a smart glasses disclosed in an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of another smart glasses disclosed in an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of another smart glasses disclosed in an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of another smart glasses disclosed in an embodiment of the present invention.
  • the embodiment of the invention discloses a driving safety warning method based on smart glasses, and smart glasses, which can detect drunk driving behavior in time through the smart glasses and alert the user to the painful consequences of drunk driving, thereby improving the user's awareness of prevention, reducing the occurrence of drunk driving accidents, and safeguarding users' lives and property. The details are described below separately.
  • FIG. 1 is a schematic diagram of a model of a smart glasses disclosed in an embodiment of the present invention.
  • the smart glasses include a sensor A, a sensor B, a pico projector C, and a camera D.
  • the sensor A can detect the concentration of the alcohol component in the air where the smart glasses are located.
  • the sensor A can convert the detected concentration of the alcohol component in the air into an electrical signal, and the smart glasses can then obtain the concentration of alcohol gas in the environment according to the strength of the electrical signal; the sensor is highly sensitive, convenient, and reliable, and the alcohol concentration in the air can be detected at any time.
  • the sensor B can detect the current brain wave frequency of the user.
  • Sensor B can use a ThinkGear AM chip, which uses a dry electrode sensor, can detect very weak EEG signals, and has good noise cancellation and anti-interference ability; it can accurately detect the user's current brain wave frequency, reduce errors, and improve measurement accuracy, optimizing the user experience.
  • the pico projector C can directly project the image onto the wearer's retina; the pico projector C can also project the virtual scene to be projected onto a semi-transparent prism, which then reflects the virtual scene onto the human retina; the pico projector C can also project the virtual scene onto a micro screen mounted in front of the pico projector C, and the user can see the projected content of the pico projector in the upper right corner of the field of view.
  • the camera D is located outside the smart glasses, and the real scene seen by the human eye can be acquired.
  • the real scene seen by the human eye is the same as the real scene captured by the camera D.
  • the camera D provided on the smart glasses may be a pair of binocular cameras, so that the acquired real scene better matches the real scene seen by the human eye; the smart glasses can also determine the projection area according to the acquired real scene, which is conducive to improving the user experience.
  • FIG. 2 is a schematic flowchart diagram of a driving safety warning method based on smart glasses according to an embodiment of the present invention.
  • the smart glasses-based driving safety warning method may include the following steps:
  • the smart glasses detect whether the user vehicle communicatively connected to the smart glasses receives the unlock command; if yes, perform steps 102 to 103; if not, continue to perform step 101 to detect whether the user vehicle connected to the smart glasses receives the unlock command.
  • the smart glasses have an independent operating system, and application software installed by the user, such as programs provided by software service providers, can run on the operating system.
  • the smart glasses can complete corresponding operations according to the voice instruction or gesture instruction input by the user, such as adding a schedule, map navigation, interacting with friends, taking photos and videos, and starting a video call with a friend, and can achieve wireless network access through a mobile communication network.
  • the smart glasses acquire the first real scene captured by the camera on the smart glasses and the concentration of the alcohol component in the air where the smart glasses are located.
  • the smart glasses determine whether the concentration of the alcohol component is greater than a concentration threshold. If the concentration is greater than the concentration threshold, perform steps 104 to 105. If the concentration is less than or equal to the concentration threshold, perform steps 102 to 103.
  • drinking and driving means that the alcohol content in the blood of the driver driving the vehicle is greater than or equal to 20 mg/100 ml and less than 80 mg/100 ml; drunk driving means that the alcohol content in the blood of the driver driving the vehicle is greater than or equal to 80 mg/100 ml.
  • alcohol is absorbed but not digested; some of the alcohol evaporates and is exhaled through the alveoli. It has been determined that the ratio of the concentration of alcohol in the gas exhaled after drinking to the concentration of alcohol in the blood is 1:2100, that is, the alcohol contained in 2100 ml of exhaled gas after drinking equals the alcohol contained in 1 ml of blood.
  • the smart glasses can determine whether the user has a drunk driving behavior by measuring the concentration of the alcohol component in the air in which they are located.
  • the concentration threshold may be set to 1 mg/10500 ml or the like, which is not limited in the embodiment of the present invention.
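  • As an illustration of the comparison in step 103, the following sketch converts a breath-alcohol reading into an estimated blood-alcohol concentration using the 1:2100 ratio cited above and checks it against the 20 mg/100 ml drink-driving limit. This is an assumption made for this write-up, not the patented implementation, and the function and variable names are hypothetical.

```python
# Minimal sketch: breath alcohol -> estimated blood alcohol -> threshold check.
BREATH_TO_BLOOD_RATIO = 2100           # 2100 ml of exhaled gas ~ 1 ml of blood (alcohol content)
DRINK_DRIVING_LIMIT_MG_PER_100ML = 20  # blood alcohol limit for "drinking and driving"

def estimated_blood_alcohol(breath_mg_per_100ml: float) -> float:
    """Estimate blood alcohol (mg/100 ml) from breath alcohol (mg per 100 ml of air)."""
    return breath_mg_per_100ml * BREATH_TO_BLOOD_RATIO

def exceeds_concentration_threshold(breath_mg_per_100ml: float) -> bool:
    """True if the estimated blood alcohol is at or above the drink-driving limit."""
    return estimated_blood_alcohol(breath_mg_per_100ml) >= DRINK_DRIVING_LIMIT_MG_PER_100ML

# The 1 mg/10500 ml threshold mentioned above is ~0.0095 mg per 100 ml of air,
# i.e. roughly the 20 mg/100 ml blood limit.
print(exceeds_concentration_threshold(0.012))  # 0.012 mg/100 ml air -> ~25 mg/100 ml blood -> True
```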
  • the smart glasses acquire, according to the first real scene, a first virtual scene from the server that evolves the drunk driving accident that matches the first real scene.
  • the smart glasses can match the appropriate first virtual scene according to different first real scenes.
  • the first virtual scene may be a dynamic accident scene caused by drunk driving, and the virtual scene may be matched with a matching background sound, which causes a thrilling feeling and can give the user a warning effect.
  • the first virtual scene obtained by the smart glasses can also show the drunk driver being rescued while his family waits in anxiety and sadness outside the emergency room, which can trigger the user's sense of family responsibility, prompt timely reflection, and reduce drunk driving behavior.
  • the first virtual scene presented by the smart glasses may be different each time, which can avoid the problem of the warning effect being weakened by repeated presentation of the same scene.
  • the smart glasses project the first virtual scene to the eye of the user, so that the user sees the mixed scene of the first real scene and the first virtual scene.
  • the method described in FIG. 2 detects drunk driving behavior in time by detecting, in real time, the concentration of the alcohol component in the air where the smart glasses are located, and can integrate, through the smart glasses, the first virtual scene of the drunk driving accident evolution into the real scene, allowing users to experience the painful consequences of drunk driving, which helps to improve the user's awareness of prevention, stop drinking and driving in a timely manner, reduce the occurrence of drunk driving accidents, and maintain the safety of users' lives and property.
  • FIG. 3 is a schematic flowchart diagram of another driving safety warning method based on smart glasses disclosed in an embodiment of the present invention.
  • the smart glasses-based driving safety warning method may include the following steps:
  • step 201: The smart glasses detect whether the user vehicle communicatively connected to the smart glasses receives the unlock command; if yes, step 202 is performed; if not, step 201 is performed to continue to detect whether the user vehicle connected to the smart glasses receives the unlock command.
  • the smart glasses acquire the first real scene captured by the camera on the smart glasses and the concentration of the alcohol component in the air where the smart glasses are located.
  • the smart glasses may be communicatively connected to the user's vehicle, and the user may control the user vehicle to perform the unlocking operation by inputting an unlock command to the smart glasses, or may control the user vehicle to perform the unlocking operation through the key matched with the user vehicle.
  • when the user vehicle receives the unlocking command triggered by its matched key, it may send a signal indicating receipt of the unlocking command to the communicatively connected smart glasses; immediately after receiving the signal, the smart glasses can detect the concentration of the alcohol component in the air where the smart glasses are located.
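  • A minimal sketch of this signalling flow is shown below; the class and method names (SmartGlasses, UserVehicle, on_unlock_signal) are illustrative assumptions rather than the patent's terminology.

```python
# Hypothetical sketch of the unlock signalling between the vehicle and the glasses.
class SmartGlasses:
    def __init__(self, alcohol_sensor, camera):
        self.alcohol_sensor = alcohol_sensor
        self.camera = camera

    def on_unlock_signal(self):
        """Called when the paired vehicle reports that it received an unlock command."""
        first_real_scene = self.camera.capture()           # first real scene
        alcohol_mg_per_100ml = self.alcohol_sensor.read()  # ambient alcohol reading
        return first_real_scene, alcohol_mg_per_100ml

class UserVehicle:
    def __init__(self, paired_glasses: SmartGlasses):
        self.paired_glasses = paired_glasses

    def receive_unlock_command(self, source: str):
        """Unlock triggered by the glasses or by the matched key fob."""
        if source in ("glasses", "key"):
            # notify the communicatively connected smart glasses immediately
            return self.paired_glasses.on_unlock_signal()

class _StubSensor:
    def read(self): return 0.004           # mg of alcohol per 100 ml of air (made-up value)

class _StubCamera:
    def capture(self): return "frame-001"  # placeholder for a captured frame

glasses = SmartGlasses(_StubSensor(), _StubCamera())
vehicle = UserVehicle(glasses)
print(vehicle.receive_unlock_command("key"))   # ('frame-001', 0.004)
```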
  • the smart glasses determine whether the concentration of the alcohol component is greater than a concentration threshold. If the concentration is greater than the concentration threshold, perform step 204 to step 209; if less than or equal to the concentration threshold, perform step 210.
  • the smart glasses identify all vehicles in the first real scene and extract vehicle characteristics of each vehicle.
  • the vehicle feature includes one or more of a vehicle type, a color of the vehicle, a brand category of the vehicle, and a logo of the vehicle, which are not limited in the embodiment of the present invention.
  • the smart glasses compare the vehicle characteristics of the user vehicles communicatively connected with the smart glasses with the vehicle characteristics of each of the vehicles, respectively, to obtain a comparison result of each vehicle feature.
  • the smart glasses determine the user vehicle from all the vehicles according to the obtained comparison result of all the vehicle features.
  • the first real scene is matched with the scene that the user sees at this time.
  • by performing the above steps 204 to 206, the user's vehicle can be identified from among the multiple vehicles in the first real scene.
  • the smart glasses determine an area of the user vehicle in the first real scene, and determine a first fusion area in the first real scene according to the area of the user vehicle.
  • the smart glasses obtain, from the server, a first virtual scene in which the drunk driving accident is matched with the first fusion area, as a first virtual scene in which the drunk driving accident is matched with the first real scene.
  • by performing the above steps 204 to 208, the smart glasses can obtain, from the server according to the first real scene, the first virtual scene of the drunk driving accident evolution that matches the first real scene.
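  • The feature comparison in steps 204 to 206 can be pictured as in the following sketch, under the assumption that each detected vehicle is described by a small feature dictionary; the profile values, feature keys, and bounding boxes are made up for illustration.

```python
# Illustrative sketch: score each detected vehicle's features against the stored
# profile of the user vehicle and pick the best match as the user vehicle.
USER_VEHICLE_PROFILE = {"type": "sedan", "color": "white", "brand": "ExampleBrand", "logo": "EB"}

def feature_match_score(vehicle_features):
    """Count how many compared features agree with the user vehicle profile."""
    return sum(1 for key, value in USER_VEHICLE_PROFILE.items()
               if vehicle_features.get(key) == value)

def identify_user_vehicle(detected_vehicles):
    """Return the detected vehicle whose features best match the user vehicle, if any."""
    if not detected_vehicles:
        return None
    best = max(detected_vehicles, key=feature_match_score)
    return best if feature_match_score(best) > 0 else None

vehicles_in_scene = [
    {"type": "suv", "color": "black", "brand": "OtherBrand", "logo": "OB", "bbox": (40, 60, 200, 180)},
    {"type": "sedan", "color": "white", "brand": "ExampleBrand", "logo": "EB", "bbox": (220, 80, 380, 200)},
]
user_vehicle = identify_user_vehicle(vehicles_in_scene)
first_fusion_area = user_vehicle["bbox"] if user_vehicle else None  # region used in step 207
print(first_fusion_area)
```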
  • the smart glasses project the first virtual scene to the user's eyeball, so that the user sees the mixed scene of the first real scene and the first virtual scene, and ends the process.
  • the smart glasses project the first virtual scene to the user's eyeball, and the user can immediately see the first virtual scene superimposed at the position of the user vehicle, which can attract the user's attention and is beneficial to enhancing the warning effect.
  • the smart glasses-based driving safety warning method may further include the following steps:
  • the smart glasses detect whether the driving seat of the user vehicle is in a pressurized state. If the pressure is in a pressurized state, perform steps 211 to 212; if not in a pressurized state, end the process.
  • the smart glasses detect the current brain wave frequency of the user, and match the current brain wave frequency with the pre-stored brain wave frequency table to obtain a current mental fatigue index corresponding to the current brain wave frequency.
  • the pre-stored brain wave frequency table includes a brain wave frequency range and a mental fatigue index corresponding to the brain wave frequency range.
  • brain waves (EEG signals) mainly record the changes of electrical waves during brain activity and are an overall reflection of the electrophysiological activity of brain nerve cells on the surface of the cerebral cortex or scalp.
  • Brain waves are spontaneous rhythmic nerve electrical activities whose frequency varies from 1 to 30 Hz and can be divided into four bands, namely δ (1-3 Hz), θ (4-7 Hz), α (8-13 Hz), and β (14-30 Hz).
  • When an adult is in deep sleep, the brain wave frequency detected by the smart glasses is in the 1-3 Hz range; in a state of frustration, depression, or mental illness, it is in the 4-7 Hz range; when an adult is awake and quiet, it is in the 8-13 Hz range; and when an adult is nervous, emotionally agitated, or excited, it is in the 14-30 Hz range.
  • the smart glasses determine whether the current mental fatigue index is greater than a safety index threshold; if it is greater than the safety index threshold, step 213 is performed; if it is less than or equal to the safety index threshold, the process ends.
  • when the smart glasses determine that the current mental fatigue index is greater than the safety index threshold, it indicates that the user is in a fatigue state and is not suitable for driving; when the smart glasses determine that the current mental fatigue index is less than or equal to the safety index threshold, it indicates that the user is not in a fatigue state and is suitable for driving.
  • for example, if the current brain wave frequency is in the frequency range of 1 to 3 Hz, the mental fatigue index corresponding to that range may be set to 10; if it is in the range of 4 to 7 Hz, the index may be set to 9; if it is in the range of 8 to 13 Hz, the index may be set to 2; if it is in the range of 14 to 30 Hz, the index may be set to 5; and the safety index threshold may be set to 6. Smart glasses can also output different warning prompts according to different mental fatigue indexes.
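  • A sketch of this lookup, using the example index values and threshold above, could look like the following; the table structure and function names are assumptions made for illustration, not the patent's data model.

```python
# Pre-stored brain wave frequency table: (frequency range in Hz) -> mental fatigue index.
BRAIN_WAVE_FATIGUE_TABLE = [
    ((1.0, 3.0), 10),   # delta band
    ((4.0, 7.0), 9),    # theta band
    ((8.0, 13.0), 2),   # alpha band
    ((14.0, 30.0), 5),  # beta band
]
SAFETY_INDEX_THRESHOLD = 6

def mental_fatigue_index(brain_wave_hz):
    """Look up the mental fatigue index for the measured brain wave frequency."""
    for (low, high), index in BRAIN_WAVE_FATIGUE_TABLE:
        if low <= brain_wave_hz <= high:
            return index
    return None  # frequency outside the tabulated ranges

def unfit_to_drive(brain_wave_hz):
    index = mental_fatigue_index(brain_wave_hz)
    return index is not None and index > SAFETY_INDEX_THRESHOLD

print(unfit_to_drive(2.5))   # True: 1-3 Hz range, index 10 > 6
print(unfit_to_drive(10.0))  # False: awake and quiet, index 2
```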
  • the smart glasses detect whether the user vehicle is in a running state, and if not in the running state, perform steps 214 to 215; if in the running state, end the process.
  • the method may further include the following steps:
  • the smart glasses output a fatigue driving voice warning to remind the user that the current fatigue state is not suitable for driving;
  • the smart glasses detect a parking instruction input by the user, and acquire a first position of the smart glasses according to the parking instruction;
  • the smart glasses query, according to the first location, the parking point closest to the first location, and generate a parking navigation virtual scene;
  • the smart glasses project the parking navigation virtual scene to the user's eye.
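  • The nearest-parking-point query might be sketched as below; the distance helper, coordinates, and parking data are illustrative assumptions, not values from the patent.

```python
# Sketch: pick the parking point closest to the glasses' first position.
import math

def haversine_km(a, b):
    """Approximate great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def nearest_parking_point(first_position, parking_points):
    """Return the parking point closest to the first position."""
    return min(parking_points, key=lambda p: haversine_km(first_position, p["location"]))

parking_points = [
    {"name": "Lot A", "location": (23.130, 113.260)},
    {"name": "Lot B", "location": (23.135, 113.270)},
]
target = nearest_parking_point((23.129, 113.264), parking_points)
# The parking navigation virtual scene would then be generated toward target["location"].
print(target["name"])
```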
  • the smart glasses project an avatar indicating that the driving is prohibited to the user's eyeball, and acquire a second real scene captured by the camera on the smart glasses.
  • the smart glasses recognize whether there is a car windshield in the second real scene. If yes, perform steps 216 to 218; if there is no car windshield, end the process.
  • the smart glasses determine the area of the windshield of the automobile as the second fusion area.
  • the smart glasses acquire a second virtual scene that matches the second fusion area from the server according to the current mental fatigue index.
  • the second virtual scene includes a scenario in which a traffic accident is caused by driving under the current mental fatigue index.
  • the smart glasses can obtain different second virtual scenes from the server according to different current mental fatigue indexes, which is beneficial to enhance the warning intensity and improve the user experience.
  • the smart glasses project the second virtual scene to the eye of the user, so that the user sees the mixed scene of the second virtual scene superimposed at the second fusion area.
  • implementing the method described in FIG. 3 can detect drunk driving behavior in time and integrate, through the smart glasses, the first virtual scene of the drunk driving accident evolution into the real scene, so that the user can truly experience the painful consequences of drunk driving, which is conducive to improving the user's awareness of prevention, stopping drunk driving behavior in time, and reducing the occurrence of drunk driving accidents; meanwhile, when detecting that the user has not been drinking, the smart glasses can also determine the user's current mental state by detecting the brain wave frequency, and when detecting that the user's current mental state is one of fatigue, they can immediately alert the user to the hazard of fatigue driving, promptly warn the user against dangerous driving behavior, avoid danger, and maintain the safety of the user's life and property.
  • FIG. 4 is a schematic flowchart diagram of another driving safety warning method based on smart glasses disclosed in an embodiment of the present invention.
  • the smart glasses-based driving safety warning method may include the following steps:
  • step 301: The smart glasses detect whether the user vehicle communicatively connected to the smart glasses receives the unlock command; if yes, step 302 is performed; if not, step 301 is performed to continue to detect whether the user vehicle connected to the smart glasses receives the unlock command.
  • the smart glasses acquire the first real scene captured by the camera on the smart glasses and the concentration of the alcohol component in the air where the smart glasses are located.
  • the smart glasses determine whether the concentration of the alcohol component is greater than a concentration threshold. If the concentration is greater than the concentration threshold, perform steps 304-305. If the concentration is less than or equal to the concentration threshold, perform step 306.
  • the smart glasses acquire, according to the first real scene, a first virtual scene from the server that evolves the drunk driving accident that matches the first real scene.
  • the first virtual scene in which the smart glasses obtain the evolution of the drunk driving accident that matches the first real scene from the server according to the first real scene may include the following steps:
  • the smart glasses identify all of the vehicles in the first real scene and extract vehicle characteristics for each vehicle, including one or more of the type of vehicle, the color of the vehicle, the brand category of the vehicle, and the identification of the vehicle.
  • the smart glasses compare the vehicle characteristics of the user vehicles communicatively connected with the smart glasses with the vehicle characteristics of each of the vehicles, respectively, to obtain a comparison result of each vehicle feature;
  • the smart glasses determine the user vehicle from all of the vehicles based on the obtained comparison results of all vehicle characteristics
  • the smart glasses determine an area of the user vehicle in the first real scene, and determine a first fusion area in the first real scene according to the area of the user's vehicle;
  • the smart glasses obtain, from the server, the first virtual scene of the drunk driving accident evolution that matches the first fusion area, as the first virtual scene of the drunk driving accident evolution that matches the first real scene.
  • the smart glasses project the first virtual scene to the eyeball of the user, so that the user sees the mixed scene of the first real scene and the first virtual scene, and ends the process.
  • the smart glasses-based driving safety warning method may further include the following steps:
  • the smart glasses detect whether the driving seat of the user vehicle is in a pressurized state. If the pressure is in a pressurized state, step 307 is performed; if not in a pressurized state, step 306 is performed to continue to detect whether the driving seat of the user vehicle is in a pressurized state.
  • the smart glasses detect the current brain wave frequency of the user, and match the current brain wave frequency with the pre-stored brain wave frequency table to obtain a current mental fatigue index corresponding to the current brain wave frequency.
  • the pre-stored brain wave frequency table includes a brain wave frequency range and a mental fatigue index corresponding to the brain wave frequency range.
  • the smart glasses determine whether the current mental fatigue index is greater than a safety index threshold; if it is greater than the safety index threshold, step 309 is performed; if it is less than or equal to the safety index threshold, steps 315 to 318 are performed.
  • the smart glasses detect whether the user vehicle is in a running state; if the user vehicle is in the running state, the process ends; if it is not in the running state, steps 310 to 311 are performed.
  • the smart glasses project an avatar indicating that driving is prohibited to the user's eyeball, and acquire a second real scene captured by the camera on the smart glasses.
  • the smart glasses recognize whether there is a car windshield in the second real scene. If yes, perform steps 312 to 318; if not, end the process.
  • alternatively, the smart glasses may select a default area as the second fusion area; the default area is the default fusion area of the smart glasses, and its size and position are determined by the settings of the engineers of the smart glasses, which is not limited in the embodiment of the present invention.
  • the smart glasses determine an area of the windshield of the automobile as a second fusion area.
  • the smart glasses acquire, according to the current mental fatigue index, a second virtual scene that matches the second fusion area from the server.
  • the second virtual scene includes a scenario in which a traffic accident is caused by driving under the current mental fatigue index.
  • the smart glasses project the second virtual scene to the eye of the user, so that the user sees the mixed scene of the second virtual scene superimposed at the second fusion area.
  • the following steps may be further included:
  • the smart glasses project a travel suggestion virtual scene to the user's eyeball, so that the user sees a mixed scene in which the travel suggestion virtual scene is superimposed at the second fusion area; the travel suggestion virtual scene may include one or more of a navigation avatar, a phone book avatar, and a rest suggestion avatar.
  • the smart glasses detect an operation instruction triggered by the user for an avatar in the travel suggestion virtual scene, and identify the type of the avatar corresponding to the operation instruction, the type of the avatar being any one of a navigation avatar, a phone book avatar, and a rest suggestion avatar;
  • the smart glasses generate a target virtual scene that matches the second fusion area according to the type of the avatar, and project the target virtual scene to the user's eyeball, so that the user sees the mixed scene of the target virtual scene superimposed at the second fusion area. .
  • the user may see the travel suggestion virtual scene in the area of the car windshield; the travel suggestion virtual scene includes at least one of a navigation avatar, a phone book avatar, and a rest suggestion avatar, and the user may trigger an operation instruction for the avatar he or she needs.
  • the manner in which the user triggers the operation instruction may be a voice trigger, a gesture trigger, or an eye movement tracking trigger.
  • the smart glasses may recognize the operation instruction input by the user by means of voice recognition, may recognize the operation instruction given through a gesture by recognizing the user's gesture action, and may also identify the avatar that the user wants to select by tracking the line of sight of the user's eyes.
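  • Once the avatar type is identified, the handling described in the following paragraphs can be pictured as a simple dispatch; the handler names and type strings below are illustrative placeholders, not the patent's identifiers.

```python
# Illustrative dispatch for the avatar types named above.
def handle_navigation_avatar():
    return "generate travel-plan target virtual scene"

def handle_phone_book_avatar():
    return "generate phone-book target virtual scene"

def handle_rest_suggestion_avatar():
    return "generate rest-place target virtual scene"

AVATAR_HANDLERS = {
    "navigation": handle_navigation_avatar,
    "phone_book": handle_phone_book_avatar,
    "rest_suggestion": handle_rest_suggestion_avatar,
}

def on_operation_instruction(avatar_type):
    """Route a recognized operation instruction (voice, gesture, or gaze) by avatar type."""
    handler = AVATAR_HANDLERS.get(avatar_type)
    if handler is None:
        raise ValueError(f"unknown avatar type: {avatar_type}")
    return handler()

print(on_operation_instruction("rest_suggestion"))
```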
  • the following steps may also be included:
  • the smart glasses acquire the current location of the smart glasses and detect the destination entered by the user;
  • Smart glasses generate optimal travel plans based on current location and destination
  • the smart glasses generate a target virtual scene that matches the second fusion area according to the travel plan, the target virtual scene including a travel mode avatar and a route map avatar corresponding to the travel mode avatar, and the travel plan including one or more of a walking plan, a public transportation plan, and a ride-hailing plan;
  • the smart glasses project the target virtual scene to the user's eyeball so that the user sees the mixed scene of the target virtual scene superimposed at the second fusion area.
  • the smart glasses may recommend an optimal plan for the user to reach the destination according to the current location of the smart glasses and the destination input by the user; the optimal plan does not include a self-driving option.
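  • A toy sketch of such a recommendation is given below, under the assumption that candidate plans are ranked by estimated duration; the modes, durations, and field names are invented for illustration.

```python
# Sketch: pick the fastest plan among walking, public transport, and ride-hailing,
# excluding self-driving as stated above for a fatigued user.
def recommend_travel_plan(candidate_plans):
    allowed = [p for p in candidate_plans if p["mode"] != "self_driving"]
    return min(allowed, key=lambda p: p["duration_min"])

candidates = [
    {"mode": "self_driving", "duration_min": 20},   # excluded
    {"mode": "walking", "duration_min": 55},
    {"mode": "public_transport", "duration_min": 35},
    {"mode": "ride_hailing", "duration_min": 25},
]
best = recommend_travel_plan(candidates)
# The target virtual scene would show best["mode"] plus its route map avatar.
print(best["mode"])
```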
  • the following steps may also be included:
  • the smart glasses generate a target virtual scene that matches the second fusion area according to the pre-stored phone book, and the target virtual scene includes a phone number avatar and a contact avatar corresponding to the phone number avatar;
  • the smart glasses project the target virtual scene to the user's eyeball so that the user sees the mixed scene of the target virtual scene superimposed at the second fusion area.
  • the smart glasses may display the phone book for the user and provide the user with video calls and the like; the user can contact a familiar person to be picked up and taken where they need to go, which improves the user experience.
  • the following steps may also be included:
  • the smart glasses obtain their current location, and obtain from the server all the rest places within the circular area centered on the current position and drawn with a preset radius, together with the location of each rest place and the avatar of each rest place;
  • the smart glasses acquire, from the server, a map virtual scene that matches the second fusion area including the current location;
  • the smart glasses add the avatar of the rest place corresponding to the position of the rest place to the map virtual scene according to the position of each rest place, and obtain the target virtual scene;
  • the smart glasses project the target virtual scene to the user's eyeball so that the user sees the mixed scene of the target virtual scene superimposed at the second fusion area.
  • when it is recognized that the avatar selected by the user is a rest suggestion avatar, it indicates that the user may need a short break at this time, and the smart glasses may recommend a nearby rest place; the rest place may be a public place or a paid place provided by a merchant, which is not limited in the embodiment of the present invention.
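  • Filtering rest places against the preset radius could be sketched as follows; the planar distance approximation, radius value, and sample data are assumptions for illustration only.

```python
# Sketch: keep only rest places inside the circle of preset radius around the current position.
import math

PRESET_RADIUS_KM = 1.0

def distance_km(a, b):
    """Rough planar distance in km between two (lat, lon) points; adequate for short ranges."""
    dlat = (a[0] - b[0]) * 111.0
    dlon = (a[1] - b[1]) * 111.0 * math.cos(math.radians(a[0]))
    return math.hypot(dlat, dlon)

def rest_places_in_radius(current_position, rest_places, radius_km=PRESET_RADIUS_KM):
    return [place for place in rest_places
            if distance_km(current_position, place["location"]) <= radius_km]

rest_places = [
    {"name": "City Park", "location": (23.131, 113.262), "avatar": "park_icon"},
    {"name": "Tea House", "location": (23.150, 113.300), "avatar": "teahouse_icon"},
]
nearby = rest_places_in_radius((23.129, 113.264), rest_places)
# Each nearby place's avatar is then added to the map virtual scene at its location.
print([p["name"] for p in nearby])
```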
  • the smart glasses can not only warn the user of the danger of fatigue driving, but also provide the user with various effective alternatives when the user is in a fatigue state and not suited to drive, thereby providing convenience for the user, improving the user experience, and reducing the risk of dangerous accidents caused by fatigue driving.
  • the smart glasses-based driving safety warning method may further include the following steps:
  • the smart glasses send a fault detection instruction to the user vehicle, so that the user vehicle detects the brake system and the steering system thereof according to the fault detection instruction.
  • the braking system of the automobile is mainly used to ensure that the vehicle can decelerate as the driver requests during driving and to ensure reliable parking, safeguarding the vehicle and the driver; the steering system of the vehicle controls the direction of travel of the car according to the driver's intent and is critical to the safety of the car.
  • Automotive steering systems and braking systems are two systems that must be considered for automotive safety. Therefore, it is especially important to detect the braking system and steering system of the user's vehicle in real time.
  • the smart glasses receive a test report sent by the user vehicle.
  • the detection report includes a failure factor of a braking system of the user vehicle and a failure factor of a steering system of the user vehicle.
  • the user vehicle can automatically detect the performance of the braking system and the steering system according to the fault detection instruction sent by the smart glasses, compare the detected performance data with pre-stored standard performance data to obtain the failure coefficient of the braking system and the failure coefficient of the steering system, and generate a detection report that is sent to the smart glasses.
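  • One way to picture the failure coefficients is as a relative deviation from the standard performance data, as in the sketch below; the formula, metrics, and numbers are assumptions, since the patent does not define how the coefficient is computed.

```python
# Sketch: derive failure coefficients by comparing detected performance with standard data.
STANDARD_PERFORMANCE = {"brake_stop_distance_m": 20.0, "steering_response_ms": 120.0}

def failure_coefficient(detected, standard):
    """Relative deviation of the detected value from the standard value (0.0 = nominal)."""
    return abs(detected - standard) / standard

detected_performance = {"brake_stop_distance_m": 24.0, "steering_response_ms": 150.0}

detection_report = {
    "brake_failure_coefficient": failure_coefficient(
        detected_performance["brake_stop_distance_m"], STANDARD_PERFORMANCE["brake_stop_distance_m"]),
    "steering_failure_coefficient": failure_coefficient(
        detected_performance["steering_response_ms"], STANDARD_PERFORMANCE["steering_response_ms"]),
}
# The glasses would use these coefficients to fetch matching fault avatars from the server.
print(detection_report)
```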
  • the smart glasses obtain a fault reminding virtual scene from the server according to the detection report.
  • the fault reminding virtual scene includes a brake system fault avatar matching the fault coefficient of the brake system of the user vehicle and a steering system fault virtuality matching the fault coefficient of the steering system of the user vehicle. Image.
  • the smart glasses project the fault reminding virtual scene to the user's eyeball.
  • the smart glasses can detect in real time whether the user has been drinking and whether the user is in a fatigue state.
  • the smart glasses can also obtain a matching fault reminding virtual scene from the server according to the detected failure coefficient of the braking system and the failure coefficient of the steering system of the user vehicle; through the fault reminding virtual scene, the user can learn whether the braking system and the steering system of the user vehicle are faulty, which can effectively avoid potential hazards and reduce accidents.
  • implementing the method described in FIG. 4 can not only monitor the user's drunk driving behavior in real time, but also monitor the user's fatigue driving behavior in real time; at the same time, a detection report can be obtained by controlling the user vehicle's self-test, and the corresponding fault reminding virtual scene can be obtained from the server according to the detection report, improving the user's awareness of prevention, promptly warning the user against dangerous driving behavior, reducing the occurrence of driving accidents, and maintaining the safety of the user's life and property.
  • FIG. 5 is a schematic structural diagram of a smart glasses disclosed in an embodiment of the present invention.
  • the smart glasses may include:
  • the first detecting unit 401 is configured to detect whether the user vehicle communicatively connected to the smart glasses receives the unlocking command.
  • the first obtaining unit 402 is configured to: when the first detecting unit 401 detects that the user vehicle receives the unlocking command, acquire the first real scene captured by the camera on the smart glasses and the concentration of the alcohol component in the air where the smart glasses are located.
  • the first determining unit 403 is configured to determine whether the concentration of the alcohol component acquired by the first acquiring unit 402 is greater than a concentration threshold.
  • the second obtaining unit 404 is configured to acquire, when the first determining unit 403 determines that the concentration of the alcohol component is greater than the concentration threshold, from the server according to the first real scene acquired by the first acquiring unit 402, the first virtual scene of the drunk driving accident evolution that matches the first real scene.
  • the projection unit 405 is configured to project the first virtual scene acquired by the second acquiring unit 404 to the user's eyeball, so that the user sees the mixed scene of the first real scene and the first virtual scene.
  • the first detecting unit 401 may first detect whether the user vehicle communicatively connected to the smart glasses receives the unlocking command; if it detects that the unlocking command is received, indicating that the user is about to use the user vehicle to travel, the first obtaining unit 402 will acquire the first real scene captured by the camera on the smart glasses and the concentration of the alcohol component in the air where the smart glasses are located; the first determining unit 403 can then determine whether the concentration of the alcohol component is greater than the concentration threshold; if it is, the second obtaining unit 404 can obtain, from the server according to the first real scene, the first virtual scene of the drunk driving accident evolution that matches the first real scene; further, the projection unit 405 projects the first virtual scene to the user's eyeball, so that the user sees the mixed scene in which the first real scene is superimposed with the first virtual scene. It can be seen that, with the smart glasses described in FIG. 5, when drunk driving behavior is detected, the user can see the first virtual scene of the drunk driving accident evolution integrated into the real scene, so that the user truly experiences the painful consequences of drunk driving, which is conducive to improving the user's awareness of prevention, stopping drunk driving behavior in a timely manner, reducing the occurrence of drunk driving accidents, and maintaining the safety of the user's life and property.
  • FIG. 6 is a schematic structural diagram of another smart glasses disclosed in an embodiment of the present invention.
  • the smart glasses shown in FIG. 6 are obtained by optimizing the smart glasses shown in FIG. 5.
  • the second obtaining unit 404 shown in FIG. 5 includes:
  • the first sub-unit 4041 is configured to, when the first determining unit 403 determines that the concentration of the alcohol component is greater than the concentration threshold, identify all the vehicles in the first real scene acquired by the first acquiring unit 402 and extract the vehicle features of each vehicle; it is further configured to compare the vehicle features of the user vehicle communicatively connected to the smart glasses with the vehicle features of each vehicle to obtain a comparison result for each vehicle feature; it is further configured to determine the user vehicle from all the vehicles according to the obtained comparison results of all the vehicle features; and it is also configured to determine the area of the user vehicle in the first real scene and determine the first fusion area in the first real scene according to the area of the user vehicle, the vehicle features including one or more of the type of the vehicle, the color of the vehicle, the brand category of the vehicle, and the logo of the vehicle.
  • the first subunit 4041 may identify the user vehicle communicatively connected to the smart glasses in the first real scene.
  • the second sub-unit 4042 is configured to acquire, from the server, the first virtual scene of the drunk driving accident evolution that matches the first fusion area determined by the first sub-unit 4041, as the first virtual scene of the drunk driving accident evolution that matches the first real scene.
  • the smart glasses may further include:
  • a second detecting unit 406 configured to detect, when the first determining unit 403 determines that the concentration of the alcohol component is less than or equal to the concentration threshold, whether the driving seat of the user vehicle is in a pressurized state; and when detecting that the driving seat of the user vehicle is When the state is under pressure, the current brain wave frequency of the user is detected.
  • the matching unit 407 is configured to match the current brain wave frequency detected by the second detecting unit 406 with the pre-stored brain wave frequency table to obtain a current mental fatigue index corresponding to the current brain wave frequency, where the pre-stored brain wave frequency table includes a brain wave frequency Range and mental fatigue index corresponding to the range of brainwave frequencies.
  • the second determining unit 408 is configured to determine whether the current mental fatigue index obtained by the matching unit 407 is greater than a safety index threshold.
  • the second detecting unit 406 is further configured to detect, when the second determining unit 408 determines that the current mental fatigue index is greater than the safety index threshold, whether the user vehicle is in a running state.
  • the projection unit 405 is further configured to project an avatar indicating that the driving is prohibited to the user's eyeball when the second detecting unit 406 detects that the user's vehicle is not in the running state.
  • at this time the first acquiring unit 402 may be triggered to be activated; the first obtaining unit 402 is further configured to acquire a second real scene captured by the camera on the smart glasses.
  • the identification unit 409 is configured to identify whether there is a car windshield in the second real scene acquired by the first acquiring unit 402.
  • the determining unit 410 is configured to determine an area of the windshield of the automobile as the second fusion area when the identifying unit 409 recognizes that the automobile windshield exists in the second real scene.
  • the third obtaining unit 411 is configured to acquire, according to the current mental fatigue index obtained by the matching unit 407, a second virtual scene that matches the second fusion area determined by the determining unit 410, where the second virtual scene includes a scenario in which a traffic accident is caused by driving under the current mental fatigue index.
  • the projection unit 405 is further configured to project the second virtual scene acquired by the third obtaining unit 411 to the user's eyeball, so that the user sees the mixed scene of the second virtual scene superimposed at the second fusion region.
  • it can be seen that the first obtaining unit 402 can detect in time the concentration of the alcohol component in the air around the smart glasses, and, when the first determining unit 403 determines that the user shows drunk-driving behavior, the first virtual scene of drunk-driving accident evolution can be blended into the real scene, letting the user vividly experience the painful consequences of drunk driving; this helps raise the user's awareness of prevention, stop drink driving in time, and reduce drunk-driving accidents;
  • at the same time, the second determining unit 408 can judge whether the user is fatigued based on the current mental state derived from the detected brain wave frequency; when fatigue is detected, the projection unit 405 can project the corresponding early-warning virtual scene to the user's eyeball, promptly alerting the user to the danger of fatigued driving, which helps remind the user of dangerous driving behavior in time and protects the safety of the user's life and property (a minimal sketch of the brain-wave lookup follows this unit list).
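As a non-authoritative illustration of the lookup performed by the matching unit 407 and the comparison performed by the second determining unit 408, the following Python sketch uses the example frequency bands, fatigue index values and safety index threshold of 6 given in an optional embodiment later in the description; the function names and table layout are assumptions for illustration only.

```python
# Minimal sketch of the brain-wave lookup described for matching unit 407 and the
# threshold check of second determining unit 408. Band boundaries, index values
# and the threshold of 6 follow the optional embodiment in the description; the
# names and table layout are illustrative assumptions.

# Pre-stored brain wave frequency table: (low Hz, high Hz) -> mental fatigue index
BRAINWAVE_TABLE = [
    ((1.0, 3.0), 10),   # delta band: extreme fatigue, drowsiness
    ((4.0, 7.0), 9),    # theta band: frustration, low mood
    ((8.0, 13.0), 2),   # alpha band: awake and calm
    ((14.0, 30.0), 5),  # beta band: tense or agitated
]

SAFETY_INDEX_THRESHOLD = 6


def fatigue_index(frequency_hz):
    """Return the mental fatigue index for a measured brain wave frequency, or None."""
    for (low, high), index in BRAINWAVE_TABLE:
        if low <= frequency_hz <= high:
            return index
    return None  # frequency outside the tabulated ranges


def driving_discouraged(frequency_hz):
    """True when the current mental fatigue index exceeds the safety index threshold."""
    index = fatigue_index(frequency_hz)
    return index is not None and index > SAFETY_INDEX_THRESHOLD


if __name__ == "__main__":
    print(fatigue_index(2.5), driving_discouraged(2.5))    # 10 True
    print(fatigue_index(10.0), driving_discouraged(10.0))  # 2 False
```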
  • FIG. 7 is a schematic structural diagram of another smart glasses disclosed in an embodiment of the present invention.
  • the smart glasses shown in FIG. 7 are obtained by optimizing the smart glasses shown in FIG. 6.
  • the projection unit 405 is further configured to, after projecting the second virtual scene to the user's eyeball so that the user sees the mixed scene with the second virtual scene superimposed at the second fusion region, project a travel suggestion virtual scene that matches the second fusion region determined by the determining unit 410 to the user's eyeball, so that the user sees a mixed scene with the travel suggestion virtual scene superimposed at the second fusion region, where the travel suggestion virtual scene includes one or more of a navigation avatar, a phone book avatar, and a rest suggestion avatar.
  • the smart glasses may further include:
  • the third detecting unit 412 is configured to detect an operation instruction triggered by the user for an avatar in the travel suggestion virtual scene, and to identify the type of the avatar corresponding to the operation instruction, where the type of the avatar is any one of a navigation avatar, a phone book avatar, and a rest suggestion avatar.
  • in the embodiment of the present invention, after the projection unit 405 projects the travel suggestion virtual scene to the user's eyeball so that the user sees the mixed scene with the travel suggestion virtual scene superimposed at the second fusion region, the third detecting unit 412 may be triggered to start.
  • the projection unit 405 is further configured to generate, according to the type of the avatar identified by the third detecting unit 412, a target virtual scene that matches the second fusion region determined by the determining unit 410, and to project the target virtual scene to the user's eyeball, so that the user sees a mixed scene with the target virtual scene superimposed at the second fusion region.
  • the smart glasses may further include:
  • the sending unit 413 is configured to send a fault detection instruction to the user vehicle when the second determining unit 408 determines that the current mental fatigue index is less than or equal to the safety index threshold, so that the user vehicle detects its braking system and its steering system according to the fault detection instruction.
  • the receiving unit 414 is configured to receive a detection report sent by the user vehicle, where the detection report includes a failure coefficient of the braking system of the user vehicle and a failure coefficient of the steering system of the user vehicle.
  • the third obtaining unit 411 is further configured to acquire a fault reminding virtual scene from the server according to the detection report received by the receiving unit 414, where the fault reminding virtual scene includes a braking system fault avatar that matches the fault coefficient of the braking system of the user vehicle and a steering system fault avatar that matches the fault coefficient of the steering system of the user vehicle.
  • the projection unit 405 is further configured to project the fault reminding virtual scene acquired by the third acquiring unit 411 to the eyeball of the user.
  • in this way, the third obtaining unit 411 can also obtain the corresponding fault reminding virtual scene from the server based on the detection report produced by the user vehicle's self-check, promptly informing the user of the fault condition of the vehicle's critical systems, reducing potential danger and protecting the safety of the user's life and property (a minimal sketch of this report handling follows).
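As a hedged sketch only, the snippet below shows one way the detection report handled by the sending unit 413, the receiving unit 414 and the third obtaining unit 411 could be turned into a request for matching fault avatars; the two report fields follow the description, while the severity mapping, the 0..1 value range and the request format are assumptions, not part of the disclosed method.

```python
# Hedged sketch of handling the detection report received from the user vehicle.
# The two report fields follow the description; the 0..1 value range, the severity
# labels and the request format are assumptions made only for illustration.
from dataclasses import dataclass


@dataclass
class DetectionReport:
    brake_fault_coefficient: float     # fault coefficient of the braking system
    steering_fault_coefficient: float  # fault coefficient of the steering system


def severity(coefficient):
    """Map an (assumed 0..1) fault coefficient to a coarse severity label."""
    if coefficient < 0.2:
        return "normal"
    if coefficient < 0.6:
        return "check soon"
    return "do not drive"


def fault_reminder_request(report):
    """Build the query the glasses could send to the server to fetch matching avatars."""
    return {
        "brake_avatar": severity(report.brake_fault_coefficient),
        "steering_avatar": severity(report.steering_fault_coefficient),
    }


if __name__ == "__main__":
    report = DetectionReport(brake_fault_coefficient=0.7, steering_fault_coefficient=0.1)
    print(fault_reminder_request(report))
    # {'brake_avatar': 'do not drive', 'steering_avatar': 'normal'}
```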
  • FIG. 8 is a schematic structural diagram of another smart glasses disclosed in an embodiment of the present invention. As shown in FIG. 8, the smart glasses may include:
  • a memory 501 storing executable program code;
  • a processor 502 coupled to the memory 501.
  • the processor 502 calls the executable program code stored in the memory 501 to execute the driving safety warning method based on the smart glasses of any of FIGS. 2 to 4.
  • the embodiment of the invention discloses a computer readable storage medium storing a computer program, where the computer program causes a computer to execute the smart-glasses-based driving safety warning method of any one of FIG. 2 to FIG. 4.
  • the embodiment of the invention discloses a computer program product; when the computer program product runs on a computer, the computer is caused to execute the smart-glasses-based driving safety warning method of any one of FIG. 2 to FIG. 4.
  • an embodiment of the present invention discloses an application publishing platform for publishing a computer program product, where, when the computer program product runs on a computer, the computer is caused to execute the smart-glasses-based driving safety warning method of any one of FIG. 2 to FIG. 4.
ROM: Read-Only Memory
RAM: Random Access Memory
PROM: Programmable Read-Only Memory
EPROM: Erasable Programmable Read-Only Memory
OTPROM: One-Time Programmable Read-Only Memory
EEPROM: Electrically-Erasable Programmable Read-Only Memory
CD-ROM: Compact Disc Read-Only Memory

Abstract

A driving safety warning method based on smart glasses, and smart glasses, comprising: the smart glasses detect whether a user vehicle communicatively connected with the smart glasses has received an unlock command (101); if so, a first real scene captured by a camera on the smart glasses and the concentration of alcohol in the air around the smart glasses are acquired (102); the smart glasses determine whether the alcohol concentration is greater than a concentration threshold (103); if it is, a first virtual scene of drunk-driving accident evolution matching the first real scene is acquired from a server according to the first real scene (104); the smart glasses project the first virtual scene onto the user's eyeball, so that the user sees a mixed scene in which the first virtual scene is superimposed on the first real scene (105). In this way drunk driving can be detected in time, the user can be promptly warned of its dangers through the smart glasses, the user's awareness of prevention is raised, drunk-driving accidents are reduced, and the safety of the user's life and property is protected.

Description

一种基于智能眼镜的驾车安全预警方法及智能眼镜
技术领域
本发明涉及电子设备技术领域,具体涉及一种基于智能眼镜的驾车安全预警方法及智能眼镜。
背景技术
在日常生活中,很多的交通事故都与酒后驾驶有关,酒后驾驶已经被列为车祸致死的主要原因。在中国,每年由于酒后驾车引发的交通事故达数万起,而造成死亡的交通事故中50%以上都与酒后驾车有关,酒后驾车已经成为交通事故的第一大“杀手”。在实际中发现,人们在饮酒后常常存在侥幸心理,驾车安全意识薄弱,不能及时停止酒后驾车行为,等到酒后驾车事故发生后,已经为时已晚,造成了无法挽回的生命财产损失。
发明内容
本发明实施例公开具体涉及一种基于智能眼镜的驾车安全预警方法及智能眼镜,能够通过智能眼镜及时检测酒后驾车行为,并向用户预警酒后驾车的惨痛后果,有利于提高用户的防范意识,减少酒后驾车事故的发生,维护了用户的生命财产安全。
本发明实施例第一方面公开一种基于智能眼镜的驾车安全预警方法,所述方法包括:
所述智能眼镜检测与所述智能眼镜通信连接的用户车辆是否接收到开锁命令,如果接收到,获取所述智能眼镜上摄像头捕捉到的第一真实场景以及所述智能眼镜所处空气中酒精成分的浓度;
所述智能眼镜判断所述酒精成分的浓度是否大于浓度阈值,如果大于所述浓度阈值,根据所述第一真实场景从服务端获取与所述第一真实场景相匹配的酒后驾车事故演化的第一虚拟场景;
所述智能眼镜将所述第一虚拟场景投射至用户眼球,以使用户看到所述第一真实场景与所述第一虚拟场景叠加的混合场景。
作为一种可选的实施方式,在本发明实施例第一方面中,所述方法还包括:
如果所述酒精成分的浓度小于或者等于所述浓度阈值,所述智能眼镜检测所述用户车辆的驾驶座位是否处于受压状态;
如果处于所述受压状态,所述智能眼镜检测用户的当前脑波频率;
所述智能眼镜将所述当前脑波频率与预存脑波频率表进行匹配,得到所述当前脑波频率对应的当前精神疲劳指数,所述预存脑波频率表包括脑波频率范围以及与所述脑波频率范围对应的精神疲劳指数;
所述智能眼镜判断所述当前精神疲劳指数是否大于安全指数阈值,如果大于所述安全指数阈值,检测所述用户车辆是否处于行驶状态;
如果不处于所述行驶状态,所述智能眼镜向用户眼球投射表示禁止驾驶的虚拟形象,并获取所述智能眼镜上摄像头捕捉到的第二真实场景;
所述智能眼镜识别所述第二真实场景中是否存在汽车挡风玻璃,如果存在,确定出所述汽车挡风玻璃的区域,作为第二融合区域;
所述智能眼镜根据所述当前精神疲劳指数,从所述服务端获取与所述第二融合区域相匹配的第二虚拟场景,所述第二虚拟场景包括在所述当前精神疲劳指数下驾车引发交通事故的场景;
所述智能眼镜将所述第二虚拟场景投射至用户眼球，以使用户看到在所述第二融合区域处叠加所述第二虚拟场景的混合场景。
作为一种可选的实施方式,在本发明实施例第一方面中,所述智能眼镜将所述第二虚拟场景投射至用户眼球,以使用户看到在所述第二融合区域处叠加所述第二虚拟场景的混合场景之后,所述方法还包括:
所述智能眼镜将出行建议虚拟场景投射至用户眼球,以使用户看到在所述第二融合区域处叠加所述出行建议虚拟场景的混合场景,所述出行建议虚拟场景包括导航虚拟形象、电话簿虚拟形象以及休息建议虚拟形象中的一种或者多种;
所述智能眼镜检测用户针对所述出行建议虚拟场景中虚拟形象触发的操作指令,并识别所述操作指令对应的所述虚拟形象的类型,所述虚拟形象的类型包括导航虚拟形象、电话簿虚拟形象以及休息建议虚拟形象中的任意一种;
所述智能眼镜根据所述虚拟形象的类型,生成与所述第二融合区域相匹配的目标虚拟场景,并将所述目标虚拟场景投射至用户眼球,以使用户看到在所述第二融合区域处叠加所述目标虚拟场景的混合场景。
作为一种可选的实施方式,在本发明实施例第一方面中,所述方法还包括:
如果所述当前精神疲劳指数小于或者等于所述安全指数阈值,所述智能眼镜向所述用户车辆发送故障检测指令,以使所述用户车辆根据所述故障检测指令检测所述用户车辆的制动系统以及所述用户车辆的转向系统;
所述智能眼镜接收所述用户车辆发送的检测报告,所述检测报告包括所述用户车辆的制动系统的故障系数以及所述用户车辆的转向系统的故障系数;
所述智能眼镜根据所述检测报告,从所述服务端获取故障提醒虚拟场景,所述故障提醒虚拟场景包括与所述用户车辆的制动系统的故障系数相匹配的制动系统故障虚拟形象以及与所述用户车辆的转向系统的故障系数相匹配的转向系统故障虚拟形象;
所述智能眼镜将所述故障提醒虚拟场景投射至用户眼球。
本发明实施例第二方面公开一种智能眼镜,包括:
第一检测单元,用于检测与所述智能眼镜通信连接的用户车辆是否接收到开锁命令;
第一获取单元,用于当所述第一检测单元检测出所述用户车辆接收到所述开锁命令时,获取所述智能眼镜上摄像头捕捉到的第一真实场景以及所述智能眼镜所处空气中酒精成分的浓度;
第一判断单元,用于判断所述第一获取单元获取到的所述酒精成分的浓度是否大于浓度阈值;
第二获取单元,用于当所述第一判断单元判断出所述酒精成分的浓度大于所述浓度阈值时,根据所述第一获取单元获取到的所述第一真实场景从服务端获取与所述第一真实场景相匹配的酒后驾车事故演化的第一虚拟场景;
投射单元,用于将所述第二获取单元获取到的所述第一虚拟场景投射至用户眼球,以使用户看到所述第一真实场景与所述第一虚拟场景叠加的混合场景。
作为一种可选的实施方式,在本发明实施例第二方面中,还包括:
第二检测单元,用于当所述第一判断单元判断出所述酒精成分的浓度小于或者等于所述浓度阈值时,检测所述用户车辆的驾驶座位是否处于受压状态;以及当检测出所述用户车辆的驾驶座位处于所述受压状态时,检测用户的当前脑波频率;
匹配单元,用于将所述第二检测单元检测出的所述当前脑波频率与预存脑波频率表进行匹配,得到所述当前脑波频率对应的当前精神疲劳指数,所述预存脑波频率表包括脑波频率范围以及与所述脑波频率范围对应的精神疲劳指数;
第二判断单元，用于判断所述匹配单元得到的所述当前精神疲劳指数是否大于安全指数阈值；
所述第二检测单元,还用于当所述第二判断单元判断出所述当前精神疲劳指数大于所述安全指数阈值时,检测所述用户车辆是否处于行驶状态;
所述投射单元,还用于当所述第二检测单元检测出所述用户车辆不处于所述行驶状态时,向用户眼球投射表示禁止驾驶的虚拟形象;
所述第一获取单元,还用于获取所述智能眼镜上摄像头捕捉到的第二真实场景;
识别单元,用于识别所述第一获取单元获取到的所述第二真实场景中是否存在汽车挡风玻璃;
确定单元,用于当所述识别单元识别出所述第二真实场景中存在所述汽车挡风玻璃时,确定出所述汽车挡风玻璃的区域,作为第二融合区域;
第三获取单元,还用于根据所述匹配单元得到的所述当前精神疲劳指数,从所述服务端获取与所述第二融合区域相匹配的第二虚拟场景,所述第二虚拟场景包括在所述当前精神疲劳指数下驾车引发交通事故的场景;
所述投射单元,还用于将所述第三获取单元获取到的所述第二虚拟场景投射至用户眼球,以使用户看到在所述第二融合区域处叠加所述第二虚拟场景的混合场景。
作为一种可选的实施方式,在本发明实施例第二方面中,所述投射单元,还用于将所述第二虚拟场景投射至用户眼球,以使用户看到在所述第二融合区域处叠加所述第二虚拟场景的混合场景之后,将与所述第二融合区域相匹配的出行建议虚拟场景投射至用户眼球,以使用户看到在所述第二融合区域处叠加所述出行建议虚拟场景的混合场景,所述出行建议虚拟场景包括导航虚拟形象、电话簿虚拟形象以及休息建议虚拟形象中的一种或者多种;
所述智能眼镜还包括:
第三检测单元,用于检测用户针对所述出行建议虚拟场景中虚拟形象触发的操作指令,并识别所述操作指令对应的所述虚拟形象的类型,所述虚拟形象的类型包括导航虚拟形象、电话簿虚拟形象以及休息建议虚拟形象中的任意一种;
所述投射单元,还用于根据所述第三检测单元识别出的所述虚拟形象的类型,生成与所述第二融合区域相匹配的目标虚拟场景,并将所述目标虚拟场景投射至用户眼球,以使用户看到在所述第二融合区域处叠加所述目标虚拟场景的混合场景。
作为一种可选的实施方式,在本发明实施例第二方面中,还包括:
发送单元,用于当所述第二判断单元判断出所述当前精神疲劳指数小于或者等于所述安全指数阈值时,向所述用户车辆发送故障检测指令,以使所述用户车辆根据所述故障检测指令检测所述用户车辆的制动系统以及所述用户车辆的转向系统;
接收单元,用于接收所述用户车辆发送的检测报告,所述检测报告包括所述用户车辆的制动系统的故障系数以及所述用户车辆的转向系统的故障系数;
所述第三获取单元,还用于根据所述接收单元接收到的所述检测报告,从所述服务端获取故障提醒虚拟场景,所述故障提醒虚拟场景包括与所述用户车辆的制动系统的故障系数相匹配的制动系统故障虚拟形象以及与所述用户车辆的转向系统的故障系数相匹配的转向系统故障虚拟形象;
所述投射单元,还用于将所述第三获取单元获取到的所述故障提醒虚拟场景投射至用户眼球。
本发明实施例第三方面公开一种智能眼镜,包括:
存储有可执行程序代码的存储器;
与所述存储器耦合的处理器;
所述处理器调用所述存储器中存储的所述可执行程序代码,执行本发明实施例第一方面公开的任意一种方法的部分或全部步骤。
本发明实施例第四方面公开一种计算机可读存储介质,所述计算机可读存储介质存储了程序代码,其中,所述程序代码包括用于执行本发明实施例第一方面公开的任意一种方法的部分或全部步骤的指令。
本发明实施例第五方面公开一种计算机程序产品,当所述计算机程序产品在计算机上运行时,使得所述计算机执行本发明实施例第一方面公开的任意一种方法的部分或全部步骤。
本发明实施例第六方面公开一种应用发布平台,所述应用发布平台用于发布所述计算机程序产品,其中,当所述计算机程序产品在计算机上运行时,使得所述计算机执行本发明实施例第一方面公开的任意一种方法的部分或全部步骤。
与现有技术相比,本发明实施例具有以下有益效果:
本发明实施例中,智能眼镜首先检测与该智能眼镜通信连接的用户车辆是否接收到开锁命令,如果接收到该开锁命令,表明此时用户将要使用该用户车辆出行,此时智能眼镜将获取该智能眼镜上摄像头捕捉到的第一真实场景以及该智能眼镜所处空气中酒精成分的浓度,然后智能眼镜判断该酒精成分的浓度是否大于浓度阈值,如果大于,表明此时智能眼镜所处环境中酒精成分的浓度过高,用户处于酒后状态,则智能眼镜根据该第一真实场景从服务端获取与该第一真实场景相匹配的酒后驾车事故演化的第一虚拟场景;进一步地,智能眼镜将该第一虚拟场景投射至用户眼球,以使用户看到第一真实场景与第一虚拟场景叠加的混合场景。可见,实施本发明实施例,能够及时检测酒后驾车行为,并能够通过智能眼镜在真实场景下融入酒后驾车事故演化的第一虚拟场景,让用户真切体验到酒后驾车造成的惨痛后果,有利于提高用户的防范意识,及时停止酒后驾车行为,减少了酒后驾车事故的发生,维护了用户的生命财产安全。
附图说明
为了更清楚地说明本发明实施例中的技术方案,下面将对实施例中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本发明实施例公开的一种智能眼镜的模型示意图;
图2是本发明实施例公开的另一种基于智能眼镜的驾车安全预警方法的流程示意图;
图3是本发明实施例公开的另一种基于智能眼镜的驾车安全预警方法的流程示意图;
图4是本发明实施例公开的另一种基于智能眼镜的驾车安全预警方法的流程示意图;
图5是本发明实施例公开的一种智能眼镜的结构示意图;
图6是本发明实施例公开的另一种智能眼镜的结构示意图;
图7是本发明实施例公开的另一种智能眼镜的结构示意图;
图8是本发明实施例公开的另一种智能眼镜的结构示意图。
具体实施方式
下面将结合本发明实施例中的附图，对本发明实施例中的技术方案进行清楚、完整地描述，显然，所描述的实施例仅是本发明一部分实施例，而不是全部的实施例。基于本发明中的实施例，本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例，都属于本发明保护的范围。
需要说明的是,本发明实施例及附图中的术语“包括”和“具有”以及它们任何变形,意图在于覆盖不排他的包含。例如包含了一系列步骤或单元的过程、方法、系统、产品或设备没有限定于已列出的步骤或单元,而是可选地还包括没有列出的步骤或单元,或可选地还包括对于这些过程、方法、产品或设备固有的其它步骤或单元。
本发明实施例公开一种基于智能眼镜的驾车安全预警方法及智能眼镜,能够通过智能眼镜及时检测酒后驾车行为,并向用户预警酒后驾车的惨痛后果,有利于提高用户的防范意识,减少酒后驾车事故的发生,维护了用户的生命财产安全。以下分别进行详细说明。
请参阅图1，图1是本发明实施例公开的一种智能眼镜的模型示意图。如图1所示，该智能眼镜由传感器A、传感器B、微型投影仪C以及摄像头D组成。
本发明实施例中,传感器A可以检测该智能眼镜所处空气中酒精成分的浓度。传感器A可以将探测到的空气中酒精成分的浓度转换成电信号,随后智能眼镜可以根据该电信号的强弱得到酒精气体在环境中的浓度信息,灵敏度高,方便可靠,可以随时检测空气酒精浓度。
本发明实施例中,传感器B可以检测用户的当前脑波频率。传感器B可以采用ThinkGear AM芯片,该芯片采用干电极传感器,可以检测到极微弱的脑电信号,具有良好的消噪抗干扰能力,能够准确检测出用户的当前脑波频率,减少了误差,提升了测量精度,优化了用户体验。
作为一种可选的实施方式,微型投影仪C可以直接把图像投射到佩戴者的视网膜上;微型投影仪C也可以先将需要投射的虚拟场景投射到半透明棱镜上,再由该半透明棱镜将该虚拟场景反射在人体视网膜上;该微型投影仪C也可以先将虚拟场景投影到微型投影仪C前面装载的微型屏幕上,用户可以在视野的右上角看到微型投影仪所投影的内容。
本发明实施例中,摄像头D位于智能眼镜的外侧,可以获取到人眼所看到的真实场景。其中,人眼所看到的真实场景与摄像头D所捕获的真实场景相同。
作为一种可选的实施方式,智能眼镜上配备的摄像头D可以是一对双目摄像头,其获取到的真实场景与人眼看到的真实场景更相匹配,同时智能眼镜还可以根据获取到的真实场景,确定出投射区域,有利于提升用户体验。
实施例一
请参阅图2,图2是本发明实施例公开的一种基于智能眼镜的驾车安全预警方法的流程示意图。其中,如图2所示,该基于智能眼镜的驾车安全预警方法可以包括以下步骤:
101、智能眼镜检测与该智能眼镜通信连接的用户车辆是否接收到开锁命令,如果接收到,执行步骤102~步骤103;如果未接收到,执行步骤101继续检测与该智能眼镜通信连接的用户车辆是否接收到开锁命令。
本发明实施例中,智能眼镜具有独立的操作系统,在该操作系统上可以运行由用户安装的应用软件等软件服务商提供的程序。同时智能眼镜可以根据识别用户输入的语音指令或检测用户的手势动作指令来完成相应的操作,例如添加日程、地图导航、与好友互动、拍摄照片和视频、与朋友展开视频通话等功能,并可以通过移动通讯网络来实现无线网络接入。
102、智能眼镜获取该智能眼镜上摄像头捕捉到的第一真实场景以及该智能眼镜所处空气中酒精成分的浓度。
103、智能眼镜判断酒精成分的浓度是否大于浓度阈值，如果大于该浓度阈值，执行步骤104~步骤105；如果小于或者等于该浓度阈值，执行步骤102~步骤103。
本发明实施例中,饮酒驾车是指驾驶车辆的驾驶人员血液中的酒精含量大于或者等于20mg/100ml以及小于80mg/100ml;醉酒驾车是指驾驶车辆的驾驶人员血液中的酒精含量大于或者等于80mg/100ml。当人饮酒时,酒精被吸收,但并不会被消化,一部分酒精会挥发出去,经过肺泡就会被人呼出体外。经测定,人在饮酒后呼出气体中酒精成分的浓度与其血液中酒精成分的浓度的比例是1:2100,即人在饮酒后每呼出2100ml气体中含有的酒精,与其1ml血液中含有的酒精含量相等,则智能眼镜可以通过测定其所处空气中酒精成分的浓度来判断出用户是否存在饮酒驾车行为。其中,浓度阈值可以设置为1mg/10500ml等,本发明实施例不作限定。
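The paragraph above fixes the breath-to-blood ratio (1:2100) and the 20 mg/100 ml and 80 mg/100 ml blood-alcohol limits, from which the example breath threshold of 1 mg/10500 ml follows. The Python sketch below only reproduces that arithmetic; the mg-per-litre unit for the sensor reading and the function names are illustrative assumptions, not part of the disclosed method.

```python
# Sketch of the breath-to-blood arithmetic stated above: the alcohol in 2100 ml of
# exhaled air equals the alcohol in 1 ml of blood, and the drink-driving and
# drunk-driving limits are 20 and 80 mg per 100 ml of blood.

BREATH_TO_BLOOD_RATIO = 2100      # 2100 ml breath carries the alcohol of 1 ml blood
DRINK_LIMIT_MG_PER_100ML = 20.0   # drink-driving limit (blood)
DRUNK_LIMIT_MG_PER_100ML = 80.0   # drunk-driving limit (blood)


def breath_to_blood_mg_per_100ml(breath_mg_per_litre):
    """Convert a breath reading (mg per litre of air) to mg per 100 ml of blood."""
    mg_per_ml_breath = breath_mg_per_litre / 1000.0
    mg_per_ml_blood = mg_per_ml_breath * BREATH_TO_BLOOD_RATIO
    return mg_per_ml_blood * 100.0


def classify(breath_mg_per_litre):
    blood = breath_to_blood_mg_per_100ml(breath_mg_per_litre)
    if blood >= DRUNK_LIMIT_MG_PER_100ML:
        return "drunk driving"
    if blood >= DRINK_LIMIT_MG_PER_100ML:
        return "drink driving"
    return "below limit"


if __name__ == "__main__":
    # 1 mg / 10500 ml of breath is roughly 0.095 mg per litre of air, i.e. about
    # the 20 mg/100 ml blood limit quoted in the description.
    print(classify(0.15))  # drink driving (31.5 mg/100 ml blood)
    print(classify(0.40))  # drunk driving (84.0 mg/100 ml blood)
```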
104、智能眼镜根据第一真实场景从服务端获取与该第一真实场景相匹配的酒后驾车事故演化的第一虚拟场景。
本发明实施例中,智能眼镜可以根据第一真实场景的不同,匹配出合适的第一虚拟场景。该第一虚拟场景可以是由于酒后驾车导致的动态事故现场,同时该虚拟场景可以配以相匹配的背景声音,造成惊心动魄的感觉,能够给用户以警示的作用。智能眼镜获取的第一真实场景还可以是酒后肇事司机处于被抢救的场景,同时包括其家属在急救室外面焦急伤心的场面,能够引发用户的家庭责任感,及时反省,减少酒后驾车行为。
本发明实施例中,智能眼镜每次呈现的第一虚拟场景可以不相同,能够避免由于同一场景反复出现造成的警示作用下降的问题。
105、智能眼镜将第一虚拟场景投射至用户眼球,以使用户看到第一真实场景与第一虚拟场景叠加的混合场景。
可见,实施图2所描述的方法,通过实时检测该智能眼镜所处空气中酒精成分的浓度,能够实时监控酒后驾车行为,并能够通过智能眼镜在真实场景下融入酒后驾车事故演化的第一虚拟场景,让用户真切体验到酒后驾车造成的惨痛后果,有利于提高用户的防范意识,及时停止酒后驾车行为,减少了酒后驾车事故的发生,维护了用户的生命财产安全。
实施例二
请参阅图3,图3是本发明实施例公开的另一种基于智能眼镜的驾车安全预警方法的流程示意图。其中,如图3所示,该基于智能眼镜的驾车安全预警方法可以包括以下步骤:
201、智能眼镜检测与该智能眼镜通信连接的用户车辆是否接收到开锁命令,如果接收到,执行步骤202;如果未接收到,执行步骤201继续检测与该智能眼镜通信连接的用户车辆是否接收到开锁命令。
202、智能眼镜获取该智能眼镜上摄像头捕捉到的第一真实场景以及该智能眼镜所处空气中酒精成分的浓度。
本发明实施例中,该智能眼镜可以与用户的用户车辆进行通信连接,用户可以通过向智能眼镜输入开锁命令,控制其用户车辆执行开锁操作;或者用户可以通过与该用户车辆相匹配的钥匙控制该用户车辆执行开锁操作,当用户车辆接收到用户通过与该用户车辆相匹配的钥匙触发的开锁指令时,可以向与其通信连接的智能眼镜发送收到开锁指令的信号,此时智能眼镜在接收到该信号后,可以立即检测与该智能眼镜所处空气中酒精成分的浓度。
203、智能眼镜判断酒精成分的浓度是否大于浓度阈值,如果大于该浓度阈值,执行步骤204~步骤209;如果小于或者等于该浓度阈值,执行步骤210。
204、智能眼镜识别第一真实场景中所有的车辆，并提取每个车辆的车辆特征。
本发明实施例中,该车辆特征包括车辆的类型、车辆的颜色、车辆的品牌类别以及车辆的标识物中的一种或者多种,本发明实施例不作限定。
205、智能眼镜将与该智能眼镜通信连接的用户车辆的车辆特征分别与该每个车辆的车辆特征进行比较,获得每一个车辆特征比较结果。
206、智能眼镜根据获得的所有车辆特征比较结果,从该所有的车辆中确定出上述用户车辆。
本发明实施例中,第一真实场景与用户此时看到的场景相匹配,当第一真实场景中存在多个车辆时,执行上述步骤204~步骤206可以从第一场景中的多个车辆中,识别出用户的用户车辆。
207、智能眼镜在第一真实场景中确定出该用户车辆的区域,并根据用户车辆的区域在第一真实场景中确定出第一融合区域。
208、智能眼镜从服务端获取与该第一融合区域相匹配的酒后驾车事故演化的第一虚拟场景,作为与第一真实场景相匹配的酒后驾车事故演化的第一虚拟场景。
本发明实施例中,执行上述步骤204~步骤208智能眼镜可以根据第一真实场景从服务端获取与该第一真实场景相匹配的酒后驾车事故演化的第一虚拟场景。
209、智能眼镜将第一虚拟场景投射至用户眼球,以使用户看到第一真实场景与第一虚拟场景叠加的混合场景,结束本流程。
本发明实施例中,由于用户此时处于饮酒状态,在向其用户车辆发送开锁指令时,该用户注意力主要集中于其用户车辆,所以智能眼镜将第一虚拟场景投射至用户眼球之后,用户可以立即看到在其用户车辆的位置叠加的第一虚拟场景,能够引起用户注意,有利于提升警示效果。
作为一种可选的实施方式,该基于智能眼镜的驾车安全预警方法还可以包括以下步骤:
210、智能眼镜检测该用户车辆的驾驶座位是否处于受压状态,如果处于受压状态,执行步骤211~步骤212;如果不处于受压状态,结束本流程。
211、智能眼镜检测用户的当前脑波频率,并将该当前脑波频率与预存脑波频率表进行匹配,得到当前脑波频率对应的当前精神疲劳指数。
本发明实施例中,该预存脑波频率表包括脑波频率范围以及与该脑波频率范围对应的精神疲劳指数。
本发明实施例中,脑波也称作脑电波,主要记录大脑活动时的电波变化,是脑神经细胞的电生理活动在大脑皮层或头皮表面的总体反映。脑电波是一些自发的有节律的神经电活动,其频率变动范围在每秒1-30次之间的,可划分为四个波段,即δ(1-3Hz)、θ(4-7Hz)、α(8-13Hz)、β(14-30Hz);其中,当成年人在极度疲劳和昏睡或麻醉状态下,智能眼镜检测到的脑波频率在1~3Hz的频率范围内;当成年人意愿受挫或者抑郁以及患有精神病的状态下,智能眼镜检测到的脑波频率在4~7Hz的频率范围内;当成年人在清醒、安静的正常状态下,智能眼镜检测到的脑波频率在8~13Hz的频率范围内;当成年人在精神紧张和情绪激动或亢奋的状态下,智能眼镜检测到的脑波频率在14~30Hz的频率范围内。
212、智能眼镜判断该当前精神疲劳指数是否大于安全指数阈值,如果大于安全指数阈值,执行步骤213;如果小于或者等于安全指数阈值,结束本流程。
本发明实施例中,当智能眼镜判断出当前精神疲劳指数大于安全阈值时,表明此时用户处于疲劳状态,则不适宜驾车出行;智能眼镜判断出当前精神疲劳指数小于或者等于安全阈值时,表明此时用户不处于疲劳状态,则适宜驾车出行。
作为一种可选的实施方式，当精神疲劳指数最大为10时，如果当前脑波频率在1~3Hz的频率范围内，则该频率范围所对应的精神疲劳指数可以设置为10；如果当前脑波频率在4~7Hz的频率范围内，则该频率范围所对应的精神疲劳指数可以设置为9；如果当前脑波频率在8~13Hz的频率范围内，则该频率范围所对应的精神疲劳指数可以设置为2；如果当前脑波频率在14~30Hz的频率范围内，则该频率范围所对应的精神疲劳指数可以设置为5，则该安全指数阈值可以设置为6。智能眼镜还可以根据不同的精神疲劳指数，输出不同的预警提示。
213、智能眼镜检测该用户车辆是否处于行驶状态,如果不处于行驶状态,执行步骤214~步骤215;如果处于行驶状态,结束本流程。
作为一种可选的实施方式,当智能眼镜判断出该用户车辆处于行驶状态时,还可以包括以下步骤:
智能眼镜输出疲劳驾驶语音预警,以提示用户当前的疲劳状态不适宜驾车;
智能眼镜检测用户输入的停车指令,并根据该停车指令,获取该智能眼镜的第一位置;
智能眼镜根据该第一位置查询与该第一位置之间距离最近的停车点,并生成停车导航虚拟场景;
智能眼镜将停车导航虚拟场景投射至用户眼球。
214、智能眼镜向用户眼球投射表示禁止驾驶的虚拟形象,并获取该智能眼镜上摄像头捕捉到的第二真实场景。
215、智能眼镜识别第二真实场景中是否存在汽车挡风玻璃,如果存在,执行步骤216~步骤218;如果不存在汽车挡风玻璃,结束本流程。
216、智能眼镜确定出汽车挡风玻璃的区域,作为第二融合区域。
217、智能眼镜根据当前精神疲劳指数,从服务端获取与第二融合区域相匹配的第二虚拟场景。
本发明实施例中,该第二虚拟场景包括在当前精神疲劳指数下驾车引发交通事故的场景。智能眼镜可以根据不同当前精神疲劳指数从服务端获取不同的第二虚拟场景,有利于提升警示力度,提升用户体验度。
218、智能眼镜将第二虚拟场景投射至用户眼球,以使用户看到在第二融合区域处叠加第二虚拟场景的混合场景。
可见,实施图3所描述的方法,能够及时检测酒后驾车行为,并能够通过智能眼镜在真实场景下融入酒后驾车事故演化的第一虚拟场景,让用户真切体验到酒后驾车造成的惨痛后果,有利于提高用户的防范意识,及时停止酒后驾车行为,减少了酒后驾车事故的发生;同时,在检测到用户不处于饮酒状态时,还可以通过检测脑波频率来检测用户的当前精神状态,当检测到用户的当前精神状态处于疲劳状态时,能够即时预警用户疲劳驾驶的危害,即时提醒用户的危险驾车行为,避免危险的发生,维护了用户的生命财产安全。
实施例三
请参阅图4,图4是本发明实施例公开的另一种基于智能眼镜的驾车安全预警方法的流程示意图。其中,如图4所示,该基于智能眼镜的驾车安全预警方法可以包括以下步骤:
301、智能眼镜检测与该智能眼镜通信连接的用户车辆是否接收到开锁命令,如果接收到,执行步骤302;如果未接收到,执行步骤301继续检测与该智能眼镜通信连接的用户车辆是否接收到开锁命令。
302、智能眼镜获取该智能眼镜上摄像头捕捉到的第一真实场景以及该智能眼镜所处空气中酒精成分的浓度。
303、智能眼镜判断酒精成分的浓度是否大于浓度阈值,如果大于该浓度阈值,执行步骤304~305;如果小于或者等于该浓度阈值,执行步骤306。
304、智能眼镜根据第一真实场景从服务端获取与该第一真实场景相匹配的酒后驾车事故演化的第一虚拟场景。
作为一种可选的实施方式,智能眼镜根据第一真实场景从服务端获取与该第一真实场景相匹配的酒后驾车事故演化的第一虚拟场景可以包括以下步骤:
智能眼镜识别第一真实场景中所有的车辆,并提取每个车辆的车辆特征,该车辆特征包括车辆的类型、车辆的颜色、车辆的品牌类别以及车辆的标识物中的一种或者多种。
智能眼镜将与该智能眼镜通信连接的用户车辆的车辆特征分别与该每个车辆的车辆特征进行比较,获得每一个车辆特征比较结果;
智能眼镜根据获得的所有车辆特征比较结果,从该所有的车辆中确定出上述用户车辆;
智能眼镜在第一真实场景中确定出该用户车辆的区域,并根据用户车辆的区域在第一真实场景中确定出第一融合区域;
智能眼镜从服务端获取与该第一融合区域相匹配的酒后驾车事故演化的第一虚拟场景,作为与第一真实场景相匹配的酒后驾车事故演化的第一虚拟场景。
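A minimal sketch, assuming simple equality matching, of the vehicle-feature comparison and first-fusion-region selection described in the steps above; the feature fields follow the description, while the bounding-box shape, data structures and scoring rule are illustrative assumptions only.

```python
# Illustrative sketch: compare the stored features of the user vehicle with every
# vehicle recognised in the first real scene and take the best match's region as
# the first fusion region.
from dataclasses import dataclass


@dataclass
class DetectedVehicle:
    vehicle_type: str            # e.g. sedan, SUV
    color: str
    brand: str
    marker: str                  # distinctive identifier on the vehicle
    region: tuple                # (x, y, width, height) within the first real scene


def feature_score(candidate, user_vehicle):
    """Count how many stored user-vehicle features the detected candidate matches."""
    return sum([
        candidate.vehicle_type == user_vehicle["vehicle_type"],
        candidate.color == user_vehicle["color"],
        candidate.brand == user_vehicle["brand"],
        candidate.marker == user_vehicle["marker"],
    ])


def first_fusion_region(detected_vehicles, user_vehicle):
    """Return the region of the best-matching detected vehicle, or None if nothing matches."""
    if not detected_vehicles:
        return None
    best = max(detected_vehicles, key=lambda v: feature_score(v, user_vehicle))
    return best.region if feature_score(best, user_vehicle) > 0 else None
```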
305、智能眼镜将第一虚拟场景投射至用户眼球,以使用户看到第一真实场景与第一虚拟场景叠加的混合场景,结束本流程。
作为一种可选的实施方式,该基于智能眼镜的驾车安全预警方法还可以包括以下步骤:
306、智能眼镜检测该用户车辆的驾驶座位是否处于受压状态,如果处于受压状态,执行步骤307;如果不处于受压状态,执行步骤306继续检测用户车辆的驾驶座位是否处于受压状态。
307、智能眼镜检测用户的当前脑波频率,并将该当前脑波频率与预存脑波频率表进行匹配,得到当前脑波频率对应的当前精神疲劳指数。
本发明实施例中,该预存脑波频率表包括脑波频率范围以及与脑波频率范围对应的精神疲劳指数。
308、智能眼镜判断该当前精神疲劳指数是否大于安全指数阈值,如果大于安全指数阈值,执行步骤309;如果小于或者等于安全指数阈值,执行步骤315~步骤318。
309、智能眼镜检测该用户车辆是否处于行驶状态,如果处于行驶状态,结束本流程;如果不处于行驶状态,执行步骤310~步骤311。
310、智能眼镜向用户眼球投射表示禁止驾驶的虚拟形象,并获取该智能眼镜上摄像头捕捉到的第二真实场景。
311、智能眼镜识别第二真实场景中是否存在汽车挡风玻璃,如果存在,执行步骤312~步骤318;如果不存在,结束本流程。
本发明实施例中，当识别出第二真实场景中不存在汽车挡风玻璃时，智能眼镜可以选择默认区域作为第二融合区域，该默认区域为智能眼镜默认的融合区域，其大小和位置由智能眼镜的工程人员设定，本发明实施例不作限定。
312、智能眼镜确定出汽车挡风玻璃的区域,作为第二融合区域。
313、智能眼镜根据当前精神疲劳指数,从服务端获取与第二融合区域相匹配的第二虚拟场景。
本发明实施例中,该第二虚拟场景包括在该当前精神疲劳指数下驾车引发交通事故的场景。
314、智能眼镜将第二虚拟场景投射至用户眼球,以使用户看到在第二融合区域处叠加第二虚拟场景的混合场景。
作为一种可选的实施方式，在智能眼镜将第二虚拟场景投射至用户眼球，以使用户看到在第二融合区域处叠加第二虚拟场景的混合场景之后，还可以包括以下步骤：
智能眼镜将出行建议虚拟场景投射至用户眼球,以使用户看到在第二融合区域处叠加出行建议虚拟场景的混合场景,该出行建议虚拟场景可以包括导航虚拟形象、电话簿虚拟形象以及休息建议虚拟形象中的一种或者多种;
智能眼镜检测用户针对该出行建议虚拟场景中虚拟形象触发的操作指令,并识别该操作指令对应的虚拟形象的类型,该虚拟形象的类型包括导航虚拟形象、电话簿虚拟形象以及休息建议虚拟形象中的任意一种;
智能眼镜根据该虚拟形象的类型,生成与第二融合区域相匹配的目标虚拟场景,并将目标虚拟场景投射至用户眼球,以使用户看到在第二融合区域处叠加目标虚拟场景的混合场景。
本发明实施例中，用户可以在其汽车挡风玻璃的区域看到出行建议虚拟场景，该出行建议虚拟场景至少包括导航虚拟形象、电话簿虚拟形象以及休息建议虚拟形象，用户可以根据自己的需要针对其中一种虚拟形象触发操作指令。其中，用户触发操作指令的方式可以为语音触发、动作行为触发以及眼动跟踪触发，智能眼镜可以通过语音识别，识别出用户通过语音的方式输入的操作指令，还可以通过识别用户的手势动作，识别出用户通过手势的方式输入的操作指令，还可以通过识别用户眼球的视线关注点，识别出用户想要选择的虚拟形象。
作为一种可选的实施方式,如果识别出虚拟形象的类型是导航虚拟形象时,还可以包括以下步骤:
智能眼镜获取该智能眼镜的当前位置,并检测用户输入的目的地;
智能眼镜根据当前位置和目的地生成最优的出行方案;
智能眼镜根据该出行方案生成与该第二融合区域相匹配的目标虚拟场景,该目标虚拟场景包括出行方式虚拟形象以及与出行方式虚拟形象对应的路线地图虚拟形象,出行方案包括步行方案、乘坐公共交通方案以及乘坐网约车方案中的一种或多种;
智能眼镜将该目标虚拟场景投射至用户眼球,以使用户看到在第二融合区域处叠加目标虚拟场景的混合场景。
本发明实施例中,当识别出用户选择的虚拟形象是导航虚拟形象时,智能眼镜可以根据此时该智能眼镜的当前位置与用户输入的目的地,为用户推荐可以到达该目的地的最优方案,该最优方案不包括自驾方案。
作为另一种可选的实施方式,如果识别出虚拟形象的类型是电话簿虚拟形象时,还可以包括以下步骤:
智能眼镜根据预存电话簿生成与该第二融合区域相匹配的目标虚拟场景，该目标虚拟场景包括电话号码虚拟形象以及该电话号码虚拟形象对应的联系人虚拟形象；
智能眼镜将目标虚拟场景投射至用户眼球,以使用户看到在第二融合区域处叠加目标虚拟场景的混合场景。
本发明实施例中，当识别出用户选择的虚拟形象是电话簿虚拟形象时，表明用户此时可能有急事需要联系别人，智能眼镜可以为用户显示电话簿，并能够为用户提供视频通话等功能，用户可以通过联系熟悉的人过来接送自己到需要的地方，提升了用户体验度。
作为另一种可选的实施方式,如果识别出虚拟形象的类型是休息建议虚拟形象时,还可以包括以下步骤:
智能眼镜获取其当前位置，并从服务端获取周边范围内所有的休息场所、每个休息场所的位置以及每个休息场所的虚拟形象，该周边范围是以当前位置为中心，以预设半径画圆得到的范围；
智能眼镜从服务端获取包括当前位置的与该第二融合区域相匹配的地图虚拟场景;
智能眼镜根据每个休息场所的位置,将与休息场所的位置对应的休息场所的虚拟形象,添加到地图虚拟场景中,得到目标虚拟场景;
智能眼镜将目标虚拟场景投射至用户眼球,以使用户看到在第二融合区域处叠加目标虚拟场景的混合场景。
本发明实施例中，当识别出用户选择的虚拟形象是休息建议虚拟形象时，表明用户此时可能需要短暂的休息，智能眼镜可以为用户推荐附近的可供休息的场所，该可供休息的场所可以是公共场所，也可以是商户提供的付费场所，本发明实施例不作限定。
本发明实施例中,智能眼镜不仅可以为用户预警疲劳驾驶的危害,还能够在用户处于疲劳状态不适宜驾驶车辆时,为用户提供有效可能的多种方案,为用户提供了便利,提升了用户体验度,同时也减少了由于疲劳驾驶导致的危险事故的发生。
作为一种可选的实施方式,该基于智能眼镜的驾车安全预警方法还可以包括以下步骤:
315、智能眼镜向用户车辆发送故障检测指令,以使该用户车辆根据故障检测指令检测其制动系统以及其转向系统。
本发明实施例中,汽车上的制动系统主要用于保证汽车行驶中能按驾驶员要求减速停车、保证车辆可靠停放以及保障汽车和驾驶人的安全;同时汽车上的转向系统可以按照驾驶员的意愿控制汽车的行驶方向,该系统对汽车的行驶安全至关重要。汽车转向系统和制动系统都是汽车安全必须要重视的两个系统,因此实时检测用户车辆的制动系统和转向系统尤为重要。
316、智能眼镜接收该用户车辆发送的检测报告。
本发明实施例中,该检测报告包括该用户车辆的制动系统的故障系数以及该用户车辆的转向系统的故障系数。用户车辆可以根据智能眼镜发送的故障检测指令自动检测其制动系统和转向系统的性能,并能够根据检测出来的性能数据与预存的标准性能数据进行比较,得到制动系统的故障系数以及转向系统的故障系数,并生成检测报告发送给智能眼镜。
317、智能眼镜根据该检测报告,从服务端获取故障提醒虚拟场景。
本发明实施例中,该故障提醒虚拟场景包括与该用户车辆的制动系统的故障系数相匹配的制动系统故障虚拟形象以及与该用户车辆的转向系统的故障系数相匹配的转向系统故障虚拟形象。
318、智能眼镜将故障提醒虚拟场景投射至用户眼球。
本发明实施例中,智能眼镜可以实时检测用户是否饮酒以及是否处于疲劳状态,智能眼镜在检测出此时用户处于非饮酒以及非疲劳的状态时,还可以根据检测出的该用户车辆的制动系统的故障系数以及其转向系统的故障系数,从服务端获取相匹配的故障提醒虚拟场景,用户可以通过故障提醒虚拟场景,了解到其用户车辆制动系统和转向系统是否存在故障,能够有效避免潜在危险,减少事故的发生。
可见,实施图4所描述的方法,不仅能够实时监控用户的酒后驾车行为,还能够实时监控用户的疲劳驾车行为,同时,通过控制用户车辆自检,得到检测报告,并能够根据该检测报告从服务端获取相应的故障提醒虚拟场景,提高了用户的防范意识,及时警示用户的危险驾车行为,减少了驾车事故的发生,维护了用户的生命财产安全。
实施例四
请参阅图5,图5是本发明实施例公开的一种智能眼镜的结构示意图。其中,如图5所示,该智能眼镜可以包括:
第一检测单元401,用于检测与该智能眼镜通信连接的用户车辆是否接收到开锁命令。
第一获取单元402,用于当第一检测单元401检测出用户车辆接收到该开锁命令时,获取智能眼镜上摄像头捕捉到的第一真实场景以及该智能眼镜所处空气中酒精成分的浓度。
第一判断单元403,用于判断第一获取单元402获取到的酒精成分的浓度是否大于浓度阈值。
第二获取单元404,用于当第一判断单元403判断出酒精成分的浓度大于浓度阈值时,根据第一获取单元402获取到的第一真实场景从服务端获取与该第一真实场景相匹配的酒后驾车事故演化的第一虚拟场景。
投射单元405,用于将第二获取单元404获取到的第一虚拟场景投射至用户眼球,以使用户看到第一真实场景与第一虚拟场景叠加的混合场景。
在图5所描述的智能眼镜中，第一检测单元401可以先检测与该智能眼镜通信连接的用户车辆是否接收到开锁命令，如果检测出接收到该开锁命令，表明此时用户将要使用该用户车辆出行，此时第一获取单元402将获取该智能眼镜上摄像头捕捉到的第一真实场景以及该智能眼镜所处空气中酒精成分的浓度，然后第一判断单元403可以判断该酒精成分的浓度是否大于浓度阈值，如果大于，表明此时智能眼镜所处环境中酒精成分的浓度过高，用户存在酒驾行为，则第二获取单元404可以根据该第一真实场景从服务端获取与该第一真实场景相匹配的酒后驾车事故演化的第一虚拟场景；进一步地，投射单元405将该第一虚拟场景投射至用户眼球，以使用户看到第一真实场景与第一虚拟场景叠加的混合场景。可见，实施图5所描述的智能眼镜，当检测出酒后驾车行为时，能够使用户看到在真实场景下融入酒后驾车事故演化的第一虚拟场景，让用户真切体验到酒后驾车造成的惨痛后果，有利于提高用户的防范意识，及时停止酒后驾车行为，减少了酒后驾车事故的发生，维护了用户的生命财产安全。
实施例五
请参阅图6,图6是本发明实施例公开的另一种智能眼镜的结构示意图。其中,图6所示的智能眼镜是由图5所示的智能眼镜进行优化得到的。图5所示的第二获取单元404包括:
第一子单元4041,用于当第一判断单元403判断出该酒精成分的浓度大于浓度阈值时,识别第一获取单元402获取到的第一真实场景中所有的车辆,并提取每个车辆的车辆特征;还用于将与该智能眼镜通信连接的用户车辆的车辆特征分别与每个车辆的车辆特征进行比较,获得每一个车辆特征比较结果;还用于根据获得的所有车辆特征比较结果,从所有的车辆中确定出用户车辆;还用于在第一真实场景中确定出用户车辆的区域,并根据用户车辆的区域在第一真实场景中确定出第一融合区域,该车辆特征包括车辆的类型、车辆的颜色、车辆的品牌类别以及车辆的标识物中的一种或者多种。
本发明实施例中,第一子单元4041可以在第一真实场景中识别出与该智能眼镜通信连接的用户车辆。
第二子单元4042,用于从服务端获取与第一子单元4041确定出的第一融合区域相匹配的酒后驾车事故演化的第一虚拟场景,作为与第一真实场景相匹配的酒后驾车事故演化的第一虚拟场景。
作为一种可选的实施方式,如图6所示,该智能眼镜还可以包括:
第二检测单元406,用于当第一判断单元403判断出酒精成分的浓度小于或者等于该浓度阈值时,检测用户车辆的驾驶座位是否处于受压状态;以及当检测出用户车辆的驾驶座位处于受压状态时,检测用户的当前脑波频率。
匹配单元407,用于将第二检测单元406检测出的当前脑波频率与预存脑波频率表进行匹配,得到当前脑波频率对应的当前精神疲劳指数,该预存脑波频率表包括脑波频率范围以及与脑波频率范围对应的精神疲劳指数。
第二判断单元408,用于判断匹配单元407得到的当前精神疲劳指数是否大于安全指数阈值。
第二检测单元406,还用于当第二判断单元408判断出当前精神疲劳指数大于安全指数阈值时,检测用户车辆是否处于行驶状态。
投射单元405,还用于当第二检测单元406检测出用户车辆不处于行驶状态时,向用户眼球投射表示禁止驾驶的虚拟形象。
本发明实施例中,投射单元405在向用户眼球投射表示禁止驾驶的虚拟形象之后,还可以触发启动第一获取单元402。
第一获取单元402,还用于获取智能眼镜上摄像头捕捉到的第二真实场景。
识别单元409,用于识别第一获取单元402获取到的第二真实场景中是否存在汽车挡风玻璃。
确定单元410,用于当识别单元409识别出第二真实场景中存在汽车挡风玻璃时,确定出汽车挡风玻璃的区域,作为第二融合区域。
第三获取单元411,还用于根据匹配单元407得到的当前精神疲劳指数,从服务端获取与确定单元410确定出的第二融合区域相匹配的第二虚拟场景,该第二虚拟场景包括在当前精神疲劳指数下驾车引发交通事故的场景。
投射单元405,还用于将第三获取单元411获取到的第二虚拟场景投射至用户眼球,以使用户看到在第二融合区域处叠加第二虚拟场景的混合场景。
可见,实施图3所描述的方法,第一获取单元402可以及时检测该智能眼镜所处空气中酒精成分的浓度,并能够在第一判断单元403判断出用户存在酒驾行为时,通过在真实场景下融入酒后驾车事故演化的第一虚拟场景,让用户真切体验到酒后驾车造成的惨痛后果,有利于提高用户的防范意识,及时停止酒后驾车行为,减少了酒后驾车事故的发生;同时,第二判断单元408还可以通过检测用户脑波频率来检测用户的当前精神状态,来判断用户是否处于疲劳状态,当检测到用户的当前精神状态处于疲劳时,投射单元405能够向用户眼球投射相应的预警虚拟场景,即时预警用户疲劳驾驶的危害,有利于即时提醒用户的危险驾车行为,维护了用户的生命财产安全。
实施例六
请参阅图7,图7是本发明实施例公开的另一种智能眼镜的结构示意图。其中,图7所示的智能眼镜是由图6所示的智能眼镜进行优化得到的。如图7所示:
投射单元405,还用于在将第二虚拟场景投射至用户眼球,以使用户看到在第二融合区域处叠加第二虚拟场景的混合场景之后,将与确定单元410确定出的第二融合区域相匹配的出行建议虚拟场景投射至用户眼球,以使用户看到在第二融合区域处叠加出行建议虚拟场景的混合场景,该出行建议虚拟场景包括导航虚拟形象、电话簿虚拟形象以及休息建议虚拟形象中的一种或者多种。
如图7所示,该智能眼镜还可以包括:
第三检测单元412,用于检测用户针对出行建议虚拟场景中虚拟形象触发的操作指令,并识别该操作指令对应的虚拟形象的类型,该虚拟形象的类型包括导航虚拟形象、电话簿虚拟形象以及休息建议虚拟形象中的任意一种。
本发明实施例中，投射单元405在将出行建议虚拟场景投射至用户眼球，以使用户看到在第二融合区域处叠加出行建议虚拟场景的混合场景之后，还可以触发启动第三检测单元412。
投射单元405,还用于根据第三检测单元412识别出的虚拟形象的类型,生成与确定单元410确定出的第二融合区域相匹配的目标虚拟场景,并将该目标虚拟场景投射至用户眼球,以使用户看到在第二融合区域处叠加目标虚拟场景的混合场景。
作为一种可选的实施方式,如图7所示,该智能眼镜还可以包括:
发送单元413,用于当第二判断单元408判断出当前精神疲劳指数小于或者等于安全指数阈值时,向用户车辆发送故障检测指令,以使用户车辆根据该故障检测指令检测其制动系统以及其转向系统。
接收单元414,用于接收该用户车辆发送的检测报告,该检测报告包括用户车辆的制动系统的故障系数以及用户车辆的转向系统的故障系数。
第三获取单元411,还用于根据接收单元414接收到的检测报告,从服务端获取故障提醒虚拟场景,该故障提醒虚拟场景包括与用户车辆的制动系统的故障系数相匹配的制动系统故障虚拟形象以及与该用户车辆的转向系统的故障系数相匹配的转向系统故障虚拟形象。
投射单元405,还用于将第三获取单元411获取到的该故障提醒虚拟场景投射至用户眼球。
可见,实施图4所描述的方法,不仅能够实时监控用户的酒后驾车行为,以及实时监控用户的疲劳驾车行为,提高了用户的防范意识,及时警示用户的危险驾车行为,减少了驾车事故的发生,同时,第三获取单元411还能够通过用户车辆自检得到检测报告从服务端获取相应的故障提醒虚拟场景,及时为用户提示用户车辆重要系统的故障状况,减少了潜在危险,维护了用户的生命财产安全。
请参阅图8,图8是本发明实施例公开的另一种智能眼镜的结构示意图。如图8所示,该智能眼镜可以包括:
存储有可执行程序代码的存储器501。
与存储器501耦合的处理器502。
其中,处理器502调用存储器501中存储的可执行程序代码,执行图2~图4任意一种基于智能眼镜的驾车安全预警方法。
本发明实施例公开一种计算机可读存储介质,其存储计算机程序,其中,该计算机程序使得计算机执行图2~图4任意一种基于智能眼镜的驾车安全预警方法。
本发明实施例公开一种计算机程序产品,当该计算机程序产品在计算机上运行时,使得该计算机执行图2~图4任意一种基于智能眼镜的驾车安全预警方法。
本发明实施例公开一种应用发布平台,该应用发布平台用于发布计算机程序产品,其中,当该计算机程序产品在计算机上运行时,使得该计算机执行图2~图4任意一种基于智能眼镜的驾车安全预警方法。
本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令相关的硬件来完成,该程序可以存储于一计算机可读存储介质中,存储介质包括只读存储器(Read-Only Memory,ROM)、随机存储器(Random Access Memory,RAM)、可编程只读存储器(Programmable Read-only Memory,PROM)、可擦除可编程只读存储器(Erasable Programmable Read Only Memory,EPROM)、一次可编程只读存储器(One-time Programmable Read-Only Memory,OTPROM)、电子抹除式可复写只读存储器(Electrically-Erasable Programmable Read-Only Memory,EEPROM)、只读光盘(Compact Disc Read-Only Memory,CD-ROM)或其他光盘存储器、磁盘存储器、磁带存储器、或者能够 用于携带或存储数据的计算机可读的任何其他介质。
以上对本发明实施例公开的一种基于智能眼镜的驾车安全预警方法及智能眼镜进行了详细介绍,本文中应用了具体个例对本发明的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本发明的方法及其核心思想;同时,对于本领域的一般技术人员,依据本发明的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本发明的限制。

Claims (10)

  1. 一种基于智能眼镜的驾车安全预警方法,其特征在于,所述方法包括:
    所述智能眼镜检测与所述智能眼镜通信连接的用户车辆是否接收到开锁命令,如果接收到,获取所述智能眼镜上摄像头捕捉到的第一真实场景以及所述智能眼镜所处空气中酒精成分的浓度;
    所述智能眼镜判断所述酒精成分的浓度是否大于浓度阈值,如果大于所述浓度阈值,根据所述第一真实场景从服务端获取与所述第一真实场景相匹配的酒后驾车事故演化的第一虚拟场景;
    所述智能眼镜将所述第一虚拟场景投射至用户眼球,以使用户看到所述第一真实场景与所述第一虚拟场景叠加的混合场景。
  2. 根据权利要求1所述的方法,其特征在于,所述智能眼镜根据所述第一真实场景从服务端获取与所述第一真实场景相匹配的酒后驾车事故演化的第一虚拟场景,包括:
    所述智能眼镜识别所述第一真实场景中所有的车辆,并提取每个车辆的车辆特征,所述车辆特征包括车辆的类型、车辆的颜色、车辆的品牌类别以及车辆的标识物中的一种或者多种;
    所述智能眼镜将与所述智能眼镜通信连接的用户车辆的车辆特征分别与所述每个车辆的车辆特征进行比较,获得每一个车辆特征比较结果;
    所述智能眼镜根据获得的所有车辆特征比较结果,从所述所有的车辆中确定出所述用户车辆;
    所述智能眼镜在所述第一真实场景中确定出所述用户车辆的区域,并根据所述用户车辆的区域在所述第一真实场景中确定出第一融合区域;
    所述智能眼镜从服务端获取与所述第一融合区域相匹配的酒后驾车事故演化的第一虚拟场景,作为与所述第一真实场景相匹配的酒后驾车事故演化的第一虚拟场景。
  3. 根据权利要求1或2所述的方法,其特征在于,所述方法还包括:
    如果所述酒精成分的浓度小于或者等于所述浓度阈值,所述智能眼镜检测所述用户车辆的驾驶座位是否处于受压状态;
    如果处于所述受压状态,所述智能眼镜检测用户的当前脑波频率;
    所述智能眼镜将所述当前脑波频率与预存脑波频率表进行匹配,得到所述当前脑波频率对应的当前精神疲劳指数,所述预存脑波频率表包括脑波频率范围以及与所述脑波频率范围对应的精神疲劳指数;
    所述智能眼镜判断所述当前精神疲劳指数是否大于安全指数阈值,如果大于所述安全指数阈值,检测所述用户车辆是否处于行驶状态;
    如果不处于所述行驶状态,所述智能眼镜向用户眼球投射表示禁止驾驶的虚拟形象,并获取所述智能眼镜上摄像头捕捉到的第二真实场景;
    所述智能眼镜识别所述第二真实场景中是否存在汽车挡风玻璃,如果存在,确定出所述汽车挡风玻璃的区域,作为第二融合区域;
    所述智能眼镜根据所述当前精神疲劳指数，从所述服务端获取与所述第二融合区域相匹配的第二虚拟场景，所述第二虚拟场景包括在所述当前精神疲劳指数下驾车引发交通事故的场景；
    所述智能眼镜将所述第二虚拟场景投射至用户眼球,以使用户看到在所述第二融合区域处叠加所述第二虚拟场景的混合场景。
  4. 根据权利要求3所述的方法,其特征在于,所述智能眼镜将所述第二虚拟场景投射至用户眼球,以使用户看到在所述第二融合区域处叠加所述第二虚拟场景的混合场景之后,所述方法还包括:
    所述智能眼镜将出行建议虚拟场景投射至用户眼球,以使用户看到在所述第二融合区域处叠加所述出行建议虚拟场景的混合场景,所述出行建议虚拟场景包括导航虚拟形象、电话簿虚拟形象以及休息建议虚拟形象中的一种或者多种;
    所述智能眼镜检测用户针对所述出行建议虚拟场景中虚拟形象触发的操作指令,并识别所述操作指令对应的所述虚拟形象的类型,所述虚拟形象的类型包括导航虚拟形象、电话簿虚拟形象以及休息建议虚拟形象中的任意一种;
    所述智能眼镜根据所述虚拟形象的类型,生成与所述第二融合区域相匹配的目标虚拟场景,并将所述目标虚拟场景投射至用户眼球,以使用户看到在所述第二融合区域处叠加所述目标虚拟场景的混合场景。
  5. 根据权利要求3或4所述的方法,其特征在于,所述方法还包括:
    如果所述当前精神疲劳指数小于或者等于所述安全指数阈值,所述智能眼镜向所述用户车辆发送故障检测指令,以使所述用户车辆根据所述故障检测指令检测所述用户车辆的制动系统以及所述用户车辆的转向系统;
    所述智能眼镜接收所述用户车辆发送的检测报告,所述检测报告包括所述用户车辆的制动系统的故障系数以及所述用户车辆的转向系统的故障系数;
    所述智能眼镜根据所述检测报告,从所述服务端获取故障提醒虚拟场景,所述故障提醒虚拟场景包括与所述用户车辆的制动系统的故障系数相匹配的制动系统故障虚拟形象以及与所述用户车辆的转向系统的故障系数相匹配的转向系统故障虚拟形象;
    所述智能眼镜将所述故障提醒虚拟场景投射至用户眼球。
  6. 一种智能眼镜,其特征在于,包括:
    第一检测单元,用于检测与所述智能眼镜通信连接的用户车辆是否接收到开锁命令;
    第一获取单元,用于当所述第一检测单元检测出所述用户车辆接收到所述开锁命令时,获取所述智能眼镜上摄像头捕捉到的第一真实场景以及所述智能眼镜所处空气中酒精成分的浓度;
    第一判断单元,用于判断所述第一获取单元获取到的所述酒精成分的浓度是否大于浓度阈值;
    第二获取单元,用于当所述第一判断单元判断出所述酒精成分的浓度大于所述浓度阈值时,根据所述第一获取单元获取到的所述第一真实场景从服务端获取与所述第一真实场景相匹配的酒后驾车事故演化的第一虚拟场景;
    投射单元,用于将所述第二获取单元获取到的所述第一虚拟场景投射至用户眼球,以使用户看到所述第一真实场景与所述第一虚拟场景叠加的混合场景。
  7. 根据权利要求6所述的智能眼镜,其特征在于,所述第二获取单元包括:
    第一子单元,用于当所述第一判断单元判断出所述酒精成分的浓度大于所述 浓度阈值时,识别所述第一获取单元获取到的所述第一真实场景中所有的车辆,并提取每个车辆的车辆特征;还用于将与所述智能眼镜通信连接的用户车辆的车辆特征分别与所述每个车辆的车辆特征进行比较,获得每一个车辆特征比较结果;还用于根据获得的所有车辆特征比较结果,从所述所有的车辆中确定出所述用户车辆;还用于在所述第一真实场景中确定出所述用户车辆的区域,并根据所述用户车辆的区域在所述第一真实场景中确定出第一融合区域,所述车辆特征包括车辆的类型、车辆的颜色、车辆的品牌类别以及车辆的标识物中的一种或者多种;
    第二子单元,用于从服务端获取与所述第一子单元确定出的所述第一融合区域相匹配的酒后驾车事故演化的第一虚拟场景,作为与所述第一真实场景相匹配的酒后驾车事故演化的第一虚拟场景。
  8. 根据权利要求6或7所述的智能眼镜,其特征在于,所述智能眼镜还包括:
    第二检测单元,用于当所述第一判断单元判断出所述酒精成分的浓度小于或者等于所述浓度阈值时,检测所述用户车辆的驾驶座位是否处于受压状态;以及当检测出所述用户车辆的驾驶座位处于所述受压状态时,检测用户的当前脑波频率;
    匹配单元,用于将所述第二检测单元检测出的所述当前脑波频率与预存脑波频率表进行匹配,得到所述当前脑波频率对应的当前精神疲劳指数,所述预存脑波频率表包括脑波频率范围以及与所述脑波频率范围对应的精神疲劳指数;
    第二判断单元,用于判断所述匹配单元得到的所述当前精神疲劳指数是否大于安全指数阈值;
    所述第二检测单元,还用于当所述第二判断单元判断出所述当前精神疲劳指数大于所述安全指数阈值时,检测所述用户车辆是否处于行驶状态;
    所述投射单元,还用于当所述第二检测单元检测出所述用户车辆不处于所述行驶状态时,向用户眼球投射表示禁止驾驶的虚拟形象;
    所述第一获取单元,还用于获取所述智能眼镜上摄像头捕捉到的第二真实场景;
    识别单元,用于识别所述第一获取单元获取到的所述第二真实场景中是否存在汽车挡风玻璃;
    确定单元,用于当所述识别单元识别出所述第二真实场景中存在所述汽车挡风玻璃时,确定出所述汽车挡风玻璃的区域,作为第二融合区域;
    第三获取单元,用于根据所述匹配单元得到的所述当前精神疲劳指数,从所述服务端获取与所述第二融合区域相匹配的第二虚拟场景,所述第二虚拟场景包括在所述当前精神疲劳指数下驾车引发交通事故的场景;
    所述投射单元,还用于将所述第三获取单元获取到的所述第二虚拟场景投射至用户眼球,以使用户看到在所述第二融合区域处叠加所述第二虚拟场景的混合场景。
  9. 根据权利要求8所述的智能眼镜,其特征在于,所述投射单元,还用于将所述第二虚拟场景投射至用户眼球,以使用户看到在所述第二融合区域处叠加所述第二虚拟场景的混合场景之后,将与所述第二融合区域相匹配的出行建议虚 拟场景投射至用户眼球,以使用户看到在所述第二融合区域处叠加所述出行建议虚拟场景的混合场景,所述出行建议虚拟场景包括导航虚拟形象、电话簿虚拟形象以及休息建议虚拟形象中的一种或者多种;
    所述智能眼镜还包括:
    第三检测单元,用于检测用户针对所述出行建议虚拟场景中虚拟形象触发的操作指令,并识别所述操作指令对应的所述虚拟形象的类型,所述虚拟形象的类型包括导航虚拟形象、电话簿虚拟形象以及休息建议虚拟形象中的任意一种;
    所述投射单元,还用于根据所述第三检测单元识别出的所述虚拟形象的类型,生成与所述第二融合区域相匹配的目标虚拟场景,并将所述目标虚拟场景投射至用户眼球,以使用户看到在所述第二融合区域处叠加所述目标虚拟场景的混合场景。
  10. 根据权利要求8或9所述的智能眼镜,其特征在于,还包括:
    发送单元,用于当所述第二判断单元判断出所述当前精神疲劳指数小于或者等于所述安全指数阈值时,向所述用户车辆发送故障检测指令,以使所述用户车辆根据所述故障检测指令检测所述用户车辆的制动系统以及所述用户车辆的转向系统;
    接收单元,用于接收所述用户车辆发送的检测报告,所述检测报告包括所述用户车辆的制动系统的故障系数以及所述用户车辆的转向系统的故障系数;
    所述第三获取单元,还用于根据所述接收单元接收到的所述检测报告,从所述服务端获取故障提醒虚拟场景,所述故障提醒虚拟场景包括与所述用户车辆的制动系统的故障系数相匹配的制动系统故障虚拟形象以及与所述用户车辆的转向系统的故障系数相匹配的转向系统故障虚拟形象;
    所述投射单元,还用于将所述第三获取单元获取到的所述故障提醒虚拟场景投射至用户眼球。
PCT/CN2017/117679 2017-12-11 2017-12-21 一种基于智能眼镜的驾车安全预警方法及智能眼镜 WO2019114014A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711308650.2 2017-12-11
CN201711308650.2A CN107933306B (zh) 2017-12-11 2017-12-11 一种基于智能眼镜的驾车安全预警方法及智能眼镜

Publications (1)

Publication Number Publication Date
WO2019114014A1 true WO2019114014A1 (zh) 2019-06-20

Family

ID=61946475

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/117679 WO2019114014A1 (zh) 2017-12-11 2017-12-21 一种基于智能眼镜的驾车安全预警方法及智能眼镜

Country Status (2)

Country Link
CN (1) CN107933306B (zh)
WO (1) WO2019114014A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109087485B (zh) * 2018-08-30 2021-06-08 Oppo广东移动通信有限公司 驾驶提醒方法、装置、智能眼镜及存储介质
CN113989466B (zh) * 2021-10-28 2022-09-20 江苏濠汉信息技术有限公司 一种基于事态认知的超视距辅助驾驶系统

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102097003A (zh) * 2010-12-31 2011-06-15 北京星河易达科技有限公司 基于人况识别的智能交通安全系统
CN102658776A (zh) * 2012-05-17 2012-09-12 河南工业大学 一种用于电动车驾驶员酒精检测及电动车控制的装置
CN202491687U (zh) * 2012-02-14 2012-10-17 涂强 酒后驾车、开疲劳车电子监控仪
WO2013132521A1 (en) * 2012-03-06 2013-09-12 Azzarini Antonio Remote detector of the alcoholic, smoke or other chemical or natural product level, negatively impairing the psychophysical state of a person driving a vehicle
CN205562966U (zh) * 2016-04-28 2016-09-07 郑州道喆科技有限公司 一款基于gps的智能车辆平视仪
CN106257543A (zh) * 2016-09-23 2016-12-28 珠海市杰理科技股份有限公司 基于虚拟现实视角的行车记录系统
US20170196451A1 (en) * 2016-01-11 2017-07-13 Heptagon Micro Optics Pte. Ltd. Compact remote eye tracking system including depth sensing capacity

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102163362B (zh) * 2010-02-22 2013-03-20 谢国华 一种防酒后驾驶和安全健康行车方法
CN104627078B (zh) * 2015-02-04 2017-03-08 上海咔酷咔新能源科技有限公司 基于柔性透明oled的汽车驾驶虚拟系统及其控制方法

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102097003A (zh) * 2010-12-31 2011-06-15 北京星河易达科技有限公司 基于人况识别的智能交通安全系统
CN202491687U (zh) * 2012-02-14 2012-10-17 涂强 酒后驾车、开疲劳车电子监控仪
WO2013132521A1 (en) * 2012-03-06 2013-09-12 Azzarini Antonio Remote detector of the alcoholic, smoke or other chemical or natural product level, negatively impairing the psychophysical state of a person driving a vehicle
CN102658776A (zh) * 2012-05-17 2012-09-12 河南工业大学 一种用于电动车驾驶员酒精检测及电动车控制的装置
US20170196451A1 (en) * 2016-01-11 2017-07-13 Heptagon Micro Optics Pte. Ltd. Compact remote eye tracking system including depth sensing capacity
CN205562966U (zh) * 2016-04-28 2016-09-07 郑州道喆科技有限公司 一款基于gps的智能车辆平视仪
CN106257543A (zh) * 2016-09-23 2016-12-28 珠海市杰理科技股份有限公司 基于虚拟现实视角的行车记录系统

Also Published As

Publication number Publication date
CN107933306B (zh) 2019-08-09
CN107933306A (zh) 2018-04-20

Similar Documents

Publication Publication Date Title
JP7288911B2 (ja) 情報処理装置、移動装置、および方法、並びにプログラム
CN112041910B (zh) 信息处理装置、移动设备、方法和程序
JP6656079B2 (ja) 情報提示装置の制御方法、及び、情報提示装置
CN106562793B (zh) 信息提示装置的控制方法、以及信息提示装置
EP3655834B1 (en) Vehicle control device and vehicle control method
US10327705B2 (en) Method and device to monitor at least one vehicle passenger and method to control at least one assistance device
US11194405B2 (en) Method for controlling information display apparatus, and information display apparatus
WO2020078462A1 (zh) 乘客状态分析方法和装置、车辆、电子设备、存储介质
WO2019017216A1 (ja) 車両制御装置及び車両制御方法
KR20210088565A (ko) 정보 처리 장치, 이동 장치 및 방법, 그리고 프로그램
WO2015122158A1 (ja) 運転支援装置
US20120112879A1 (en) Apparatus and method for improved vehicle safety
WO2021145131A1 (ja) 情報処理装置、情報処理システム、情報処理方法及び情報処理プログラム
JP7357006B2 (ja) 情報処理装置、移動装置、および方法、並びにプログラム
US20200247422A1 (en) Inattentive driving suppression system
WO2019114014A1 (zh) 一种基于智能眼镜的驾车安全预警方法及智能眼镜
CN102785574A (zh) 一种疲劳或醉酒驾驶检测和控制方法及相应系统
US20220153302A1 (en) Predictive impairment monitor system and method
JP6971490B2 (ja) 自動車及び自動車用プログラム
CN109087480A (zh) 车载安全事件追溯的方法及系统
US20210064029A1 (en) Autonomous safe stop feature for a vehicle
CN105196867A (zh) 基于体感技术的酒精含量监测的汽车控制电路
WO2019114018A1 (zh) 一种基于智能眼镜的酒驾预警方法及智能眼镜
Parandhaman An Automated Efficient and Robust Scheme in Payment Protocol Using the Internet of Things
CN112437246A (zh) 一种基于智能座舱的视频会议方法和智能座舱

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17934600

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17934600

Country of ref document: EP

Kind code of ref document: A1