WO2019114013A1 - A scene display method for a self-driving vehicle, and smart glasses - Google Patents

A scene display method for a self-driving vehicle, and smart glasses

Info

Publication number
WO2019114013A1
Authority
WO
WIPO (PCT)
Prior art keywords
tunnel
scene
smart glasses
self-driving vehicle
Prior art date
Application number
PCT/CN2017/117678
Other languages
English (en)
French (fr)
Inventor
蔡任轩
Original Assignee
广州德科投资咨询有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州德科投资咨询有限公司
Publication of WO2019114013A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality

Definitions

  • the present invention relates to the field of smart glasses, and in particular to a scene display method for a self-driving vehicle, and to smart glasses.
  • mature autopilot technology allows occupants to do as they please in an autonomous vehicle.
  • however, the occupant may feel sleepy and weak in the dim, sealed tunnel environment, so that the occupant cannot take emergency measures in time when the self-driving vehicle encounters an emergency, thereby increasing the probability of a traffic accident.
  • the embodiment of the invention discloses a scene display method for a self-driving vehicle and smart glasses, which can display other scenes for an occupant of the self-driving vehicle when the vehicle enters a tunnel, so that the occupant remains awake; thus, when the self-driving vehicle encounters an emergency, the occupant can take emergency measures in time, reducing the probability of a traffic accident.
  • a first aspect of the embodiments of the present invention discloses a scene display method for an autonomous driving vehicle, including:
  • the smart glasses detect whether the current environment in which the self-driving vehicle is located is a tunnel environment
  • the smart glasses acquire a tunnel real scene corresponding to the tunnel environment
  • the smart glasses mix the tunnel real scene with the tunnel virtual scene to obtain a tunnel hybrid scene.
  • the smart glasses output the tunnel mixing scene to a wearer of the smart glasses.
  • the smart glasses acquire a tunnel real scene corresponding to the tunnel environment, including:
  • the smart glasses are connected to a cloud database of the self-driving vehicle;
  • the cloud database is a database used by the self-driving vehicle in real time during automatic driving;
  • the smart glasses acquire a tunnel real scene corresponding to the tunnel environment in the cloud database;
  • the tunnel real scene is a scene collected by the scene collecting system of the self-driving vehicle and stored in the cloud database.
  • the tunnel real scene is composed of a tunnel wall real scene and a tunnel pavement real scene
  • the smart glasses mixing the tunnel real scene with the tunnel virtual scene to obtain a tunnel hybrid scene includes:
  • the smart glasses acquire location information of the tunnel environment
  • the smart glasses acquire an administrative area to which the location information belongs according to the location information;
  • the smart glasses acquire a first virtual scene corresponding to the first attraction in the administrative area as a first tunnel virtual scene;
  • the smart glasses cover the tunnel wall real scene with the first tunnel virtual scene to obtain a tunnel hybrid scene.
  • after the smart glasses cover the tunnel wall real scene with the first tunnel virtual scene to obtain a tunnel hybrid scene, and before the smart glasses output the tunnel hybrid scene to the wearer of the smart glasses, the method further includes:
  • the smart glasses determine whether a scene change instruction is received
  • the smart glasses determine whether the self-driving vehicle is driving away from the tunnel environment, and if not, acquiring a second virtual scene corresponding to the second attraction in the administrative area as the second tunnel virtual scene;
  • the smart glasses cover the first tunnel virtual scene by the second tunnel virtual scene to obtain a tunnel hybrid scene.
  • the smart glasses output the tunnel mixing scenario to a wearer of the smart glasses, including:
  • the smart glasses determine the current daylighting degree of the wearer's eyes according to the wearer's pupil size
  • the smart glasses perform light adjustment on the tunnel mixed scene according to the daylighting degree to obtain an indiscriminate tunnel mixed scene
  • the smart glasses project the indiscriminate tunnel mixing scene into the wearer's eyes through the projection device.
  • a second aspect of the embodiments of the present invention discloses a smart glasses, where the smart glasses include:
  • a detecting unit configured to detect whether a current environment in which the self-driving vehicle is located is a tunnel environment
  • An acquiring unit configured to acquire a tunnel real scene corresponding to the tunnel environment when the detection result of the detecting unit is YES;
  • a mixing unit configured to mix the tunnel real scene with the tunnel virtual scene to obtain a tunnel hybrid scene
  • an output unit configured to output the tunnel mixing scene to a wearer of the smart glasses.
  • the acquiring unit includes:
  • connection subunit configured to connect to a cloud database of the self-driving vehicle;
  • the cloud database is a database used by the self-driving vehicle in real-time during an automatic driving process;
  • a first acquiring sub-unit configured to acquire, in the cloud database, a tunnel real scene corresponding to the tunnel environment; the tunnel real scene is collected by the scene collecting system of the self-driving vehicle and stored in the cloud database Scenes.
  • the tunnel real scene is composed of a tunnel wall real scene and a tunnel pavement real scene
  • the mixing unit includes:
  • a second acquiring subunit configured to acquire location information of the tunnel environment
  • a third obtaining subunit configured to acquire an administrative area to which the location information belongs according to the location information
  • a fourth acquiring sub-unit configured to acquire a first virtual scene corresponding to the first attraction in the administrative area, as the first tunnel virtual scene
  • the overlaying sub-unit is configured to cover the real scene of the tunnel wall by using the first tunnel virtual scene to obtain a tunnel hybrid scenario.
  • the mixing unit further includes:
  • a first determining subunit configured to determine whether a scene change instruction is received
  • a second determining subunit configured to determine, when the determination result of the first determining subunit is YES, whether the self-driving vehicle is driving away from the tunnel environment;
  • a fifth acquiring sub-unit configured to acquire a second virtual scene corresponding to the second attraction in the administrative area, as the second tunnel virtual scene, when the determining result of the second determining sub-unit is negative;
  • the overlay sub-unit is further configured to cover the first tunnel virtual scene by using the second tunnel virtual scene to obtain a tunnel hybrid scenario.
  • the output unit includes:
  • a dimming sub-unit configured to perform light adjustment on the tunnel mixed scene according to the daylighting degree, to obtain an indiscriminate tunnel mixed scene
  • Adjusting a subunit configured to adjust a projection device built in the smart glasses according to the indiscriminate tunnel mixing scenario, such that a projection range of the projection device is the same as a visible range of the wearer;
  • a projection subunit for projecting the indiscriminate tunnel mixing scene into the wearer's eyes by the projection device.
  • a third aspect of the embodiments of the present invention discloses a smart glasses, including:
  • a memory storing executable program code, and a processor coupled to the memory;
  • the processor invokes the executable program code stored in the memory to execute the scene display method for a self-driving vehicle disclosed in the first aspect of the embodiments of the present invention.
  • the embodiment of the invention has the following beneficial effects:
  • the smart glasses detect whether the self-driving vehicle is in a tunnel environment; when they detect that the self-driving vehicle is in the tunnel environment, they mix the tunnel real scene with the tunnel virtual scene to obtain a tunnel mixed scene, and output the tunnel mixed scene to the wearer of the smart glasses.
  • the smart glasses can display other scenes for the occupant of the self-driving vehicle when the vehicle enters a tunnel, so that the driver can look at other scenes and remain awake; thus, when the self-driving vehicle encounters an emergency, the occupant can take emergency measures in time, reducing the probability of a traffic accident.
  • FIG. 1 is a schematic flow chart of a scene display method of an autonomous vehicle according to an embodiment of the present invention
  • FIG. 2 is a schematic flow chart of another scene display method of an autonomous vehicle according to an embodiment of the present invention.
  • FIG. 3 is a schematic structural diagram of a smart glasses disclosed in an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of another smart glasses disclosed in an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of another smart glasses disclosed in an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of another smart glasses disclosed in an embodiment of the present invention.
  • the embodiment of the invention discloses a scene display method for a self-driving vehicle and smart glasses, which can display other scenes for an occupant of the self-driving vehicle when the vehicle enters a tunnel, so that the occupant remains awake; thus, when the self-driving vehicle encounters an emergency, the occupant can take emergency measures in time, reducing the probability of a traffic accident.
  • FIG. 1 is a schematic flowchart diagram of a scene display method for an automatically driving vehicle according to an embodiment of the present invention. As shown in FIG. 1, the scene display method for automatically driving a vehicle may include the following steps:
  • the smart glasses detect whether the current environment in which the self-driving vehicle is located is a tunnel environment. If yes, step 102 is performed; if not, the process ends.
  • before the detecting, the method may further include:
  • the smart glasses sense, by the light sensor, whether the light intensity changes abruptly. If an abrupt change in light intensity is detected, the smart glasses perform the step of detecting whether the current environment in which the self-driving vehicle is located is a tunnel environment.
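The light-sensor trigger above can be sketched as follows. This is a minimal illustration only; the function name and the 0.5 ratio threshold are assumptions, not values from the patent.

```python
# Hypothetical sketch: flag an abrupt ambient-light change when consecutive
# illuminance readings differ by more than a ratio of the previous reading.
def is_abrupt_change(prev_lux, curr_lux, ratio=0.5):
    """Return True when illuminance jumps by more than `ratio` of the previous reading."""
    if prev_lux <= 0:
        return False  # no usable baseline reading
    return abs(curr_lux - prev_lux) / prev_lux > ratio

# Driving into a tunnel: daylight (~10000 lux) falls to tunnel lighting (~300 lux).
assert is_abrupt_change(10000, 300) is True
assert is_abrupt_change(10000, 9000) is False  # gradual dimming, no trigger
```

Only when this cheap check fires would the glasses run the heavier camera-based tunnel detection described next.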
  • the smart glasses detecting whether the current environment in which the self-driving vehicle is located is a tunnel environment may include:
  • the smart glasses collect surrounding environment information through the built-in camera of the smart glasses, search the database for the same or similar scenes according to the surrounding environment information, and judge whether the current environment is a tunnel environment according to the labels of those same or similar scenes. If yes, step 102 is performed; if not, the process ends. The database may be a storage unit of the smart glasses, or may be an external database to which the smart glasses connect through a network; the network connection manner is not limited in the embodiment of the present invention.
  • this provides a scene-matching identification method, so that the smart glasses can quickly identify whether the current scene is a tunnel scene, improving the speed at which the smart glasses determine the current environment, reducing the judgment delay, and in turn bringing a better interactive experience to users.
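The label-based matching described above can be sketched as a nearest-neighbor lookup. The feature representation and database layout here are illustrative assumptions; the patent does not specify them.

```python
import math

# Hypothetical sketch: compare a feature vector extracted from the camera frame
# with stored labeled scenes, and inherit the label of the closest match.
def match_scene(query, database):
    """Return the label of the stored scene whose features are closest to `query`."""
    best_label, best_dist = "unknown", float("inf")
    for features, label in database:
        dist = math.dist(query, features)  # Euclidean distance between feature vectors
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

db = [([0.9, 0.1], "tunnel"), ([0.2, 0.8], "open_road")]
assert match_scene([0.85, 0.15], db) == "tunnel"
```

If the best match carries the "tunnel" label, the glasses treat the current environment as a tunnel environment.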
  • the smart glasses detecting whether the current environment in which the self-driving vehicle is located is a tunnel environment may include:
  • the smart glasses collect surrounding environment information through a built-in camera of the smart glasses, the environment information including environment information inside the self-driving vehicle and external environment information visible through the glass of the self-driving vehicle; the smart glasses, based on the collected surrounding environment information, obtain the illuminance information and the road information of the environment and determine whether the current environment is a tunnel environment. If yes, step 102 is performed; if not, the process ends.
  • this provides a method for judging whether the current environment is a tunnel environment, so that the smart glasses can judge the scene according to actual conditions, avoiding the mistakes that traditional scene matching is prone to, thereby improving the accuracy with which the smart glasses determine the scene.
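The rule-based judgment above can be sketched by combining an illuminance threshold with road cues. The threshold value and the particular cues (enclosing walls, artificial lighting) are illustrative assumptions.

```python
# Hypothetical sketch: judge a tunnel environment from light level plus road information.
def is_tunnel(illuminance_lux, has_enclosing_walls, has_artificial_lighting):
    """Combine dim illuminance with road cues to decide 'tunnel environment'."""
    dim = illuminance_lux < 400  # dim relative to open daylight (assumed threshold)
    return dim and has_enclosing_walls and has_artificial_lighting

assert is_tunnel(250, True, True) is True
assert is_tunnel(250, False, True) is False   # dim but open road, e.g. dusk
assert is_tunnel(20000, True, True) is False  # bright daylight
```

Requiring several independent cues is what lets this approach avoid the false matches that pure scene-database lookup can produce.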
  • the smart glasses acquire a tunnel real scene corresponding to the tunnel environment.
  • the smart glasses acquiring the tunnel real scene corresponding to the tunnel environment may include:
  • the smart glasses simulate the tunnel environment in the cache space and generate a virtual tunnel reality scene.
  • the virtual tunnel reality scene is exactly the same as the tunnel environment.
  • this provides a method for acquiring the tunnel real scene, so that the smart glasses can acquire the tunnel environment using their own computing capability, avoiding the traditional step of acquiring data from a server, thereby improving the working efficiency of the smart glasses.
  • the smart glasses modeling the tunnel environment in the cache space may include:
  • the smart glasses analyze the focus range of the user's pupil, determine the regional priority of the simulation modeling according to the focus range of the pupil, and perform simulation modeling on the focus range of the user's pupil first. After the simulation modeling of the pupil focus range is completed, the smart glasses perform simulation modeling on the other areas.
  • implementing this manner provides a method of hierarchical simulation modeling when the smart glasses simulate the tunnel environment, reducing the instantaneous workload of the smart glasses, thereby improving the overall working efficiency of the smart glasses and avoiding excessive instantaneous power (avoiding component damage caused by excessive instantaneous power and the effects of sudden changes in power consumption).
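The region-priority scheduling described above can be sketched by ordering scene tiles by their distance from the gaze point, so the focused region is modeled first. The tile representation is an illustrative assumption.

```python
# Hypothetical sketch: schedule simulation modeling so the tile containing the
# pupil's focus is built first, then tiles progressively farther from it.
def modeling_order(tiles, focus):
    """Order scene tiles by squared distance from the gaze focus (closest first)."""
    return sorted(tiles, key=lambda t: (t[0] - focus[0]) ** 2 + (t[1] - focus[1]) ** 2)

tiles = [(0, 0), (0, 1), (1, 0), (1, 1)]
assert modeling_order(tiles, focus=(1, 1))[0] == (1, 1)  # gaze tile modeled first
```

Spreading the work out this way is what keeps the instantaneous load, and hence instantaneous power draw, bounded.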
  • the smart glasses determine the regional priority of the simulation modeling, and the simulation modeling of the focus range of the user's pupil may include:
  • the smart glasses determine the regional priority of the simulation modeling, and prioritize the rough simulation modeling in the focus range of the user's pupil. After the rough simulation modeling is completed, the simulation modeling is optimized to obtain a complete simulation modeling.
  • the rough simulation modeling is beyond the resolving ability of the human eye (i.e., the human eye cannot distinguish the simulation model from the real scene).
  • this improves the speed at which the smart glasses output the tunnel real scene, so that the smart glasses obtain the tunnel real scene more quickly, thereby improving the working efficiency of the smart glasses.
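The coarse-then-refine strategy above can be sketched as a two-stage pipeline: a rough model is produced immediately (already displayable), then replaced by the refined one. The "fidelity" field is an illustrative stand-in for model quality.

```python
# Hypothetical sketch: yield a coarse model first so it can be shown at once,
# then yield the refined model produced by the later optimization pass.
def staged_model(region):
    yield {"region": region, "fidelity": "coarse"}   # fast rough pass
    yield {"region": region, "fidelity": "refined"}  # optimization pass

stages = list(staged_model("pupil_focus"))
assert stages[0]["fidelity"] == "coarse"
assert stages[-1]["fidelity"] == "refined"
```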
  • the smart glasses mix the tunnel real scene with the tunnel virtual scene to obtain a tunnel mixed scene.
  • the smart glasses mix the tunnel real scene with the tunnel virtual scene, and the obtained tunnel hybrid scene may include:
  • the smart glasses display the tunnel real scene in the cache space, and cover the tunnel virtual scene in the tunnel real scene, thereby achieving the mixed effect and obtaining the tunnel mixed scene.
  • the smart glasses can obtain a tunnel mixed scene on the basis of the tunnel real scene, and do not need to compare and select all the data to obtain a mixed scene, thereby improving the working efficiency of the smart glasses.
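The covering operation above amounts to compositing: wherever the virtual scene defines content, it covers the real scene; elsewhere the real pixel shows through. Representing pixels as a flat list with None meaning "no virtual content" is an illustrative simplification.

```python
# Hypothetical sketch of the overlay mix: virtual content covers the real scene
# where present; the real scene shows through everywhere else.
def mix_scenes(real, virtual):
    """Cover `real` with `virtual` wherever the virtual scene defines a pixel."""
    return [v if v is not None else r for r, v in zip(real, virtual)]

real = [10, 20, 30, 40]         # tunnel wall pixels
virtual = [None, 99, 99, None]  # mural covers only the middle region
assert mix_scenes(real, virtual) == [10, 99, 99, 40]
```

Because the mix starts from the already-built real scene, no global comparison over all data is needed, which matches the efficiency point made above.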
  • the smart glasses output a tunnel mixed scene to the wearer of the smart glasses.
  • the smart glasses worn by the user can detect that the current environment is a tunnel environment, and after reaching that determination, search for mural information on the network to which the smart glasses are connected, and mix the mural information, as the tunnel virtual scene, with the tunnel real scene collected by the smart glasses. The scenery the user sees from the self-driving vehicle is then not a monotonous tunnel but a tunnel decorated with colorful murals, which makes the user's eyes light up and keeps the user from drowsiness.
  • meanwhile, the road and the surrounding traffic flow remain the actual road and traffic flow (the real information in the tunnel mixed scene is highly consistent with, or even identical to, reality), so that when the self-driving vehicle encounters a situation, the user can grasp it in time and control the vehicle to avoid a traffic accident.
  • the smart glasses can detect that the self-driving vehicle enters a tunnel; when it does, the smart glasses acquire the tunnel real scene corresponding to the current tunnel environment, mix the tunnel real scene with the tunnel virtual scene (the tunnel virtual scene may be pre-stored in the smart glasses) to obtain a tunnel hybrid scene, and output the tunnel hybrid scene to the user. It can be seen that the method described in FIG. 1 can display other scenes for the occupant of the self-driving vehicle when the vehicle enters the tunnel, so that the driver remains awake; thus, when the self-driving vehicle encounters an emergency, the occupant can take emergency measures in time to reduce the probability of a traffic accident.
  • FIG. 2 is a schematic flowchart diagram of another method for displaying a scene for an autonomous vehicle according to an embodiment of the present invention.
  • the tunnel reality scene is composed of a tunnel wall real scene and a tunnel road real scene
  • the scene display method for automatically driving the vehicle may include the following steps:
  • the smart glasses detect whether the current environment in which the self-driving vehicle is located is a tunnel environment. If yes, step 202 is performed; if not, the process ends.
  • before the detecting, the method may further include:
  • the smart glasses sense, by the light sensor, whether the light intensity changes abruptly. If an abrupt change in light intensity is detected, the smart glasses perform the step of detecting whether the current environment in which the self-driving vehicle is located is a tunnel environment.
  • the smart glasses detecting whether the current environment in which the self-driving vehicle is located is a tunnel environment may include:
  • the smart glasses collect surrounding environment information through the built-in camera of the smart glasses, search the database for the same or similar scenes according to the surrounding environment information, and judge whether the current environment is a tunnel environment according to the labels of those same or similar scenes. If yes, step 202 is performed; if not, the process ends. The database may be a storage unit of the smart glasses, or may be an external database to which the smart glasses connect through a network; the network connection manner is not limited in the embodiment of the present invention.
  • this provides a scene-matching identification method, so that the smart glasses can quickly identify whether the current scene is a tunnel scene, improving the speed at which the smart glasses determine the current environment, reducing the judgment delay, and in turn bringing a better interactive experience to users.
  • the smart glasses detecting whether the current environment in which the self-driving vehicle is located is a tunnel environment may include:
  • the smart glasses collect surrounding environment information through a built-in camera of the smart glasses, the environment information including environment information inside the self-driving vehicle and external environment information visible through the glass of the self-driving vehicle; the smart glasses, based on the collected surrounding environment information, obtain the illuminance information and the road information of the environment and determine whether the current environment is a tunnel environment. If yes, step 202 is performed; if not, the process ends.
  • this provides a method for judging whether the current environment is a tunnel environment, so that the smart glasses can judge the scene according to actual conditions, avoiding the mistakes that traditional scene matching is prone to, thereby improving the accuracy with which the smart glasses determine the scene.
  • the smart glasses are connected to a cloud database of the self-driving vehicle; the cloud database is a database used by the self-driving vehicle in real-time during the automatic driving process.
  • the self-driving vehicle is equipped with a large number of sensors, through which information is transmitted to the cloud database in real time.
  • the data in the cloud database is acquired and stored by the autonomous vehicle through a large number of sensors.
  • the smart glasses acquire a tunnel real scene corresponding to the tunnel environment in the cloud database; the tunnel real scene is a scene collected by the scene collecting system of the self-driving vehicle and stored in the cloud database.
  • implementing steps 202 to 203, the cloud database can provide a first-level buffering platform for the smart glasses, so that the smart glasses can directly connect to the cloud database and perform a single acquisition operation, avoiding having to acquire the tunnel real scene directly through the various sensors, which simplifies the workflow of the smart glasses.
  • Steps 202 to 203 are implemented, and the smart glasses can directly acquire scene information other than the self-driving vehicle, so that the user can ignore the viewing angle barrier brought by the vehicle when viewing, and bring a better visual experience to the user.
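Steps 202 to 203 can be sketched with the cloud database faked as an in-memory store: the vehicle's sensors write the collected scene, and the glasses perform one read. The CloudDB class and the tunnel-keyed schema are illustrative assumptions.

```python
# Hypothetical sketch of the cloud-database exchange in steps 202-203.
class CloudDB:
    """Stand-in for the vehicle's real-time cloud database of collected scenes."""
    def __init__(self):
        self._scenes = {}

    def store(self, tunnel_id, scene):
        self._scenes[tunnel_id] = scene  # written by the vehicle's scene collecting system

    def fetch(self, tunnel_id):
        return self._scenes[tunnel_id]   # single acquisition by the smart glasses

db = CloudDB()
db.store("tunnel-17", {"wall": "wall-frames", "pavement": "pavement-frames"})
assert db.fetch("tunnel-17")["wall"] == "wall-frames"
```

The single `fetch` call is the point: the glasses never talk to the individual sensors, only to the buffered database.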
  • the smart glasses acquire location information of the tunnel environment.
  • the smart glasses acquiring the location information of the tunnel environment may include:
  • the smart glasses detect the current location by GPS positioning and determine the current location information.
  • the location information of the tunnel environment may be obtained by GPS positioning, or by scene identification over the network; this is not limited in the embodiment of the present invention.
  • the smart glasses acquire an administrative area to which the location information belongs according to the location information.
  • the smart glasses acquire the first virtual scene corresponding to the first attraction in the administrative area, as the first tunnel virtual scene.
  • the smart glasses cover the tunnel wall real scene with the first tunnel virtual scene, obtaining a tunnel mixed scene.
  • implementing steps 204 to 207, the smart glasses can output the attraction information of the current environment for the user, overlaying the attraction introduction on the tunnel wall real scene to obtain a tunnel mixed scene, so that the user can view images of the attractions in the current administrative area while in the self-driving vehicle.
  • the smart glasses can also provide the user with 3D effect scene information, so that the user can view the near-reality scene in the self-driving vehicle, thereby experiencing the immersive feeling, thereby improving the user's interactive experience. .
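Steps 204 to 207 can be sketched as two lookups: location to administrative area, then area to its first attraction, whose scene becomes the first tunnel virtual scene. Both tables below are invented examples, not data from the patent.

```python
# Hypothetical sketch of steps 204-207: derive the virtual scene from the
# administrative area the tunnel's location belongs to.
AREA_BY_LOCATION = {(23.13, 113.26): "Guangzhou"}           # assumed geocoding table
ATTRACTIONS = {"Guangzhou": ["Canton Tower", "Baiyun Mountain"]}

def first_attraction_scene(location):
    """Return the first attraction of the location's administrative area as a scene."""
    area = AREA_BY_LOCATION[location]
    return {"area": area, "attraction": ATTRACTIONS[area][0]}

assert first_attraction_scene((23.13, 113.26))["attraction"] == "Canton Tower"
```

The remaining attractions in the list are what the scene-change flow of steps 208 to 211 cycles through.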
  • the smart glasses determine whether a scene change instruction is received. If yes, execute step 209; if not, end the process.
  • the scenario change command may be a gesture command or a voice command, which is not limited in the embodiment of the present invention.
  • the smart glasses determine whether the self-driving vehicle is driving away from the tunnel environment. If yes, the process ends; if not, step 210 is performed.
  • the smart glasses determining whether the self-driving vehicle is driving away from the tunnel environment may include:
  • the smart glasses determine whether the self-driving vehicle is driving away from the tunnel environment by the light intensity. If it is detected that the self-driving vehicle is driving away from the tunnel environment, the process ends; if the self-driving vehicle is not detected to leave the tunnel environment, step 210 is performed.
  • the smart glasses acquire a second virtual scene corresponding to the second attraction in the administrative area, as the second tunnel virtual scene.
  • the smart glasses cover the first tunnel virtual scene with the second tunnel virtual scene, obtaining a tunnel mixed scene.
  • implementing steps 208 to 211, the smart glasses can switch the scene appropriately when receiving an instruction to switch scenes, thereby satisfying the user's desire to experience multiple scenes.
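The switching flow of steps 208 to 211 reduces to a small decision: switch only when a change instruction was received and the vehicle is still inside the tunnel. The function and parameter names are illustrative assumptions.

```python
# Hypothetical sketch of steps 208-211: advance to the next attraction scene on a
# change instruction, but only while the vehicle remains in the tunnel.
def handle_change(current_idx, attractions, change_requested, left_tunnel):
    """Return the index of the attraction scene to display next."""
    if not change_requested or left_tunnel:
        return current_idx                       # keep the current scene
    return (current_idx + 1) % len(attractions)  # cover it with the next one

attractions = ["first attraction", "second attraction"]
assert handle_change(0, attractions, True, False) == 1   # switch to second scene
assert handle_change(0, attractions, True, True) == 0    # driving away: no switch
assert handle_change(0, attractions, False, False) == 0  # no instruction received
```

As noted above, the change instruction itself may be a gesture or a voice command; this sketch only models the decision after it has been recognized.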
  • the smart glasses determine the current daylighting degree of the wearer's eyes according to the size of the wearer's pupil.
  • the smart glasses determining the current daylighting degree of the wearer's eyes according to the size of the wearer's pupil may include:
  • the smart glasses search, on the network to which the smart glasses are connected, for the daylighting degree corresponding to the wearer's pupil size, check whether that value has high reliability on the network, and if so, adopt it as the daylighting degree.
  • the smart glasses can thus determine the daylighting degree of the wearer's eyes based on network data, improving the accuracy of the determination.
  • the smart glasses perform light adjustment on the tunnel mixed scene according to the daylighting degree, obtaining an indiscriminate tunnel mixed scene.
  • the indiscriminate tunnel mixed scene is a scene whose displayed illumination intensity matches the wearer's current daylighting degree, so that the wearer perceives no difference in brightness.
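Steps 212 to 213 can be sketched by inferring the wearer's light adaptation from pupil diameter (a larger pupil indicates adaptation to dimmer light) and scaling scene brightness to match. The 2 mm to 8 mm range and the linear mapping are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch of steps 212-213: pupil diameter -> adaptation level -> scene brightness.
def daylighting_from_pupil(pupil_mm):
    """Map pupil diameter (2 mm bright-adapted ... 8 mm dark-adapted) to a 0..1 level."""
    return max(0.0, min(1.0, (8.0 - pupil_mm) / 6.0))

def adjust_scene(pixels, pupil_mm):
    """Scale scene brightness toward the wearer's current adaptation level."""
    level = daylighting_from_pupil(pupil_mm)
    return [round(p * level) for p in pixels]

assert daylighting_from_pupil(2.0) == 1.0  # bright-adapted eye: full brightness
assert daylighting_from_pupil(8.0) == 0.0  # dark-adapted eye: dim fully
assert adjust_scene([200, 100], 5.0) == [100, 50]
```

Matching displayed brightness to the eye's adaptation is what makes the result "indiscriminate": the wearer perceives no brightness jump between the scene and their surroundings.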
  • the smart glasses adjust the projection device built in the smart glasses according to the indiscriminate tunnel mixing scene, so that the projection range of the projection device is the same as the visible range of the wearer.
  • implementing step 214 can prevent the wearer from seeing an incomplete scene or an edge interface in the viewed scene.
  • the smart glasses project the indiscriminate tunnel mixing scene into the wearer's eyes through the projection device.
  • implementing steps 212 to 215, the smart glasses can determine the light information parameters (such as brightness and illumination intensity) of the generated scene according to the displayed scene and the user's light sensitivity, so that the user can view the scene information comfortably; this provides the user with a better sense of immersion, thereby improving the user experience.
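Step 214 can be sketched by scaling the projector's field of view so the projection range spans the wearer's visible range, leaving no visible edge. Modeling the fields of view as simple horizontal and vertical angles is an illustrative assumption.

```python
# Hypothetical sketch of step 214: fit the projection range to the visible range.
def fit_projection(proj_fov, visible_fov):
    """Return per-axis scale factors that make the projection span the visible range."""
    return (visible_fov[0] / proj_fov[0], visible_fov[1] / proj_fov[1])

# Projector covers 40x30 degrees; the wearer sees 50x45 degrees, so widen the projection.
scale = fit_projection(proj_fov=(40.0, 30.0), visible_fov=(50.0, 45.0))
assert scale == (1.25, 1.5)
```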
  • the smart glasses can detect that the self-driving vehicle enters a tunnel; when it does, the smart glasses acquire the tunnel real scene corresponding to the current tunnel environment through the cloud database of the self-driving vehicle, obtain the current location information, determine the current administrative area, further obtain the attraction information of the current administrative area, use the attraction information as the tunnel virtual scene, and mix the tunnel real scene with the tunnel virtual scene to obtain a tunnel mixed scene. The smart glasses can also receive a switching instruction input by the user to switch the scene. It can be seen that the method described in FIG. 2 can display famous attractions of the current location for the occupant of the self-driving vehicle when the vehicle enters the tunnel, so that the driver remains awake, and can switch scenes when the occupant wants to watch other scenes; thus, when the self-driving vehicle encounters an emergency, the occupant can take emergency measures in time to reduce the probability of a traffic accident.
  • FIG. 3 is a schematic structural diagram of a smart glasses according to an embodiment of the present invention.
  • the smart glasses may include:
  • the detecting unit 301 is configured to detect whether the current environment in which the self-driving vehicle is located is a tunnel environment.
  • the smart glasses may further include a light intensity detecting unit, configured to sense by the light sensor before the detecting unit 301 detects whether the current environment in which the self-driving vehicle is located is a tunnel environment. Whether or not the illumination intensity is abrupt, and if a sudden change in the illumination intensity is detected, the trigger detecting unit 301 performs the above-described operation of detecting whether the current environment in which the self-driving vehicle is located is a tunnel environment.
  • the detecting unit 301 may include:
  • the collection subunit is configured to collect surrounding environmental information through a camera built in the smart glasses;
  • the search subunit is configured to search the database for a scene that is the same as or similar to the current environment according to the surrounding environment information; the database may be a storage unit of the smart glasses, or may be an external database to which the smart glasses connect through a network; the network connection manner is not limited in the embodiment of the present invention;
  • the environment judging subunit is configured to determine whether the current environment is a tunnel environment according to the label carried by the same or similar scene.
  • the smart glasses can provide a method for matching the identification scene, so that the smart glasses can quickly identify whether the current scene is a tunnel scene, thereby improving the speed of the smart glasses to determine the current environment, and reducing the judgment delay. , in turn, to bring a better interactive experience for users.
  • the detecting unit 301 may include:
  • the collecting subunit is configured to collect surrounding environment information through a camera built into the smart glasses, the environment information including environment information inside the self-driving vehicle and external environment information visible through the vehicle's window glass;
  • the environment judging subunit is further configured to extract illuminance information and road information from the collected surrounding environment information and to determine accordingly whether the current environment is a tunnel environment.
  • Implementing this manner provides the smart glasses with a way to judge whether the current environment is a tunnel environment from actual conditions, avoiding the errors that conventional scene matching is prone to and improving the accuracy with which the smart glasses judge the scene.
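A rule-based check over illuminance and road cues, as described, might look like the following sketch. The thresholds and the structure of the road information are assumptions, not taken from the disclosure:

```python
# Hypothetical rule-based tunnel check combining illuminance and road cues.
def looks_like_tunnel(avg_lux, road_info):
    """Low daytime illuminance plus tunnel-typical road cues suggests a tunnel."""
    dim = avg_lux < 400                     # tunnels are far dimmer than daylight
    enclosed = road_info.get("has_walls_both_sides", False)
    artificial_light = road_info.get("artificial_lighting", False)
    return dim and enclosed and artificial_light
```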
  • the obtaining unit 302 is configured to acquire a tunnel real scene corresponding to the tunnel environment when the detection result of the detecting unit 301 is YES.
  • the obtaining unit 302 may be specifically configured to perform simulation modeling of the tunnel environment in a cache space and to generate a virtual tunnel real scene that is identical to the tunnel environment.
  • Implementing this manner provides a way to acquire the tunnel real scene, allowing the smart glasses to reconstruct the tunnel environment with their own computing power and to skip the traditional step of fetching the data from a server, improving their working efficiency.
  • the obtaining unit 302 may include:
  • the analysis modeling subunit is configured to analyze the focus range of the user's pupils, determine the regional priority of the simulation modeling according to that focus range, and perform simulation modeling of the pupil focus range first;
  • the complete modeling subunit is configured to perform simulation modeling of the other regions after the analysis modeling subunit has modeled the pupil focus range.
  • Implementing this manner provides the smart glasses with a hierarchical approach to simulation modeling of the tunnel environment, reducing their instantaneous workload and improving their overall working efficiency, while avoiding excessively high instantaneous power (preventing component damage caused by excessive instantaneous power, as well as the effects of sudden changes in power consumption).
  • the analysis modeling subunit may include:
  • a rough modeling module for determining the regional priority of the simulation modeling and performing rough simulation modeling of the user's pupil focus range first;
  • a complete modeling module for optimizing the rough simulation model into a complete simulation model after the rough modeling module finishes, where the rough simulation model is already above the threshold of human-eye recognition (that is, the human eye cannot tell the difference between the simulation model and the real scene).
  • Implementing this manner improves the speed at which the smart glasses output the tunnel real scene, so that the glasses obtain it more quickly, improving their working efficiency.
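The two-pass, priority-ordered modeling described above can be expressed as a work schedule: a coarse pass over the focused region first, then coarse passes elsewhere, then refinement everywhere. Region names and the scheduling format are illustrative assumptions:

```python
# Hypothetical sketch of the coarse-then-refined modeling order, with the
# pupil's focus region scheduled before all other regions.
def modeling_schedule(regions, focus_region):
    """Return (region, quality) work items: coarse pass for the focused region
    first, then coarse passes for the rest, then a refining pass over all."""
    ordered = [focus_region] + [r for r in regions if r != focus_region]
    coarse = [(r, "coarse") for r in ordered]
    refine = [(r, "refined") for r in ordered]
    return coarse + refine
```

Spreading the refinement pass out in time is what keeps the instantaneous workload, and hence instantaneous power, low.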
  • the mixing unit 303 is configured to mix the tunnel real scene acquired by the obtaining unit 302 with the tunnel virtual scene to obtain a tunnel mixed scene.
  • the mixing unit 303 is specifically configured to display the tunnel real scene in a cache space and to overlay the tunnel virtual scene onto the tunnel real scene, thereby achieving the mixing effect and obtaining the tunnel mixed scene.
  • Implementing this manner lets the smart glasses obtain the tunnel mixed scene directly on the basis of the tunnel real scene, without comparing and selecting over the full data set, improving their working efficiency.
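The overlay-style mixing can be sketched at the pixel level: virtual pixels replace real ones wherever the virtual layer is opaque. Representing transparency with `None` is an assumption made for this sketch:

```python
# Hypothetical pixel-level overlay: the virtual scene is drawn on top of the
# real scene wherever the virtual layer is non-transparent (None = transparent).
def mix_scenes(real_frame, virtual_frame):
    """Overlay virtual pixels onto the real frame; transparent pixels keep reality."""
    return [
        [v if v is not None else r for r, v in zip(real_row, virt_row)]
        for real_row, virt_row in zip(real_frame, virtual_frame)
    ]
```

Because untouched pixels pass through unchanged, the road and traffic in the mixed scene remain the real ones, which is what lets the occupant keep monitoring actual conditions.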
  • the output unit 304 is configured to output, to the wearer of the smart glasses, the tunnel mixed scene obtained by the mixing unit 303.
  • For example, when a user wearing the smart glasses passes through a tunnel in a self-driving vehicle, the glasses can detect that the current environment is a tunnel environment; after this is confirmed, they search the network connected to the glasses for mural information and mix that mural information, as the tunnel virtual scene, with the tunnel real scene collected by the glasses. What the user sees inside the self-driving vehicle is then not monotonous tunnel scenery but a tunnel painted with colorful murals, which brightens the view and keeps the user from becoming drowsy. While the user watches the mural scenery, the road and the surrounding traffic remain the actual road and traffic conditions (the real information in the tunnel mixed scene corresponds closely, or even exactly, to reality), so that when the self-driving vehicle runs into a situation, the user can grasp it in time and control the vehicle to avoid a traffic accident.
  • It can be seen that the smart glasses described in FIG. 3 can display other scenes for the occupants of the self-driving vehicle when it enters a tunnel, keeping them awake, so that when the vehicle encounters an emergency, the occupants can take emergency measures in time, reducing the probability of a traffic accident.
  • FIG. 4 is a schematic structural diagram of another smart glasses disclosed in an embodiment of the present invention.
  • the smart glasses shown in FIG. 4 are obtained by optimizing the smart glasses shown in FIG. 3. Compared with FIG. 3, the obtaining unit 302 may include:
  • the connection subunit 3021 is configured to connect to the cloud database of the self-driving vehicle when the detection result of the detecting unit 301 is YES; the cloud database is the database used by the self-driving vehicle in real time during automatic driving.
  • the self-driving vehicle is equipped with a large number of sensors and continuously transmits the information they acquire to the cloud database; in other words, the data in the cloud database is acquired and stored by the self-driving vehicle through these sensors.
  • the first obtaining subunit 3022 is configured to acquire, from the cloud database connected by the connection subunit 3021, the tunnel real scene corresponding to the tunnel environment; the tunnel real scene is a scene collected by the scene collection system of the self-driving vehicle and stored in the cloud database.
  • Implementing this manner, the cloud database provides a first-level buffering platform for the smart glasses: the glasses interface directly with the cloud database and perform a single acquisition operation, instead of gathering the tunnel real scene directly through various sensors. The process changes from open-ended variable acquisition to directed transmission, simplifying the workflow of the smart glasses.
  • the smart glasses can also directly obtain scene information from outside the self-driving vehicle, so that when viewing, the user is not blocked by the vehicle body, bringing a better visual experience.
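The single directed fetch against the vehicle's cloud database might be sketched as below. The `CloudDB` class, its keying scheme, and the scene format are assumptions for illustration; the real database interface is not specified in the disclosure:

```python
# Hypothetical sketch: the vehicle's sensors write scenes into a cloud store,
# and the glasses perform one directed fetch instead of reading raw sensors.
class CloudDB:
    def __init__(self):
        self._scenes = {}

    def store(self, tunnel_id, scene):
        """Written continuously by the vehicle's scene collection system."""
        self._scenes[tunnel_id] = scene

    def fetch_tunnel_scene(self, tunnel_id):
        return self._scenes.get(tunnel_id)

def acquire_real_scene(db, tunnel_id):
    """The glasses' single acquisition operation against the cloud database."""
    scene = db.fetch_tunnel_scene(tunnel_id)
    if scene is None:
        raise LookupError(f"no scene stored for tunnel {tunnel_id}")
    return scene
```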
  • the tunnel real scene is composed of a tunnel wall real scene and a tunnel pavement real scene, wherein the mixing unit 303 may include:
  • the second obtaining sub-unit 3031 is configured to acquire location information of the tunnel environment.
  • the second obtaining subunit 3031 may be specifically configured to detect the current location through GPS positioning and determine the current location information.
  • in this embodiment of the present invention, the smart glasses may obtain the location information of the tunnel environment through GPS positioning or through networked scene recognition, which is not limited here.
  • the third obtaining sub-unit 3032 is configured to obtain an administrative area to which the location information belongs according to the location information acquired by the second obtaining sub-unit 3031.
  • the fourth obtaining sub-unit 3033 is configured to obtain the first virtual scene corresponding to the first attraction in the administrative area acquired by the third obtaining sub-unit 3032, as the first tunnel virtual scene.
  • the coverage subunit 3034 is configured to overlay the first tunnel virtual scene acquired by the fourth obtaining subunit 3033 onto the tunnel wall real scene to obtain a tunnel mixed scene.
  • It can be seen that the smart glasses described in FIG. 4 can output attraction information for the user's current environment and overlay that information, as an attraction introduction, on the tunnel wall real scene, thereby obtaining a tunnel mixed scene, so that the user can view images of attractions in the current administrative region while riding in the self-driving vehicle.
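The chain from coordinates to an attraction virtual scene (location, then administrative region, then first attraction) can be sketched as a pair of lookups. The region bounding box and attraction list below are made-up illustrative data, not real geographic records:

```python
# Hypothetical lookup chain: GPS fix -> administrative region -> first attraction.
REGIONS = {"guangzhou": {"lat": (22.5, 23.9), "lon": (112.9, 114.0)}}
ATTRACTIONS = {"guangzhou": ["Canton Tower", "Baiyun Mountain"]}

def region_of(lat, lon):
    """Find the administrative region whose bounding box contains the fix."""
    for name, box in REGIONS.items():
        if box["lat"][0] <= lat <= box["lat"][1] and box["lon"][0] <= lon <= box["lon"][1]:
            return name
    return None

def first_attraction_scene(lat, lon):
    """Return the region's first attraction as the first tunnel virtual scene."""
    region = region_of(lat, lon)
    if region is None or not ATTRACTIONS.get(region):
        return None
    return {"region": region, "attraction": ATTRACTIONS[region][0]}
```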
  • the mixing unit 303 may further include:
  • the first judging subunit 3035 is configured to judge, after the coverage subunit 3034 obtains the tunnel mixed scene, whether a scene change instruction is received.
  • the tunnel mixed scene obtained by the coverage subunit 3034 is obtained by mixing the first tunnel virtual scene with the tunnel pavement real scene.
  • the second judging subunit 3036 is configured to judge, when the judgment result of the first judging subunit 3035 is YES, whether the self-driving vehicle has driven out of the tunnel environment.
  • the second judging subunit 3036 may be specifically configured to judge from the illumination intensity whether the self-driving vehicle has driven out of the tunnel environment.
  • the fifth obtaining subunit 3037 is configured to acquire, when the judgment result of the second judging subunit 3036 is NO, a second virtual scene corresponding to a second attraction in the administrative region as the second tunnel virtual scene.
  • the coverage subunit 3034 is further configured to overlay the second tunnel virtual scene acquired by the fifth obtaining subunit 3037 onto the first tunnel virtual scene to obtain a tunnel mixed scene.
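The scene-change control flow above (honor a change instruction only while still inside the tunnel, then move to the next attraction) can be sketched as a small decision function. The rotating-index policy is an assumption; the disclosure only specifies a first and a second attraction:

```python
# Hypothetical control flow for a scene-change request: switch to the next
# attraction only while the vehicle is still inside the tunnel.
def handle_change_request(in_tunnel, attractions, current_index):
    """Return the index of the attraction to display after a change instruction,
    or None when the vehicle has left the tunnel and mixing should stop."""
    if not in_tunnel:
        return None
    return (current_index + 1) % len(attractions)
```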
  • It can be seen that the smart glasses described in FIG. 4 can acquire the tunnel real scene through the cloud database of the self-driving vehicle, so that only a single acquisition operation is needed and direct acquisition through various sensors is avoided; the process changes from open-ended variable acquisition to directed transmission, simplifying the workflow of the smart glasses. At the same time, the glasses can directly obtain scene information from outside the vehicle, so that when viewing, the user is not blocked by the vehicle body, bringing a better visual experience.
  • It can be seen that the smart glasses described in FIG. 4 can output attraction information for the user's current environment and overlay that information, as an attraction introduction, on the tunnel wall real scene, thereby obtaining a tunnel mixed scene, so that the user can view images of attractions in the current administrative region while riding in the self-driving vehicle.
  • It can be seen that the smart glasses described in FIG. 4 can switch the scene appropriately upon receiving a scene-switching instruction, satisfying the user's need to experience multiple scenes.
  • It can be seen that the smart glasses described in FIG. 4 can display other scenes for the occupants of the self-driving vehicle when it enters a tunnel, keeping them awake, so that when the vehicle encounters an emergency, the occupants can take emergency measures in time, reducing the probability of a traffic accident.
  • FIG. 5 is a schematic structural diagram of another smart glasses disclosed in an embodiment of the present invention.
  • the smart glasses shown in FIG. 5 are obtained by optimizing the smart glasses shown in FIG. 4. Compared with FIG. 4, the output unit 304 includes:
  • the determining subunit 3041 is configured to determine the current lighting degree of the wearer's eyes according to the wearer's pupil size.
  • the determining subunit 3041 may be specifically configured to look up, on the network connected to the smart glasses, the lighting degree corresponding to the wearer's pupil size, and to check on the network whether that lighting degree is highly reliable; if so, it is determined to be the lighting degree of the wearer's eyes.
  • Implementing this manner, the smart glasses can determine the lighting degree of the wearer's eyes from network data, improving the accuracy of the determination.
  • the dimming subunit 3042 is configured to perform light adjustment on the tunnel mixed scene obtained by the coverage subunit 3034, according to the lighting degree determined by the determining subunit 3041, to obtain an indistinguishable tunnel mixed scene (one whose light intensity matches the display light intensity).
  • the adjusting subunit 3043 is configured to adjust the projection device built into the smart glasses according to the indistinguishable tunnel mixed scene obtained after the light adjustment by the dimming subunit 3042, so that the projection range of the projection device is the same as the visible range of the wearer.
  • the projecting subunit 3044 is configured to project the indistinguishable tunnel mixed scene into the wearer's eyes through the projection device adjusted by the adjusting subunit 3043.
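The pupil-driven light adjustment can be sketched as a mapping from pupil diameter to a projection brightness factor. The linear mapping, the pupil-size range, and the output range are illustrative assumptions; the disclosure does not specify the actual function:

```python
# Hypothetical brightness adjustment: a large pupil means the eye is adapted to
# dim light, so the projected mixed scene is dimmed to match.
def brightness_for_pupil(pupil_mm, min_mm=2.0, max_mm=8.0):
    """Map pupil diameter to a projection brightness factor in [0.2, 1.0]."""
    pupil_mm = max(min_mm, min(max_mm, pupil_mm))
    dark_adaptation = (pupil_mm - min_mm) / (max_mm - min_mm)  # 0 = bright-adapted
    return 1.0 - 0.8 * dark_adaptation

def adjust_scene(scene_pixels, pupil_mm):
    """Scale every pixel intensity of the mixed scene by the brightness factor."""
    factor = brightness_for_pupil(pupil_mm)
    return [round(p * factor, 3) for p in scene_pixels]
```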
  • It can be seen that the smart glasses described in FIG. 5 can output attraction information for the user's current environment and overlay that information, as an attraction introduction, on the tunnel wall real scene, thereby obtaining a tunnel mixed scene, so that the user can view images of attractions in the current administrative region while riding in the self-driving vehicle.
  • It can be seen that the smart glasses described in FIG. 5 can switch the scene appropriately upon receiving a scene-switching instruction, satisfying the user's need to experience multiple scenes.
  • It can be seen that the smart glasses described in FIG. 5 can display famous attraction scenes of the current location for the occupants of the self-driving vehicle when it enters a tunnel, keeping them awake, and can switch to other scenes when an occupant wants to watch something else, providing a varied interactive experience; when the vehicle encounters an emergency, the occupants can take emergency measures in time, reducing the probability of a traffic accident.
  • FIG. 6 is a schematic structural diagram of another smart glasses disclosed in an embodiment of the present invention. As shown in FIG. 6, the smart glasses may include:
  • a memory 601 storing executable program code
  • processor 602 coupled to the memory 601;
  • the processor 602 calls the executable program code stored in the memory 601 to execute the scene display method for the self-driving vehicle of any of FIGS. 1 to 2.
  • A person of ordinary skill in the art will understand that all or part of the steps in the methods of the above embodiments can be completed by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, including a Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-Time Programmable Read-Only Memory (OTPROM), Electrically-Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disc storage, magnetic disk storage, magnetic tape storage, or any other computer-readable medium that can be used to carry or store data.


Abstract

A scene display method for a self-driving vehicle, and smart glasses. The method comprises: smart glasses detecting whether the current environment in which the self-driving vehicle is located is a tunnel environment (101); if so, the smart glasses acquiring a tunnel real scene corresponding to the tunnel environment (102); the smart glasses mixing the tunnel real scene with a tunnel virtual scene pre-stored in the smart glasses to obtain a tunnel mixed scene (103); and the smart glasses outputting the tunnel mixed scene to the user wearing the smart glasses (104). In this way, when the self-driving vehicle drives into a tunnel, other scenes can be displayed for its occupants, keeping them awake, so that when the vehicle encounters an emergency the occupants can take emergency measures in time, reducing the probability of a traffic accident.

Description

A scene display method for a self-driving vehicle, and smart glasses
Technical Field
The present invention relates to the technical field of smart glasses, and in particular to a scene display method for a self-driving vehicle and to smart glasses.
Background
At present, increasingly mature automatic driving technology allows occupants to freely do what they want in a self-driving vehicle. In practice, however, it has been found that when the self-driving vehicle drives into a tunnel, occupants feel drowsy and listless in the dim, enclosed tunnel environment, so that when the vehicle encounters an emergency they cannot take emergency measures in time, increasing the probability of a traffic accident.
Summary of the Invention
Embodiments of the present invention disclose a scene display method for a self-driving vehicle and smart glasses, which can display other scenes for the occupants of the self-driving vehicle when it drives into a tunnel, keeping them awake, so that when the vehicle encounters an emergency the occupants can take emergency measures in time, reducing the probability of a traffic accident.
本发明实施例第一方面公开了一种用于自动驾驶车辆的场景显示方法,包括:
智能眼镜检测所述自动驾驶车辆所处的当前环境是否为隧道环境;
如果是,所述智能眼镜获取所述隧道环境对应的隧道现实场景;
所述智能眼镜将所述隧道现实场景与隧道虚拟场景进行混合,得到隧道混合场景;
所述智能眼镜向所述智能眼镜的佩戴者输出所述隧道混合场景。
作为一种可选的实施方式,在本发明实施例第一方面中,所述智能眼镜获取所述隧道环境对应的隧道现实场景,包括:
所述智能眼镜连接所述自动驾驶车辆的云数据库;所述云数据库为所述自动驾驶车辆在自动驾驶过程中实时使用的数据库;
所述智能眼镜在所述云数据库中获取所述隧道环境对应的隧道现实场景;所述隧道现实场景是由所述自动驾驶车辆的场景采集系统采集并存储至所述云数据库的场景。
作为一种可选的实施方式,在本发明实施例第一方面中,所述隧道现实场景由隧道壁现实场景和隧道路面现实场景组成,所述智能眼镜将所述隧道现实场景与隧道虚拟场景进行混合,得到隧道混合场景,包括:
所述智能眼镜获取所述隧道环境所处的位置信息;
所述智能眼镜根据所述位置信息获取所述位置信息所属的行政区域;
所述智能眼镜获取所述行政区域内的第一景点对应的第一虚拟场景,作为第 一隧道虚拟场景;
所述智能眼镜将所述第一隧道虚拟场景覆盖所述隧道壁现实场景,得到隧道混合场景。
作为一种可选的实施方式,在本发明实施例第一方面中,所述智能眼镜将所述第一隧道虚拟场景覆盖所述隧道壁现实场景,得到隧道混合场景之后,以及智能眼镜向所述智能眼镜的佩戴者输出所述隧道混合场景之前,所述方法还包括:
所述智能眼镜判断是否接收到场景更换指令;
如果是,所述智能眼镜判断所述自动驾驶车辆是否驶离所述隧道环境,如果否,获取所述行政区域内的第二景点对应的第二虚拟场景,作为第二隧道虚拟场景;
所述智能眼镜将所述第二隧道虚拟场景覆盖所述第一隧道虚拟场景,得到隧道混合场景。
作为一种可选的实施方式,在本发明实施例第一方面中,所述智能眼镜向所述智能眼镜的佩戴者输出所述隧道混合场景,包括:
所述智能眼镜根据所述佩戴者的瞳孔大小确定当前所述佩戴者的眼睛的采光度;
所述智能眼镜根据所述采光度对所述隧道混合场景进行光调整,得到无差别隧道混合场景;
所述智能眼镜根据所述无差别隧道混合场景调整所述智能眼镜内置的投影装置,使得所述投影装置的投影范围与所述佩戴者的可视范围相同;
所述智能眼镜通过所述投影装置投影所述无差别隧道混合场景至所述佩戴者的眼睛中。
本发明实施例第二方面公开一种智能眼镜,所述智能眼镜包括:
检测单元,用于检测所述自动驾驶车辆所处的当前环境是否为隧道环境;
获取单元,用于在所述检测单元的检测结果为是时,获取所述隧道环境对应的隧道现实场景;
混合单元,用于将所述隧道现实场景与隧道虚拟场景进行混合,得到隧道混合场景;
输出单元,用于向所述智能眼镜的佩戴者输出所述隧道混合场景。
作为一种可选的实施方式,在本发明实施例第二方面中,所述获取单元包括:
连接子单元,用于连接所述自动驾驶车辆的云数据库;所述云数据库为所述自动驾驶车辆在自动驾驶过程中实时使用的数据库;
第一获取子单元,用于在所述云数据库中获取所述隧道环境对应的隧道现实场景;所述隧道现实场景是由所述自动驾驶车辆的场景采集系统采集并存储至所述云数据库的场景。
作为一种可选的实施方式,在本发明实施例第二方面中,所述隧道现实场景由隧道壁现实场景和隧道路面现实场景组成,所述混合单元包括:
第二获取子单元,用于获取所述隧道环境所处的位置信息;
第三获取子单元,用于根据所述位置信息获取所述位置信息所属的行政区域;
第四获取子单元,用于获取所述行政区域内的第一景点对应的第一虚拟场景,作为第一隧道虚拟场景;
覆盖子单元,用于将所述第一隧道虚拟场景覆盖所述隧道壁现实场景,得到隧道混合场景。
作为一种可选的实施方式,在本发明实施例第二方面中,所述混合单元还包括:
第一判断子单元,用于判断是否接收到场景更换指令;
第二判断子单元,用于在所述第一判断子单元的判断结果为是时,判断所述自动驾驶车辆是否驶离所述隧道环境;
第五获取子单元,用于在所述第二判断子单元的判断结果为否时,获取所述行政区域内的第二景点对应的第二虚拟场景,作为第二隧道虚拟场景;
所述覆盖子单元,还用于将所述第二隧道虚拟场景覆盖所述第一隧道虚拟场景,得到隧道混合场景。
作为一种可选的实施方式,在本发明实施例第二方面中,所述输出单元包括:
确定子单元,用于根据所述佩戴者的瞳孔大小确定当前所述佩戴者的眼睛的采光度;
调光子单元,用于根据所述采光度对所述隧道混合场景进行光调整,得到无差别隧道混合场景;
调整子单元,用于根据所述无差别隧道混合场景调整所述智能眼镜内置的投影装置,使得所述投影装置的投影范围与所述佩戴者的可视范围相同;
投影子单元,用于通过所述投影装置投影所述无差别隧道混合场景至所述佩戴者的眼睛中。
本发明实施例第三方面公开一种智能眼镜,包括:
存储有可执行程序代码的存储器;
与所述存储器耦合的处理器;
所述处理器调用所述存储器中存储的所述可执行程序代码,执行本发明实施例第一方面公开的一种用于自动驾驶车辆的场景显示方法。
与现有技术相比,本发明实施例具有以下有益效果:
本发明实施例中,智能眼镜检测自动驾驶车辆是否处于隧道环境,并在检测到自动驾驶车辆处于隧道环境时,混合隧道现实场景与隧道虚拟场景,得到隧道混合场景,并向智能眼镜的佩戴者输出隧道混合场景。可见,实施本发明实施例,智能眼镜能够在自动驾驶车辆驶入隧道时,为自动驾驶车辆的乘坐者显示其他场景,使驾驶者光看其他的场景以保持清醒的状态,从而在自动驾驶车辆遇到紧急情况时,乘坐者可以及时采取紧急措施,降低交通事故发生的概率。
附图说明
为了更清楚地说明本发明实施例中的技术方案,下面将对实施例中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本发明实施例公开的一种自动驾驶车辆的场景显示方法的流程示意图;
图2是本发明实施例公开的另一种自动驾驶车辆的场景显示方法的流程示意图;
图3是本发明实施例公开的一种智能眼镜的结构示意图;
图4是本发明实施例公开的另一种智能眼镜的结构示意图;
图5是本发明实施例公开的另一种智能眼镜的结构示意图;
图6是本发明实施例公开的另一种智能眼镜的结构示意图。
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
需要说明的是,本发明实施例及附图中的术语“包括”和“具有”以及它们任何变形,意图在于覆盖不排他的包含。例如包含了一系列步骤或单元的过程、方法、系统、产品或设备没有限定于已列出的步骤或单元,而是可选地还包括没有列出的步骤或单元,或可选地还包括对于这些过程、方法、产品或设备固有的其它步骤或单元。
本发明实施例公开一种用于自动驾驶车辆的场景显示方法及智能眼镜,能够在自动驾驶车辆驶入隧道时,为自动驾驶车辆的乘坐者显示其他场景,使得驾驶者保持清醒状态,从而在自动驾驶车辆遇到紧急情况时,乘坐者可以及时采取紧急措施,降低交通事故发生的概率。
实施例一
请参阅图1,图1是本发明实施例公开的一种用于自动驾驶车辆的场景显示方法的流程示意图。如图1所示,该用于自动驾驶车辆的场景显示方法可以包括以下步骤:
101、智能眼镜检测自动驾驶车辆所处的当前环境是否为隧道环境,若是,则执行步骤102;若否,则结束本流程。
作为一种可选的实施方式,智能眼镜检测自动驾驶车辆所处的当前环境是否为隧道环境之前,还可以包括:
智能眼镜通过光传感器感应光照强度是否发生突变,若检测到光照强度发生突变,则执行上述的智能眼镜检测自动驾驶车辆所处的当前环境是否为隧道环境。
实施这种实施方式,可以为智能眼镜工作提供一种前提条件,使得智能眼镜不必一直处于判断环境的状态,从而降低了智能眼镜工作的工作量。
作为一种可选的实施方式,智能眼镜检测自动驾驶车辆所处的当前环境是否为隧道环境可以包括:
智能眼镜通过智能眼镜内置的摄像头采集周围的环境信息,并根据周围的环境信息在数据库中寻找与当前环境相同或相似的场景,并根据该相同或相似的场景所带有的标签来判断当前环境是否为隧道环境,若是,则执行步骤102;若否,则结束本流程;其中数据库可以是智能眼镜的存储单元可以为智能眼镜通过网络连接的外置数据库,其中对该网络连接方式本发明实施例不做限定。
实施这种实施方式,可以为智能眼镜提供一种匹配辨识场景的方法,使得智能眼镜可以快速地辨别出当前的场景是否为隧道场景,从而提高了智能眼镜判断当前环境的速度,减少了判断延迟,进而为用户带来更好的交互体验。
作为一种可选的实施方式,智能眼镜检测自动驾驶车辆所处的当前环境是否为隧道环境可以包括:
智能眼镜通过智能眼镜内置的摄像头采集周围的环境信息,该环境信息包括自动驾驶车辆内部的环境信息和透过自动驾驶车辆配置玻璃所能看到的外部环境信息;智能眼镜根据采集到的周围的环境信息获取该环境信息的照度信息和道路信息判断当前环境是否为隧道环境,若是,则执行步骤102;若否则结束本流程。
实施这种实施方式,可以为智能眼镜提供一种判断当前环境是否为隧道环境的方法,使得智能眼镜可以根据实际情况来对场景进行判断,避免了传统的场景匹配容易出错的问题,从而提高了智能眼镜判断场景的准确程度。
102、智能眼镜获取隧道环境对应的隧道现实场景。
作为一种可选的实施方式,智能眼镜获取隧道环境对应的隧道现实场景可以包括:
智能眼镜在缓存空间中对隧道环境进行仿真建模,并生成虚拟的隧道现实场景,该虚拟的隧道现实场景与隧道环境完全相同。
实施这种实施方式,可以提供一种获取隧道现实场景的方法,使得智能眼镜可以运用强大的计算能力对隧道环境进行获取,避免传统的向服务器获取数据的步骤,从而提高了智能眼镜工作的效率。
进一步可选的,智能眼镜在缓存空间中对隧道环境进行仿真建模可以包括:
智能眼镜在分析用户瞳孔的聚焦范围,并根据该瞳孔的聚焦范围确定仿真建模的区域优先级,优先在用户瞳孔的聚焦范围进行仿真建模,在上述瞳孔的聚焦范围仿真建模之后,智能眼镜在其他区域进行仿真建模。
实施这种实施方式,可以在智能眼镜对隧道环境进行仿真建模的时候,为智能眼镜提供一种分级仿真建模的方法,使得智能眼镜的瞬间工作量减少,从而提高了智能眼镜总体的工作效率,避免了智能眼镜的瞬时功率过高的情况出现(避免瞬时功率过高引起的元器件损坏,以及功耗突变带来的影响)。
再进一步可选的,智能眼镜确定仿真建模的区域优先级,优先在用户的瞳孔的聚焦范围进行仿真建模可以包括:
智能眼镜确定仿真建模的区域优先级,并优先在用户瞳孔的聚焦范围进行粗略仿真建模,在粗略的仿真建模完成之后,再对该仿真建模进行优化得到完整的仿真建模,其中,该粗略的仿真建模在人眼的辨识度之上(即人眼分辨不出该仿真建模与现实场景的差异)。
实施这种实施方式,可以提高智能眼镜输出隧道现实场景的速度,使得智能眼镜可以更快的得到隧道现实场景,从而提高了智能眼镜的工作。
103、智能眼镜将隧道现实场景与隧道虚拟场景进行混合,得到隧道混合场景。
作为一种可选的实施方式,智能眼镜将隧道现实场景与隧道虚拟场景进行混合,得到隧道混合场景可以包括:
智能眼镜在缓存空间中显示隧道现实场景,并在隧道现实场景中覆盖隧道虚拟场景,从而达到混合的效果,得到隧道混合场景。
实施这种实施方式,智能眼镜可以在隧道现实场景的基础上得到隧道混合场景,不必全数据进行对比选取从而得到混合场景,从而提高了智能眼镜的工作效率。
104、智能眼镜向智能眼镜的佩戴者输出隧道混合场景。
举例来说,用户佩戴智能眼镜在自动驾驶车辆上通过隧道时,用户佩戴的智能眼镜可以检测出当前环境是隧道环境,并在检测得到确定结果之后,在与智能眼镜连接网络之上寻找壁画信息,并将壁画信息作为隧道虚拟场景与智能眼镜采集到的隧道现实场景进行混合,从而使得用户在自动驾驶车辆中,看到的景色不是单调的隧道景色,而是刻画有丰富多彩的壁画隧道景色,这就使得用户的眼前一亮,避免了用户困倦的情况产生;其中,用户在观看壁画隧道景色的同时,道路与周围的车流还是现实的道路和车流情况(隧道混合场景的现实信息与现实高度相同甚至完全相同),从而使得当自动驾驶车辆出现状况时,用户可以及时的掌握情况,进而控制车辆避免交通事故的发生。
在图1所描述的方法中,智能眼镜可以检测到自动驾驶车辆驶入隧道的情况,并在自动驾驶车辆驶入隧道时,智能眼镜获取当前隧道环境对应的隧道现实场景,将该隧道现实场景与隧道虚拟场景(该隧道虚拟场景可以为智能眼镜内预先存储好的场景)进行混合,得到隧道混合场景,并在得到隧道混合场景之后向用户输出隧道混合场景。可见,图1所描述的方法能够在自动驾驶车辆驶入隧道时,为自动驾驶车辆的乘坐者显示其他场景,使得驾驶者保持清醒状态,从而在自动驾驶车辆遇到紧急情况时,乘坐者可以及时采取紧急措施,降低交通事故发生的概率。
实施例二
请参阅图2,图2是本发明实施例公开的另一种用于自动驾驶车辆的场景显示方法的流程示意图。如图2所示,隧道现实场景由隧道壁现实场景和隧道路面 现实场景组成,该用于自动驾驶车辆的场景显示方法可以包括以下步骤:
201、智能眼镜检测自动驾驶车辆所处的当前环境是否为隧道环境,若是,则执行步骤202;若否,则结束本流程。
作为一种可选的实施方式,智能眼镜检测自动驾驶车辆所处的当前环境是否为隧道环境之前,还可以包括:
智能眼镜通过光传感器感应光照强度是否发生突变,若检测到光照强度发生突变,则执行上述的智能眼镜检测自动驾驶车辆所处的当前环境是否为隧道环境。
实施这种实施方式,可以为智能眼镜工作提供一种前提条件,使得智能眼镜不必一直处于判断环境的状态,从而降低了智能眼镜工作的工作量。
作为一种可选的实施方式,智能眼镜检测自动驾驶车辆所处的当前环境是否为隧道环境可以包括:
智能眼镜通过智能眼镜内置的摄像头采集周围的环境信息,并根据周围的环境信息在数据库中寻找与当前环境相同或相似的场景,并根据该相同或相似的场景所带有的标签来判断当前环境是否为隧道环境,若是,则执行步骤202;若否,则结束本流程;其中数据库可以是智能眼镜的存储单元可以为智能眼镜通过网络连接的外置数据库,其中对该网络连接方式本发明实施例不做限定。
实施这种实施方式,可以为智能眼镜提供一种匹配辨识场景的方法,使得智能眼镜可以快速地辨别出当前的场景是否为隧道场景,从而提高了智能眼镜判断当前环境的速度,减少了判断延迟,进而为用户带来更好的交互体验。
作为一种可选的实施方式,智能眼镜检测自动驾驶车辆所处的当前环境是否为隧道环境可以包括:
智能眼镜通过智能眼镜内置的摄像头采集周围的环境信息,该环境信息包括自动驾驶车辆内部的环境信息和透过自动驾驶车辆配置玻璃所能看到的外部环境信息;智能眼镜根据采集到的周围的环境信息获取该环境信息的照度信息和道路信息判断当前环境是否为隧道环境,若是,则执行步骤102;若否则结束本流程。
实施这种实施方式,可以为智能眼镜提供一种判断当前环境是否为隧道环境的方法,使得智能眼镜可以根据实际情况来对场景进行判断,避免了传统的场景匹配容易出错的问题,从而提高了智能眼镜判断场景的准确程度。
202、智能眼镜连接自动驾驶车辆的云数据库;该云数据库为自动驾驶车辆在自动驾驶过程中实时使用的数据库。
本发明实施例中,自动驾驶车辆安装有大量传感器,并实时通过大量传感器获取信息传送至云数据库中,换言之,该云数据库中的数据为自动驾驶车辆通过大量传感器获取并存储的。
203、智能眼镜在云数据库中获取隧道环境对应的隧道现实场景;该隧道现实场景是由自动驾驶车辆的场景采集系统采集并存储至云数据库的场景。
实施步骤202~步骤203,云数据库可以为智能眼镜提供出一级缓冲平台,使 得智能眼镜可以直接对接云数据库,执行单一的获取操作,避免了通过各类传感器直接获取隧道现实场景,该过程由原来的变量获取到现在的定向传输,简化了智能眼镜的工作流程。实施步骤202~步骤203,智能眼镜还可以直接获取到自动驾驶车辆以外的场景信息,使得用户在观看的时候,可以忽略车身带来的视角阻挡,为用户带来更好的视觉体验。
204、智能眼镜获取隧道环境所处的位置信息。
作为一种可选的实施方式,智能眼镜获取隧道环境所处的位置信息可以包括:
智能眼镜通过GPS定位检测当前所处的位置,并确定当前的位置信息。
实施这种实施方式,为智能眼镜提供了一种当前隧道环境位置信息的获取方法。
在本发明实施例中,智能眼镜获取隧道环境所处的位置信息可以通过GPS定位获取,也可以通过联网对场景识别获取,对此,本发明实施例中不做限定。
205、智能眼镜根据位置信息获取位置信息所属的行政区域。
206、智能眼镜获取行政区域内的第一景点对应的第一虚拟场景,作为第一隧道虚拟场景。
207、智能眼镜将第一隧道虚拟场景覆盖隧道壁现实场景,得到隧道混合场景。
实施步骤204~步骤207,智能眼镜可以为用户输出当前所处环境的景点信息,并将该景点信息作为景点介绍覆盖与隧道壁现实场景中,从而得到隧道混合场景,使得用户可以在自动驾驶车辆中观看到当前所在行政区域的景点图像。
实时步骤204~步骤207,智能眼镜还可以为用户提供3D效果的场景信息,使得用户在自动驾驶车辆中观看到近乎现实的场景,从而体验身临其境的感觉,进而提高了用户的交互体验。
208、智能眼镜判断是否接收到场景更换指令,若是,则执行步骤209;若否则结束本流程。
本发明实施例中,场景更换指令可以是手势指令也可以是语音指令,对此本发明实施例中不做限定。
209、智能眼镜判断自动驾驶车辆是否驶离隧道环境,若是,则结束本流程;若否,则执行步骤210。
作为一种可选的实施方式,智能眼镜判断自动驾驶车辆是否驶离隧道环境可以包括:
智能眼镜通过光照强度判断自动驾驶车辆是否驶离隧道环境,若检测到自动驾驶车辆驶离隧道环境,则结束本流程;若未检测到自动驾驶车辆驶离隧道环境,则执行步骤210。
实施这种实施方式,可以快速检测出用户是否驶离隧道环境,并根据检测结果执行后续步骤,从而提高了智能眼镜的工作效率。
210、智能眼镜获取行政区域内的第二景点对应的第二虚拟场景,作为第二 隧道虚拟场景。
211、智能眼镜将第二隧道虚拟场景覆盖第一隧道虚拟场景,得到隧道混合场景。
实施步骤208~步骤211,智能眼镜可以在接收到切换场景的指令时,对场景进行适当切换,从而满足用户多重场景的体验需求。
212、智能眼镜根据佩戴者的瞳孔大小确定当前佩戴者的眼睛的采光度。
作为一种可选的实施方式,智能眼镜根据佩戴者的瞳孔大小确定当前佩戴者的眼睛的采光度可以包括:
智能眼镜根据佩戴者的瞳孔大小在与智能眼镜连接的网络之上查找该瞳孔大小所对应的采光度,并在网络上对该采光度是否有较高的可靠性,若是,则确定该采光度为佩戴者的眼睛的采光度。
实施这种实施方式,智能眼镜可以根据网络数据确定佩戴者眼镜的采光度,从而提高采光度确定的准确程度。
213、智能眼镜根据采光度对隧道混合场景进行光调整,得到无差别隧道混合场景。
本发明实施例中,无差别隧道混合场景是光照强度与显示光照强度相同的场景。
214、智能眼镜根据无差别隧道混合场景调整智能眼镜内置的投影装置,使得投影装置的投影范围与佩戴者的可视范围相同。
实施步骤214可以避免佩戴者观看场景不完全或观看场景存在边缘界面的情况出现。
215、智能眼镜通过投影装置投影无差别隧道混合场景至佩戴者的眼睛中。
实施步骤212~步骤215,智能眼镜可以根据显示场景和用户的感光体验确定生成场景的光信息参数(如亮度,光照强度),使得用户可以更好的观看场景信息,不会产生不适感,为用户提供更好的代入感,从而提高用户的体验。
在图2所描述的方法中,智能眼镜可以检测到自动驾驶车辆驶入隧道的情况,并在自动驾驶车辆驶入隧道时,智能眼镜通过自动驾驶车辆的实施云数据库获取当前隧道环境对应的隧道现实场景,并获取当前的位置信息,确定当前所在的行政区域,并进一步获取当前行政区域的景点信息,确定该景点信息为隧道虚拟场景,将该隧道现实场景与隧道虚拟场景进行混合,得到隧道混合场景,并在得到隧道混合场景之后,检测用户当前的用眼情况,并根据该用眼情况调整输出细节,在调整完成后向用户输出隧道混合场景,其中,当用户想要切换场景观看时,智能眼镜还可以接收到用户输入的切换指令,从而对场景进行切换。可见,图2所描述的方法能够在自动驾驶车辆驶入隧道时,为自动驾驶车辆的乘坐者显示当前位置的著名景点场景,使得驾驶者保持清醒状态,并在乘坐者想要观看其他场景时,为乘坐者切换其他场景,从而为自动驾驶车辆的乘坐者提供多样的交互体验,并且在自动驾驶车辆遇到紧急情况时,乘坐者可以及时采取紧急措施,降低交通事故发生的概率。
实施例三
请参阅图3,图3是本发明实施例公开的一种智能眼镜的结构示意图。如图3所示,该智能眼镜可以包括:
检测单元301,用于检测自动驾驶车辆所处的当前环境是否为隧道环境。
作为一种可选的实施方式,智能眼镜还可以包括光强检测单元,该光强检测单元用于,在检测单元301检测自动驾驶车辆所处的当前环境是否为隧道环境之前,通过光传感器感应光照强度是否发生突变,若检测到光照强度发生突变,则触发检测单元301执行上述的检测自动驾驶车辆所处的当前环境是否为隧道环境的操作。
实施这种实施方式,可以为智能眼镜工作提供一种前提条件,使得智能眼镜不必一直处于判断环境的状态,从而降低了智能眼镜工作的工作量。
作为一种可选的实施方式,检测单元301可以包括:
采集子单元,用于通过智能眼镜内置的摄像头采集周围的环境信息;
查找子单元,用于并根据周围的环境信息在数据库中寻找与当前环境相同或相似的场景;其中,上述数据库可以是智能眼镜的存储单元可以为智能眼镜通过网络连接的外置数据库,其中对该网络连接方式本发明实施例不做限定;
环境判断子单元,用于并根据该相同或相似的场景所带有的标签来判断当前环境是否为隧道环境。
实施这种实施方式,可以为智能眼镜提供一种匹配辨识场景的方法,使得智能眼镜可以快速地辨别出当前的场景是否为隧道场景,从而提高了智能眼镜判断当前环境的速度,减少了判断延迟,进而为用户带来更好的交互体验。
作为一种可选的实施方式,检测子单元301可以包括:
采集子单元,用于通过智能眼镜内置的摄像头采集周围的环境信息,该环境信息包括自动驾驶车辆内部的环境信息和透过自动驾驶车辆配置玻璃所能看到的外部环境信息;
环境判断子单元,还用于根据采集到的周围的环境信息获取该环境信息的照度信息和道路信息判断当前环境是否为隧道环境。
实施这种实施方式,可以为智能眼镜提供一种判断当前环境是否为隧道环境的方法,使得智能眼镜可以根据实际情况来对场景进行判断,避免了传统的场景匹配容易出错的问题,从而提高了智能眼镜判断场景的准确程度。
获取单元302,用于在检测单元301的检测结果为是时,获取隧道环境对应的隧道现实场景。
作为一种可选的实施方式,获取单元302具体可以用于在缓存空间中对隧道环境进行仿真建模,并生成虚拟的隧道现实场景,该虚拟的隧道现实场景与隧道环境完全相同。
实施这种实施方式,可以提供一种获取隧道现实场景的方法,使得智能眼镜可以运用强大的计算能力对隧道环境进行获取,避免传统的向服务器获取数据的步骤,从而提高了智能眼镜工作的效率。
进一步可选的,获取单元302可以包括:
分析建模子单元,用于分析用户瞳孔的聚焦范围,并根据该瞳孔的聚焦范围确定仿真建模的区域优先级,优先在用户瞳孔的聚焦范围进行仿真建模;
完全建模子单元,用于在分析建模子单元仿真建模之后,智能眼镜在其他区域进行仿真建模。
实施这种实施方式,可以在智能眼镜对隧道环境进行仿真建模的时候,为智能眼镜提供一种分级仿真建模的方法,使得智能眼镜的瞬间工作量减少,从而提高了智能眼镜总体的工作效率,避免了智能眼镜的瞬时功率过高的情况出现(避免瞬时功率过高引起的元器件损坏,以及功耗突变带来的影响)。
再进一步可选的,分析建模子单元可以包括:
粗略建模模块,用于确定仿真建模的区域优先级,并优先在用户瞳孔的聚焦范围进行粗略仿真建模;
完全建模模块,用于在粗略建模模块完成粗略的仿真建模后,再对该仿真建模进行优化得到完整的仿真建模,其中,该粗略的仿真建模在人眼的辨识度之上(即人眼分辨不出该仿真建模与现实场景的差异)。
实施这种实施方式,可以提高智能眼镜输出隧道现实场景的速度,使得智能眼镜可以更快的得到隧道现实场景,从而提高了智能眼镜的工作。
混合单元303,用于将获取单元302获取到的隧道现实场景与隧道虚拟场景进行混合,得到隧道混合场景。
作为一种可选的实施方式,混合单元303具体可以用于在缓存空间中显示隧道现实场景,并在隧道现实场景中覆盖隧道虚拟场景,从而达到混合的效果,得到隧道混合场景。
实施这种实施方式,智能眼镜可以在隧道现实场景的基础上得到隧道混合场景,不必全数据进行对比选取从而得到混合场景,从而提高了智能眼镜的工作效率。
输出单元304,用于向智能眼镜的佩戴者输出混合单元303混合得到的隧道混合场景。
举例来说,用户佩戴智能眼镜在自动驾驶车辆上通过隧道时,用户佩戴的智能眼镜可以检测出当前环境是隧道环境,并在检测得到确定结果之后,在与智能眼镜连接网络之上寻找壁画信息,并将壁画信息作为隧道虚拟场景与智能眼镜采集到的隧道现实场景进行混合,从而使得用户在自动驾驶车辆中,看到的景色不是单调的隧道景色,而是刻画有丰富多彩的壁画隧道景色,这就使得用户的眼前一亮,避免了用户困倦的情况产生;其中,用户在观看壁画隧道景色的同时,道路与周围的车流还是现实的道路和车流情况(隧道混合场景的现实信息与现实高度相同甚至完全相同),从而使得当自动驾驶车辆出现状况时,用户可以及时的掌握情况,进而控制车辆避免交通事故的发生。
可见,图3所描述的智能眼镜能够在自动驾驶车辆驶入隧道时,为自动驾驶车辆的乘坐者显示其他场景,使得驾驶者保持清醒状态,从而在自动驾驶车辆遇 到紧急情况时,乘坐者可以及时采取紧急措施,降低交通事故发生的概率。
实施例四
请参阅图4,图4是本发明实施例公开的另一种智能眼镜的结构示意图。其中,图4所示的智能眼镜是由图3所示的智能眼镜进行优化得到的。与图3所示的智能眼镜相比,图4所示的智能眼镜中,获取单元302可以包括:
连接子单元3021,用于在检测单元301的检测结果为是时,连接自动驾驶车辆的云数据库;该云数据库为自动驾驶车辆在自动驾驶过程中实时使用的数据库。
本发明实施例中,自动驾驶车辆安装有大量传感器,并实时通过大量传感器获取信息传送至云数据库中,换言之,该云数据库中的数据为自动驾驶车辆通过大量传感器获取并存储的。
第一获取子单元3022,用于在连接子单元3021连接的云数据库中获取隧道环境对应的隧道现实场景;该隧道现实场景是由自动驾驶车辆的场景采集系统采集并存储至云数据库的场景。
本发明实施例中,云数据库可以为智能眼镜提供出一级缓冲平台,使得智能眼镜可以直接对接云数据库,执行单一的获取操作,避免了通过各类传感器直接获取隧道现实场景,该过程由原来的变量获取到现在的定向传输,简化了智能眼镜的工作流程。与此同时,智能眼镜还可以直接获取到自动驾驶车辆以外的场景信息,使得用户在观看的时候,可以忽略车身带来的视角阻挡,为用户带来更好的视觉体验。
作为一种可选的实施方式,在图4所示的智能眼镜中,隧道现实场景由隧道壁现实场景和隧道路面现实场景组成,其中,混合单元303可以包括:
第二获取子单元3031,用于获取隧道环境所处的位置信息。
作为一种可选的实施方式,第二获取子单元3031具体可以用于通过GPS定位检测当前所处的位置,并确定当前的位置信息。
实施这种实施方式,为智能眼镜提供了一种当前隧道环境位置信息的获取方法。
在本发明实施例中,智能眼镜获取隧道环境所处的位置信息可以通过GPS定位获取,也可以通过联网对场景识别获取,对此,本发明实施例中不做限定。
第三获取子单元3032,用于根据第二获取子单元3031获取到的位置信息获取位置信息所属的行政区域。
第四获取子单元3033,用于获取第三获取子单元3032获取到的行政区域内的第一景点对应的第一虚拟场景,作为第一隧道虚拟场景。
覆盖子单元3034,用于将第四获取单元3033获取到的第一隧道虚拟场景覆盖隧道壁现实场景,得到隧道混合场景。
可见,图4所描述的智能眼镜能够为用户输出当前所处环境的景点信息,并将该景点信息作为景点介绍覆盖与隧道壁现实场景中,从而得到隧道混合场景,使得用户可以在自动驾驶车辆中观看到当前所在行政区域的景点图像。
作为一种可选的实施方式,在图4所示的智能眼镜中,混合单元303还可以包括:
第一判断子单元3035,用于在覆盖子单元3034得到隧道混合场景之后,判断是否接收到场景更换指令。
本发明实施例中,覆盖子单元3034得到的隧道混合场景是由第一隧道虚拟场景和隧道路面现实场景混合得到的。
第二判断子单元3036,用于在第一判断子单元3035的判断结果为是时,判断自动驾驶车辆是否驶离隧道环境。
作为一种可选的实施方式,第二判断子单元3036具体可以用于,通过光照强度判断自动驾驶车辆是否驶离隧道环境。
实施这种实施方式,可以快速检测出用户是否驶离隧道环境,并根据检测结果执行后续步骤,从而提高了智能眼镜的工作效率。
第五获取子单元3037,用于在第二判断子单元3036的判断结果为否时,获取行政区域内的第二景点对应的第二虚拟场景,作为第二隧道虚拟场景。
覆盖子单元3034,还用于将第五获取子单元3037获取到的第二隧道虚拟场景覆盖第一隧道虚拟场景,得到隧道混合场景。
可见,图4所描述的智能眼镜能够通过自动驾驶车辆的云数据库获取隧道现实场景,从而只需要执行获取的操作,避免了通过各类传感器直接获取隧道现实场景的操作,该过程由原来的变量获取到现在的定向传输,简化了智能眼镜的工作流程。与此同时,智能眼镜还可以直接获取到自动驾驶车辆以外的场景信息,使得用户在观看的时候,可以忽略车身带来的视角阻挡,为用户带来更好的视觉体验。
可见,图4所描述的智能眼镜能够为用户输出当前所处环境的景点信息,并将该景点信息作为景点介绍覆盖与隧道壁现实场景中,从而得到隧道混合场景,使得用户可以在自动驾驶车辆中观看到当前所在行政区域的景点图像。
可见,图4所描述的智能眼镜能够在接收到切换场景的指令时,对场景进行适当切换,从而满足用户多重场景的体验需求。
可见,图4所描述的智能眼镜能够在自动驾驶车辆驶入隧道时,为自动驾驶车辆的乘坐者显示其他场景,使得驾驶者保持清醒状态,从而在自动驾驶车辆遇到紧急情况时,乘坐者可以及时采取紧急措施,降低交通事故发生的概率。
实施例五
请参阅图5,图5是本发明实施例公开的另一种智能眼镜的结构示意图。其中,图5所示的智能眼镜是由图4所示的智能眼镜进行优化得到的。与图4所示的智能眼镜相比,图5所示的智能眼镜中,输出单元304包括:
确定子单元3041,用于根据佩戴者的瞳孔大小确定当前佩戴者的眼睛的采光度。
作为一种可选的实施方式,确定子单元3041具体可以用于根据佩戴者的瞳孔大小在与智能眼镜连接的网络之上查找该瞳孔大小所对应的采光度,并在网络 上对该采光度是否有较高的可靠性,若是,则确定该采光度为佩戴者的眼睛的采光度。
实施这种实施方式,智能眼镜可以根据网络数据确定佩戴者眼镜的采光度,从而提高采光度确定的准确程度。
调光子单元3042,用于根据确定子单元3041确定出的采光度对覆盖子单元3034得到的隧道混合场景进行光调整,得到无差别隧道混合场景。
调整子单元3043,用于根据调光子单元3042进行光调整之后得到的无差别隧道混合场景调整智能眼镜内置的投影装置,使得投影装置的投影范围与佩戴者的可视范围相同。
投影子单元3044,用于通过调整子单元3043调整的投影装置投影无差别隧道混合场景至佩戴者的眼睛中。
可见,图5所描述的智能眼镜能够为用户输出当前所处环境的景点信息,并将该景点信息作为景点介绍覆盖与隧道壁现实场景中,从而得到隧道混合场景,使得用户可以在自动驾驶车辆中观看到当前所在行政区域的景点图像。
可见,图5所描述的智能眼镜能够在接收到切换场景的指令时,对场景进行适当切换,从而满足用户多重场景的体验需求。
可见,图5所描述的智能眼镜能够在自动驾驶车辆驶入隧道时,为自动驾驶车辆的乘坐者显示当前位置的著名景点场景,使得驾驶者保持清醒状态,并在乘坐者想要观看其他场景时,为乘坐者切换其他场景,从而为自动驾驶车辆的乘坐者提供多样的交互体验,并且在自动驾驶车辆遇到紧急情况时,乘坐者可以及时采取紧急措施,降低交通事故发生的概率。
实施例六
请参阅图6,图6是本发明实施例公开的另一种智能眼镜的结构示意图。如图6所示,该智能眼镜可以包括:
存储有可执行程序代码的存储器601;
与存储器601耦合的处理器602;
其中,处理器602调用存储器601中存储的可执行程序代码,执行图1~图2任意一种用于自动驾驶车辆的场景显示方法。
本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令相关的硬件来完成,该程序可以存储于一计算机可读存储介质中,存储介质包括只读存储器(Read-Only Memory,ROM)、随机存储器(Random Access Memory,RAM)、可编程只读存储器(Programmable Read-only Memory,PROM)、可擦除可编程只读存储器(Erasable Programmable Read Only Memory,EPROM)、一次可编程只读存储器(One-time Programmable Read-Only Memory,OTPROM)、电子抹除式可复写只读存储器(Electrically-Erasable Programmable Read-Only Memory,EEPROM)、只读光盘(Compact Disc Read-Only Memory,CD-ROM)或其他光盘存储器、磁盘存储器、磁带存储器、或者能够用于携带或存储数据的计算机可读的任何其他介质。
The scene display method for a self-driving vehicle and the smart glasses disclosed in the embodiments of the present invention have been described in detail above. Specific examples have been used herein to explain the principles and implementations of the present invention; the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, a person of ordinary skill in the art may make changes to the specific implementations and the scope of application in accordance with the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

  1. 一种用于自动驾驶车辆的场景显示方法,其特征在于,所述方法包括:
    智能眼镜检测所述自动驾驶车辆所处的当前环境是否为隧道环境;
    如果是,所述智能眼镜获取所述隧道环境对应的隧道现实场景;
    所述智能眼镜将所述隧道现实场景与隧道虚拟场景进行混合,得到隧道混合场景;
    所述智能眼镜向所述智能眼镜的佩戴者输出所述隧道混合场景。
  2. 根据权利要求1所述的方法,其特征在于,所述智能眼镜获取所述隧道环境对应的隧道现实场景,包括:
    所述智能眼镜连接所述自动驾驶车辆的云数据库;所述云数据库为所述自动驾驶车辆在自动驾驶过程中实时使用的数据库;
    所述智能眼镜在所述云数据库中获取所述隧道环境对应的隧道现实场景;所述隧道现实场景是由所述自动驾驶车辆的场景采集系统采集并存储至所述云数据库的场景。
  3. 根据权利要求2所述的方法,其特征在于,所述隧道现实场景由隧道壁现实场景和隧道路面现实场景组成,所述智能眼镜将所述隧道现实场景与隧道虚拟场景进行混合,得到隧道混合场景,包括:
    所述智能眼镜获取所述隧道环境所处的位置信息;
    所述智能眼镜根据所述位置信息获取所述位置信息所属的行政区域;
    所述智能眼镜获取所述行政区域内的第一景点对应的第一虚拟场景,作为第一隧道虚拟场景;
    所述智能眼镜将所述第一隧道虚拟场景覆盖所述隧道壁现实场景,得到隧道混合场景。
  4. 根据权利要求3所述的方法,其特征在于,所述智能眼镜将所述第一隧道虚拟场景覆盖所述隧道壁现实场景,得到隧道混合场景之后,以及智能眼镜向所述智能眼镜的佩戴者输出所述隧道混合场景之前,所述方法还包括:
    所述智能眼镜判断是否接收到场景更换指令;
    如果是,所述智能眼镜判断所述自动驾驶车辆是否驶离所述隧道环境,如果否,获取所述行政区域内的第二景点对应的第二虚拟场景,作为第二隧道虚拟场 景;
    所述智能眼镜将所述第二隧道虚拟场景覆盖所述第一隧道虚拟场景,得到隧道混合场景。
  5. The method according to any one of claims 1 to 4, wherein the outputting, by the smart glasses, the tunnel mixed scene to the wearer of the smart glasses comprises:
    determining, by the smart glasses, a current light intake of the wearer's eyes according to a pupil size of the wearer;
    performing, by the smart glasses, light adjustment on the tunnel mixed scene according to the light intake to obtain an undifferentiated tunnel mixed scene;
    adjusting, by the smart glasses, a projection device built into the smart glasses according to the undifferentiated tunnel mixed scene, so that a projection range of the projection device is the same as a visual range of the wearer; and
    projecting, by the smart glasses, the undifferentiated tunnel mixed scene into the wearer's eyes through the projection device.
  6. Smart glasses, characterized in that the smart glasses comprise:
    a detection unit, configured to detect whether a current environment in which the self-driving vehicle is located is a tunnel environment;
    an acquisition unit, configured to acquire, when a detection result of the detection unit is yes, a tunnel real scene corresponding to the tunnel environment;
    a mixing unit, configured to mix the tunnel real scene with a tunnel virtual scene to obtain a tunnel mixed scene; and
    an output unit, configured to output the tunnel mixed scene to a wearer of the smart glasses.
  7. The smart glasses according to claim 6, wherein the acquisition unit comprises:
    a connection subunit, configured to connect to a cloud database of the self-driving vehicle, the cloud database being a database used in real time by the self-driving vehicle during automated driving; and
    a first acquisition subunit, configured to acquire the tunnel real scene corresponding to the tunnel environment from the cloud database, the tunnel real scene being a scene collected by a scene collection system of the self-driving vehicle and stored in the cloud database.
  8. The smart glasses according to claim 7, wherein the tunnel real scene consists of a tunnel-wall real scene and a tunnel-road real scene, and the mixing unit comprises:
    a second acquisition subunit, configured to acquire position information of the tunnel environment;
    a third acquisition subunit, configured to acquire, according to the position information, an administrative region to which the position information belongs;
    a fourth acquisition subunit, configured to acquire a first virtual scene corresponding to a first scenic spot in the administrative region as a first tunnel virtual scene; and
    an overlay subunit, configured to overlay the first tunnel virtual scene on the tunnel-wall real scene to obtain the tunnel mixed scene.
  9. The smart glasses according to claim 8, wherein the mixing unit further comprises:
    a first judgment subunit, configured to judge whether a scene change instruction is received;
    a second judgment subunit, configured to judge, when a judgment result of the first judgment subunit is yes, whether the self-driving vehicle has left the tunnel environment; and
    a fifth acquisition subunit, configured to acquire, when a judgment result of the second judgment subunit is no, a second virtual scene corresponding to a second scenic spot in the administrative region as a second tunnel virtual scene;
    wherein the overlay subunit is further configured to overlay the second tunnel virtual scene on the first tunnel virtual scene to obtain the tunnel mixed scene.
  10. The smart glasses according to any one of claims 6 to 9, wherein the output unit comprises:
    a determination subunit, configured to determine a current light intake of the wearer's eyes according to a pupil size of the wearer;
    a light-adjustment subunit, configured to perform light adjustment on the tunnel mixed scene according to the light intake to obtain an undifferentiated tunnel mixed scene;
    an adjustment subunit, configured to adjust a projection device built into the smart glasses according to the undifferentiated tunnel mixed scene, so that a projection range of the projection device is the same as a visual range of the wearer; and
    a projection subunit, configured to project the undifferentiated tunnel mixed scene into the wearer's eyes through the projection device.
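The scene-switching behaviour of claims 4 and 9 (advance to a second scenic-spot scene on a change instruction, but only while the vehicle is still inside the tunnel) can be sketched as follows. The region lookup stub, the region name, and the two scenic-spot names are placeholder data chosen for illustration; none of them come from the application.

```python
# Placeholder data: one administrative region with an ordered list of scenic spots.
REGION_SPOTS = {
    "RegionA": ["FirstSpot", "SecondSpot"],
}


def region_of(position):
    # Claims 3/8: map the tunnel's position information to its administrative
    # region. Stubbed here; a real system would query a geolocation service.
    return "RegionA"


def virtual_scene(position, change_requested, left_tunnel, index=0):
    """Return the scenic-spot scene to overlay on the tunnel-wall real scene."""
    spots = REGION_SPOTS[region_of(position)]
    # Claims 4/9: on a scene-change instruction, advance to the next spot,
    # but only if the vehicle has not yet left the tunnel environment.
    if change_requested and not left_tunnel:
        index = (index + 1) % len(spots)
    return spots[index]


print(virtual_scene((23.1, 113.3), change_requested=False, left_tunnel=False))  # FirstSpot
print(virtual_scene((23.1, 113.3), change_requested=True, left_tunnel=False))   # SecondSpot
```

A change request received after the vehicle has left the tunnel leaves the displayed scene unchanged, matching the two-step judgment in claim 4.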
PCT/CN2017/117678 2017-12-11 2017-12-21 Scene display method for self-driving vehicle and smart glasses WO2019114013A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711309717.4 2017-12-11
CN201711309717.4A CN107945284B (zh) 2017-12-11 2017-12-11 Scene display method for self-driving vehicle and smart glasses

Publications (1)

Publication Number Publication Date
WO2019114013A1 true WO2019114013A1 (zh) 2019-06-20

Family

ID=61946524

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/117678 WO2019114013A1 (zh) 2017-12-11 2017-12-21 Scene display method for self-driving vehicle and smart glasses

Country Status (2)

Country Link
CN (1) CN107945284B (zh)
WO (1) WO2019114013A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102018210390B4 (de) 2018-06-26 2023-08-03 Audi Ag Method for operating a display device in a motor vehicle, and display system for a motor vehicle
CN110864913B (zh) * 2019-11-28 2021-09-03 Suzhou Zhijia Technology Co., Ltd. Vehicle testing method and device, computer equipment and storage medium
CN113989466B (zh) * 2021-10-28 2022-09-20 Jiangsu Haohan Information Technology Co., Ltd. Beyond-visual-range driver-assistance system based on situation cognition
CN114942532B (zh) * 2022-05-25 2024-06-21 Vivo Mobile Communication Co., Ltd. Glasses clip, and control method and control apparatus for the glasses clip
TWI811043B (zh) * 2022-07-28 2023-08-01 SigmaStar Technology Ltd. (mainland China) Image processing system and image object superimposition apparatus and method thereof

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2846756A1 (fr) * 2002-11-04 2004-05-07 Pechon Stephane Jean Martin Le Night-vision device intended for driving
CN103185963A (zh) * 2013-03-29 2013-07-03 Nanjing Zhizhen Electronic Technology Co., Ltd. Multifunctional vehicle driver-assistance glasses
CN103905805A (zh) * 2012-12-24 2014-07-02 Tianma Microelectronics Co., Ltd. Vehicle-mounted electronic glasses system
CN105629515A (zh) * 2016-02-22 2016-06-01 Yulong Computer Telecommunication Scientific (Shenzhen) Co., Ltd. Navigation glasses, navigation method and navigation system
WO2016184541A1 (de) * 2015-05-21 2016-11-24 Audi Ag Method for operating data glasses in a motor vehicle, and system comprising data glasses
CN107065183A (zh) * 2017-03-21 2017-08-18 Guangdong Guangzhen Optoelectronic Technology Co., Ltd. Method for enhancing visibility during night driving, and portable glasses-type device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101685534B1 (ko) * 2014-12-16 2016-12-12 Hyundai Motor Company Vehicle lighting control system using wearable glasses and method thereof
CN104656257A (zh) * 2015-01-23 2015-05-27 Lenovo (Beijing) Co., Ltd. Information processing method and electronic device
CN105045397B (zh) * 2015-08-31 2019-07-26 China Merchants Chongqing Communications Research & Design Institute Co., Ltd. Test method for the influence of the in-tunnel lighting environment on the operational safety of in-service tunnels
WO2017166193A1 (zh) * 2016-03-31 2017-10-05 Shenzhen Dlodlo New Technology Co., Ltd. Method and device for driving a display screen on the basis of VR images
WO2017180990A1 (en) * 2016-04-14 2017-10-19 The Research Foundation For The State University Of New York System and method for generating a progressive representation associated with surjectively mapped virtual and physical reality image data
CN106373197B (zh) * 2016-09-06 2020-08-07 Guangzhou Shiyuan Electronics Co., Ltd. Augmented reality method and augmented reality device
CN107219920A (zh) * 2017-05-15 2017-09-29 Beijing Xiaomi Mobile Software Co., Ltd. Scene-based AR glasses recognition method and device, and AR glasses


Also Published As

Publication number Publication date
CN107945284B (zh) 2020-03-06
CN107945284A (zh) 2018-04-20


Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 17934837; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: PCT application non-entry in European phase (Ref document number: 17934837; Country of ref document: EP; Kind code of ref document: A1)