WO2019114019A1 - Scene generation method for self-driving vehicle and smart glasses - Google Patents

Scene generation method for self-driving vehicle and smart glasses

Info

Publication number
WO2019114019A1
WO2019114019A1 (PCT/CN2017/117686)
Authority
WO
WIPO (PCT)
Prior art keywords
tunnel
scene
virtual
smart glasses
self
Prior art date
Application number
PCT/CN2017/117686
Other languages
English (en)
Chinese (zh)
Inventor
蔡任轩
Original Assignee
广州德科投资咨询有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州德科投资咨询有限公司 filed Critical 广州德科投资咨询有限公司
Publication of WO2019114019A1 publication Critical patent/WO2019114019A1/fr

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/017 Head mounted
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/017 Head mounted
    • G02B2027/0178 Eyeglass type

Definitions

  • The present invention relates to the field of smart glasses, and in particular to a scene generation method for a self-driving vehicle and to smart glasses.
  • Mature autopilot technology allows occupants of an autonomous vehicle to do what they want freely during the ride.
  • However, in a dim, enclosed tunnel environment an occupant tends to feel sleepy and weak, so that when the self-driving vehicle encounters an emergency the occupant cannot take emergency measures in time, which increases the probability of a traffic accident.
  • The embodiments of the present invention disclose a scene generation method for a self-driving vehicle and smart glasses, which can generate a racing scene for an occupant of the self-driving vehicle when the vehicle enters a tunnel, so that the occupant can watch the racing scene and remain awake; when the self-driving vehicle encounters an emergency, the occupant can therefore take emergency measures in time, which reduces the probability of a traffic accident.
  • a first aspect of an embodiment of the present invention discloses a method for generating a scene for an autonomous vehicle, including:
  • when the self-driving vehicle enters a tunnel, the smart glasses acquire a virtual tunnel scene corresponding to the tunnel, and acquire tunnel information of the tunnel and driving information of the self-driving vehicle, wherein the tunnel information includes at least a tunnel length and the driving information includes at least a traveling speed;
  • the smart glasses increase the tunnel length according to a preset increase multiple to obtain an adjusted length;
  • the smart glasses extend the length of the virtual tunnel scene to the adjusted length to obtain an extended tunnel scene;
  • the smart glasses calculate a travel time required for the self-driving vehicle to leave the tunnel according to the tunnel length and the traveling speed, and calculate a virtual speed according to the adjusted length and the travel time;
  • the smart glasses add the adjusted length and the virtual speed in the extended tunnel scene to generate a virtual racing scene.
  • the smart glasses acquiring, when the self-driving vehicle enters the tunnel, the virtual tunnel scene corresponding to the tunnel includes:
  • when the self-driving vehicle enters the tunnel, the smart glasses acquire a real tunnel scene corresponding to the tunnel;
  • the real tunnel scene includes a tunnel structure scene, an internal scene of the self-driving vehicle, and an external vehicle scene in the tunnel other than the self-driving vehicle;
  • the smart glasses virtualize the tunnel structure scenario to obtain a virtual tunnel structure scenario
  • the smart glasses generate an internal scene of a virtual favorite vehicle according to the preference of the wearer of the smart glasses; the favorite vehicle of the wearer is pre-stored in the smart glasses;
  • the smart glasses virtualize an internal scene of the self-driving vehicle to obtain a virtual internal scene of the self-driving vehicle;
  • the smart glasses replace the virtual internal scene with an internal scene of the virtual favorite vehicle as a virtual vehicle internal scene;
  • the smart glasses virtualize an external vehicle scene other than the self-driving vehicle in the tunnel to obtain a virtual external vehicle scene;
  • the smart glasses generate a virtual tunnel scenario corresponding to the tunnel; wherein the virtual tunnel scenario includes the virtual tunnel structure scenario, the virtual vehicle interior scenario, and the virtual external vehicle scenario.
  • the smart glasses virtualize the tunnel structure scenario to obtain a virtual tunnel structure scenario, including:
  • the smart glasses virtualize the tunnel structure scenario to obtain a first tunnel structure scenario
  • the smart glasses acquire location information of the tunnel
  • the smart glasses acquire an administrative area to which the location information belongs according to the location information;
  • the smart glasses acquire a first virtual scene corresponding to the first attraction in the administrative area;
  • the smart glasses superimpose the first virtual scene into the first tunnel structure scene to obtain a virtual tunnel structure scene.
  • after the smart glasses add the adjusted length and the virtual speed in the extended tunnel scene to generate a virtual racing scene, the method further includes:
  • the smart glasses calculate a magnification of the virtual speed relative to the traveling speed;
  • the smart glasses change a speed of backward movement of the extended tunnel structure scene included in the extended tunnel scene according to the magnification
  • the smart glasses blur-render the extended tunnel structure scene such that the wearer generates a feeling of advancing at the virtual speed when riding the self-driving vehicle.
  • the smart glasses add the adjusted length and the virtual speed in the extended tunnel scenario to generate a virtual racing scene, including:
  • the smart glasses locate a current location of the tunnel according to a GPS positioning system
  • the smart glasses acquire a tunnel map corresponding to the tunnel according to the current location, and adjust the tunnel map to a racing map;
  • the smart glasses acquire the position of the self-driving vehicle and the positions of other vehicles in the tunnel through a real-time cloud database of the self-driving vehicle, and add flags corresponding to the position of the self-driving vehicle and the positions of the other vehicles in the tunnel to the racing map to generate a new racing map;
  • the smart glasses add the new racing map, the adjusted length, and the virtual speed in the extended tunnel scene to generate a virtual racing scene.
  • a second aspect of the embodiments of the present invention discloses smart glasses, where the smart glasses include:
  • a first acquiring unit configured to acquire a virtual tunnel scene corresponding to the tunnel when the wearer of the smart glasses rides the self-driving vehicle into the tunnel;
  • a second acquiring unit configured to acquire tunnel information of the tunnel and driving information of the self-driving vehicle; wherein the tunnel information includes at least a tunnel length, and the driving information includes at least a traveling speed;
  • a length adjustment unit configured to increase the length of the tunnel according to a preset multiple, to obtain an adjusted length
  • An extension unit configured to extend a length of the virtual tunnel scenario to the adjusted length, to obtain an extended tunnel scenario
  • a calculating unit configured to calculate a travel time required for the self-driving vehicle to leave the tunnel according to the tunnel length and the travel speed, and calculate a virtual speed according to the adjusted length and the travel time;
  • an adding unit configured to add the adjusted length and the virtual speed in the extended tunnel scene to generate a virtual racing scene.
  • the first acquiring unit includes:
  • a first acquisition subunit configured to acquire a real tunnel scene corresponding to the tunnel when the wearer of the smart glasses rides the self-driving vehicle into the tunnel;
  • the real tunnel scene includes a tunnel structure scene, an internal scene of the self-driving vehicle, and an external vehicle scene in the tunnel other than the self-driving vehicle;
  • a first processing sub-unit configured to perform virtualization processing on the tunnel structure scenario to obtain a virtual tunnel structure scenario
  • a first generating subunit configured to generate an internal scene of the virtual favorite vehicle according to the wearer's favorite vehicle; the favorite vehicle of the wearer is pre-stored in the smart glasses;
  • a second processing sub-unit configured to perform virtualization processing on an internal scene of the self-driving vehicle to obtain a virtual internal scene of the self-driving vehicle
  • a replacement subunit configured to replace the virtual internal scene with an internal scene of the virtual favorite vehicle as a virtual vehicle internal scene
  • a third processing subunit configured to virtualize an external vehicle scene other than the autopilot vehicle in the tunnel to obtain a virtual external vehicle scene
  • a second generation sub-unit configured to generate a virtual tunnel scenario corresponding to the tunnel, where the virtual tunnel scenario includes the virtual tunnel structure scenario, the virtual vehicle interior scenario, and the virtual external vehicle scenario.
  • the first processing subunit includes:
  • a processing module configured to perform virtualization processing on the tunnel structure scenario to obtain a first tunnel structure scenario
  • a first acquiring module configured to acquire location information of the tunnel
  • a second acquiring module configured to acquire, according to the location information, an administrative area to which the location information belongs;
  • a third acquiring module configured to acquire a first virtual scene corresponding to the first attraction in the administrative area
  • a superimposing module configured to superimpose the first virtual scene into the first tunnel structure scenario to obtain a virtual tunnel structure scenario.
  • the smart glasses further include:
  • the calculating unit is further configured to calculate a magnification of the virtual speed relative to the traveling speed;
  • a speed adjustment unit configured to change a speed of backward movement of the extended tunnel structure scene included in the extended tunnel scene according to the magnification
  • a rendering unit configured to perform fuzzy rendering on the extended tunnel structure scene, so that the wearer generates a feeling of advancing at the virtual speed when riding the self-driving vehicle.
  • the adding unit includes:
  • a positioning subunit configured to locate a current location of the tunnel according to a GPS positioning system
  • a second acquiring subunit configured to acquire a tunnel map corresponding to the tunnel according to the current location, and adjust the tunnel map to a racing map
  • a third generation subunit configured to acquire, through a real-time cloud database of the self-driving vehicle, the position of the self-driving vehicle and the positions of other vehicles in the tunnel, and to add flags corresponding to the position of the self-driving vehicle and the positions of the other vehicles in the tunnel to the racing map to generate a new racing map;
  • an adding subunit configured to add the new racing map, the adjusted length, and the virtual speed in the extended tunnel scene to generate a virtual racing scene.
  • a third aspect of the embodiments of the present invention discloses smart glasses, including:
  • a memory storing executable program code; and
  • a processor coupled to the memory;
  • the processor invokes the executable program code stored in the memory to execute the scene generation method for a self-driving vehicle disclosed in the first aspect of the embodiments of the present invention.
  • a fourth aspect of the embodiments of the present invention discloses a computer readable storage medium storing a computer program, the computer program causing a computer to execute the scene generation method for a self-driving vehicle disclosed in the first aspect of the embodiments of the present invention.
  • the embodiment of the invention has the following beneficial effects:
  • in the embodiments of the present invention, when the smart glasses determine that the self-driving vehicle has entered a tunnel, the smart glasses acquire the tunnel scene of the tunnel, the tunnel information of the tunnel, and the driving information of the self-driving vehicle; the smart glasses enlarge the tunnel length according to a pre-stored increase multiple to obtain an adjusted length, extend the tunnel scene according to the adjusted length to obtain an extended tunnel scene, and obtain a virtual speed according to the tunnel information and the driving information, so that the smart glasses can add the adjusted length and the virtual speed in the extended tunnel scene to generate a virtual racing scene.
  • it can be seen that, when the self-driving vehicle enters the tunnel, the smart glasses can acquire the scene and tunnel information of the current tunnel and the driving information of the self-driving vehicle, and can generate a racing scene for the occupant of the self-driving vehicle according to that scene, tunnel information, and driving information, so that the occupant can watch the racing scene, feel the experience it brings, and remain awake; the occupant can then take emergency measures in time when the self-driving vehicle encounters an emergency.
  • FIG. 1 is a schematic flowchart of a scene generation method for a self-driving vehicle according to an embodiment of the present invention;
  • FIG. 2 is a schematic flowchart of another scene generation method for a self-driving vehicle according to an embodiment of the present invention;
  • FIG. 3 is a schematic structural diagram of a pair of smart glasses disclosed in an embodiment of the present invention;
  • FIG. 4 is a schematic structural diagram of another pair of smart glasses disclosed in an embodiment of the present invention;
  • FIG. 5 is a schematic structural diagram of another pair of smart glasses disclosed in an embodiment of the present invention;
  • FIG. 6 is a schematic structural diagram of another pair of smart glasses disclosed in an embodiment of the present invention.
  • The embodiments of the present invention disclose a scene generation method for a self-driving vehicle and smart glasses, which can generate a racing scene for an occupant of the self-driving vehicle when the vehicle enters a tunnel, so that the occupant can watch the racing scene and remain awake; when the self-driving vehicle encounters an emergency, the occupant can therefore take emergency measures in time, which reduces the probability of a traffic accident. The details are described below separately.
  • FIG. 1 is a schematic flowchart of a scene generation method for a self-driving vehicle according to an embodiment of the present invention.
  • the scene generation method for a self-driving vehicle may include the following steps:
  • When the self-driving vehicle enters the tunnel, the smart glasses acquire the virtual tunnel scene corresponding to the tunnel, and acquire tunnel information of the tunnel and driving information of the self-driving vehicle, wherein the tunnel information includes at least a tunnel length and the driving information includes at least a traveling speed.
  • the smart glasses acquiring the virtual tunnel scenario corresponding to the tunnel may include:
  • the smart glasses detect whether the self-driving vehicle enters the tunnel, and if so, obtain the virtual tunnel scene corresponding to the tunnel; if not, the process ends.
  • the smart glasses detecting whether the self-driving vehicle enters the tunnel may include:
  • the smart glasses acquire the field of view information of the smart glasses wearer through the camera device built in the smart glasses, and determine whether the self-driving vehicle enters the tunnel based on the field of view information.
  • This provides the smart glasses with a way of using the built-in camera to determine whether the self-driving vehicle is currently driving into the tunnel.
  • the smart glasses detect whether the self-driving vehicle enters the tunnel, and may include:
  • the smart glasses establish a data connection with the self-driving vehicle, acquire image information captured by the built-in camera of the self-driving vehicle, and determine whether the self-driving vehicle enters the tunnel according to the image information.
  • In this way, the smart glasses can be connected with the self-driving vehicle, acquire the various information collected by the vehicle, and judge from this information whether the vehicle enters the tunnel, so that the smart glasses determine tunnel entry with higher accuracy, which improves the reliability of the smart glasses.
  • optionally, the smart glasses establish a data connection with the self-driving vehicle, acquire image information captured by the camera device built into the self-driving vehicle, and determine, according to the image information, whether the self-driving vehicle enters the tunnel.
  • specifically, the data connection between the smart glasses and the self-driving vehicle may be indirect: the smart glasses connect to a data cloud, the data cloud connects to the self-driving vehicle, and the smart glasses acquire, through the data cloud, the image information captured by the built-in camera device of the self-driving vehicle;
  • the data cloud is a real-time database of the self-driving vehicle.
  • In this way, the smart glasses connect to the data cloud of the self-driving vehicle and judge whether the self-driving vehicle enters the tunnel according to the data in the data cloud, which provides a first-level buffer for this determination; because the judgment is based on the data in the data cloud, the reliability with which the smart glasses determine whether the self-driving vehicle enters the tunnel is improved.
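  • The patent does not spell out how the field-of-view or image information is evaluated; the following is a minimal illustrative sketch, not the claimed method, in which tunnel entry is inferred from a sustained drop in average frame brightness. The function name and the brightness values are assumptions for illustration; the frames could come from the glasses' built-in camera or from the vehicle camera reached through the data cloud.

        from collections import deque
        from typing import Iterable

        def entered_tunnel(brightness_samples: Iterable[float],
                           dark_threshold: float = 60.0,
                           window: int = 10) -> bool:
            """Illustrative heuristic only: report tunnel entry when the average
            brightness (0-255) of the last `window` frames stays below a threshold.
            The patent only states that entry is judged from image or field-of-view
            information, without giving a concrete algorithm."""
            recent = deque(maxlen=window)
            for value in brightness_samples:
                recent.append(value)
                if len(recent) == window and sum(recent) / window < dark_threshold:
                    return True
            return False

        # Hypothetical usage with per-frame brightness values: daylight, then tunnel.
        print(entered_tunnel([180, 175, 170, 150, 90, 55, 50, 48, 47, 45, 44, 43, 42, 41, 40]))  # True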
  • the smart glasses increase the tunnel length according to a preset increase multiple to obtain an adjusted length.
  • before the smart glasses increase the tunnel length according to the preset increase multiple to obtain the adjusted length, the method may further include:
  • the smart glasses obtain a racing driving distance, and take the ratio between the racing driving distance and the tunnel length as the preset increase multiple.
  • In this way, the smart glasses can calculate the preset increase multiple from an actual racing distance, so that the lengthened tunnel is closer to the distance driven in a real race, which improves the realism of the racing scene, enriches the functional details of the smart glasses, and enhances the user experience.
  • the smart glasses obtaining the racing driving distance may include:
  • the smart glasses obtain a large amount of racing driving data through the network, and average this data to obtain the racing driving distance.
  • In this way, the smart glasses average a large amount of data to obtain the most suitable racing driving distance, which improves the intelligence of the smart glasses and the experience provided to the user.
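  • For concreteness, a small sketch of how the preset increase multiple could be derived under this description; the averaging and the numeric values are assumptions for illustration, since the text only states that the multiple is the ratio of the racing driving distance to the tunnel length.

        def preset_increase_multiple(race_distances_m: list, tunnel_length_m: float) -> float:
            """Average a set of racing driving distances obtained over the network,
            then divide by the tunnel length to get the preset increase multiple."""
            racing_driving_distance = sum(race_distances_m) / len(race_distances_m)
            return racing_driving_distance / tunnel_length_m

        # Illustrative data: three race distances around 5 km, a 1 km tunnel.
        print(preset_increase_multiple([5000.0, 5200.0, 4800.0], 1000.0))  # 5.0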
  • the smart glasses extend the length of the virtual tunnel scene to the adjusted length to obtain an extended tunnel scenario.
  • When the smart glasses extend the length of the virtual tunnel scene to the adjusted length, only the virtual tunnel structure scene in the virtual tunnel scene is lengthened; the self-driving vehicle and the other vehicles are not changed.
  • the smart glasses calculate the travel time required for the self-driving vehicle to leave the tunnel according to the tunnel length and the traveling speed, and calculate the virtual speed according to the adjusted length and the travel time.
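  • As a concrete reading of this arithmetic (the relations below are implied by the description rather than stated as formulas): travel time = tunnel length / traveling speed, and virtual speed = adjusted length / travel time, which is equivalent to the traveling speed multiplied by the increase multiple. A minimal sketch with purely illustrative numbers:

        def racing_parameters(tunnel_length_m: float,
                              travel_speed_mps: float,
                              increase_multiple: float) -> dict:
            """Compute, under the simple-ratio assumption above, the adjusted length,
            the travel time needed to leave the tunnel, and the virtual speed."""
            adjusted_length = tunnel_length_m * increase_multiple   # lengthened virtual tunnel
            travel_time = tunnel_length_m / travel_speed_mps        # time to leave the real tunnel
            virtual_speed = adjusted_length / travel_time           # speed conveyed by the racing scene
            return {"adjusted_length_m": adjusted_length,
                    "travel_time_s": travel_time,
                    "virtual_speed_mps": virtual_speed}

        # Illustrative values: a 1 km tunnel, 20 m/s (72 km/h), increase multiple of 5.
        print(racing_parameters(1000.0, 20.0, 5.0))
        # {'adjusted_length_m': 5000.0, 'travel_time_s': 50.0, 'virtual_speed_mps': 100.0}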
  • the smart glasses add an adjustment length and a virtual speed in the extended tunnel scene to generate a virtual racing scene.
  • The adjusted length and the virtual speed added by the smart glasses in the extended tunnel scene may be displayed as virtual images and may be placed at any position in the extended tunnel scene; the display positions of the adjusted length and the virtual speed are not limited in the embodiments of the present invention.
  • It can be seen that, when detecting that the self-driving vehicle enters the tunnel, the smart glasses may acquire the tunnel information of the tunnel and the driving information of the self-driving vehicle, increase the tunnel length in the tunnel information to obtain the adjusted length, extend the length of the virtual tunnel scene to the adjusted length to obtain the extended tunnel scene, calculate the virtual speed, and display virtual images of the virtual speed and the adjusted length in the extended tunnel scene.
  • the method described in FIG. 1 can display the racing scene for the occupant of the self-driving vehicle when the self-driving vehicle enters the tunnel, so that the occupant can experience the feeling of the racing car and keep the occupant awake while being entertained. Therefore, when an autonomous vehicle encounters an emergency, the occupant can take emergency measures in time to reduce the probability of a traffic accident.
  • FIG. 2 is a schematic flowchart diagram of another method for generating a scene for an autonomous vehicle according to an embodiment of the present invention.
  • the scene generation method for the self-driving vehicle may include the following steps:
  • When the self-driving vehicle enters the tunnel, the smart glasses acquire a real tunnel scene corresponding to the tunnel;
  • the real tunnel scene includes a tunnel structure scene, an internal scene of the self-driving vehicle, and an external vehicle scene in the tunnel other than the self-driving vehicle.
  • The tunnel structure scene may include the structural frame of the tunnel, the covering structure on the structural frame, and decorations in the tunnel scene other than the vehicles, such as ornaments on the structural frame; this is not limited in the embodiments of the present invention.
  • the smart glasses virtualize the tunnel structure scenario to obtain a first tunnel structure scenario.
  • the smart glasses acquire location information of the tunnel.
  • the smart glasses acquire an administrative area to which the location information belongs according to the location information.
  • the smart glasses acquire a first virtual scene corresponding to the first attraction in the administrative area.
  • the smart glasses superimpose the first virtual scene into the first tunnel structure scenario to obtain a virtual tunnel structure scenario.
  • By implementing steps 202 to 206, the smart glasses can overlay virtual scenes of corresponding attractions in the current city onto the tunnel structure scene, which enriches the tunnel structure scene and lets the wearer of the smart glasses feel as if racing through the attractions of the current city, enhancing the user's interactive experience and providing a new form of entertainment.
  • the smart glasses generate an internal scene of the virtual favorite vehicle according to the preference of the wearer of the smart glasses; the favorite vehicle of the wearer of the smart glasses is pre-stored by the smart glasses.
  • the smart glasses virtualize an internal scene of the self-driving vehicle to obtain a virtual internal scene of the self-driving vehicle.
  • the smart glasses replace the virtual internal scene with an internal scene of the virtual favorite vehicle as a virtual vehicle internal scene.
  • the smart glasses virtualize an external vehicle scene other than the self-driving vehicle in the tunnel to obtain a virtual external vehicle scene.
  • the smart glasses generate a virtual tunnel scenario corresponding to the tunnel.
  • the virtual tunnel scenario includes a virtual tunnel structure scenario, a virtual vehicle interior scenario, and a virtual external vehicle scenario.
  • By implementing steps 207 to 211, the smart glasses can change the internal scene of the self-driving vehicle in which the occupant rides according to the occupant's preference, so that the occupant more easily feels satisfied when viewing the scene through the smart glasses, which increases the functional richness of the smart glasses.
  • the smart glasses acquire tunnel information of the tunnel and driving information of the self-driving vehicle, wherein the tunnel information includes at least a tunnel length, and the driving information includes at least a driving speed.
  • the smart glasses increase the tunnel length according to the preset increase multiple to obtain an adjusted length.
  • the smart glasses extend the length of the virtual tunnel scene to the adjusted length to obtain an extended tunnel scenario.
  • the smart glasses calculate the travel time required for the self-driving vehicle to leave the tunnel according to the tunnel length and the traveling speed, and calculate the virtual speed according to the adjusted length and the travel time.
  • the smart glasses locate the current location of the tunnel according to the GPS positioning system.
  • the smart glasses acquire a tunnel map corresponding to the tunnel according to the current location, and adjust the tunnel map to a racing map.
  • the smart glasses acquire a tunnel map corresponding to the tunnel according to the current location, and adjust the tunnel map to a racing map, which may include:
  • the smart glasses acquire the tunnel map corresponding to the tunnel according to the current location, extract the route from the tunnel map, and adjust the extracted route (for example, by widening and highlighting it) to obtain the racing map.
  • In this way, the smart glasses can generate a corresponding racing map from the actual tunnel map and provide the user with map information of the current tunnel, so that the user knows the current position and the length of the tunnel, which increases the amount of information provided by the smart glasses and helps the user understand the current scene.
  • the smart glasses acquire the position of the self-driving vehicle and the positions of other vehicles in the tunnel through the real-time cloud database of the self-driving vehicle, and add flags corresponding to the position of the self-driving vehicle and the positions of the other vehicles in the tunnel to the racing map to generate a new racing map.
  • Specifically, after the smart glasses obtain the racing map and the position information of the vehicles, the smart glasses mark points in the racing map according to that position information, with the self-driving vehicle shown as a red dot and the other vehicles as black dots, thereby forming the new racing map; the new racing map is a real-time map, so the user can clearly understand the current vehicle situation in the tunnel from it.
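  • A minimal sketch of this marking step, assuming the racing map is simply a list of 2-D marker records; the data structure, field names, and positions below are illustrative rather than taken from the patent.

        from dataclasses import dataclass

        @dataclass
        class Marker:
            x: float      # position along the tunnel route
            y: float      # lateral offset (lane position)
            color: str    # "red" for the self-driving vehicle, "black" for the others
            label: str

        def build_new_racing_map(racing_map: list,
                                 own_position: tuple,
                                 other_positions: list) -> list:
            """Add flags for the self-driving vehicle and the other vehicles in the
            tunnel to the racing map; in the described method the positions would
            come from the vehicle's real-time cloud database."""
            new_map = list(racing_map)
            new_map.append(Marker(*own_position, color="red", label="self-driving vehicle"))
            for i, pos in enumerate(other_positions):
                new_map.append(Marker(*pos, color="black", label=f"vehicle {i + 1}"))
            return new_map

        # Illustrative positions only.
        print(build_new_racing_map([], (120.0, 1.5), [(80.0, -1.5), (200.0, 0.0)]))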
  • the smart glasses add the new racing map, the adjusted length, and the virtual speed in the extended tunnel scene to generate a virtual racing scene.
  • By implementing steps 216 to 219, the smart glasses can obtain the tunnel map of the current tunnel, adjust it into a new racing map, and add the new racing map to the extended tunnel scene, so that the user can view the new racing map inside the self-driving vehicle and know where he or she is, which increases the richness of the information the user obtains from the smart glasses.
  • the smart glasses calculate the magnification of the virtual speed relative to the traveling speed.
  • the smart glasses change, according to the magnification, the speed at which the extended tunnel structure scene included in the extended tunnel scene moves backward.
  • the smart glasses blur-render the extended tunnel structure scene, so that the wearer, while riding the self-driving vehicle, feels as if advancing at the virtual speed.
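  • The description implies that the apparent backward-scrolling speed of the virtual tunnel walls is scaled by this magnification; a minimal sketch under that assumption follows, with the baseline scroll rate and the function name chosen purely for illustration.

        def scaled_scroll_speed(virtual_speed_mps: float,
                                travel_speed_mps: float,
                                baseline_scroll_mps: float) -> float:
            """Scale the backward-movement speed of the extended tunnel structure scene
            by the magnification of the virtual speed over the real traveling speed,
            so the wearer perceives motion at the virtual speed."""
            magnification = virtual_speed_mps / travel_speed_mps
            return baseline_scroll_mps * magnification

        # Illustrative values: real speed 20 m/s, virtual speed 100 m/s,
        # tunnel walls currently scrolling past at 20 m/s in the rendered scene.
        print(scaled_scroll_speed(100.0, 20.0, 20.0))  # 100.0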
  • It can be seen that, when the self-driving vehicle enters the tunnel, the smart glasses can acquire the real scene of the tunnel, virtualize it to generate a virtual tunnel scene, and acquire the tunnel information of the tunnel and the driving information of the self-driving vehicle; the smart glasses increase the tunnel length in the tunnel information to obtain an adjusted length and extend the length of the virtual tunnel scene to the adjusted length to obtain an extended tunnel scene; the smart glasses then acquire the map information of the tunnel, generate a new racing map, display the virtual speed, the adjusted length, and the new racing map as virtual images in the extended tunnel scene, and, after outputting the extended tunnel scene, adjust its scrolling speed and render it so that the user feels the self-driving vehicle is moving at high speed, creating a racing scene.
  • The method described in FIG. 2 enables the smart glasses, when the self-driving vehicle enters the tunnel, to virtualize the tunnel scene into a virtual racing scene and display the racing scene for the occupant of the self-driving vehicle, and further to apply blur rendering in the racing scene, so that by changing the wearer's visual information the smart glasses give the wearer the feeling of a real racing scene and keep the occupant awake while being entertained; when the self-driving vehicle encounters an emergency, the occupant can therefore take emergency measures in time, reducing the probability of traffic accidents.
  • FIG. 3 is a schematic structural diagram of a pair of smart glasses according to an embodiment of the present invention.
  • the smart glasses may include:
  • the first obtaining unit 301 is configured to acquire a virtual tunnel scene corresponding to the tunnel when the wearer of the smart glasses rides the self-driving vehicle into the tunnel.
  • the first obtaining unit 301 may include:
  • the detection subunit is configured to detect whether the self-driving vehicle enters the tunnel, and if so, obtain the virtual tunnel scenario corresponding to the tunnel; if not, the process ends.
  • the detecting subunit may include:
  • a field of view information acquiring module configured to acquire, by using an image capturing device built in the smart glasses, field of view information of the smart glasses wearer;
  • the judging module is configured to judge, according to the field of view information, whether the self-driving vehicle enters the tunnel.
  • This provides the smart glasses with a way of using the built-in camera to determine whether the self-driving vehicle is currently driving into the tunnel.
  • the detecting subunit may include:
  • connection module for establishing a data connection with an autonomous vehicle
  • An image acquisition module configured to acquire image information obtained by imaging an imaging device built in the self-driving vehicle
  • the judging module is configured to judge, according to the image information, whether the self-driving vehicle enters the tunnel.
  • In this way, the smart glasses can be connected with the self-driving vehicle, acquire the various information collected by the vehicle, and judge from this information whether the vehicle enters the tunnel, so that the smart glasses determine tunnel entry with higher accuracy, which improves the reliability of the smart glasses.
  • the connection module is specifically configured to establish a data connection with the self-driving vehicle, where the connection may be indirect: the smart glasses connect to a data cloud and the data cloud connects to the self-driving vehicle;
  • the image acquisition module is specifically configured to acquire, through the data cloud, the image information captured by the built-in camera device of the self-driving vehicle;
  • the determining module is specifically configured to determine, according to the image information, whether the self-driving vehicle enters the tunnel; wherein the data cloud is a real-time database of the self-driving vehicle.
  • In this way, the smart glasses connect to the data cloud of the self-driving vehicle and judge whether the self-driving vehicle enters the tunnel according to the data in the data cloud, which provides a first-level buffer for this determination; because the judgment is based on the data in the data cloud, the reliability with which the smart glasses determine whether the self-driving vehicle enters the tunnel is improved.
  • the second obtaining unit 302 is configured to acquire tunnel information of the tunnel and driving information of the self-driving vehicle; wherein the tunnel information includes at least a tunnel length, and the driving information includes at least a traveling speed.
  • the length adjustment unit 303 is configured to perform an increase process on the tunnel length acquired by the second obtaining unit 302 according to the preset increase multiple to obtain an adjusted length.
  • the smart glasses may further include:
  • a driving-distance acquisition unit configured to obtain a racing driving distance;
  • a multiple calculating unit configured to take the ratio between the racing driving distance and the tunnel length as the preset increase multiple.
  • In this way, the smart glasses can calculate the preset increase multiple from an actual racing distance, so that the lengthened tunnel is closer to the distance driven in a real race, which improves the realism of the racing scene, enriches the functional details of the smart glasses, and enhances the user experience.
  • the driving-distance acquisition unit may be specifically configured to obtain a large amount of racing driving data through the network and average this data to obtain the racing driving distance.
  • In this way, the smart glasses average a large amount of data to obtain the most suitable racing driving distance, which improves the intelligence of the smart glasses and the experience provided to the user.
  • the extension unit 304 is configured to extend the length of the virtual tunnel scenario acquired by the first acquiring unit 301 to the adjustment length adjusted by the length adjustment unit 303 to obtain an extended tunnel scenario.
  • When the smart glasses extend the length of the virtual tunnel scene to the adjusted length, only the virtual tunnel structure scene in the virtual tunnel scene is lengthened; the self-driving vehicle and the other vehicles are not changed.
  • the calculation unit 305 is configured to calculate a travel time required for the self-driving vehicle to leave the tunnel according to the tunnel length and the travel speed acquired by the second acquisition unit 302, and calculate the virtual speed according to the adjusted length and the travel time.
  • the adding unit 306 is configured to add the adjustment length obtained by the length adjusting unit 303 and the virtual speed calculated by the calculating unit 305 in the extended tunnel scene adjusted by the extension unit 304 to generate a virtual racing scene.
  • The adjusted length and the virtual speed added by the smart glasses in the extended tunnel scene may be displayed as virtual images and may be placed at any position in the extended tunnel scene; the display positions of the adjusted length and the virtual speed are not limited in the embodiments of the present invention.
  • the smart glasses described in FIG. 3 can display the racing scene for the occupant of the self-driving vehicle when the self-driving vehicle enters the tunnel, so that the occupant can experience the feeling of the racing car and keep the occupant awake while being entertained. Therefore, when an autonomous vehicle encounters an emergency, the occupant can take emergency measures in time to reduce the probability of a traffic accident.
  • FIG. 4 is a schematic structural diagram of another pair of smart glasses disclosed in an embodiment of the present invention.
  • the smart glasses shown in FIG. 4 are obtained by optimizing the smart glasses shown in FIG. 3.
  • the first obtaining unit 301 may include:
  • the first obtaining sub-unit 3011 is configured to acquire a real tunnel scene corresponding to the tunnel when the wearer of the smart glasses rides the self-driving vehicle into the tunnel;
  • the real tunnel scene includes a tunnel structure scene, an internal scene of the self-driving vehicle, and an external vehicle scene in the tunnel other than the self-driving vehicle.
  • the first processing sub-unit 3012 is configured to perform virtualization processing on the tunnel structure scenario acquired by the first obtaining sub-unit 3011 to obtain a virtual tunnel structure scenario.
  • The tunnel structure scene may include the structural frame of the tunnel, the covering structure on the structural frame, and decorations in the tunnel scene other than the vehicles, such as ornaments on the structural frame; this is not limited in the embodiments of the present invention.
  • the first generation subunit 3013 is configured to generate an internal scene of the virtual favorite vehicle according to the preference of the wearer; the favorite vehicle of the wearer of the smart glasses is pre-stored by the smart glasses.
  • the second processing sub-unit 3014 is configured to perform virtualization processing on the internal scene of the self-driving vehicle acquired by the first acquiring sub-unit 3011 to obtain a virtual internal scene of the self-driving vehicle.
  • the replacement sub-unit 3015 is configured to replace the virtual internal scene processed by the second processing sub-unit 3014 with the internal scene of the virtual favorite vehicle generated by the first generation sub-unit 3013 as a virtual vehicle internal scene.
  • the third processing sub-unit 3016 is configured to perform virtualization processing on an external vehicle scene other than the self-driving vehicle in the tunnel acquired by the first acquiring unit 3011 to obtain a virtual external vehicle scene.
  • the second generation sub-unit 3017 is configured to generate a virtual tunnel scene corresponding to the tunnel, where the virtual tunnel scene includes the virtual tunnel structure scene processed by the first processing sub-unit 3012, the virtual vehicle internal scene obtained by the replacement sub-unit 3015, and the virtual external vehicle scene processed by the third processing sub-unit 3016.
  • the smart glasses described in the embodiments of the present invention can change the internal scene of the self-driving vehicle in which the occupant rides according to the occupant's preference, so that the occupant more easily feels satisfied when viewing the scene through the smart glasses, which increases the functional richness of the smart glasses.
  • the first processing subunit 3012 may include:
  • the processing module 30121 is configured to perform virtualization processing on the tunnel structure scenario to obtain a first tunnel structure scenario.
  • the first obtaining module 30122 is configured to acquire location information where the tunnel is located.
  • the second obtaining module 30123 is configured to obtain an administrative area to which the location information belongs according to the location information acquired by the first obtaining module 30122.
  • the third obtaining module 30124 is configured to acquire the first virtual scene corresponding to the first attraction in the administrative area acquired by the second obtaining module 30123.
  • the superimposition module 30125 is configured to superimpose the first virtual scene acquired by the third obtaining module 30124 into the first tunnel structure scene processed by the processing module 30121 to obtain a virtual tunnel structure scene.
  • The smart glasses described in the embodiments of the present invention can overlay virtual scenes of corresponding attractions in the current city onto the tunnel structure scene, which enriches the tunnel structure scene and lets the wearer of the smart glasses feel as if racing through the attractions of the current city, enhancing the user's interactive experience and providing a new form of entertainment.
  • The smart glasses described in FIG. 4 can display the racing scene for the occupant of the self-driving vehicle when the vehicle enters the tunnel, so that the occupant can experience the feeling of racing and remain awake while being entertained; when the self-driving vehicle encounters an emergency, the occupant can therefore take emergency measures in time, reducing the probability of a traffic accident.
  • FIG. 5 is a schematic structural diagram of another pair of smart glasses disclosed in an embodiment of the present invention.
  • the smart glasses shown in FIG. 5 are obtained by optimizing the smart glasses shown in FIG. 4.
  • the smart glasses may further include:
  • the calculating unit 305 is further configured to calculate the magnification of the virtual speed relative to the traveling speed.
  • the speed adjustment unit 307 is configured to change, according to the magnification calculated by the calculating unit 305, the backward-movement speed of the extended tunnel structure scene included in the extended tunnel scene obtained by the extension unit 304.
  • The extended tunnel scene is included in the virtual racing scene, and the virtual racing scene further includes the adjusted length and the virtual speed. The virtual racing scene may be a dynamic scene, the extended tunnel scene included in it may also be a dynamic scene, and the extended tunnel structure scene included in the extended tunnel scene may also be a dynamic scene; the speed that is adjusted may be the backward-movement speed of the extended tunnel structure scene.
  • the rendering unit 308 is configured to perform fuzzy rendering on the extended tunnel structure scene obtained by the extension unit 304, so that the wearer of the smart glasses, while riding the self-driving vehicle, feels as if advancing at the virtual speed.
  • the adding unit 306 may include:
  • the positioning subunit 3061 is configured to locate a current location of the tunnel according to the GPS positioning system.
  • the second obtaining subunit 3062 is configured to obtain a tunnel map corresponding to the tunnel according to the current location located by the positioning subunit 3061, and adjust the tunnel map to a racing map.
  • the third generation subunit 3063 is configured to acquire, through the real-time cloud database of the self-driving vehicle, the position of the self-driving vehicle and the positions of other vehicles in the tunnel, and to add, to the racing map obtained by the second obtaining subunit 3062, flags corresponding to the position of the self-driving vehicle and the positions of the other vehicles in the tunnel to generate a new racing map.
  • the adding sub-unit 3064 is configured to add the new racing map generated by the third generation subunit 3063, the adjusted length, and the virtual speed in the extended tunnel scene obtained by the extension unit 304 to generate a virtual racing scene.
  • The smart glasses described in the embodiments of the present invention can obtain the tunnel map of the current tunnel, adjust it into a new racing map, and add the new racing map to the extended tunnel scene, so that the user can view the new racing map inside the self-driving vehicle and know where he or she is, which increases the richness of the information the user obtains from the smart glasses.
  • The smart glasses described in FIG. 5 can, when the self-driving vehicle enters the tunnel, virtualize the tunnel scene into a virtual racing scene and display the racing scene for the occupant of the self-driving vehicle, and further apply blur rendering in the racing scene, so that by changing the wearer's visual information the smart glasses give the wearer the feeling of a real racing scene and keep the occupant awake while being entertained; when the self-driving vehicle encounters an emergency, the occupant can therefore take emergency measures in time, reducing the probability of traffic accidents.
  • FIG. 6 is a schematic structural diagram of another pair of smart glasses disclosed in an embodiment of the present invention. As shown in FIG. 6, the smart glasses may include:
  • a memory 601 storing executable program code
  • processor 602 coupled to the memory 601;
  • the processor 602 calls the executable program code stored in the memory 601 to execute a scene generation method for the self-driving vehicle of any of FIGS. 1 to 2.
  • the embodiment of the invention discloses a computer readable storage medium storing a computer program, wherein the computer program causes the computer to execute the scene generation method for the self-driving vehicle of any of FIGS. 1 to 2.
  • ROM Read-Only Memory
  • RAM Random Access Memory
  • PROM Programmable Read-Only Memory
  • EPROM Erasable Programmable Read Only Memory
  • OTPROM One-Time Programmable Read-Only Memory
  • EEPROM Electronically-Erasable Programmable Read-Only Memory
  • CD-ROM Compact Disc Read-Only Memory

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Control Of Driving Devices And Active Controlling Of Vehicle (AREA)
  • Traffic Control Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A scene generation method for a self-driving vehicle and smart glasses, comprising the following steps: when the self-driving vehicle drives into a tunnel, the smart glasses obtain a virtual tunnel scene corresponding to the tunnel, and obtain tunnel information of the tunnel and driving information of the self-driving vehicle, the tunnel information including at least a tunnel length and the driving information including at least a driving speed (101); and the smart glasses generate a virtual racing scene according to the virtual tunnel scene, the tunnel information, and the driving information. According to the present invention, when the self-driving vehicle drives into a tunnel, a racing scene can be generated for an occupant of the self-driving vehicle, and the occupant can then watch the racing scene, which helps the occupant remain awake. The occupant can therefore take emergency measures in time when the self-driving vehicle encounters an emergency, and the probability of traffic accidents is thus reduced.
PCT/CN2017/117686 2017-12-15 2017-12-21 Scene generation method for self-driving vehicle and smart glasses WO2019114019A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711349451.6 2017-12-15
CN201711349451.6A CN108109210B (zh) 2017-12-15 2017-12-15 Scene generation method for self-driving vehicle and smart glasses

Publications (1)

Publication Number Publication Date
WO2019114019A1 true WO2019114019A1 (fr) 2019-06-20

Family

ID=62216257

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/117686 WO2019114019A1 (fr) 2017-12-15 2017-12-21 Scene generation method for self-driving vehicle and smart glasses

Country Status (2)

Country Link
CN (1) CN108109210B (fr)
WO (1) WO2019114019A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111821695A (zh) * 2019-04-19 2020-10-27 上海博泰悦臻网络技术服务有限公司 基于地图富翁游戏驾驶评判方法及装置、存储介质和终端
CN110785718B (zh) * 2019-09-29 2021-11-02 驭势科技(北京)有限公司 一种车载自动驾驶测试系统及测试方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104102007A (zh) * 2013-04-12 2014-10-15 聚晶半导体股份有限公司 头戴式显示器及其控制方法
CN105807922A (zh) * 2016-03-07 2016-07-27 湖南大学 一种虚拟现实娱乐驾驶的实现方法、装置及系统
US20160358477A1 (en) * 2015-06-05 2016-12-08 Arafat M.A. ANSARI Smart vehicle
CN106740871A (zh) * 2016-10-25 2017-05-31 上海悉德信息科技有限公司 机动车驾驶人二分之一智能化远程驾驶系统
CN107328424A (zh) * 2017-07-12 2017-11-07 三星电子(中国)研发中心 导航方法和装置

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07294842A (ja) * 1994-04-26 1995-11-10 Toyota Motor Corp 自動車用情報表示装置
JP6337534B2 (ja) * 2014-03-17 2018-06-06 セイコーエプソン株式会社 頭部装着型表示装置および頭部装着型表示装置の制御方法
JP6349744B2 (ja) * 2014-01-28 2018-07-04 株式会社Jvcケンウッド 表示装置、表示方法および表示プログラム
KR101867915B1 (ko) * 2015-01-16 2018-07-19 현대자동차주식회사 차량에서 웨어러블 기기를 이용한 기능 실행 방법 및 이를 수행하는 차량
US10366290B2 (en) * 2016-05-11 2019-07-30 Baidu Usa Llc System and method for providing augmented virtual reality content in autonomous vehicles
CN106648060A (zh) * 2016-10-12 2017-05-10 大连文森特软件科技有限公司 一种基于虚拟现实技术的车辆驾驶训练动作实时监测系统

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104102007A (zh) * 2013-04-12 2014-10-15 聚晶半导体股份有限公司 头戴式显示器及其控制方法
US20160358477A1 (en) * 2015-06-05 2016-12-08 Arafat M.A. ANSARI Smart vehicle
CN105807922A (zh) * 2016-03-07 2016-07-27 湖南大学 一种虚拟现实娱乐驾驶的实现方法、装置及系统
CN106740871A (zh) * 2016-10-25 2017-05-31 上海悉德信息科技有限公司 机动车驾驶人二分之一智能化远程驾驶系统
CN107328424A (zh) * 2017-07-12 2017-11-07 三星电子(中国)研发中心 导航方法和装置

Also Published As

Publication number Publication date
CN108109210B (zh) 2019-04-16
CN108109210A (zh) 2018-06-01

Similar Documents

Publication Publication Date Title
JP7454544B2 (ja) 拡張現実及び仮想現実画像を生成するためのシステム及び方法
JP7331696B2 (ja) 情報処理装置、情報処理方法、プログラム、および移動体
US8963916B2 (en) Coherent presentation of multiple reality and interaction models
KR20210011416A (ko) 차량 탑승자 및 원격 사용자를 위한 공유 환경
US20190355019A1 (en) Information processing apparatus and information processing system
JP7450287B2 (ja) 再生装置および再生方法ならびにそのプログラムならびに記録装置および記録装置の制御方法等
US20220347567A1 (en) Inter-vehicle electronic games
CN114401414A (zh) 沉浸式直播的信息显示方法及系统、信息推送方法
US20180182261A1 (en) Real Time Car Driving Simulator
WO2019114019A1 (fr) Procédé de génération de scène pour véhicule à conduite autonome et lunettes intelligentes
Topliss et al. Establishing the role of a virtual lead vehicle as a novel augmented reality navigational aid
JP2022047580A (ja) 情報処理装置
EP0899691A2 (fr) Technique d'amélioration pour réalité virtuelle tridimensionnelle
CN111016787A (zh) 驾驶中防止视觉疲劳的方法、装置、存储介质及电子设备
US20230319426A1 (en) Traveling in time and space continuum
CN116499489A (zh) 基于地图导航应用的人机交互方法、装置、设备及产品
CN114089890B (zh) 车辆模拟驾驶方法、设备、存储介质及程序产品
US20220253197A1 (en) Method of providing real-time vr service through avatar
CN113409432A (zh) 一种基于虚拟现实的图像信息生成方法、装置和可读介质
CN118331676B (zh) 群体对象建模方法、车载显示方法、电子设备、存储介质及车辆
EP4086102B1 (fr) Procédé et appareil de navigation, dispositif électronique, support de stockage lisible et produit de programme informatique
US11694377B2 (en) Editing device and editing method
KR102596322B1 (ko) 차량 내부 영상을 기반으로 콘텐츠를 저작하는 방법, 시스템 및 비일시성의 컴퓨터 판독 가능 기록 매체
GB2614292A (en) Method, apparatus and computer program product for selecting content for display during a journey to alleviate motion sickness
WO2023118780A1 (fr) Procédé, appareil et produit programme informatique d'atténuation du mal des transports chez un utilisateur visualisant une image sur un écran

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17934857

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17934857

Country of ref document: EP

Kind code of ref document: A1