CN113260430A - Scene processing method, device and system and related equipment - Google Patents

Scene processing method, device and system and related equipment

Info

Publication number
CN113260430A
CN113260430A (application CN202180001054.8A)
Authority
CN
China
Prior art keywords
scene
vehicle
image
hud
sensor
Prior art date
Legal status
Granted
Application number
CN202180001054.8A
Other languages
Chinese (zh)
Other versions
CN113260430B (en)
Inventor
彭惠东
张宇腾
于海
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of CN113260430A publication Critical patent/CN113260430A/en
Application granted granted Critical
Publication of CN113260430B publication Critical patent/CN113260430B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/803 Driving vehicles or craft, e.g. cars, airplanes, ships, robots or tanks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/20 Design optimisation, verification or simulation
    • G06F30/27 Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8017 Driving on land or water; Flying
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game specially adapted for executing a specific type of game
    • A63F2300/8082 Virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2111/00 Details relating to CAD techniques
    • G06F2111/18 Details relating to CAD techniques using virtual or augmented reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Computer Hardware Design (AREA)
  • Biophysics (AREA)
  • Computer Graphics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Human Computer Interaction (AREA)
  • Geometry (AREA)
  • Traffic Control Systems (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of this application disclose a scene processing method, apparatus, and system, and a related device. The method includes: acquiring a first off-vehicle scene, where the first off-vehicle scene is a two-dimensional scene image or a three-dimensional scene model; acquiring a first AR image corresponding to the first off-vehicle scene; fusing the first off-vehicle scene and the first AR image to obtain a second scene, where the first off-vehicle scene is real information in the second scene and the first AR image is virtual information in the second scene; and enabling a display screen to display the second scene. With the embodiments of this application, real information and virtual information can be fused more efficiently and conveniently to obtain an intuitive AR effect.

Description

Scene processing method, device and system and related equipment
Technical Field
Embodiments of this application relate to the field of computer technologies, and in particular, to a scene processing method, apparatus, and system, and a related device.
Background
A head-up display (HUD) is a display device that projects an image into the driver's forward field of view. Compared with a traditional instrument cluster and central control screen, the driver does not need to look down to view the HUD image, which avoids switching the eyes' focus back and forth between the image and the road surface, shortens the reaction time in an emergency, and improves driving safety. In recent years, the augmented reality head-up display (AR-HUD) has been proposed. The AR-HUD can further fuse the image projected by the HUD with real road information to implement functions such as augmented reality (AR) navigation and AR early warning, which greatly enhances the driver's acquisition of road information and ensures driving safety and comfort.
However, current research on the imaging effect of the HUD mostly focuses on optical analysis. To analyze and test the AR functions implemented by a vehicle-mounted HUD (such as an AR map, AR navigation, and point of interest (POI) display), the actual AR projection effect usually can be observed only after the HUD has been installed and calibrated. Therefore, to verify the AR imaging effect of the HUD and continuously optimize the AR functions, a real-vehicle test method is mostly used at present, that is, a vehicle equipped with the AR-HUD is driven in a real environment for testing. This consumes a large amount of time, manpower, and material resources, and is costly and inefficient. In addition, because real scenes change all the time, a large number of test scenes cannot be reproduced, and the reliability of test results is consequently low. This affects the development and testing of a series of AR algorithms involved in the AR-HUD, and the user experience cannot be effectively guaranteed.
Therefore, how to fuse real information and virtual information efficiently and conveniently to obtain an intuitive AR effect is a problem that urgently needs to be resolved.
Disclosure of Invention
Embodiments of this application provide a scene processing method, apparatus, and system, and a related device, which can fuse real information and virtual information efficiently and conveniently to obtain an intuitive AR effect.
The scene processing method provided in the embodiments of this application may be performed by an electronic device or the like. The electronic device is a device that can be abstracted as a computer system, and an electronic device supporting the scene processing function may also be referred to as a scene processing apparatus. The scene processing apparatus may be a complete machine, for example, a smartphone, a tablet computer, a notebook computer, a desktop computer, an in-vehicle head unit, an on-board computer, or a server; or an in-vehicle system/apparatus composed of several complete machines; or a component of the electronic device, for example, a chip related to the scene processing function, such as a system chip (also called a system on chip, or SoC chip) or a scene processing chip. Specifically, the scene processing apparatus may be a terminal device such as the head unit or on-board computer of an intelligent vehicle, or may be a system chip or scene processing chip that can be disposed in the computer system or surround-view system of an intelligent terminal.
In addition, the scene processing method provided in the embodiments of this application can be applied to the following scenarios: in-vehicle simulation systems, in-vehicle games, device-cloud vehicle viewing, live streaming, vehicle field testing, and the like.
According to a first aspect, an embodiment of this application provides a scene processing method. The method includes: acquiring a first off-vehicle scene, where the first off-vehicle scene is a two-dimensional scene image or a three-dimensional scene model; acquiring a first AR image corresponding to the first off-vehicle scene; fusing the first off-vehicle scene and the first AR image to obtain a second scene, where the first off-vehicle scene is real information in the second scene and the first AR image is virtual information in the second scene; and enabling a display screen to display the second scene.
With the method provided in the first aspect, the embodiments of this application can build a large number of two-dimensional or three-dimensional simulation scenes (each simulation scene may include scene elements such as vehicles, roads, or pedestrians) on existing computing devices (such as a laptop computer or a desktop computer) purely in software; then generate, based on the simulation scene and a developed model, an AR image corresponding to the simulation scene; and fuse the simulation scene with the corresponding AR image, thereby quickly and efficiently obtaining a large number of augmented reality scenes that include both real information and virtual information. Each AR image may include AR icons such as AR navigation guide arrows and AR warnings. Compared with the prior art, the embodiments of this application can quickly construct a large number of reproducible scenes in software without relying on real scenes or HUD hardware, with wide coverage and high reusability. Therefore, not only can various augmented reality scenes be displayed quickly and efficiently, but an intuitive AR effect can also be obtained subsequently based on simulation in these scenes, so that the AR functions of the HUD can be continuously optimized and improved and the user experience is guaranteed.
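For illustration only, the following minimal Python sketch shows the fusion pipeline described above for the two-dimensional case; the function names and the alpha-blended overlay are assumptions of this sketch, not part of the claimed method.

```python
# Minimal sketch of the method of the first aspect for a 2D scene image.
# fuse_scene() treats the AR image's alpha channel as the "virtual information" mask.
import numpy as np

def fuse_scene(scene_rgb: np.ndarray, ar_rgba: np.ndarray) -> np.ndarray:
    """Alpha-blend a rendered AR image (H x W x 4) onto the off-vehicle scene (H x W x 3)."""
    alpha = ar_rgba[..., 3:4].astype(np.float32) / 255.0
    blended = scene_rgb.astype(np.float32) * (1.0 - alpha) + ar_rgba[..., :3].astype(np.float32) * alpha
    return blended.astype(np.uint8)

def process_scene(scene_rgb, generate_ar_image, display):
    ar_rgba = generate_ar_image(scene_rgb)         # first AR image for the first off-vehicle scene
    second_scene = fuse_scene(scene_rgb, ar_rgba)  # real information + virtual information
    display(second_scene)                          # enable a display screen to show the second scene
    return second_scene
```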
In one possible implementation manner, the acquiring a first AR image corresponding to the first off-vehicle scene includes: acquiring the first AR image corresponding to the first off-vehicle scene according to the first off-vehicle scene and a preset model; wherein the first AR image comprises one or more AR icons.
In this embodiment of this application, an AR image matching a simulation scene can be generated by a pre-trained model to obtain the virtual information in the augmented reality. For example, if the scene includes an intersection and, according to the navigation information, the vehicle should turn right at the current intersection, a corresponding right-turn guide arrow may be generated; for another example, if the scene includes a cultural sight, corresponding sight indication information and a sight introduction may be generated; and so on.
In a possible implementation manner, the preset model is a neural network model, and the neural network model is obtained by training according to a plurality of scenes, a plurality of AR icons, and different matching degrees of the scenes and the AR icons.
In this embodiment of this application, the neural network model can be trained in advance using a plurality of scenes, a plurality of AR icons, and the different matching degrees between them, so that a large number of simulation scenes can subsequently be recognized by the neural network model and AR images corresponding to those simulation scenes can be generated, thereby quickly and efficiently obtaining a large number of augmented reality scenes.
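As a purely illustrative sketch (the architecture, loss, and icon set are assumptions, not taken from the application), such a preset model could be a small network that regresses a matching degree for each candidate AR icon:

```python
import torch
import torch.nn as nn

class SceneIconMatcher(nn.Module):
    """Scores how well each candidate AR icon matches an input scene image."""
    def __init__(self, num_icons: int = 3):   # e.g. left turn, right turn, straight ahead
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, num_icons)

    def forward(self, scene_batch):            # scene_batch: (N, 3, H, W)
        return torch.sigmoid(self.head(self.backbone(scene_batch)))

model = SceneIconMatcher()
loss_fn = nn.MSELoss()   # regress the annotated matching degree of each icon for each scene
```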
In one possible implementation, the fusing the first off-vehicle scene and the first AR image to obtain a second scene includes: determining a corresponding HUD virtual image plane in the first off-vehicle scene based on a preset head-up display HUD parameter set, where the HUD virtual image plane is a corresponding area in the first off-vehicle scene; and rendering the first AR image into the HUD virtual image plane to obtain the second scene.
In this embodiment of this application, an in-vehicle HUD hardware device can be simulated in software on an existing computing device, so that a corresponding virtual image plane is determined in a large number of scenes, where the virtual image plane may be an area in a scene. It can be understood that the virtual image plane is the plane projected by the HUD for displaying the AR image. Then, an AR image matching the scene is generated by a pre-trained model and rendered into the virtual image plane. In this way, the embodiments of this application can simulate, in software, the AR images projected by the HUD in various scenes and complete testing of the AR function (the function includes, for example, the preset model that recognizes scenes and generates the corresponding AR images). This greatly saves the time, manpower, and material resources consumed by real-vehicle tests, improves the test efficiency of the AR function, further guarantees the user experience of the HUD, and improves the driving comfort and safety of the user.
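The following hedged sketch illustrates this fusion step, assuming the HUD virtual image plane has already been projected to an axis-aligned rectangle of the scene image (the rectangle parameterization and the use of OpenCV are assumptions of the sketch, not requirements of the method):

```python
import numpy as np
import cv2  # OpenCV is used here only to resize the AR image to the virtual image plane

def render_into_virtual_image_plane(scene_rgb, ar_rgba, plane_rect):
    """plane_rect = (x, y, w, h): area of the first off-vehicle scene covered by the HUD virtual image."""
    x, y, w, h = plane_rect
    ar_resized = cv2.resize(ar_rgba, (w, h))                  # fit the AR image to the plane
    alpha = ar_resized[..., 3:4].astype(np.float32) / 255.0
    roi = scene_rgb[y:y + h, x:x + w].astype(np.float32)
    scene_rgb[y:y + h, x:x + w] = (roi * (1.0 - alpha)
                                   + ar_resized[..., :3].astype(np.float32) * alpha).astype(np.uint8)
    return scene_rgb   # the second scene: real scene with the AR image rendered in the virtual image plane
```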
In one possible implementation, the set of HUD parameters includes at least one of windshield curvature, eye box position, eye observation position, HUD installation position, and HUD virtual image plane size.
In this embodiment of this application, the HUD can be simulated in software based on the relevant hardware parameters of the HUD, such as the windshield curvature, the eye box position, the eye observation position, the HUD installation position, and the HUD virtual image plane size, so that corresponding HUD virtual image planes are constructed in a large number of scenes, providing support for subsequent AR function tests. In this way, the embodiments of this application do not rely on hardware devices, which saves the manpower and material resources that testers would otherwise spend on real-vehicle tests. By simulating the HUD in software, the relevant AR functions in the HUD can be tested more efficiently, at lower cost, and with higher coverage, effectively supporting the development and testing of the AR functions and greatly improving the test efficiency and the reliability of test results. A series of software algorithms in the HUD can therefore be better improved and optimized, further guaranteeing the user experience and improving the driving comfort and safety of the user.
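One possible way to group the simulated HUD parameter set in code is shown below; the field names and units are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class HudParams:
    windshield_curvature: float                            # e.g. 1/radius, in 1/m
    eye_box_position: Tuple[float, float, float]           # centre of the eye box in the vehicle frame, m
    eye_observation_position: Tuple[float, float, float]   # eye point actually used for rendering, m
    hud_installation_position: Tuple[float, float, float]  # mounting position of the HUD unit, m
    virtual_image_distance: float                          # distance from the eye point to the virtual image plane, m
    virtual_image_size: Tuple[float, float]                # (width, height) of the HUD virtual image plane, m
```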
In one possible implementation, the acquiring a first off-vehicle scene includes: acquiring data collected by a first sensor, where the first sensor is a vehicle-mounted sensor, the data collected by the first sensor is data collected for the surrounding environment of a target vehicle while the target vehicle is driving and includes at least one of image data, point cloud data, temperature data, and humidity data, and the first sensor includes at least one of a camera, a lidar, a millimeter-wave radar, a temperature sensor, and a humidity sensor; and constructing the first off-vehicle scene based on the data collected by the first sensor, where the first off-vehicle scene is a live-action simulation scene.
In this embodiment of this application, it is only necessary to drive a vehicle equipped with vehicle-mounted sensors (such as a camera, a lidar, and a millimeter-wave radar) on an actual road and collect data about the surrounding environment through the one or more vehicle-mounted sensors. A large number of simulation scenes can then be constructed in software on a computing device such as a computer based on the large amount of collected data (for example, a simulation scene may be constructed from point cloud data collected by the lidar, or from image data collected by the camera, or by fusing the data collected by the lidar and the camera; this is not specifically limited in the embodiments of this application). Obviously, these simulation scenes are live-action simulation scenes. As described above, they can be used to support subsequent AR function display, testing, and the like, which greatly improves scene coverage and guarantees scene reproducibility, so that improved AR functions can be tested again in the same scene, the improved AR effect can be verified, and the reliability of test results is effectively improved.
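A hedged sketch of one way to turn such first-sensor data into a live-action simulation scene is colouring a lidar point cloud with a synchronised camera image; the 3x4 projection matrix and the flat array layout are assumptions of the sketch:

```python
import numpy as np

def build_colored_point_cloud(image_rgb, points_xyz, lidar_to_image):
    """image_rgb: (H, W, 3); points_xyz: (N, 3) lidar points; lidar_to_image: 3x4 projection matrix."""
    pts_h = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])   # homogeneous coordinates
    uvw = pts_h @ lidar_to_image.T
    uv = uvw[:, :2] / uvw[:, 2:3]                                        # pixel coordinates
    h, w = image_rgb.shape[:2]
    valid = (uvw[:, 2] > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    colors = image_rgb[uv[valid, 1].astype(int), uv[valid, 0].astype(int)]
    return np.hstack([points_xyz[valid], colors])                        # (M, 6): x, y, z, r, g, b
```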
In one possible implementation, the acquiring a first off-vehicle scene includes: acquiring data collected by a second sensor, where the second sensor is a sensor constructed by a preset simulation system, and the data collected by the second sensor is data set by the preset simulation system and includes at least one of weather, roads, pedestrians, vehicles, plants, and traffic signals; and constructing the first off-vehicle scene based on the data collected by the second sensor, where the first off-vehicle scene is a virtual simulation scene.
In this embodiment of this application, a plurality of virtual sensors can be obtained by simulation in software on a computer or another computing device, and the data of each virtual sensor can be set, so that a large number of simulation scenes can be constructed based on the data of the plurality of virtual sensors. Obviously, these simulation scenes are virtual simulation scenes (similar to virtual game scenes). This saves the time, manpower, and material resources that would otherwise be consumed in collecting data with a plurality of in-vehicle sensors, further reducing the test cost. As described above, the large number of simulation scenes can be used to support subsequent AR function display and testing, which greatly improves scene coverage and guarantees scene reproducibility, so that improved AR functions can be tested again in the same scene, the improved AR effect can be verified, and the reliability of test results is effectively improved.
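A sketch of how the data of such a virtual "second sensor" might be set in a preset simulation system follows; all keys and values are illustrative assumptions:

```python
virtual_scene_config = {
    "weather": "light_rain",
    "roads": [{"type": "urban_intersection", "lanes": 3}],
    "pedestrians": [{"position": (12.0, -1.5), "speed_mps": 1.2}],
    "vehicles": [{"position": (25.0, 0.0), "speed_mps": 8.0, "kind": "car"}],
    "plants": [{"kind": "roadside_tree", "position": (8.0, 4.0)}],
    "traffic_signals": [{"position": (30.0, 3.0), "state": "red"}],
}

def enumerate_scene_elements(config):
    """Flatten the configured elements so a scene builder can place them in the virtual scene."""
    for key in ("roads", "pedestrians", "vehicles", "plants", "traffic_signals"):
        for element in config.get(key, []):
            yield key, element
```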
In one possible implementation, the method further includes: performing first preprocessing on the first AR image to obtain a second AR image corresponding to the first off-vehicle scene, where the first preprocessing includes at least one of distortion processing and jitter processing, the distortion processing includes at least one of radial distortion, tangential distortion, virtual image distance increase, and virtual image distance decrease, and the jitter processing includes superimposing a preset rotational displacement and/or jitter amount; fusing the first off-vehicle scene and the second AR image to obtain a third scene; and enabling the display screen to display the third scene.
In this embodiment of this application, it can be understood that, because of installation precision problems when the HUD is mounted in a vehicle, the installation position of the HUD may deviate from the preset ideal position, or, because of manufacturing precision problems, the curvature of the windshield may not meet the standard, so that the AR image projected by the HUD is more or less distorted. This seriously affects the user's visual perception, largely reduces the user experience and driving comfort, and may even endanger driving safety. Therefore, the embodiments of this application can further apply distortion processing in software to the AR images generated in various scenes, for example radial distortion, tangential distortion, virtual image distance increase, and virtual image distance decrease, with wide coverage and high efficiency. The influence of various distortions on the AR imaging effect can thus be understood intuitively, and the degradation they cause can be compensated; for example, effective and accurate support can be provided for the development and testing of a subsequent de-distortion function, improving the user experience. In addition, after the HUD is assembled in a real vehicle, the vehicle always bumps to some extent while driving, so that the positions of the driver's eyes or of the HUD jitter and change. Accordingly, there is a certain alignment error between the AR image observed by the driver and the objects in the real scene (such as roads and vehicles), and the AR image no longer fits them, which affects the user's visual perception. In the prior art, a real vehicle often needs to be driven on various road surfaces to test and analyze AR imaging under jitter conditions. However, because multiple jitter noises often accompany a real-vehicle test (for example, eye jitter and HUD position jitter may exist at the same time), it is difficult to isolate a single jitter factor for test and modeling analysis, which makes the development and testing of a subsequent anti-jitter function difficult and cannot guarantee the user experience in actual driving. Therefore, compared with the prior art, the embodiments of this application can further apply jitter processing in software to the AR images generated in various scenes, for example superimposing different degrees of rotational displacement and jitter amount in sequence, or superimposing a certain rotational displacement or jitter amount separately, so that a single jitter factor can be isolated for test analysis of the AR imaging effect, the influence of various jitter conditions on the AR imaging effect can be understood intuitively, and effective and accurate support is provided for the development and testing of a subsequent anti-jitter function, improving the user experience.
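A hedged sketch of this first preprocessing is given below, with an illustrative radial/tangential distortion warp and a superimposed rotation/translation jitter; all coefficients are assumptions, and a real HUD distortion model would come from the optical design:

```python
import numpy as np
import cv2

def distort_ar_image(ar_img, k1=0.1, k2=0.01, p1=0.001, p2=0.001):
    """Warp the AR image with a radial (k1, k2) and tangential (p1, p2) distortion model."""
    h, w = ar_img.shape[:2]
    fx = fy = float(max(h, w)); cx, cy = w / 2.0, h / 2.0
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    x, y = (xs - cx) / fx, (ys - cy) / fy
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    # cv2.remap samples the source at the mapped coordinates, so this applies the inverse
    # of the model above to the clean image; either direction is enough to visualise the
    # distortion effect for testing purposes.
    map_x = (x_d * fx + cx).astype(np.float32)
    map_y = (y_d * fy + cy).astype(np.float32)
    return cv2.remap(ar_img, map_x, map_y, cv2.INTER_LINEAR)

def jitter_ar_image(ar_img, dx=3.0, dy=5.0, angle_deg=1.0):
    """Superimpose a preset rotational displacement (angle_deg) and jitter amount (dx, dy)."""
    h, w = ar_img.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, 1.0)
    m[:, 2] += (dx, dy)
    return cv2.warpAffine(ar_img, m, (w, h))
```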
In one possible implementation, the method further includes: performing second preprocessing on the second AR image to obtain a third AR image corresponding to the first off-vehicle scene, where the second preprocessing includes at least one of de-distortion processing and anti-jitter processing; fusing the first off-vehicle scene and the third AR image to obtain a fourth scene; and enabling the display screen to display the fourth scene.
In one possible implementation, the method further includes: obtaining the processing effect of the de-distortion processing and/or the anti-jitter processing based on the third scene and the fourth scene, so as to optimize the corresponding de-distortion function and/or anti-jitter function.
In this embodiment of this application, further, after the distortion processing is performed, the de-distortion algorithm to be tested can be superimposed on the distorted AR image, so that the current de-distortion effect can be known intuitively and the de-distortion algorithm can be continuously improved accordingly. This guarantees the AR imaging effect when the HUD installation position deviates or the windshield curvature is inappropriate, guarantees the user's visual perception and experience, and further improves the driving comfort and safety of the user. Furthermore, after the jitter processing is performed, the anti-jitter algorithm to be tested can be superimposed on the jittered AR image, so that the anti-jitter effect of the current anti-jitter algorithm can be known intuitively and the algorithm can be continuously improved accordingly. This guarantees the AR imaging effect when eye position jitter or HUD position jitter is caused by vehicle bumps in actual driving, guarantees the user's visual perception and experience, and further improves the driving comfort and safety of the user.
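A minimal sketch of how the processing effect mentioned above could be quantified by comparing the fused scenes is shown below; the metric is an assumption, and any alignment or image-difference measure could be substituted:

```python
import numpy as np

def overlay_error(reference_scene: np.ndarray, test_scene: np.ndarray) -> float:
    """Mean absolute pixel difference between a fused scene and the ideal (undistorted, un-jittered) one."""
    return float(np.mean(np.abs(reference_scene.astype(np.float32) - test_scene.astype(np.float32))))

# improvement = overlay_error(second_scene, third_scene) - overlay_error(second_scene, fourth_scene)
# A positive value suggests the de-distortion / anti-jitter function brings the displayed
# result closer to the ideal second scene.
```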
In one possible implementation, the first off-vehicle scene includes one or more scene elements; the one or more scene elements include one or more of weather, roads, pedestrians, vehicles, plants, and traffic signals; the one or more AR icons include one or more of a left-turn, a right-turn, and a straight-ahead navigation identifier; and the method further includes: correspondingly correcting the preset model based on the positional relationship and/or logical relationship between the one or more scene elements and the one or more AR icons in each second scene.
In this embodiment of this application, each simulation scene may include one or more scene elements, such as roads, pedestrians, vehicles, plants, and traffic signals (such as traffic lights), and may also include road signs, overpasses, buildings, animals, and so on; this is not limited in the embodiments of this application. In this way, according to the matching degree (for example, the positional relationship and/or logical relationship) between the one or more scene elements in a simulation scene and the one or more AR icons in the generated AR image, the AR effect of the preset model can be analyzed and the preset model can be corrected accordingly. For example, a simulation scene may include an intersection and some trees beside the road; based on the navigation information, if the vehicle should turn right at the current intersection, a corresponding right-turn navigation identifier (that is, a right-turn guide arrow) needs to be generated. If the right-turn guide arrow generated in the AR image does not fit the road surface well but is displayed on a tree beside the road, or an incorrect straight-ahead guide arrow is generated, the current AR effect can be considered unsatisfactory, the positional relationship and/or logical relationship between the scene elements and the corresponding AR icons does not match well, and the preset model still needs to be improved. In this way, through simulation, a tester can clearly observe the AR effect of the current preset model on a computing device, or intuitively grasp the state of the current AR function of the HUD, so that existing problems can be located efficiently and accurately, the preset model and the like can be better improved and optimized, the user experience is guaranteed, and the driving safety and comfort of the user are improved.
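The kind of positional/logical check described above could, for example, be automated roughly as follows; the data layout, the shapely dependency, and the pass criteria are assumptions of this sketch:

```python
from shapely.geometry import Point, Polygon   # point-in-region test for the positional check

def check_ar_icon(icon, road_polygon: Polygon, navigation_action: str) -> bool:
    """icon = {"type": ..., "anchor": (x, y)}; True if the icon is logically and positionally consistent."""
    logical_ok = icon["type"] == navigation_action                 # e.g. a right turn requires a "right_turn" arrow
    positional_ok = road_polygon.contains(Point(icon["anchor"]))   # the guide arrow should lie on the road surface
    return logical_ok and positional_ok

# Second scenes in which check_ar_icon(...) fails can be collected and used to
# correct (retrain or re-tune) the preset model.
```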
In a second aspect, an embodiment of the present application provides a scene processing apparatus, including:
a first acquisition unit, configured to acquire a first off-vehicle scene, where the first off-vehicle scene is a two-dimensional scene image or a three-dimensional scene model;
a second acquisition unit, configured to acquire a first AR image corresponding to the first off-vehicle scene;
a fusion unit, configured to fuse the first off-vehicle scene and the first AR image to obtain a second scene, where the first off-vehicle scene is real information in the second scene and the first AR image is virtual information in the second scene;
and a first display unit, configured to enable a display screen to display the second scene.
In a possible implementation manner, the second obtaining unit is specifically configured to:
acquiring the first AR image corresponding to the first off-vehicle scene according to the first off-vehicle scene and a preset model; wherein the first AR image comprises one or more AR icons.
In a possible implementation manner, the preset model is a neural network model, and the neural network model is obtained by training according to a plurality of scenes, a plurality of AR icons, and different matching degrees of the scenes and the AR icons.
In a possible implementation manner, the fusion unit is specifically configured to:
determining a corresponding HUD virtual image plane in the first off-vehicle scene based on a preset head-up display HUD parameter set, where the HUD virtual image plane is a corresponding area in the first off-vehicle scene;
and rendering the first AR image into the HUD virtual image plane to obtain the second scene.
In one possible implementation, the set of HUD parameters includes at least one of windshield curvature, eye box position, eye observation position, HUD installation position, and HUD virtual image plane size.
In a possible implementation manner, the first obtaining unit is specifically configured to:
acquiring data collected by a first sensor, where the first sensor is a vehicle-mounted sensor, the data collected by the first sensor is data collected for the surrounding environment of a target vehicle while the target vehicle is driving and includes at least one of image data, point cloud data, temperature data, and humidity data, and the first sensor includes at least one of a camera, a lidar, a millimeter-wave radar, a temperature sensor, and a humidity sensor;
and constructing the first off-vehicle scene based on the data collected by the first sensor, where the first off-vehicle scene is a live-action simulation scene.
In a possible implementation manner, the first obtaining unit is specifically configured to:
acquiring data collected by a second sensor, where the second sensor is a sensor constructed by a preset simulation system, and the data collected by the second sensor is data set by the preset simulation system and includes at least one of weather, roads, pedestrians, vehicles, plants, and traffic signals;
and constructing the first off-vehicle scene based on the data collected by the second sensor, where the first off-vehicle scene is a virtual simulation scene.
In one possible implementation, the apparatus further includes:
a first preprocessing unit, configured to perform first preprocessing on the first AR image to obtain a second AR image corresponding to the first off-vehicle scene, where the first preprocessing includes at least one of distortion processing and jitter processing, the distortion processing includes at least one of radial distortion, tangential distortion, virtual image distance increase, and virtual image distance decrease, and the jitter processing includes superimposing a preset rotational displacement and/or jitter amount;
a second fusion unit, configured to fuse the first off-vehicle scene and the second AR image to obtain a third scene;
and a second display unit, configured to enable the display screen to display the third scene.
In one possible implementation, the apparatus further includes:
a second preprocessing unit, configured to perform second preprocessing on the second AR image to obtain a third AR image corresponding to the first off-vehicle scene, where the second preprocessing includes at least one of de-distortion processing and anti-jitter processing;
a third fusion unit, configured to fuse the first off-vehicle scene and the third AR image to obtain a fourth scene;
and a third display unit, configured to enable the display screen to display the fourth scene.
In one possible implementation, the apparatus further includes:
and an optimization unit, configured to obtain the processing effect of the de-distortion processing and/or the anti-jitter processing based on the third scene and the fourth scene, so as to optimize the corresponding de-distortion function and/or anti-jitter function.
In one possible implementation, the first off-vehicle scene includes one or more scene elements; the one or more scene elements include one or more of weather, roads, pedestrians, vehicles, plants, and traffic signals; the one or more AR icons include one or more of a left-turn, a right-turn, and a straight-ahead navigation identifier; and the apparatus further includes:
a correction unit, configured to correspondingly correct the preset model based on the positional relationship and/or logical relationship between the one or more scene elements and the one or more AR icons in each second scene.
In a third aspect, an embodiment of the present application provides a scene processing system, including: a terminal and a server;
the terminal is configured to send a first off-vehicle scene, where the first off-vehicle scene is sensing information acquired by a sensor of the terminal;
the server is configured to receive the first off-vehicle scene from the terminal;
the server is further configured to acquire a first AR image corresponding to the first off-vehicle scene;
the server is further configured to fuse the first off-vehicle scene and the first AR image to obtain a second scene, where the first off-vehicle scene is real information in the second scene and the first AR image is virtual information in the second scene;
the server is further configured to send the second scene;
and the terminal is further configured to receive the second scene and display the second scene.
In one possible implementation, the sensor includes at least one of a temperature sensor, a humidity sensor, a global positioning system, a camera, and a lidar; the sensing information includes at least one of temperature, humidity, weather, location, image, and point cloud.
In a possible implementation manner, the server is specifically configured to:
acquiring the first AR image corresponding to the first off-vehicle scene according to the first off-vehicle scene and a preset model; the first AR image includes one or more AR icons.
In a possible implementation manner, the preset model is a neural network model, and the neural network model is obtained by training according to a plurality of scenes, a plurality of AR icons, and different matching degrees of the scenes and the AR icons.
In a possible implementation manner, the server is specifically configured to:
determining a corresponding HUD virtual image plane in the first off-vehicle scene based on a preset head-up display HUD parameter set, where the HUD virtual image plane is a corresponding area in the first off-vehicle scene;
and rendering the first AR image into the HUD virtual image plane to obtain the second scene.
In one possible implementation, the set of HUD parameters includes at least one of windshield curvature, eye box position, eye observation position, HUD installation position, and HUD virtual image plane size.
In one possible implementation manner, the server is further configured to:
performing first preprocessing on the first AR image to obtain a second AR image corresponding to the first off-vehicle scene, where the first preprocessing includes at least one of distortion processing and jitter processing, the distortion processing includes at least one of radial distortion, tangential distortion, virtual image distance increase, and virtual image distance decrease, and the jitter processing includes superimposing a preset rotational displacement and/or jitter amount;
fusing the first off-vehicle scene and the second AR image to obtain a third scene;
and sending the third scene.
The terminal is further configured to receive the third scene and display the third scene.
In one possible implementation manner, the server is further configured to:
performing second preprocessing on the second AR image to obtain a third AR image corresponding to the first off-vehicle scene, where the second preprocessing includes at least one of de-distortion processing and anti-jitter processing;
fusing the first off-vehicle scene and the third AR image to obtain a fourth scene;
and sending the fourth scene.
The terminal is further configured to receive the fourth scene and display the fourth scene.
In one possible implementation manner, the server is further configured to:
obtaining the processing effect of the de-distortion processing and/or the anti-jitter processing based on the third scene and the fourth scene, so as to optimize the corresponding de-distortion function and/or anti-jitter function.
In one possible implementation, the first off-vehicle scene includes one or more scene elements; the one or more scene elements include one or more of weather, roads, pedestrians, vehicles, plants, and traffic signals; the one or more AR icons include one or more of a left-turn, a right-turn, and a straight-ahead navigation identifier; and the server is further configured to:
correspondingly correct the preset model based on the positional relationship and/or logical relationship between the one or more scene elements and the one or more AR icons in each second scene.
In a fourth aspect, this application provides a computing device. The computing device includes a processor, and the processor is configured to support the computing device in implementing the corresponding functions of the scene processing method provided in the first aspect. The computing device may further include a memory coupled to the processor, which stores the program instructions and data necessary for the computing device. The computing device may further include a communication interface used by the computing device to communicate with other devices or a communication network.
The computing device may be a terminal, such as a mobile phone, an in-vehicle head unit, an in-vehicle device such as an in-vehicle PC, or a vehicle such as an automobile, or may be a server. The server may be a virtual server or a physical server. The computing device may also be a chip, an electronic system, or the like.
In a fifth aspect, an embodiment of this application provides a computer-readable storage medium storing a computer program. When the computer program is executed by a processor, the procedure of the scene processing method according to any one of the first aspect is implemented. There may be one or more processors.
In a sixth aspect, this application provides a computer program, where the computer program includes instructions. When the computer program is executed by a computer, the computer can perform the procedure of the scene processing method according to any one of the first aspect.
In a seventh aspect, an embodiment of this application provides a chip system. The chip system may include the scene processing apparatus according to any one of the second aspect, and is configured to implement the functions involved in the procedure of the scene processing method according to any one of the first aspect. In one possible design, the chip system further includes a memory, configured to store the program instructions and data necessary for the scene processing method. The chip system may consist of a chip, or may include a chip and other discrete devices.
In an eighth aspect, an embodiment of this application provides an electronic device. The electronic device may include the scene processing apparatus according to any one of the second aspect, and is configured to implement the functions involved in the procedure of the scene processing method according to any one of the first aspect. In one possible design, the electronic device further includes a memory, configured to store the program instructions and data necessary for the scene processing method. The electronic device may be a terminal, for example, a mobile phone, an in-vehicle head unit, an in-vehicle device such as an in-vehicle PC, or a vehicle such as an automobile, or may be a server. The server may be a virtual server or a physical server. The electronic device may also be a chip, an electronic system, or the like.
In summary, the embodiments of this application provide a scene processing method in which a large number of two-dimensional or three-dimensional simulation scenes (each of which may include scene elements such as vehicles, roads, or pedestrians) can be constructed on existing computing devices (such as mobile phones, in-vehicle head units, and servers) by software simulation. Then, an AR image corresponding to a simulation scene is generated based on a developed model, and the simulation scene is fused with the corresponding AR image, so that a large number of augmented reality scenes including both real information and virtual information are obtained quickly and efficiently.
The beneficial effects of the embodiments of this application are explained below with reference to several common scenarios.
First, for a merchant selling or exhibiting HUD-related products, the scene processing method provided in the embodiments of this application makes it unnecessary to be equipped with HUD hardware or to drive a real vehicle; the AR effect of the HUD can be demonstrated intuitively simply through a display screen in the shop. Accordingly, a customer can directly experience the AR effect of the HUD through the display screen in the shop. The merchant can even upload a large number of simulated augmented reality scenes to the cloud, and users can view them on their own mobile phones through a corresponding website or application, so that they can learn about the product functions more quickly and conveniently and purchase products according to their own needs. Therefore, the embodiments of this application not only save a large amount of manpower and material resources for merchants, but also provide convenience for customers.
In addition, for developers, the various models or algorithms involved can be analyzed quickly and intuitively based on the scene elements and AR icons in the simulation scenes fused with the AR images, so that the corresponding functions can be optimized. Compared with the prior art, the embodiments of this application do not rely on real-world scenes or HUD hardware; a large number of reproducible scenes required for testing various models or algorithms can be constructed quickly in software, with wide coverage and high reusability. The models or algorithms can then be tested in these scenes, the AR images projected by the HUD in various scenes can be obtained quickly and intuitively, and the AR function and the corresponding de-distortion and anti-jitter functions can be continuously optimized, guaranteeing the user experience. Therefore, for developers, the test efficiency of the HUD's AR function, the scene coverage, and the reliability of test results can be greatly improved, the time consumed by real-vehicle tests can be largely saved, the manpower and material resources spent by testers on real-vehicle tests can be saved, and the test cost is greatly reduced.
Furthermore, for real-time live-streaming scenarios, for example live streams promoting HUD-related products or live streams of AR games, the streamer can acquire information about the surrounding environment (such as weather, roads, vehicles, and pedestrians) in real time through the live-streaming device (such as a mobile phone, possibly together with other cameras) and send it to a server in real time. The server constructs a simulation scene in real time based on the received information to restore the real environment around the streamer, then generates a corresponding AR image and fuses it with the real-time simulation scene to obtain a real-time augmented reality scene, and sends it back to the live-streaming device, which displays it. In this way, remote viewers can intuitively experience the real-time AR effect in the live-streaming room using the mobile phones or tablets in their hands. Optionally, in the above real-time live-streaming scenario, the server may be omitted, and the live-streaming device may directly construct the simulation scene, generate and fuse the AR image, display the augmented reality scene, and so on; this is not specifically limited in the embodiments of this application.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required to be used in the embodiments or the background of the present application will be described below.
FIG. 1 is a schematic diagram of the effect of AR-HUD imaging.
Fig. 2 is a functional block diagram of an intelligent vehicle according to an embodiment of the present application.
Fig. 3a is a schematic system architecture diagram of a scene processing method according to an embodiment of the present application.
Fig. 3b is a schematic system architecture diagram of another scene processing method according to the embodiment of the present application.
Fig. 3c is a functional block diagram of a computing device according to an embodiment of the present application.
Fig. 4 is an application scenario schematic diagram of a scenario processing method according to an embodiment of the present application.
Fig. 5 is an application scenario diagram of another scenario processing method provided in the embodiment of the present application.
Fig. 6a is a schematic flowchart of a scene processing method according to an embodiment of the present application.
Fig. 6b is a schematic flowchart of another scene processing method according to the embodiment of the present application.
Fig. 7 is a schematic flowchart of another scene processing method according to an embodiment of the present application.
Fig. 8 is a schematic diagram of a scenario provided in an embodiment of the present application.
Fig. 9 is a schematic diagram of a camera coordinate system and a world coordinate system provided in an embodiment of the present application.
Fig. 10 is a schematic diagram of scene reconstruction provided in an embodiment of the present application.
Fig. 11 is a schematic diagram of a HUD virtual image plane according to an embodiment of the present application.
Fig. 12a is a schematic diagram of an AR imaging effect provided in an embodiment of the present application.
Fig. 12b is a schematic diagram of another AR imaging effect provided by the embodiment of the present application.
Fig. 12c is a schematic diagram of another AR imaging effect provided by the embodiment of the present application.
Fig. 13 is a schematic diagram of a HUD simulation effect under an external viewing angle according to an embodiment of the present application.
Fig. 14 is a schematic diagram of still another AR imaging effect provided by the embodiment of the present application.
Fig. 15 is a schematic diagram of a distortion type and an AR imaging effect according to an embodiment of the present application.
Fig. 16 is a schematic diagram illustrating comparison of AR imaging effects provided by an embodiment of the present application.
Fig. 17 is a schematic diagram of a human eye sight line coordinate system and a HUD virtual image plane coordinate system according to an embodiment of the present application.
Fig. 18 is a schematic diagram illustrating an effect of a shake condition on an AR imaging effect according to an embodiment of the present application.
Fig. 19 is a schematic structural diagram of another scene processing apparatus according to an embodiment of the present application.
Fig. 20 is a schematic structural diagram of a computing device according to an embodiment of the present application.
Detailed Description
The embodiments of the present application will be described below with reference to the drawings.
First, some terms in the present application are explained so as to be easily understood by those skilled in the art.
(1) Augmented reality (AR), also known as mixed reality. Through computer technology, virtual information is applied to the real world, and the real environment and virtual objects are superimposed on the same picture or in the same space in real time and exist at the same time. Augmented reality provides information that is generally different from what humans can perceive directly: it not only presents information about the real world but also displays virtual information at the same time, and the two kinds of information complement and overlay each other.
(2) A head-up display (HUD) is a display device that projects an image into the field of view in front of the driver. HUDs were first applied to military aircraft, with the aim of reducing how frequently pilots needed to look down at the instruments. Using optical principles, the HUD could project driving-related information onto the pilot's helmet, so that the pilot could keep track of flight indicators and information transmitted from the ground while flying normally, improving the safety and convenience of flying. Nowadays, the HUD is applied to automobiles. The original motivation for using the HUD was driving safety: the driver does not need to look down at the instrument panel or the central control screen while driving, which avoids switching the eyes' focus back and forth between the instrument panel or central control screen and the road surface and shortens the reaction time in an emergency. Therefore, the information projected by early HUDs was mainly the running-condition indicators of the automobile, that is, relatively simple information such as the vehicle speed and fuel level shown on the instrument panel.
A commercially available HUD may include components such as a projector, a reflector (also referred to as a secondary mirror), and a projection mirror (also referred to as a primary mirror). The imaging principle of the HUD is similar to that of a slide projector: an image is projected onto the windshield of the car so that the driver can see it in the forward field of view. Specifically, light is emitted by the projector, reflected by the reflector onto the projection mirror, and then reflected by the projection mirror onto the windshield, so that the human eye sees a virtual image about 2-2.5 meters in front, as if the information were suspended in front of the windshield.
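As a first-order illustration of why a curved (concave) projection mirror produces a distant virtual image, the simple mirror equation can be used; this is textbook optics added for clarity, not a formula from the application, and the focal length and distances below are made-up example values:

```python
def virtual_image_distance(focal_length_m: float, object_distance_m: float) -> float:
    """Thin-mirror equation 1/f = 1/d_o + 1/d_i; with d_o < f the image distance d_i is
    negative, i.e. a magnified virtual image appears behind the mirror."""
    return 1.0 / (1.0 / focal_length_m - 1.0 / object_distance_m)

# Example: a concave mirror with f = 0.25 m and the projected image 0.22 m away gives
# d_i of about -1.83 m, i.e. a virtual image roughly 1.8 m away, on the order of the
# 2-2.5 m quoted above (the real HUD optical path is more complex).
```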
It should be noted that the position at which the HUD image is projected on the windshield is adjustable; generally, it can be adjusted by changing the angle of the projection mirror. Furthermore, it can be understood that, because the windshield of an automobile is curved, the image would be distorted if it were projected directly onto the curved glass surface. This requires correction, which is why the projection mirror and the reflector are often designed to be curved.
As described above, using the principle of optical reflection, the HUD can project and display information such as overspeed warnings, vehicle condition monitoring, fuel consumption, and speed on the windshield, so that the driver's attention stays on the road ahead, achieving active driving safety. At the same time, the delay and discomfort caused by constantly refocusing the eyes can be reduced.
Further, designers hope to achieve the goal of intelligent driving through the HUD by giving it more functions. On this basis, the augmented reality head-up display (AR-HUD) has been proposed in recent years. The AR-HUD is more intuitive for users and can fuse the image projected by the HUD with real road information to enhance the driver's acquisition of road information. For example, some virtual arrows can be projected and displayed in real time to intuitively guide the way, avoiding missed turns and distracted driving.
Referring to fig. 1, fig. 1 is a schematic diagram of the AR-HUD imaging effect. As shown in fig. 1, the virtual image plane projected by the AR-HUD onto the windshield of the vehicle may be located directly in front of the driver's field of view. As shown in fig. 1, compared with a conventional HUD, the AR-HUD can display information such as an AR navigation guide arrow in addition to basic information such as the driving speed and the vehicle battery level, so as to help the driver achieve more intelligent, comfortable, and safe driving. As described above, the AR-HUD can implement functions such as AR navigation and AR early warning through the image projected by the HUD. Optionally, functions such as following-distance warning, lane-line (line-pressing) warning, traffic light monitoring, advance lane-change indication, pedestrian warning, road sign display, lane departure indication, front obstacle warning, and driver state monitoring can also be implemented, and details are not described here.
In order to solve the problem that current AR function simulation and test technologies cannot meet actual service requirements, the embodiments of the present application provide a series of schemes for building various scenes in a software manner based on existing computing devices (such as a mobile phone, a car machine, a server, and the like), so that the AR functions are simulated in the various scenes, AR images corresponding to the various scenes are generated, and the various scenes and the corresponding AR images are fused, whereby a large number of augmented reality scenes comprising real information and virtual information are obtained quickly and efficiently, and the AR effect in each scene can be verified conveniently and intuitively. Furthermore, the series of schemes provided by the embodiments of the present application can also test the AR functions according to the augmented reality scenes obtained by simulation, so as to continuously improve and optimize the AR functions and improve the AR effect. Furthermore, in order to verify the influence on the AR imaging effect of the installation deviation of the AR device (such as the AR-HUD), human eye shake and vehicle shake that may exist in actual driving, the series of schemes provided in the embodiments of the present application may also use the above software manner and, by adding different distortion factors and shake amounts, simulate and test the AR imaging effect under various non-ideal conditions, so as to develop and continuously improve the corresponding de-distortion and anti-shake functions, improve the AR effect, and ensure the user experience.
Referring to fig. 2, fig. 2 is a functional block diagram of an intelligent vehicle according to an embodiment of the present disclosure. The scene processing method provided by the embodiment of the application can be applied to the intelligent vehicle 200 shown in fig. 2, and in one embodiment, the intelligent vehicle 200 can be configured in a fully or partially automatic driving mode. While the smart vehicle 200 is in the autonomous driving mode, the smart vehicle 200 may be placed into operation without human interaction.
The smart vehicle 200 may include various subsystems such as a travel system 202, a sensing system 204, a control system 206, one or more peripherals 208, as well as a power source 210, a computer system 212, and a user interface 216. Alternatively, the smart vehicle 200 may include more or fewer subsystems, and each subsystem may include multiple elements. In addition, each subsystem and element of the smart vehicle 200 may be interconnected by wire or wirelessly.
The travel system 202 may include components that provide powered motion for the smart vehicle 200. In one embodiment, the travel system 202 may include an engine 218, an energy source 219, a transmission 220, and wheels 221. The engine 218 may be an internal combustion engine, an electric motor, an air compression engine, or other types of engine combinations, such as a hybrid engine of a gasoline engine and an electric motor, a hybrid engine of an internal combustion engine and an air compression engine. The engine 218 may convert the energy source 219 into mechanical energy.
Examples of energy sources 219 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electrical power. The energy source 219 may also provide energy for other systems of the smart vehicle 200.
The transmission 220 may transmit mechanical power from the engine 218 to the wheels 221. The transmission 220 may include a gearbox, a differential, and a drive shaft. In one embodiment, the transmission 220 may also include other devices, such as a clutch. Wherein the drive shaft may comprise one or more shafts that may be coupled to one or more wheels 221.
The sensing system 204 may include a number of sensors that may be used to gather environmental information around the smart vehicle 200 (e.g., the terrain, roads, motor vehicles, non-motor vehicles, pedestrians, roadblocks, traffic signs, traffic lights, animals, buildings, vegetation, and the like around the smart vehicle 200). As shown in fig. 2, the sensing system 204 may include a positioning system 222 (the positioning system may be a Global Positioning System (GPS), a Beidou system, or another positioning system), an Inertial Measurement Unit (IMU) 224, a radar 226, a laser rangefinder 228, a camera 230, a computer vision system 232, and so on. Optionally, in some possible implementations, a tester may drive the smart vehicle 200 in various driving environments (e.g., driving environments of different regions, different terrains, different road conditions, and different weather), collect data of the surrounding environment through the plurality of sensors in the sensing system 204, and upload the collected data to a server. Subsequently, the tester can obtain from the server the large amount of sensor data collected in each environment, and construct a large number of corresponding scenes through the computing device based on that sensor data. Optionally, the tester may also directly send the collected data to the computing device, and the like, which is not specifically limited in this embodiment of the application. The large number of scenes can serve as the real information required for testing the AR function; after a virtual HUD device is further obtained through simulation, the AR functions involved in the HUD can be tested in these various scenes and the AR imaging effect in each scene can be observed, so as to continuously optimize the relevant AR functions of the HUD and guarantee the user experience. Optionally, the smart vehicle 200 may further include an AR-HUD (i.e., an AR-enabled HUD, not shown in fig. 2), which may be an AR-HUD obtained through the above simulation testing and improvement, and which may project an AR image into the front of the driver's field of view to provide safe and comfortable AR navigation and AR warnings during driving.
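As a rough illustration of the collection-and-upload flow just described, the following sketch defines a hypothetical per-frame record that bundles the outputs of several sensors with the driving-environment attributes, plus a serialization stub standing in for the upload step; the field names and the example endpoint are assumptions for illustration, not an interface defined by this application.

```python
# Minimal sketch (hypothetical field names): one time-stamped record per frame,
# bundling camera, radar and positioning outputs with environment attributes
# so that testers can later rebuild the corresponding scene on a computing device.
from dataclasses import dataclass, field
from typing import List, Dict
import json

@dataclass
class SensorFrame:
    timestamp_s: float
    camera_image_path: str           # image captured by camera 230
    lidar_points: List[List[float]]  # [x, y, z] points collected by radar 226
    gps_position: List[float]        # [longitude, latitude] from positioning system 222
    imu_attitude: List[float]        # [roll, pitch, yaw] from IMU 224
    environment: Dict[str, str] = field(default_factory=dict)  # e.g. weather, terrain, road type

def serialize_for_upload(frame: SensorFrame, server_url: str) -> bytes:
    """Serialize the frame; in a real system this payload would be posted to
    server_url (or sent directly to the computing device) over the wireless link."""
    return json.dumps(frame.__dict__).encode("utf-8")

frame = SensorFrame(
    timestamp_s=162.5,
    camera_image_path="frames/000162.png",
    lidar_points=[[12.3, -0.8, 0.2]],
    gps_position=[114.05, 22.55],
    imu_attitude=[0.01, -0.02, 1.57],
    environment={"weather": "rain", "terrain": "urban", "road": "multi-lane"},
)
payload = serialize_for_upload(frame, server_url="https://example-test-server/upload")
```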
The server may be a single server, a server cluster formed by a plurality of servers, or a cloud computing service center, and the like, which is not specifically limited in this embodiment of the application. The computing device may be an intelligent wearable device, a smart phone, a tablet computer, a notebook computer, a desktop computer, or a server with a display screen, and the like, which is not specifically limited in this embodiment of the present application.
The positioning system 222 may be used to estimate the geographic location of the smart vehicle 200. The IMU 224 is used to sense position and orientation changes of the smart vehicle 200 based on inertial acceleration. In one embodiment, the IMU 224 may be a combination of an accelerometer and a gyroscope.
The radar 226 may utilize radio signals to sense objects within the surrounding environment of the smart vehicle 200. In some possible embodiments, the radar 226 may also be used to sense the speed and/or direction of travel, etc., of vehicles in the vicinity of the smart vehicle 200. The radar 226 may be a laser radar (lidar) or a millimeter-wave radar, among others, and may be used to collect point cloud data of the surrounding environment, from which a large number of scenes in point cloud form can be obtained for performing simulation tests on the HUD.
The laser rangefinder 228 may utilize laser light to sense objects in the environment in which the smart vehicle 200 is located. In some possible embodiments, laser rangefinder 228 may include one or more laser sources, one or more laser scanners, and one or more detectors, among other system components.
The camera 230 may be used to capture multiple images of the surroundings of the smart vehicle 200, which in turn may result in a large number of scenes in the form of photographs for simulation testing of the HUD. In some possible embodiments, the camera 230 may be a still camera or a video camera.
The computer vision system 232 may operate to process and analyze images captured by the camera 230 in order to identify objects and/or features in the environment surrounding the smart vehicle 200. The objects and/or features may include terrain, motor vehicles, non-motor vehicles, pedestrians, buildings, traffic signals, road boundaries and obstacles, and the like. The computer vision system 232 may use object recognition algorithms, Structure From Motion (SFM) algorithms, video tracking, and other computer vision techniques.
The control system 206 is for controlling the operation of the smart vehicle 200 and its components. The control system 206 may include various elements including a throttle 234, a brake unit 236, and a steering system 240.
The throttle 234 is used to control the operating speed of the engine 218 and thus the speed of the smart vehicle 200.
The brake unit 236 is used for controlling the smart vehicle 200 to decelerate. The brake unit 236 may use friction to slow the wheel 221. In other embodiments, the brake unit 236 may convert the kinetic energy of the wheel 221 into an electrical current. The brake unit 236 may also take other forms to slow the rotation speed of the wheel 221 to control the speed of the smart vehicle 200.
The steering system 240 is operable to adjust the heading of the smart vehicle 200.
Of course, in one example, the control system 206 may additionally or alternatively include components other than those shown and described, or some of the components shown above may be omitted.
The smart vehicle 200 interacts with external sensors, other vehicles, other computer systems, or users through peripherals 208. Peripheral devices 208 may include a wireless communication system 246, an in-vehicle computer 248, a microphone 250, and/or a speaker 252. In some embodiments, the data collected by one or more sensors in the sensor system 204 may also be uploaded to the server through the wireless communication system 246, and the data collected by one or more sensors in the sensor system 204 may also be sent to the computing device for performing the simulation test on the HUD through the wireless communication system 246, which is not particularly limited in this embodiment of the present application.
In some embodiments, the peripheral device 208 provides a means for a user of the smart vehicle 200 to interact with the user interface 216. For example, the onboard computer 248 may provide information to a user of the smart vehicle 200. The user interface 216 may also operate the in-vehicle computer 248 to receive user input. The in-vehicle computer 248 can be operated through a touch screen. In other cases, the peripheral devices 208 may provide a means for the smart vehicle 200 to communicate with other devices located within the vehicle. For example, the microphone 250 may receive audio (e.g., voice commands or other audio input) from a user of the smart vehicle 200. Similarly, the speaker 252 may output audio to the user of the smart vehicle 200.
The wireless communication system 246 may communicate wirelessly with one or more devices, either directly or via a communication network. For example, the wireless communication system 246 may use third generation mobile communication network (3rd generation mobile networks, 3G) cellular communication such as Code Division Multiple Access (CDMA), Global System for Mobile communications (GSM)/General Packet Radio Service (GPRS), fourth generation mobile communication network (4th generation mobile networks, 4G) cellular communication such as Long Term Evolution (LTE), or fifth generation mobile communication network (5th generation mobile networks, 5G) cellular communication. The wireless communication system 246 may also communicate with a Wireless Local Area Network (WLAN) using wireless fidelity (WiFi). In some embodiments, the wireless communication system 246 may communicate directly with a device using an infrared link, Bluetooth, or the like. Other wireless protocols may also be used, such as various vehicular communication systems; for example, the wireless communication system 246 may include one or more Dedicated Short Range Communications (DSRC) devices, which may include public and/or private data communications between vehicles and/or roadside stations.
The power supply 210 may provide power to various components of the smart vehicle 200. In one embodiment, power source 210 may be a rechargeable lithium ion or lead acid battery. One or more battery packs of such batteries may be configured as a power source to provide power to the various components of the smart vehicle 200. In some embodiments, the power source 210 and the energy source 219 may be implemented together, such as in some all-electric vehicles.
Some or all of the functionality of the smart vehicle 200 is controlled by the computer system 212. The computer system 212 may include at least one processor 213, the processor 213 executing instructions 215 stored in a non-transitory computer readable medium, such as the memory 214. The computer system 212 may also be a plurality of computing devices that control individual components or subsystems of the smart vehicle 200 in a distributed manner.
The processor 213 may be any conventional processor, such as a commercially available Central Processing Unit (CPU). Alternatively, the processor may be a dedicated device such as an application-specific integrated circuit (ASIC) or other hardware-based processor. Although fig. 2 functionally illustrates a processor, memory, and other elements of the computer system 212 in the same block, those skilled in the art will appreciate that the processor or memory may actually comprise multiple processors or memories that are not stored within the same physical housing. For example, the memory may be a hard drive or other storage medium located in a different enclosure than computer system 212. Thus, references to a processor or memory are to be understood as including references to a collection of processors or memories that may or may not operate in parallel. Rather than using a single processor to perform the steps described herein, for example, some of the components in sensing system 204 may each have their own processor that performs only computations related to the component-specific functions.
In various aspects described herein, the processor 213 may be located remotely from the vehicle and in wireless communication with the vehicle. In other aspects, some of the processes described herein are executed on a processor disposed within the vehicle while others are executed by a remote processor.
In some embodiments, the memory 214 may contain instructions 215 (e.g., program logic), the instructions 215 being executable by the processor 213 to perform various functions of the smart vehicle 200, including those described above. The memory 214 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the travel system 202, the sensing system 204, the control system 206, and the peripheral devices 208.
In addition to instructions 215, memory 214 may store data, such as a volume of sensor data collected by sensor system 204 during travel, such as may include image data captured by camera 230 within sensor system 204 and point cloud data collected by radar 226, among others. In some embodiments, the memory 214 may also store, for example, road maps, route information, the location, direction, speed, and other such vehicle data of the vehicle, and other information, among others. Such information may be used by the wireless communication system 246 or the computer system 212, etc. in the smart vehicle 200 during travel of the smart vehicle 200.
The user interface 216 is used for providing information to or receiving information from a user of the smart vehicle 200. Optionally, the user interface 216 may include one or more input/output devices within the collection of peripheral devices 208, such as the wireless communication system 246, the in-vehicle computer 248, the microphone 250, and the speaker 252.
Alternatively, one or more of these components described above may be installed or associated separately from the smart vehicle 200. For example, the memory 214 may exist partially or completely separate from the smart vehicle 200. The above components may be communicatively coupled together in a wired and/or wireless manner.
In summary, the smart vehicle 200 may be a car, a truck, a motorcycle, a bus, a boat, a drone, an airplane, a helicopter, a lawn mower, an amusement ride, a playground vehicle, construction equipment, a trolley, a golf cart, a train, etc., which is not limited in this embodiment.
It is understood that the functional block diagram of the smart vehicle in fig. 2 is only an exemplary implementation manner in the embodiment of the present application, and the smart vehicle in the embodiment of the present application includes, but is not limited to, the above structure.
Referring to fig. 3a, fig. 3a is a schematic diagram of a system architecture of a scene processing method according to an embodiment of the present application, and a technical solution according to the embodiment of the present application may be implemented in the system architecture illustrated in fig. 3a or a similar system architecture. As shown in fig. 3a, the system architecture may include a computing device 100 and a plurality of smart vehicles, and in particular may include smart vehicles 200a, 200b, and 200 c. The computing device 100 and the smart vehicles 200a, 200b, and 200c may be connected to each other through a wired or Wireless network (e.g., Wireless-Fidelity (WiFi), bluetooth, and mobile network). Among them, the smart vehicles 200a, 200b, and 200c may each have mounted therein a plurality of sensors, such as a camera, a laser radar, a millimeter wave radar, a temperature sensor, and a humidity sensor, and the like. During driving, the smart vehicles 200a, 200b, and 200c may collect data (which may include images, point clouds, temperature, humidity, etc.) of the surroundings through a plurality of sensors inside the vehicles, and transmit the data to the computing device 100 in a wired or wireless manner. The computing device 100 may construct a large number of simulated scenes based on the large number of sensor data, generate and fuse AR images corresponding to each scene to obtain a corresponding augmented reality scene, and finally display the resulting augmented reality scene.
Optionally, referring to fig. 3b, fig. 3b is a schematic diagram of a system architecture of another scene processing method provided in the embodiment of the present application, and the technical solution of the embodiment of the present application may be specifically implemented in the system architecture illustrated in fig. 3b or a similar system architecture. As shown in fig. 3b, the system architecture may include a server 300 in addition to the computing device 100 and the smart vehicles 200a, 200b, and 200c described above. Accordingly, data acquired by the intelligent vehicles 200a, 200b, and 200c during the driving process may be uploaded to the server 300 through a network, and the subsequent computing device 100 may acquire corresponding sensor data from the server through the network, and so on, which is not described herein again. Therefore, sharing of sensor data can be achieved more conveniently, different testing personnel can acquire data collected by the intelligent vehicles 200a, 200b and 200c from the server side through the network according to actual needs, and testing efficiency is improved.
As described above, after acquiring the data collected by the multiple sensors in the smart vehicles 200a, 200b, and 200c, the computing device 100 may construct a large number of simulation scenes based on that sensor data, generate and fuse the AR images corresponding to the respective scenes to obtain the corresponding augmented reality scenes (i.e., the AR effects in the various scenes), and finally display the obtained augmented reality scenes. In this way, the computing device 100 obtains the AR effect in various different scenes efficiently and intuitively through a pure software simulation method, without depending on HUD hardware devices, which greatly reduces the cost of demonstrating the AR function; moreover, the AR function of the HUD can be comprehensively and efficiently tested according to the AR effects in these different scenes, so that the AR function is continuously optimized and the user experience is guaranteed. In addition, the time, manpower and material resources consumed by testers traveling for real-vehicle testing are greatly saved, and the test cost is greatly reduced. Optionally, the testers can drive the smart vehicles 200a, 200b and 200c in driving environments with various terrains, road conditions, weather and so on to collect data of various scenes, thereby providing a large amount of reproducible scene support for subsequent AR function simulation tests of the HUD and simulation tests of the AR imaging effect under the influence of shake or distortion factors, and greatly improving the reliability of the test results.
Alternatively, as described above, the server 300 receives sensor data uploaded by the smart vehicles 200a, 200b, and 200c, where the data may be raw data collected by the sensors, data preprocessed (e.g., filtered, fused, etc.) by the sensors, and the like. The server 300 may construct a large number of simulation scenes by the method in the embodiment of the present application based on the large number of sensor data, generate and fuse AR images corresponding to the respective scenes, so as to obtain corresponding augmented reality scenes. Subsequently, the server 300 sends the obtained augmented reality scene to the computing device 100 through a message, so that the computing device 100 displays the augmented reality scene through a display screen. Optionally, the computing device 100 may also be a display screen or the like within the smart vehicles 200a, 200b, and 200 c.
In addition, in some special scenarios, such as a real-time live-broadcast scenario, during the driving of the anchor's vehicle 200a, data collected by the on-board sensors (which may also include data collected by a live-broadcast device on the vehicle (not shown in the figure), such as video data of the surrounding environment collected by the anchor's mobile phone) may be uploaded to the server 300 in real time. Thus, the server 300 can receive the data acquired in real time, construct a simulation scene in real time based on the data to restore the environment around the anchor, generate an AR image corresponding to the simulation scene in real time and fuse the two to obtain a corresponding augmented reality scene, and send the obtained augmented reality scene to the live-broadcast device, so that the display screen of the live-broadcast device displays the augmented reality scene. The audience can then watch the real-time augmented reality scene (such as a travel scene shot by the anchor together with AR icons such as the weather, scenery identification and scenery introduction fused by the server) through the webcast room using a mobile phone or tablet in hand. As described above, the scene data used by the server 300 for simulation may be data collected in advance or real-time data, which is not specifically limited in this embodiment of the present application.
Optionally, please refer to fig. 3c, where fig. 3c is a functional block diagram of a computing device according to an embodiment of the present application. As shown in fig. 3c, the computing device 100 may include a human eye pose simulation module 101, a HUD parameter simulation module 102, a sensor data module 103, a data fusion module 104, an AR function module 105, a rendering engine module 106, a HUD virtual module 107, a scene generation module 108, and a display module 109.
The human eye posture simulation module 101 may be used to simulate the eye posture of the driver during driving, including the shaking of the driver's eye position when the vehicle bumps, and the like.
The HUD parameter simulation module 102 may be configured to set hardware parameters related to the HUD. It should be appreciated that, as described above, the hardware of the HUD itself or the curvature of the different windshields are prone to distortion of the AR image, and therefore, various hardware parameters can be parameterized and simulated by the HUD parameter simulation module 102, thereby providing a large number of comprehensive input parameters for subsequent simulation tests. For example, parameters such as windshield curvature, eye box position, HUD virtual image plane size, and HUD installation position may be included, which is not specifically limited in the embodiment of the present application.
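The parameters listed above can be grouped into a single configuration object consumed by the simulation. The sketch below shows one possible parameterization; the field names and default numeric values are placeholders, not values prescribed by this application.

```python
# Minimal sketch (assumed defaults): a parameter set for the virtual HUD,
# mirroring the hardware quantities that the HUD parameter simulation module 102
# exposes to the rest of the pipeline.
from dataclasses import dataclass

@dataclass
class HudParameterSet:
    windshield_curvature: float = 0.004        # 1/m, assumed curvature of the glass
    eye_box_center: tuple = (0.0, 1.2, 0.6)    # m, assumed optimal eye position (x, y, z)
    virtual_image_distance: float = 2.5        # m, distance l of the virtual image plane
    virtual_image_width: float = 1.2           # m, width w of the virtual image plane
    virtual_image_height: float = 0.4          # m, height h of the virtual image plane
    transparency_alpha: float = 0.7            # alpha blending factor of the virtual image plane
    mounting_offset: tuple = (0.0, 0.0, 0.0)   # m, deviation of the HUD installation position

# A distortion factor or shake amount can be simulated simply by perturbing
# fields of this object (e.g. mounting_offset or eye_box_center) before a run.
ideal = HudParameterSet()
perturbed = HudParameterSet(mounting_offset=(0.01, -0.005, 0.0))
```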
The sensor data module 103 may be configured to acquire and store a large amount of data collected by a plurality of sensors (e.g., the data collected by the plurality of sensors in the smart vehicles 200a, 200b, and 200c during driving). Alternatively, the sensor data may be data set in virtual sensors constructed by an existing simulation system (or simulation software), and the like. Accordingly, in some possible embodiments, the system architecture may also include only a computing device, which is not specifically limited in this application. In that case, a tester is not required to mount sensors in a vehicle and drive the vehicle on actual roads for data acquisition, which further saves manpower, material resources and time, greatly improves test efficiency and reduces test cost; moreover, the virtual sensors do not depend on an actual vehicle or actual scenes, so various different scenes can be constructed more comprehensively and efficiently through simple data settings and changes. It is understood that a tester may choose real-vehicle collected data or set virtual sensor data according to the actual conditions and test requirements, and the like, which is not specifically limited in this embodiment of the present application.
The data fusion module 104 may be configured to fuse data in the sensor module 103, for example, fuse image data captured by a camera with point cloud data acquired by a laser radar, so as to provide more comprehensive and effective data support for a scene required by a subsequent construction test, and improve the quality of the scene. Optionally, as shown in fig. 3c, the data fusion module 104 may further receive related data in the human eye posture simulation module 101 and the HUD parameter simulation module 102, and further may visually display the influence of various hardware parameter changes and human eye posture changes on the AR imaging effect in a virtual image plane projected by the subsequent virtual HUD device.
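One common way to fuse camera images with laser radar point clouds, consistent with the camera model introduced later in this application, is to project each point into the image plane using the camera intrinsics and the lidar-to-camera extrinsics and then associate pixels with points. The sketch below illustrates that projection under assumed calibration values; it is not the specific fusion algorithm of the data fusion module 104.

```python
# Minimal sketch (assumed calibration): project lidar points into the camera
# image so image pixels and point-cloud geometry can be associated.
import numpy as np

K = np.array([[1000.0,    0.0, 640.0],   # fx,  0, cx  -- assumed intrinsics
              [   0.0, 1000.0, 360.0],   #  0, fy, cy
              [   0.0,    0.0,   1.0]])
T_cam_lidar = np.eye(4)                  # assumed lidar-to-camera extrinsics
T_cam_lidar[:3, 3] = [0.0, -0.2, -0.1]   # small assumed translation offset

def project_points(points_lidar: np.ndarray) -> np.ndarray:
    """points_lidar: (N, 3) points, axes assumed aligned with the camera (z forward).
    Returns (M, 2) pixel coordinates for the points in front of the camera."""
    homo = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    cam = (T_cam_lidar @ homo.T).T[:, :3]   # lidar frame -> camera frame
    cam = cam[cam[:, 2] > 0]                # keep points in front of the camera
    uv = (K @ cam.T).T                      # pinhole projection
    return uv[:, :2] / uv[:, 2:3]           # perspective divide -> (u, v)

pixels = project_points(np.array([[0.5, -0.2, 10.0], [-1.0, 0.3, 8.0]]))
```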
The AR function module 105 may include a corresponding series of models, software algorithms, and the like, and may be configured to generate AR images matched with various scenes, where each AR image may include one or more AR icons, such as an AR navigation direction arrow, the driving speed, the vehicle battery level, and the like.
The rendering engine module 106 may be configured to render the corresponding one or more AR icons into the virtual image plane projected by the HUD.
The HUD virtual module 107 may be configured to perform simulation on the HUD based on the relevant parameters in the HUD parameter simulation module 102, and construct a corresponding HUD virtual image plane, and so on.
The scene generation module 108 may be configured to directly construct a large number of scenes based on the sensor data in the sensor data module 103, or construct a large number of scenes based on data obtained by fusing various sensor data in the data fusion module 104, and so on. Further, corresponding HUD virtual image planes can also be constructed in the large number of scenes by the HUD virtual module 107 described above.
The display module 109 may be configured to display an AR image generated based on the current scene, the model, the corresponding hardware parameters, and the human eye posture in the HUD virtual image plane of the current scene, so that a tester may visually grasp an AR imaging effect under the current condition, and further analyze a problem that may exist in the model or the algorithm according to the AR imaging effect, so as to optimize an AR function. And the influence of various hardware parameters and human eye posture changes on the AR imaging effect can be analyzed according to the AR imaging effect, so that effective support is provided for the development and the test of the subsequent distortion removal function and the anti-shake function, and the use experience of a user is continuously improved.
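To make the interplay of the modules in fig. 3c concrete, the sketch below wires hypothetical stand-ins for them into a single simulation pass: fused sensor data becomes a scene, a HUD virtual image plane is built from the HUD parameters and eye pose, an AR image is generated and rendered onto the plane, and the fused result is displayed. The function bodies are toy placeholders, not the modules' actual interfaces.

```python
# Minimal sketch (hypothetical interfaces): one pass through the fig. 3c pipeline.
def fuse(sensor_frames):                        # data fusion module 104
    return {"elements": [e for f in sensor_frames for e in f["elements"]]}

def build_scene(fused):                         # scene generation module 108
    return {"elements": fused["elements"]}

def build_hud_plane(hud_params, eye_pose):      # HUD virtual module 107 (uses modules 101/102)
    return {"params": hud_params, "eye": eye_pose, "icons": []}

def generate_ar_image(scene):                   # AR function module 105
    return ["lead_vehicle_warning"] if "vehicle" in scene["elements"] else []

def render(ar_image, plane):                    # rendering engine module 106
    plane["icons"].extend(ar_image)

def display(scene, plane):                      # display module 109
    print("scene:", scene["elements"], "| HUD plane icons:", plane["icons"])

def run_simulation_pass(sensor_frames, hud_params, eye_pose):
    scene = build_scene(fuse(sensor_frames))
    plane = build_hud_plane(hud_params, eye_pose)
    render(generate_ar_image(scene), plane)
    display(scene, plane)

run_simulation_pass([{"elements": ["lane", "vehicle"]}],
                    hud_params={"l": 2.5}, eye_pose=(0.0, 1.2, 0.6))
```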
It is understood that the functional block diagram of the computing device in fig. 3c is only an exemplary implementation manner in the embodiment of the present application, and the computing device in the embodiment of the present application includes, but is not limited to, the above structure.
In summary, the computing device 100 may be a smart phone, a smart wearable device, a tablet computer, a notebook computer, a desktop computer, a vehicle machine, and the like, which have the above functions, and this is not particularly limited in this embodiment of the application. The smart vehicles 200a, 200b, and 200c may be a family car, a coach, a bus, a taxi, a motorcycle, a yacht, and the like, which have the above functions, and this embodiment of the present application is not particularly limited thereto. The server 300 may be a computer, a server, and the like having the above functions, the server 300 may be one server, or may be a server cluster formed by multiple servers, or a cloud computing service center, and the server 300 may provide background services for the computing device 100 and the intelligent vehicles 200a, 200b, and 200c, for example, a car networking service platform, and the like, which is not specifically limited in this embodiment of the application.
In order to facilitate understanding of the embodiments of the present application, the following describes, by way of example, application scenarios to which the scene processing method of the present application is applicable, which may include the following scenarios.
Scenario one: a scene is constructed based on data collected by vehicle-mounted sensors for a real environment, and a simulation test is performed on the HUD.
Referring to fig. 4, fig. 4 is a schematic view of an application scenario of a scene processing method according to an embodiment of the present application. As shown in fig. 4, the application scenario may include a computing device 100 and a smart vehicle 200 (a car is taken as an example in fig. 4) traveling on an actual road (e.g., the multi-lane highway common in the real world shown in fig. 4). Optionally, as shown in fig. 4, the application scenario may further include a plurality of other vehicles, such as a vehicle 1 (a bus is taken as an example in fig. 4), a vehicle 2 (a car is taken as an example in fig. 4), and a vehicle 3 (a car is taken as an example in fig. 4). The computing device 100 and the smart vehicle 200 may establish a communication connection in a wired or wireless manner, and the smart vehicle 200 may be any one of the smart vehicles 200a, 200b, and 200c described above and may have a built-in sensing system including a plurality of sensors (e.g., a camera, a laser radar, a millimeter-wave radar, and the like). The smart vehicle 200 may collect data of the surrounding environment through the plurality of sensors of the vehicle while traveling on the road and transmit the data to the computing device 100. Then, as shown in fig. 4, the computing device 100 may construct a large number of scenes based on the data collected by the plurality of sensors in the smart vehicle 200 through the scene processing method provided in the embodiments of the present application. Obviously, since the data are acquired while the real vehicle drives on an actual road, these scenes are all live-action simulation scenes. Further, the computing device 100 can, in a software simulation manner and based on preset HUD hardware parameters, construct a virtual HUD, and generate and fuse corresponding AR images in these live-action simulation scenes, so as to obtain and display a large number of augmented reality scenes efficiently and conveniently. In this way, a user can intuitively experience the AR functions of a HUD product in various scenes without driving a real vehicle on the road, which provides convenience for the user. Moreover, the AR function of the HUD can also be tested based on this large number of augmented reality scenes. As above, the embodiments of the present application can comprehensively and efficiently test the AR function of the HUD in various different scenes without depending on HUD hardware devices, so as to continuously optimize the AR function, guarantee the user experience, and greatly reduce the test cost.
As described above, the computing device 100 may be a smart phone, a smart wearable device, a tablet computer, a notebook computer, a desktop computer, a car machine, and the like, which have the above functions, and this embodiment of the present application is not particularly limited thereto. The smart vehicle 200 may be a home car, a minibus, a bus, a taxi, a motorcycle, a yacht, and the like, which have the above functions, and this is not particularly limited in this embodiment of the application.
Scenario two: a scene is constructed based on data set in virtual sensors, and a simulation test is performed on the HUD.
Referring to fig. 5, fig. 5 is a schematic view of an application scenario of another scene processing method according to an embodiment of the present application. As shown in fig. 5, the application scenario may include a computing device 100. As shown in fig. 5, the computing device 100 may first construct a plurality of virtual sensors (which may include, for example, a virtual camera, a virtual laser radar, a virtual millimeter-wave radar, and the like) through an existing simulation system (or simulation software), and then set the data of each virtual sensor, for example, set virtual objects such as a virtual vehicle and a virtual road in the virtual camera. The computing device 100 may then construct a plurality of scenes based on the data of the plurality of virtual sensors; obviously, these scenes are accordingly all virtual simulation scenes. As shown in fig. 5, a virtual simulation scene may be similar to a game scene from a first-person perspective, and the like, which is not specifically limited in this embodiment of the present application. Further, the computing device 100 may construct a virtual HUD based on preset HUD hardware parameters in a software simulation manner, so as to generate and fuse corresponding AR images in these virtual simulation scenes, and thereby obtain and display a large number of augmented reality scenes efficiently and conveniently, which is not repeated here. Further, compared with the scheme shown in fig. 4 in which a real vehicle is driven to acquire sensor data, the application scenario shown in fig. 5 further saves the time, manpower and material resources consumed by personnel traveling to acquire data, which further reduces the costs of AR function demonstration and AR function testing and improves test efficiency.
As described above, the computing device 100 may be a smart phone, a smart wearable device, a tablet computer, a notebook computer, a desktop computer, a car machine, and the like, which have the above functions, and this embodiment of the present application is not particularly limited thereto.
Scenario three: live broadcasting is performed based on data collected in real time.
Referring to fig. 4, in some real-time live-broadcast scenarios, for example, AR game live broadcasts or promotional experience live-broadcast activities for related products, data collected by the on-board sensors during the driving of the anchor's vehicle 200 (which may also include data collected by a live-broadcast device (not shown in the figure), for example, video data of the surrounding environment collected by the anchor's mobile phone) may be uploaded to the computing device 100 in real time. In this way, the computing device 100 may receive the data acquired in real time, construct a simulation scene in real time based on the data to restore the environment around the anchor, generate an AR image corresponding to the simulation scene in real time and fuse the two to obtain a corresponding augmented reality scene, and send the obtained augmented reality scene to the live-broadcast device, so that the display screen of the live-broadcast device displays the augmented reality scene. The audience can then watch the real-time augmented reality scene (for example, a travel scene shot by the anchor together with AR icons such as the weather, scenery identification and scenery introduction fused by the server) through the webcast room using a mobile phone or tablet in hand. The computing device 100 may be a server having the above functions, and may be one server, a server cluster formed by multiple servers, or a cloud computing service center, and the like, which is not specifically limited in this embodiment of the application.
It can be understood that the above application scenarios are only some exemplary implementations in the embodiments of the present application, and the application scenarios in the embodiments of the present application include, but are not limited to, the above application scenarios, and other scenarios and examples will not be listed and described in detail.
Referring to fig. 6a, fig. 6a is a schematic flowchart illustrating a scene processing method according to an embodiment of the present disclosure. The method is applicable to the system architecture described in fig. 3a and 3b and the application scenario described in fig. 4 and 5, wherein the computing device is capable of supporting and executing the method flow steps S801-S804 shown in fig. 6 a. As will be described below with reference to fig. 6a from the side of the computing device, the method may comprise the following steps S801-S804.
Step S801, acquiring a first scene outside a vehicle; the first vehicle exterior scene is a two-dimensional scene image or a three-dimensional scene model.
Specifically, the first off-vehicle scene acquired by the computing device may be a two-dimensional scene image, a three-dimensional scene model, or the like, which is not specifically limited in this embodiment of the present application. The scene elements may include, for example, lanes, vehicles, plants, pedestrians, animals, traffic signals, and the like, which is not specifically limited in the embodiment of the present application.
Step S802, a first AR image corresponding to the first outside-vehicle scene is acquired.
Specifically, the computing device may identify a first off-vehicle scene based on the first off-vehicle scene and a preset model, and acquire a first AR image corresponding to the first off-vehicle scene.
Optionally, each first AR image may include one or more AR icons (e.g., a warning icon or text information for paying attention to the vehicle in front, the sidewalk, and the like, and further, for example, related attraction introduction and weather information, and the like, and may further include related AR navigation identifiers, such as a straight arrow and a left/right turning arrow, and the like, which is not specifically limited in this embodiment of the present application). Optionally, each first off-board scene and each AR icon may have a corresponding one or more attributes, such as weather, season, geographic location, road conditions, terrain, traffic signals, and so forth. The attribute of each AR icon included in the first AR image may be the same as (or match) the attribute of the first off-board scene. For example, the attributes of the first off-board scene include school and road driving, and then corresponding AR icons such as school identification and slow-down may be obtained based on the first off-board scene and a preset model.
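The attribute matching described above can be pictured as a lookup from scene attributes to candidate AR icons. The sketch below uses a hypothetical attribute-to-icon table built around the school example from this paragraph; it only illustrates the matching idea and is not the preset model itself, which may be a trained neural network as described later.

```python
# Minimal sketch (hypothetical rules): select AR icons whose required attributes
# all appear among the attributes of the first off-vehicle scene.
ICON_LIBRARY = {
    "school_zone_sign":   {"school"},
    "slow_down_warning":  {"school", "pedestrian"},
    "straight_arrow":     {"road_driving", "navigation_straight"},
    "pedestrian_warning": {"pedestrian"},
}

def match_icons(scene_attributes: set) -> list:
    """Return the icons all of whose required attributes appear in the scene."""
    return [name for name, required in ICON_LIBRARY.items()
            if required <= scene_attributes]

# A scene tagged as a school area on a road with pedestrians yields the school,
# slow-down and pedestrian icons.
print(match_icons({"school", "road_driving", "pedestrian"}))
```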
Step S803, fusing the first off-vehicle scene and the first AR image to obtain a second scene.
Specifically, the computing device may fuse the first off-vehicle scene with its corresponding first AR image to obtain the corresponding second scene, where the first off-vehicle scene is the real information in the second scene and the first AR image is the virtual information in the second scene.
Step S804, enabling the display screen to display the second scene.
In particular, the display screen may be a display screen of the computing device, and the computing device may display the second scene through its display after obtaining the second scene. Optionally, the display screen may also be a display screen of another device, and after obtaining the second scene, the computing device may send the second scene to the other device, so that the other device displays the second scene through the display screen thereof, and so on, which is not specifically limited in this embodiment of the application. Optionally, reference may also be made to the description of the third corresponding embodiment in fig. 3b and the scenario, which is not described herein again.
Therefore, the embodiment of the application can construct a large number of two-dimensional or three-dimensional simulation scenes through the existing computing equipment (such as a notebook computer, a desktop computer and the like) based on a software method; and then, based on the simulation scene and the developed model, generating an AR image corresponding to the simulation scene, and fusing the simulation scene and the corresponding AR image, thereby rapidly and efficiently obtaining a large amount of augmented reality scenes comprising real information and virtual information.
Further, it should be noted that, in order to verify the effect of a series of software algorithm functions related to the AR-HUD, such as the AR map, the AR navigation function, and points of interest (POI), under real road conditions, and to ensure that a user obtains correct AR navigation or AR warning information when driving in various scenes, the preset model and the series of software algorithms often need to be tested and optimized many times. In the prior art, an automobile equipped with the AR-HUD device generally has to be driven in the real world to perform repeated function tests over multiple road conditions, multiple weather conditions, multiple regions and multiple scenes, i.e., real-vehicle tests. During a real-vehicle test, if the AR function is found to fail (for example, an AR navigation guidance arrow indicating a right turn or a U-turn appears on a road that requires going straight), then whether the problem is caused by unsynchronized data from the sensors in the automobile, incorrect AR event response logic, or a hardware device failure needs to be analyzed and determined offline after the test is finished. This process is complex and tedious, and real-vehicle testing consumes a large amount of time, manpower and material resources at high cost. In addition, after the AR function is improved, it needs to be sufficiently tested again to confirm that the problem is solved; however, since actual road conditions, weather and the like are constantly changing, it is difficult to reproduce the previous failure scene to test the AR function again, which greatly reduces the reliability of the test results and the test efficiency. Compared with the prior art, the embodiments of the present application do not depend on real scenes or HUD hardware devices, can display various augmented reality scenes quickly and efficiently, and can also continuously optimize and improve the AR function of the HUD based on the intuitive AR effects in the various augmented reality scenes obtained by simulation, thereby ensuring the user experience.
It is understood that different methods or steps can be executed by different execution bodies, and another scene processing method provided by the embodiment of the present application is described below by taking the computing device 100 as an example. Referring to fig. 6b, fig. 6b is a schematic flowchart illustrating another scene processing method according to an embodiment of the present disclosure. The method is applicable to the system architecture described in fig. 3a and 3b and the application scenario described in fig. 4 and 5, where the computing device may be configured to support and execute the method flow steps S901-S903 shown in fig. 6 b. As will be described below with reference to fig. 6b from the side of the computing device, the method may comprise the following steps S901-S903.
Step S901, acquiring a target scene set, wherein the target scene set comprises X first vehicle-exterior scenes; each first off-board scene includes one or more scene elements.
In particular, a computing device obtains a set of target scenes that may include X first off-board scenes, each of which may include one or more scene elements. Optionally, step S901 may refer to step S801 in the embodiment corresponding to fig. 6a, which is not described herein again. Wherein X is an integer greater than or equal to 1.
Alternatively, the computing device may acquire data collected by N first sensors, which may be vehicle-mounted sensors (e.g., a camera, a laser radar, a millimeter-wave radar, etc.) disposed on a target vehicle (e.g., any one of the above-described smart vehicles 200a, 200b, and 200c). The data collected by the N first sensors may be data collected for the surroundings of the target vehicle during the travel of the target vehicle, where N is an integer greater than or equal to 1. The computing device may then build the corresponding K first off-vehicle scenes based on the data collected by the N first sensors and by means of an existing rendering engine (e.g., the Open Graphics Library (OpenGL), the Unreal Engine, the Unity rendering engine, etc.). It is understood that the K first off-vehicle scenes may be live-action simulation scenes, where K is an integer greater than or equal to 1 and less than or equal to X.
Optionally, the computing device may further acquire data acquired by M second sensors, where the M second sensors may be virtual sensors constructed by a preset simulation system. The data collected by the M second sensors may be data set by the preset simulation system. Wherein M is an integer greater than or equal to 1. Then, correspondingly, the computing device may construct, based on the data collected by the M second sensors, the corresponding P first off-board scenes by an existing rendering engine. It is understood that, correspondingly, the P first off-board scenes may be virtual simulation scenes (similar to virtual driving game scenes). P is an integer greater than or equal to 1 and less than or equal to X.
Optionally, please refer to fig. 7, which is a schematic flowchart of another scene processing method according to an embodiment of the present application. Step S801 may refer to the method flow shown in fig. 7. As shown in step S11 of fig. 7, the computing device may first use an existing simulation system to initialize the basic scene settings, which may include, for example, basic unit information such as traffic, maps, weather, vehicles, and various sensors. Then, according to the actual test conditions and test requirements, the tester may choose to generate corresponding live-action simulation scenes (for example, the above-mentioned K first off-vehicle scenes) by using sensor data acquired by real-vehicle driving, or may choose to generate corresponding virtual simulation scenes (for example, the above-mentioned P first off-vehicle scenes) by setting the data in the virtual sensors.
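The choice in fig. 7 between real-vehicle data and virtual-sensor data can be expressed as a single branch in the scene-construction entry point. The sketch below is a schematic rendering of that flow with placeholder reconstruction functions; it does not reproduce the actual implementation of steps S11 to S13.

```python
# Minimal sketch (placeholder functions): fig. 7 flow -- initialize the basic
# units, then build either a live-action scene (real-vehicle data) or a virtual
# scene (virtual-sensor data).
def initialize_basic_units():
    return {"traffic": {}, "map": {}, "weather": "clear", "vehicles": [], "sensors": []}

def reconstruct_from_real_data(frames, units):         # SFM/SLAM-style reconstruction
    units["source"] = "live-action simulation scene"
    units["geometry"] = ["mesh_from_images_and_point_clouds"]
    return units

def instantiate_from_virtual_sensors(settings, units):  # game-style instantiation
    units["source"] = "virtual simulation scene"
    units["geometry"] = settings.get("objects", [])
    return units

def build_first_scene(use_real_vehicle_data, data):
    units = initialize_basic_units()                     # step S11
    if use_real_vehicle_data:                            # real-vehicle branch
        return reconstruct_from_real_data(data, units)
    return instantiate_from_virtual_sensors(data, units) # virtual-sensor branch

scene = build_first_scene(False, {"objects": ["virtual_vehicle", "virtual_road"]})
```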
For example, as shown in step S12a and step S13a in fig. 7, if real-vehicle data is not selected, the rendering engine completes the instantiation of each item of basic unit information through the data input of the virtual sensors, in a manner similar to constructing a virtual driving game scene; that is, the whole scene is generated by simulation. Optionally, please refer to fig. 8, which is a schematic diagram of a scene provided in the embodiment of the present application. As shown in fig. 8, the first off-vehicle scene is a virtual simulation scene (a two-dimensional scene image is taken as an example in fig. 8), which may include scene elements such as a vehicle, a traffic light (i.e., a traffic signal), a sidewalk, a plurality of plants, a straight road, and the like, and details are not repeated here. Optionally, the first off-vehicle scene may further include, from the first-person perspective of the driver, elements (not shown in fig. 8) such as the steering wheel, the windshield, and the bonnet of the vehicle, which is not particularly limited in this embodiment of the application. Optionally, in some possible embodiments, a three-dimensional scene model may also be constructed with an existing rendering engine based on the data input of the virtual sensors, and the AR function simulation test of the HUD may subsequently be performed in the three-dimensional scene model to improve the observability and accuracy of the test, and the like, which is not specifically limited in this embodiment of the present application.
For example, as shown in steps S12b and S13b in fig. 7, if real-vehicle data is selected, the computing device may read the image and point cloud data of the corresponding sensors through the rendering engine, perform offline/online three-dimensional sparse/dense scene reconstruction by using structure from motion (SFM)/simultaneous localization and mapping (SLAM) algorithms, and instantiate the basic units by using data of sensors such as the laser radar and the GPS/IMU; the whole scene is then generated by live-action synthesis, and the generated scene may be imported into the rendering engine in storage forms such as object (.obj) files and meshes. Optionally, and correspondingly, a scene generated from data acquired by the real sensors may likewise be a two-dimensional scene image as described above, or a three-dimensional scene model, and so on, which is not described herein again.
Optionally, referring to fig. 9, fig. 9 is a schematic diagram of a camera coordinate system and a world coordinate system according to an embodiment of the present application. As shown in fig. 9, the spatial three-dimensional (3D) coordinates { x, y, z } of the objects in the scene and the 2D coordinates { u, v } of the image captured by the real camera sensor should satisfy the relationship:
$$ z_{c} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} = \begin{bmatrix} f_{x} & 0 & c_{x} \\ 0 & f_{y} & c_{y} \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & t \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} $$
where $f_{x}$ and $f_{y}$ are the focal lengths, $c_{x}$ and $c_{y}$ are the offsets of the image origin $O_{o}$ from the center $O_{c}$, $K$ is the camera intrinsic matrix, $[R \; t]$ is the camera extrinsic matrix (the rotation $R$ and translation $t$ from the world coordinate system to the camera coordinate system), and $z_{c}$ is the depth of the point in the camera coordinate system. In this way, the relation from the real 3D position of an object to the 2D coordinates centered on the optical center of the camera is established, the data captured by the camera is truly mapped to the actual coordinates of the object through the camera intrinsic and extrinsic parameters, and a single-frame scene can then be constructed immediately. For example, please refer to fig. 10, which is a schematic diagram of scene reconstruction according to an embodiment of the present disclosure. As shown in fig. 10, local scene reconstruction may be performed from a single camera image or the point cloud data of a single scene, so as to obtain a single-frame scene. In addition, still referring to fig. 10, multi-modal data collected by the sensors (e.g., multiple camera images of different scenes captured at different angles, and point cloud data collected by the laser radar for different scenes, etc.) may be fused by a multi-frame image reconstruction technique, so as to reconstruct an entire global scene, and the like; the AR function of the HUD can then be tested in the global scene, which is not specifically limited in the embodiment of the present application. In this way, a large number of diverse virtual simulation scenes and live-action simulation scenes can be generated through the above software-based steps, thereby providing basic environment support for the AR imaging effect test of the HUD.
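Under the pinhole relation above, a single-frame local scene can be sketched by back-projecting image pixels with known depth (for example, depth taken from the point cloud) into 3D camera coordinates and then into the world frame. The sketch below does this under assumed intrinsics and an assumed camera pose; it stands in for, rather than reproduces, the SFM/SLAM reconstruction mentioned above.

```python
# Minimal sketch (assumed calibration and pose): back-project pixels with depth
# into world coordinates to build a single-frame point scene.
import numpy as np

K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                   # assumed camera orientation in the world frame
t = np.array([0.0, 0.0, 1.4])   # assumed camera position (e.g. 1.4 m above the ground)

def backproject(u, v, depth):
    """Pixel (u, v) with depth z_c -> 3D point in the world coordinate system."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray in camera coordinates
    p_cam = ray * depth                              # scale by the depth z_c
    return R @ p_cam + t                             # camera frame -> world frame

# Reconstruct a tiny single-frame scene from three pixels and their depths.
frame_points = [backproject(u, v, d) for (u, v, d) in [(640, 360, 12.0),
                                                       (700, 380, 9.5),
                                                       (580, 400, 7.2)]]
```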
Step S902, fusing X first AR images corresponding to the X first off-board scenes in the X first off-board scenes to generate corresponding X second scenes; each first AR image includes one or more AR icons.
Specifically, step S902 may refer to step S802 and step S803 in the embodiment corresponding to fig. 6a, which is not described herein again.
Optionally, the preset model may be a neural network model, and the neural network model may be trained according to a plurality of scenes, a plurality of AR icons, and the different degrees of matching between the scenes and the AR icons. The neural network model can be obtained by the following exemplary training method: a plurality of scenes are taken as training samples; one or more AR icons matched with each scene are superimposed on the scene to obtain a plurality of corresponding augmented reality scenes, which are taken as the targets; and training is performed through a deep learning algorithm so that the results approach the targets, thereby obtaining the corresponding neural network model. A scene in the training samples may be a picture captured by a camera, or a point cloud image obtained by laser radar scanning, and the like, which is not specifically limited in this embodiment of the application.
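A hedged sketch of such a training setup is given below, treating the preset model as a multi-label classifier from a scene image to the set of matching AR icons; the network shape, loss and dummy data are generic placeholder choices for illustration, not the architecture claimed by this application.

```python
# Minimal sketch (placeholder architecture): train a model that maps a scene
# image to the AR icons that should be superimposed on it (multi-label output).
import torch
import torch.nn as nn

NUM_ICONS = 16  # assumed size of the AR icon library

model = nn.Sequential(                      # toy convolutional scene encoder
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, NUM_ICONS),
)
loss_fn = nn.BCEWithLogitsLoss()            # one sigmoid per icon (multi-label)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(scene_batch, icon_targets):
    """scene_batch: (B, 3, H, W) images; icon_targets: (B, NUM_ICONS) 0/1 labels."""
    optimizer.zero_grad()
    logits = model(scene_batch)
    loss = loss_fn(logits, icon_targets)
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch: 4 scene images and their matched-icon labels.
loss = train_step(torch.randn(4, 3, 128, 128),
                  torch.randint(0, 2, (4, NUM_ICONS)).float())
```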
Alternatively, referring to step S14 shown in fig. 7, after the scene construction is completed, the computing device may first perform simulation on the HUD based on the preset HUD parameter set and construct X corresponding HUD virtual image planes in the X first off-vehicle scenes, where the construction of the HUD virtual image planes may also be implemented using the existing rendering engine technologies mentioned above. The HUD parameter set may include at least one of windshield curvature, eye box position, human eye observation position, HUD installation position, HUD virtual image plane size, and the like. For example, please refer to fig. 11, which is a schematic diagram of a HUD virtual image plane according to an embodiment of the present disclosure. Taking the case in which the first off-vehicle scene is a virtual simulation scene as an example in fig. 11, the HUD virtual image plane may be a corresponding region in the first off-vehicle scene; fig. 11 shows the HUD simulation effect from the driver's perspective, which is equivalent to building, in front of the "driver's" field of view, a "screen" whose size matches the real HUD hardware parameters. The subsequent AR function effect (i.e., the AR image) is rendered and drawn on this "screen", where the rendering may be implemented in a manner including, but not limited to, off-screen rendering.
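Concretely, the HUD virtual image plane can be represented as a textured quad placed at distance l in front of the eye box, with width w, height h and transparency alpha. The sketch below computes its corner coordinates in the viewing frame under assumed parameter values, which is essentially what a rendering engine is asked to do when the "screen" is built.

```python
# Minimal sketch (assumed parameters): corners of the HUD virtual image plane,
# expressed in the driver's viewing frame (x right, y up, z forward).
import numpy as np

def hud_plane_corners(l=2.5, w=1.2, h=0.4, vertical_offset=-0.2):
    """Quad at distance l, width w, height h; offset slightly below eye level."""
    half_w, half_h = w / 2.0, h / 2.0
    return np.array([
        [-half_w, vertical_offset - half_h, l],   # bottom-left
        [ half_w, vertical_offset - half_h, l],   # bottom-right
        [ half_w, vertical_offset + half_h, l],   # top-right
        [-half_w, vertical_offset + half_h, l],   # top-left
    ])

corners = hud_plane_corners()
alpha = 0.7  # assumed transparency of the virtual image plane when blended over the scene
```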
It should be noted that, in an ideal state, that is, without considering HUD distortion, human eye position shake, or the change of the spatial position of the HUD caused by vehicle bumping, when the first off-vehicle scene starts to be rendered, the eye position of the driver should be within the eye box position calibrated for the HUD, that is, at the known optimal observation position, where the driver observes the object on the virtual image plane (i.e., the AR image) most clearly. Thus, after the first off-vehicle scene is constructed, a HUD virtual image plane with a distance of l, a height of h, a width of w, and a transparency of alpha may be constructed in the first off-vehicle scene, and this HUD virtual image plane can completely take over the simulation function of the HUD. In addition, a camera model placed at the eye box position can be used to simulate real human eye observation, and the above first off-vehicle scene and the AR imaging effect of the virtual HUD (e.g., the first AR image) can be imaged at that observation position.
Optionally, after the HUD virtual image plane is constructed, a corresponding image frame may be cropped according to the size of the HUD virtual image plane and placed in a memory buffer, so that the first AR image generated based on the preset model and corresponding to the first off-vehicle scene can be rendered onto the HUD virtual image plane, thereby generating the corresponding second scene; in this way, the AR imaging effect of the HUD under ideal conditions can be observed. Optionally, before the first AR image is rendered onto the HUD virtual image plane, certain pre-processing, such as cropping, scaling and rotation, may be performed on the first AR image, so that the first AR image is adapted to the first off-vehicle scene and the HUD virtual image plane.
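The cropping/scaling step and the blending of the AR image into the scene can be sketched as straightforward array operations; the image sizes, plane position and blending rule below are assumptions used only to show the shape of the computation, not the rendering path of an actual engine.

```python
# Minimal sketch (assumed sizes): fit the AR image to the HUD virtual image
# plane and alpha-blend it into the region of the scene image it covers.
import numpy as np

def fit_to_plane(ar_image: np.ndarray, plane_hw: tuple) -> np.ndarray:
    """Nearest-neighbour resize of the AR image to the virtual image plane size."""
    ph, pw = plane_hw
    ih, iw = ar_image.shape[:2]
    rows = np.arange(ph) * ih // ph
    cols = np.arange(pw) * iw // pw
    return ar_image[rows][:, cols]

def composite(scene: np.ndarray, ar_image: np.ndarray, top_left: tuple, alpha: float) -> np.ndarray:
    """Blend the fitted AR image over the scene at the plane's screen position."""
    y, x = top_left
    h, w = ar_image.shape[:2]
    out = scene.copy()
    region = out[y:y + h, x:x + w].astype(float)
    out[y:y + h, x:x + w] = (alpha * ar_image + (1 - alpha) * region).astype(scene.dtype)
    return out

scene = np.zeros((720, 1280, 3), dtype=np.uint8)     # placeholder first off-vehicle scene image
ar = np.full((200, 600, 3), 255, dtype=np.uint8)     # placeholder first AR image
second_scene = composite(scene, fit_to_plane(ar, (180, 640)),
                         top_left=(420, 320), alpha=0.7)
```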
For example, please refer to fig. 12a, fig. 12a is a schematic diagram of an AR imaging effect according to an embodiment of the present application. As shown in fig. 12a, the second scene may inherit a plurality of scene elements in the first off-vehicle scene, and furthermore, the second scene includes a HUD virtual image plane and a plurality of AR icons displayed on the HUD virtual image plane (for example, the current speed information and current battery level information shown in fig. 12a, warning information for a vehicle, a sidewalk and a traffic signal, a straight-ahead AR navigation guide arrow, and the like).
For example, please refer to fig. 12b, fig. 12b is a schematic diagram of another AR imaging effect according to an embodiment of the present application. As shown in fig. 12b, the second scene here is a real-scene simulation scene, which includes scene elements such as lanes, vehicles and plants, as well as the constructed HUD virtual image plane and a plurality of AR icons in the HUD virtual image plane (fig. 12b takes the target frame of a vehicle as an example, which may be used to remind the driver to pay attention to the vehicle in the adjacent lane, so as to ensure driving safety, and so on).
For example, please refer to fig. 12c, fig. 12c is a schematic diagram of another AR imaging effect according to an embodiment of the present application. As shown in fig. 12c, the second scene may be a scene constructed based on point cloud data, and may likewise include a plurality of scene elements, a HUD virtual image plane, AR icons, and the like.
Alternatively, the specific calculation formula involved in step S902 may be as follows:
the projection v' of the spatial position of the object in the camera model (i.e. human eye) can be calculated by using the following projection transformation relation:
v' = M_proj · M_view · M_model · v
wherein:

the projection (projection) matrix, for a symmetric viewing frustum, is the standard perspective projection

$$M_{proj}=\begin{pmatrix}\frac{near}{right} & 0 & 0 & 0\\ 0 & \frac{near}{top} & 0 & 0\\ 0 & 0 & -\frac{far+near}{far-near} & -\frac{2\,far\cdot near}{far-near}\\ 0 & 0 & -1 & 0\end{pmatrix}$$

wherein near is the near clipping plane, far is the far clipping plane, and top (with the corresponding right extent) is the top of the near-plane viewing window.
The view (view) matrix is:

$$M_{view}=\begin{pmatrix}R_{c} & t_{c}\\ 0 & 1\end{pmatrix}^{-1}$$

wherein $R_{c}$ and $t_{c}$ are the rotation and translation of the observation camera (the eye box camera) relative to the vehicle-body coordinate system.
the model (model) matrix is: mmodel=identity4
The camera field of view (field of view, Fov) is:

$$Fov = 2\arctan\!\left(\frac{L_{h}}{2 f_{y}}\right)$$

wherein $L_{h}$ is the imaging height of the camera on the near plane, and $f_{y}$ is the camera focal length.
In summary, through the above projection model and the following normalized device coordinates (NDC) transformation relationship, the AR icon to be rendered can be drawn at the accurate screen coordinates of the computing device by using the frame buffer technology.
$$\begin{pmatrix} x_{ndc}\\ y_{ndc}\\ z_{ndc}\end{pmatrix}=\begin{pmatrix} x'/w'\\ y'/w'\\ z'/w'\end{pmatrix},\qquad \begin{pmatrix} x_{screen}\\ y_{screen}\end{pmatrix}=\begin{pmatrix} (x_{ndc}+1)\,W/2\\ (1-y_{ndc})\,H/2\end{pmatrix}$$

wherein $(x', y', z', w')$ are the components of the projected point $v'$, and $W\times H$ is the screen resolution.
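For reference, a minimal sketch of this projection pipeline (model/view/projection transform, perspective divide to NDC, and the NDC-to-screen mapping, together with the Fov relation above) might look as follows; the function names and the top-left screen origin are assumptions for illustration.

```python
import numpy as np

def project_to_screen(v_world, m_model, m_view, m_proj, screen_w, screen_h):
    """Project a homogeneous world-space point to pixel coordinates.

    Implements v' = M_proj @ M_view @ M_model @ v, followed by the perspective
    divide to normalized device coordinates (NDC) and the NDC-to-screen mapping.
    """
    v_clip = m_proj @ m_view @ m_model @ np.asarray(v_world, dtype=float)
    ndc = v_clip[:3] / v_clip[3]                      # perspective divide
    x_pix = (ndc[0] + 1.0) * 0.5 * screen_w           # NDC x in [-1, 1] -> [0, W]
    y_pix = (1.0 - ndc[1]) * 0.5 * screen_h           # NDC y in [-1, 1] -> [0, H], origin top-left
    return x_pix, y_pix, ndc[2]

def fov_from_focal(near_plane_height, focal_length_y):
    """Vertical field of view: Fov = 2 * arctan(L_h / (2 * f_y))."""
    return 2.0 * np.arctan(near_plane_height / (2.0 * focal_length_y))
```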
Optionally, referring to fig. 13, fig. 13 is a schematic diagram of a HUD simulation effect from an external viewing angle according to an embodiment of the present application. As described above, through the above steps, the HUD imaging process in actual driving as shown in fig. 13 can be completely simulated without considering the influence of HUD distortion, eye position variation and vehicle shake. Alternatively, as shown in fig. 13, the camera field of view may be divided into a vertical angle and a horizontal angle, where the vertical angle is the angle subtended by the top and bottom edges of the HUD virtual image plane, and the horizontal angle is the angle subtended by the left and right edges of the HUD virtual image plane, which will not be described in detail here.
Step S903, analyzing the AR effect based on the one or more scene elements and the one or more AR icons in each second scene.
Specifically, after the computing device generates the second scenes, the computing device may analyze the current AR effect based on the position relationship and/or the logical relationship between the one or more scene elements and the one or more AR icons in each second scene, and may further correct the preset model, the related software algorithms, and the like accordingly, so as to continuously optimize the AR function of the HUD and ensure the user experience. Optionally, after the AR function is optimized, the test may be repeated based on the same first off-vehicle scene as used in the previous test, and the AR imaging effects before and after the optimization may be compared to verify whether the AR imaging effect is effectively improved, so as to ensure the rigor of the test and the reliability of the test result.
Referring to fig. 14, fig. 14 is a schematic diagram of another AR imaging effect according to an embodiment of the present application. Compared with the AR navigation guide arrow shown in fig. 12a, the AR navigation guide arrow shown in fig. 14 clearly deviates from the position of the corresponding scene element (the straight road) and is not attached to the road surface. Based on the AR imaging effect obtained by the current simulation, testers can therefore intuitively and quickly locate possible problems and defects of the current algorithm, and improve the algorithm more accurately, efficiently and pertinently.
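As a minimal sketch of such a position-relationship check, the following helper measures how far a rendered AR icon lies from the scene element it should be attached to; the function name, its inputs and the 20-pixel threshold in the usage comment are illustrative assumptions only.

```python
import numpy as np

def icon_alignment_error(icon_center_px, element_mask):
    """Rough positional check between an AR icon and its target scene element.

    icon_center_px: (x, y) pixel position of the rendered AR icon (e.g. the arrow tip).
    element_mask:   HxW boolean mask of the target scene element (e.g. the lane surface).
    Returns the distance in pixels from the icon centre to the nearest element pixel
    (0 means the icon lies on the element, as desired for a lane-attached arrow).
    """
    ys, xs = np.nonzero(element_mask)
    if xs.size == 0:
        return float("inf")                       # element not present in this frame
    dists = np.hypot(xs - icon_center_px[0], ys - icon_center_px[1])
    return float(dists.min())

# Example usage: flag frames whose guide arrow drifts more than 20 px off the lane mask.
# misaligned = icon_alignment_error(arrow_px, lane_mask) > 20
```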
It can be understood that the imaging effect of the HUD is influenced by many factors. In general, when a HUD device is installed in a vehicle, parameters such as the distortion, virtual image distance and eye box position of the HUD need to be calibrated. After accurate calibration parameters are obtained, when the driver observes from the cockpit with the eyes located within the eye box range, a clear HUD virtual image projection can be observed, and thus a good AR visual effect is obtained. However, during real-vehicle assembly of the HUD, the mounting position of the HUD may deviate due to limited assembly accuracy, and the Virtual Image Distance (VID) of the HUD may change, which causes distortion of the AR image (e.g. the AR navigation guide arrow) projected by the HUD. In addition, different types of windshields (e.g. different windshield curvatures) also introduce nonlinear parameter distortion that distorts the AR image projected by the HUD, thereby greatly affecting the user's visual and usage experience. Therefore, most HUDs put into production on the market add a corresponding de-distortion function to deal with possible image distortion, so as to ensure the user experience. However, the development and testing of the de-distortion function mostly rely on real-vehicle testing, that is, only after the HUD is calibrated and installed in a real vehicle can the specific influences of AR image distortion and reduced AR rendering precision caused by factors such as different mounting positions and windshield glass curvatures be visually evaluated, so as to continuously improve the de-distortion function of the HUD and guarantee the user experience. This process is complex, time-consuming, inefficient and costly.
Based on this, the embodiment of the application can further, based on the above software simulation method and through extended support for HUD parameterization, directly perform distortion transformation on the memory values of the rendering buffer in the scene (that is, perform distortion processing on the AR image to be rendered), and render in real time, so as to intuitively simulate the influence of parameter changes on the AR imaging effect. In this way, specific influences such as reduced AR rendering precision caused by distortion factors such as deviation of the HUD mounting position and non-standard windshield curvature can be visually evaluated.
Optionally, the computing device may perform distortion processing on Q first AR images corresponding to Q first off-vehicle scenes to obtain Q corresponding second AR images, fuse the Q first off-vehicle scenes and the Q second AR images to obtain Q corresponding third scenes, and enable the display screen to display the Q third scenes. Accordingly, each second AR image may also include one or more AR icons. The distortion processing may include at least one of radial distortion, tangential distortion, virtual image distance increase and virtual image distance decrease, and Q is an integer greater than or equal to 1 and less than or equal to X.
Referring to fig. 15, fig. 15 is a schematic diagram of distortion types and AR imaging effects according to an embodiment of the present application. Fig. 15 shows various types of distortion processing and the resulting AR imaging effects. Obviously, after a certain degree of radial distortion, tangential distortion, virtual image distance increase, virtual image distance decrease, polynomial distortion (a mixture of multiple distortion types) and the like, the resulting AR images are noticeably distorted compared with the original AR images, which seriously affects the user's visual perception and usage experience and greatly reduces driving comfort. As shown in fig. 15, in the process of changing the virtual image distance, the displayed AR image may also exhibit some out-of-focus blur. It should be noted that the mathematical models in fig. 15 are only exemplary, and this is not specifically limited in the embodiment of the present application. Referring to fig. 16, fig. 16 is a schematic diagram of an AR imaging effect comparison according to an embodiment of the present application. As shown in fig. 16, the AR imaging effect after multi-degree polynomial distortion is significantly degraded. Optionally, according to the pixel correspondence of a distortion function or a distortion table, the image data may be superimposed in the frame buffer and output to the rendering pipeline; the superposition may be performed multiple times depending on the complexity of the distortion, and the AR imaging effect after distortion processing can be visually displayed in the virtualized HUD virtual image plane. Because of this intuitiveness, the embodiment of the application is further equivalent to providing a platform for developers to analyze distortion functions, develop the de-distortion function of the HUD, continuously improve the performance of the HUD, and guarantee the user experience.
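A minimal sketch of such a per-pixel distortion remap of the buffered AR frame, using a Brown-Conrady style radial/tangential model, could look as follows; the coefficient values, the function name and the use of OpenCV are illustrative assumptions rather than the distortion tables actually used.

```python
import numpy as np
import cv2

def distort_ar_image(ar_image, k1=0.15, k2=0.05, p1=0.01, p2=0.0):
    """Apply radial (k1, k2) and tangential (p1, p2) distortion to an AR frame.

    The frame is remapped pixel by pixel, mirroring the distortion-table lookup in
    the rendering buffer described above. Coefficients here are illustrative only.
    """
    h, w = ar_image.shape[:2]
    # Normalized coordinates centred on the image, roughly in [-1, 1].
    xs, ys = np.meshgrid((np.arange(w) - w / 2) / (w / 2),
                         (np.arange(h) - h / 2) / (h / 2))
    r2 = xs ** 2 + ys ** 2
    radial = 1 + k1 * r2 + k2 * r2 ** 2
    x_d = xs * radial + 2 * p1 * xs * ys + p2 * (r2 + 2 * xs ** 2)
    y_d = ys * radial + p1 * (r2 + 2 * ys ** 2) + 2 * p2 * xs * ys
    # Back to pixel coordinates and remap with bilinear interpolation.
    map_x = (x_d * (w / 2) + w / 2).astype(np.float32)
    map_y = (y_d * (h / 2) + h / 2).astype(np.float32)
    return cv2.remap(ar_image, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```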
Optionally, if a de-distortion algorithm needs to be tested in the process of developing the de-distortion function of the HUD, de-distortion processing may further be performed on the Q second AR images based on the de-distortion algorithm to obtain Q third AR images; the Q first off-vehicle scenes and the Q third AR images are fused to obtain Q corresponding fourth scenes, and the display screen is enabled to display the Q fourth scenes. Then, based on the Q third scenes and the Q fourth scenes, the AR imaging effects before and after the de-distortion processing are compared, the de-distortion effect of the de-distortion algorithm is analyzed, and the de-distortion algorithm can be further corrected accordingly, so as to continuously improve the de-distortion function of the HUD, which is not repeated here. In this way, in the process of researching the de-distortion model (or de-distortion algorithm), the corresponding effect of any parameter change related to the de-distortion model can be obtained in a visible, what-you-see-is-what-you-get manner, which greatly improves the efficiency of developing and testing the de-distortion function.
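The before/after comparison could be quantified with a simple metric such as the following sketch; the metric (mean absolute pixel error against the undistorted reference) and the function name are assumptions for illustration and not the applicant's evaluation method.

```python
import numpy as np

def dedistortion_gain(original_ar, distorted_ar, corrected_ar):
    """Quantify how much a de-distortion algorithm recovers the original AR frame.

    Compares the mean absolute pixel error of the distorted frame (third scene) and
    of the corrected frame (fourth scene) against the undistorted reference. A
    positive gain means the de-distortion step moved the image closer to the reference.
    """
    err_before = np.abs(distorted_ar.astype(float) - original_ar.astype(float)).mean()
    err_after = np.abs(corrected_ar.astype(float) - original_ar.astype(float)).mean()
    return err_before - err_after
```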
In addition, jitter during driving also causes degradation of AR accuracy. Jitter factors generally include changes in the position of the driver's eyes and HUD jitter caused during driving; both cause a certain alignment distortion between the virtual image projected by the HUD (e.g. AR icons such as the AR navigation guide arrow) and real objects (i.e. objects in the real world, such as the road surface, pedestrians and vehicles). Moreover, drivers of different heights easily observe from positions deviating from the eye box position, so that the observed AR icons cannot be attached to objects in the real world, which causes visual distortion and affects the AR experience. Accordingly, in order to alleviate the AR accuracy degradation caused by unavoidable jitter during driving, the development of the HUD anti-shake function is very important. However, in real-vehicle testing, since actual driving is usually accompanied by multiple kinds of jitter noise affecting the AR imaging effect at the same time (for example, human eye position jitter and HUD jitter exist simultaneously), it is difficult to extract a single factor for test modeling and analysis, and it is also difficult to efficiently and accurately develop, test and optimize the anti-shake function.
Further, based on the above software simulation method, the embodiment of the application can also simulate the influence of the driver's line-of-sight drift, road bumps and the like on the AR imaging effect (or AR imaging accuracy) during actual driving. Specifically, a rotation displacement T[R|t], a jitter amount (J) and the like can be introduced when the simulation scene is constructed, so that the change of the AR imaging effect can be analyzed, providing an effective test basis for the development of the anti-shake algorithm of the AR engine. Referring to fig. 17, fig. 17 is a schematic diagram of a human eye line-of-sight coordinate system and a HUD virtual image plane coordinate system according to an embodiment of the present application. Both the human eye line-of-sight coordinate system and the HUD virtual image plane coordinate system can be established based on a fixed vehicle-body coordinate system; optionally, the human eye line-of-sight coordinate system or the HUD virtual image plane coordinate system may also be used as the vehicle-body coordinate system, which is not specifically limited in the embodiment of the application. Referring to fig. 18, fig. 18 is a schematic diagram of the influence of jitter on the AR imaging effect according to an embodiment of the present application. As shown in fig. 18, when there is a displacement of the human eye position and a jitter of the HUD virtual image plane position, a certain degree of alignment distortion exists between the observed AR image and the scene, that is, there is a positional deviation between the AR icons and the corresponding scene elements, and the AR icons cannot be attached to the scene, which seriously affects the user's visual and usage experience.
Optionally, the computing device may perform jitter processing on S first AR images corresponding to S first off-vehicle scenes to obtain S corresponding second AR images, fuse the S first off-vehicle scenes and the S second AR images to obtain S corresponding third scenes, and enable the display screen to display the S third scenes. The jitter processing may include superimposing a preset rotation displacement amount and/or jitter amount, where S is an integer greater than or equal to 1 and less than or equal to X.
Alternatively, referring to fig. 17 and fig. 18 together, in general, the pose of the human eye can be abstracted as a six-dimensional vector. After the vehicle-body coordinate system is fixed, the vector of the human eye relative to the origin of the coordinate system can be expressed as [t_x, t_y, t_z, θ_x, θ_y, θ_z], where [t_x, t_y, t_z] represents the position offset of the human eye relative to the origin of the vehicle-body coordinate system, and [θ_x, θ_y, θ_z] represents the attitude angles [yaw (yaw angle), pitch (pitch angle), roll (roll angle)] of the human eye's line of sight relative to the coordinate axes.
Alternatively, the above quantities may be represented by the following transfer matrix:

the rotation matrix is

$$R = R_{z}(\theta_{z})\,R_{y}(\theta_{y})\,R_{x}(\theta_{x})$$

thus, the transfer matrix is obtained:

$$T=[R\mid t]=\begin{pmatrix} R & t\\ 0 & 1\end{pmatrix},\qquad t=(t_{x},\,t_{y},\,t_{z})^{T}$$
the transfer matrix can correspond to each quantity in the visual angle matrix in the scene construction process, and the visual angle matrix of the human eye sight under different conditions can be obtained through a series of conversions, so that the correct observation visual angle of the virtual HUD is obtained. It can be understood that the amount of jitter for the HUDBecause there is the shake about only usually in the driving process, this variable can be idealized in the analytic process, the shake quantity about abstracting the change of six degrees of freedom, and add this shake quantity to above-mentioned transfer matrix as the feedback quantity, finally embody to the visual angle matrix in, the change that the engine can be according to the visual angle matrix is rendered, adjust corresponding observation effect, also obtain different AR imaging effect, thereby can directly perceivedly carry out the analysis to the change of AR imaging effect, support follow-up undistorted function to HUD and study and develop, with the performance that constantly promotes HUD, guarantee user's use and experience.
Optionally, if an anti-shake algorithm needs to be tested in the process of developing the anti-shake function of the HUD, anti-shake processing may further be performed on the S second AR images based on the anti-shake algorithm to obtain S corresponding third AR images; the S first off-vehicle scenes and the S third AR images are fused to obtain S corresponding fourth scenes, and the display screen is enabled to display the S fourth scenes. Then, based on the S third scenes and the S fourth scenes, the AR imaging effects before and after the anti-shake processing are compared, the anti-shake effect of the anti-shake algorithm is analyzed, and the anti-shake algorithm can be further corrected accordingly, so as to continuously improve the anti-shake function of the HUD.
Optionally, it can be understood that, in actual use, distortion and jitter often exist at the same time. Therefore, in the embodiment of the application, distortion processing and jitter processing may also be performed on the first AR image at the same time, so as to intuitively obtain the AR imaging effect under the combined influence of distortion and jitter; correspondingly, de-distortion processing and anti-shake processing may also be performed on the second AR image at the same time, so as to analyze the de-distortion effect and the anti-shake effect under the combined influence of distortion and jitter, and so on, which is not specifically limited in the embodiment of the application.
To sum up, according to the scene processing method provided by the embodiment of the present application, first, a simulation scene can be constructed by using an existing rendering engine, or a real-scene simulation scene can be constructed from real road data collected in advance; a virtual HUD screen (i.e. the HUD virtual image plane) is constructed in the scene, and AR icons are drawn in the HUD virtual image plane, so that the correctness of the AR function can be visually verified, providing an a priori guarantee for driving safety and comfort. Because scenes can be synthesized at will and reused repeatedly, the test coverage is greatly increased, which to a certain extent solves the problems in the prior art that scenes for AR function experience and testing are difficult to reproduce and cannot be repeated. Therefore, the method supports repeatable testing across multiple road conditions, weather conditions, regions and scenes, removes the constraints of hardware equipment, simplifies the experience and test flow of the AR function, and facilitates rapid iteration of model and algorithm development. Second, in the prior art, the AR effect test for the HUD mainly relies on offline calibration measurement, and the algorithm functions are tested and improved by driving a real vehicle, which depends heavily on offline data collection, so that scenes in which the AR function fails (for example, the correct AR icon is not displayed, or the entire AR image is distorted) cannot be reproduced. In addition, after the HUD is calibrated and installed, its parameters are basically fixed; if problems such as installation deviation or windshield manufacturing defects are encountered, their influence on the HUD imaging effect cannot be avoided, which easily leads to low test accuracy, low efficiency and high input cost. Through parameterized simulation of distortion factors and multi-modal jitter factors, the embodiment of the application can directly reflect the influence of these external factors on AR imaging in the HUD virtual image plane, can visually observe and determine the influence of various factors on the AR imaging effect of the HUD, and facilitates stability analysis of the AR imaging effect of the HUD. Therefore, the scene processing method provided by the embodiment of the application can greatly improve test efficiency, has the advantages of wide coverage, high reusability and low cost, and can support the extension requirements of incremental scenes and parameter introduction.
In addition, it should be noted that the virtualization simulation technology can be applied in many aspects; its core idea is that everything in the real world can be constructed in a virtual three-dimensional environment. Therefore, in some possible embodiments, in addition to virtualizing the AR-HUD hardware device and simulating the display effect, on the HUD device, of AR function events triggered during driving, various other scenes and devices can also be simulated based on different requirements, thereby realizing efficient and low-cost test analysis of other devices, and the like.
Referring to fig. 19, fig. 19 is a schematic structural diagram of a scene processing apparatus according to an embodiment of the present application, where the scene processing apparatus 30 can be applied to the computing device. The scene processing apparatus 30 may include a first acquiring unit 301, a second acquiring unit 302, a fusing unit 303, and a first displaying unit 304, wherein the details of each unit are as follows:
a first acquiring unit 301, configured to acquire a first off-board scene; the first vehicle exterior scene is a two-dimensional scene image or a three-dimensional scene model;
a second acquiring unit 302, configured to acquire a first AR image corresponding to the first off-vehicle scene;
a fusion unit 303, configured to fuse the first extra-vehicle scene and the first AR image to obtain a second scene; the first vehicle-external scene is real information in the second scene, and the first AR image is virtual information in the second scene;
a first display unit 304, configured to enable the display screen to display the second scene.
In a possible implementation manner, the second obtaining unit 302 is specifically configured to:
acquiring the first AR image corresponding to the first off-vehicle scene according to the first off-vehicle scene and a preset model; wherein the first AR image comprises one or more AR icons.
In a possible implementation manner, the preset model is a neural network model, and the neural network model is obtained by training according to a plurality of scenes, a plurality of AR icons, and different matching degrees of the scenes and the AR icons.
In a possible implementation manner, the fusion unit 303 is specifically configured to:
determining a corresponding HUD virtual image surface in the first vehicle-exterior scene based on a preset head-up display HUD parameter set; the HUD virtual image surface is a corresponding area in the first vehicle-exterior scene;
rendering the first AR image into the HUD virtual image plane to obtain a second scene.
In one possible implementation, the set of HUD parameters includes at least one of windshield curvature, eye box position, eye observation position, HUD installation position, and HUD virtual image plane size.
In a possible implementation manner, the first obtaining unit 301 is specifically configured to:
acquiring data collected by a first sensor; the first sensor is a vehicle-mounted sensor; the data collected by the first sensor are data collected aiming at the surrounding environment of the target vehicle in the running process of the target vehicle, and comprise at least one of image data, point cloud data, temperature data and humidity data; the first sensor comprises at least one of a camera, a laser radar, a millimeter wave radar, a temperature sensor and a humidity sensor;
and constructing the first off-board scene based on the data acquired by the first sensor, wherein the first off-board scene is a real scene simulation scene.
In a possible implementation manner, the first obtaining unit 301 is specifically configured to:
acquiring data collected by a second sensor; the second sensor is a sensor constructed by a preset simulation system; the data collected by the second sensor is data set by the preset simulation system and comprises at least one of weather, roads, pedestrians, vehicles, plants and traffic signals;
and constructing the first vehicle-exterior scene based on the data acquired by the second sensor, wherein the first vehicle-exterior scene is a virtual simulation scene.
In one possible implementation, the apparatus 30 further includes:
a first preprocessing unit 305, configured to perform first preprocessing on the first AR image, and acquire a second AR image corresponding to the first off-vehicle scene; the first preprocessing includes at least one of distortion processing and dithering processing; the distortion treatment comprises at least one of radial distortion, tangential distortion, virtual image distance increase and virtual image distance decrease; the dithering processing comprises the superposition of preset rotation displacement and/or dithering amount;
a second fusion unit 306, configured to fuse the first extra-vehicular scene and the second AR image to obtain a third scene;
a second display unit 307, configured to enable the display screen to display the third scene.
In one possible implementation, the apparatus 30 further includes:
a second preprocessing unit 308, configured to perform second preprocessing on the second AR image, to obtain a third AR image corresponding to the first off-vehicle scene; the second preprocessing includes at least one of a distortion removal processing and an anti-shake processing;
a third fusion unit 309, configured to fuse the first extra-vehicle scene and the third AR image to obtain a fourth scene;
a third display unit 310, configured to enable the display screen to display the fourth scene.
In one possible implementation, the apparatus 30 further includes:
an optimizing unit 311, configured to obtain a processing effect of the distortion removal processing and/or the anti-shake processing based on the third scene and the fourth scene, so as to optimize a corresponding distortion removal function and/or anti-shake function.
In one possible implementation, the first off-board scene includes one or more scene elements; the one or more scene elements include one or more of weather, roads, pedestrians, vehicles, vegetation, and traffic signals; the one or more AR icons include one or more of a left turn, a right turn, and a straight navigation identification; the device 30 further comprises:
a correcting unit 312, configured to correspondingly correct the preset model based on a positional relationship and/or a logical relationship between the one or more scene elements and the one or more AR icons in each second scene.
The first obtaining unit 301 is configured to execute step S801 in the method embodiment corresponding to fig. 6 a; the second obtaining unit 302 is configured to execute step S802 in the method embodiment corresponding to fig. 6 a; the fusion unit 303 is configured to perform step S803 in the above-mentioned method embodiment corresponding to fig. 6 a; the first display unit 304 is configured to perform step S804 in the above-mentioned method embodiment corresponding to fig. 6 a; the first preprocessing unit 305, the second fusion unit 306, the second display unit 307, the second preprocessing unit 308, the third fusion unit 309, the third display unit 310, the optimizing unit 311 and the correcting unit 312 are configured to execute step S804 in the method embodiment corresponding to fig. 6 a.
it should be noted that, for functions of each functional unit in the scene processing apparatus described in this embodiment of the application, reference may be specifically made to the relevant description of step S801 to step S804 in the method embodiment described in fig. 6a, reference may also be made to the relevant description of step S901 to step S903 in the method embodiment described in fig. 6b, and reference may also be made to the relevant description of step S11 to step S15 in the method embodiment described in fig. 7, which is not described again here.
Each of the units in fig. 19 may be implemented in software, hardware, or a combination thereof. A unit implemented in hardware may include circuits, such as logic circuits, arithmetic circuits, analog circuits, or the like. A unit implemented in software may include program instructions, which can be considered as a software product stored in a memory and executable by a processor to perform the relevant functions; for details, refer to the foregoing description.
Based on the description of the method embodiment and the apparatus embodiment, the embodiment of the present application further provides a computing device. Referring to fig. 20, fig. 20 is a schematic structural diagram of a computing device according to an embodiment of the present disclosure. Optionally, the computing device 1000 may be the computing device 100 in fig. 3a, 3b and 3c, wherein the computing device 1000 includes at least a processor 1001, an input device 1002, an output device 1003, a computer-readable storage medium 1004, a database 1005 and a memory 1006, and the computing device 1000 may also include other general-purpose components, which are not described in detail herein. The processor 1001, input device 1002, output device 1003, and computer-readable storage medium 1004 within the computing device 1000 may be connected by a bus or other means.
The processor 1001 may be configured to implement the first obtaining unit 301, the second obtaining unit 302, the fusing unit 303, the first display unit 304, the first preprocessing unit 305, the second fusing unit 306, the second display unit 307, the second preprocessing unit 308, the third fusing unit 309, the third display unit 310, the optimizing unit 311, and the modifying unit 312 in the scene processing device 30, where details of the implementation process may specifically refer to the description related to steps S801 to S803 in the method embodiment described in fig. 6a, may also refer to the description related to steps S901 to S903 in the method embodiment described in fig. 6b, and may also refer to the description related to steps S11 to S15 in the method embodiment described in fig. 7, and details thereof are not repeated here. The processor 1001 may be a general-purpose Central Processing Unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of programs according to the above schemes.
The Memory 1006 in the computing device 1000 may be a Read-Only Memory (ROM) or other types of static Memory devices that can store static information and instructions, a Random Access Memory (RAM) or other types of dynamic Memory devices that can store information and instructions, an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical Disc storage, optical Disc storage (including Compact Disc, laser Disc, optical Disc, digital versatile Disc, blu-ray Disc, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited to such. The memory 1006 may be a separate device and connected to the processor 1001 through a bus. The memory 1006 may also be integrated with the processor 1001.
A computer-readable storage medium 1004 may be stored in the memory 1006 of the computing device 1000, the computer-readable storage medium 1004 being used to store a computer program comprising program instructions, the processor 1001 being used to execute the program instructions stored by the computer-readable storage medium 1004. The processor 1001 (or CPU) is a computing core and a control core of the computing device 1000, and is adapted to implement one or more instructions, and specifically, adapted to load and execute one or more instructions to implement corresponding method flows or corresponding functions; in one embodiment, the processor 1001 according to the embodiment of the present application may be configured to acquire a first off-board scene; the first vehicle exterior scene is a two-dimensional scene image or a three-dimensional scene model; acquiring a first AR image corresponding to the first off-vehicle scene; fusing the first off-vehicle scene and the first AR image to obtain a second scene; the first vehicle-external scene is real information in the second scene, and the first AR image is virtual information in the second scene; enabling the display screen to display the second scene, and so on.
Embodiments of the present application also provide a computer-readable storage medium (Memory) that is a Memory device in the computing device 1000 and is used for storing programs and data. It is understood that the computer-readable storage medium herein can include both built-in storage media in the computing device 1000 and, of course, extended storage media supported by the computing device 1000. The computer-readable storage medium provides storage space that stores an operating system for computing device 1000. Also stored in this memory space are one or more instructions, which may be one or more computer programs (including program code), suitable for being loaded and executed by processor 1001. It should be noted that the computer-readable storage medium may be a high-speed RAM memory, or may be a non-volatile memory (non-volatile memory), such as at least one disk memory; and optionally at least one computer readable storage medium remotely located from the aforementioned processor.
Embodiments of the present application also provide a computer program, where the computer program includes instructions, and when the computer program is executed by a computer, the computer may perform part or all of the steps of any of the scene processing methods.
It should be noted that, for functions of each functional unit in the computing device 1000 described in this embodiment of the application, reference may be made to relevant descriptions of step S801 to step S804 in the method embodiment described in fig. 6a, reference may also be made to relevant descriptions of step S901 to step S903 in the method embodiment described in fig. 6b, and reference may also be made to relevant descriptions of step S11 to step S15 in the method embodiment described in fig. 7, which is not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the present application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, and may specifically be a processor in the computer device) to execute all or part of the steps of the methods of the embodiments of the present application. The storage medium may include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a read-only memory (ROM) or a random access memory (RAM).
Furthermore, the terms "first," "second," "third," and "fourth," etc. in the description and claims of the present application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
It should be noted that the terms "component," "module," "system," and the like as used herein are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a terminal device and the terminal device can be a component. One or more components can reside within a process and/or thread of execution and a component can be localized on one computer and/or distributed between 2 or more computers. In addition, these components can execute from various computer readable media having various data structures stored thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from two components interacting with another component in a local system, distributed system, and/or across a network such as the internet with other systems by way of the signal).
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (26)

1. A method for processing a scene, comprising:
acquiring a first scene outside a vehicle; the first vehicle exterior scene is a two-dimensional scene image or a three-dimensional scene model;
acquiring a first Augmented Reality (AR) image corresponding to the first off-vehicle scene;
fusing the first off-vehicle scene and the first AR image to obtain a second scene; the first vehicle-external scene is real information in the second scene, and the first AR image is virtual information in the second scene;
enabling a display screen to display the second scene.
2. The method of claim 1, wherein said obtaining a first AR image corresponding to the first off-board scene comprises:
acquiring the first AR image corresponding to the first off-vehicle scene according to the first off-vehicle scene and a preset model; wherein the first AR image comprises one or more AR icons.
3. The method of claim 2, wherein the predetermined model is a neural network model trained from a plurality of scenes, a plurality of AR icons, and different degrees of matching between the scenes and the AR icons.
4. The method according to claim 2 or 3, wherein said fusing the first off-board scene and the first AR image to obtain a second scene comprises:
determining a corresponding HUD virtual image surface in the first vehicle-exterior scene based on a preset head-up display HUD parameter set; the HUD virtual image surface is a corresponding area in the first vehicle-exterior scene;
rendering the first AR image into the HUD virtual image plane to obtain a second scene.
5. The method of claim 4, wherein the set of HUD parameters includes at least one of windshield curvature, eye box position, eye observation position, HUD installation position, and HUD virtual image plane size.
6. The method of any of claims 1-5, wherein the obtaining the first off-board scene comprises:
acquiring data collected by a first sensor; the first sensor is a vehicle-mounted sensor; the data collected by the first sensor are data collected aiming at the surrounding environment of the target vehicle in the running process of the target vehicle, and comprise at least one of image data, point cloud data, temperature data and humidity data; the first sensor comprises at least one of a camera, a laser radar, a millimeter wave radar, a temperature sensor and a humidity sensor;
and constructing the first off-board scene based on the data acquired by the first sensor, wherein the first off-board scene is a real scene simulation scene.
7. The method of any of claims 1-5, wherein the obtaining the first off-board scene comprises:
acquiring data collected by a second sensor; the second sensor is a sensor constructed by a preset simulation system; the data collected by the second sensor is data set by the preset simulation system and comprises at least one of weather, roads, pedestrians, vehicles, plants and traffic signals;
and constructing the first vehicle-exterior scene based on the data acquired by the second sensor, wherein the first vehicle-exterior scene is a virtual simulation scene.
8. The method according to any one of claims 1-7, further comprising:
performing first preprocessing on the first AR image to obtain a second AR image corresponding to the first off-vehicle scene; the first preprocessing includes at least one of distortion processing and dithering processing; the distortion treatment comprises at least one of radial distortion, tangential distortion, virtual image distance increase and virtual image distance decrease; the dithering processing comprises the superposition of preset rotation displacement and/or dithering amount;
fusing the first off-board scene and the second AR image to obtain a third scene;
enabling the display screen to display the third scene.
9. The method of claim 8, further comprising:
performing second preprocessing on the second AR image to obtain a third AR image corresponding to the first off-vehicle scene; the second preprocessing includes at least one of a distortion removal processing and an anti-shake processing;
fusing the first extra-vehicular scene and the third AR image to obtain a fourth scene;
enabling the display screen to display the fourth scene.
10. The method of claim 9, further comprising:
and acquiring the processing effect of the distortion removal processing and/or the anti-shaking processing based on the third scene and the fourth scene so as to optimize the corresponding distortion removal function and/or anti-shaking function.
11. The method of any of claims 2-10, wherein the first off-board scene comprises one or more scene elements; the one or more scene elements include one or more of weather, roads, pedestrians, vehicles, vegetation, and traffic signals; the one or more AR icons include one or more of a left turn, a right turn, and a straight navigation identification; the method further comprises the following steps:
and correspondingly correcting the preset model based on the position relation and/or the logic relation between the one or more scene elements and the one or more AR icons in each second scene.
12. A scene processing apparatus, comprising:
a first acquisition unit for acquiring a first off-board scene; the first vehicle exterior scene is a two-dimensional scene image or a three-dimensional scene model;
a second acquisition unit configured to acquire a first AR image corresponding to the first off-vehicle scene;
a fusion unit for fusing the first extra-vehicular scene and the first AR image to obtain a second scene; the first vehicle-external scene is real information in the second scene, and the first AR image is virtual information in the second scene;
and the first display unit is used for enabling the display screen to display the second scene.
13. The apparatus according to claim 12, wherein the second obtaining unit is specifically configured to:
acquiring the first AR image corresponding to the first off-vehicle scene according to the first off-vehicle scene and a preset model; wherein the first AR image comprises one or more AR icons.
14. The apparatus of claim 13, wherein the predetermined model is a neural network model trained from a plurality of scenes, a plurality of AR icons, and different degrees of matching between the scenes and the AR icons.
15. The apparatus according to claim 13 or 14, wherein the fusion unit is specifically configured to:
determining a corresponding HUD virtual image surface in the first vehicle-exterior scene based on a preset head-up display HUD parameter set; the HUD virtual image surface is a corresponding area in the first vehicle-exterior scene;
rendering the first AR image into the HUD virtual image plane to obtain a second scene.
16. The apparatus of claim 15, wherein the set of HUD parameters includes at least one of windshield curvature, eye box position, eye observation position, HUD installation position, and HUD virtual image plane size.
17. The apparatus according to any one of claims 12 to 16, wherein the first obtaining unit is specifically configured to:
acquiring data collected by a first sensor; the first sensor is a vehicle-mounted sensor; the data collected by the first sensor are data collected aiming at the surrounding environment of the target vehicle in the running process of the target vehicle, and comprise at least one of image data, point cloud data, temperature data and humidity data; the first sensor comprises at least one of a camera, a laser radar, a millimeter wave radar, a temperature sensor and a humidity sensor;
and constructing the first off-board scene based on the data acquired by the first sensor, wherein the first off-board scene is a real scene simulation scene.
18. The apparatus according to any one of claims 12 to 16, wherein the first obtaining unit is specifically configured to:
acquiring data collected by a second sensor; the second sensor is a sensor constructed by a preset simulation system; the data collected by the second sensor is data set by the preset simulation system and comprises at least one of weather, roads, pedestrians, vehicles, plants and traffic signals;
and constructing the first vehicle-exterior scene based on the data acquired by the second sensor, wherein the first vehicle-exterior scene is a virtual simulation scene.
19. The apparatus of any one of claims 12-18, further comprising:
the first preprocessing unit is used for performing first preprocessing on the first AR image to acquire a second AR image corresponding to the first off-vehicle scene; the first preprocessing includes at least one of distortion processing and dithering processing; the distortion treatment comprises at least one of radial distortion, tangential distortion, virtual image distance increase and virtual image distance decrease; the dithering processing comprises the superposition of preset rotation displacement and/or dithering amount;
a second fusion unit for fusing the first off-vehicle scene and the second AR image to obtain a third scene;
and the second display unit is used for enabling the display screen to display the third scene.
20. The apparatus of claim 19, further comprising:
the second preprocessing unit is used for performing second preprocessing on the second AR image to acquire a third AR image corresponding to the first off-vehicle scene; the second preprocessing includes at least one of a distortion removal processing and an anti-shake processing;
a third fusion unit, configured to fuse the first extra-vehicle scene and the third AR image to obtain a fourth scene;
and the third display unit is used for enabling the display screen to display the fourth scene.
21. The apparatus of claim 20, further comprising:
and the optimization unit is used for acquiring the processing effect of the distortion removal processing and/or the anti-jitter processing based on the third scene and the fourth scene so as to optimize the corresponding distortion removal function and/or anti-jitter function.
22. The apparatus of any of claims 13-21, wherein the first off-board scene comprises one or more scene elements; the one or more scene elements include one or more of weather, roads, pedestrians, vehicles, vegetation, and traffic signals; the one or more AR icons include one or more of a left turn, a right turn, and a straight navigation identification; the device further comprises:
and a correcting unit, configured to perform corresponding correction on the preset model based on a positional relationship and/or a logical relationship between the one or more scene elements and the one or more AR icons in each second scene.
23. A scene processing system, comprising: a terminal and a server;
the terminal is used for sending a first scene outside the vehicle; the first vehicle-exterior scene is sensing information acquired by a sensor of the terminal;
the server is used for receiving the first vehicle-outside scene from the terminal;
the server is further used for acquiring a first AR image corresponding to the first off-board scene;
the server is further used for fusing the first off-board scene and the first AR image to obtain a second scene; the first vehicle-external scene is real information in the second scene, and the first AR image is virtual information in the second scene;
the server is further configured to send the second scene;
the terminal is further used for receiving the second scene and displaying the second scene.
24. A computing device comprising a processor and a memory, the processor and the memory being coupled, wherein the memory is configured to store program code and the processor is configured to invoke the program code to perform the method of any of claims 1 to 11.
25. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the method of any of the preceding claims 1 to 11.
26. A computer program, characterized in that the computer program comprises instructions which, when executed by a computer, cause the computer to carry out the method according to any one of claims 1 to 11.
CN202180001054.8A 2021-03-31 2021-03-31 Scene processing method, device and system and related equipment Active CN113260430B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/084488 WO2022205102A1 (en) 2021-03-31 2021-03-31 Scene processing method, apparatus and system and related device

Publications (2)

Publication Number Publication Date
CN113260430A true CN113260430A (en) 2021-08-13
CN113260430B CN113260430B (en) 2022-07-22

Family

ID=77180569

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180001054.8A Active CN113260430B (en) 2021-03-31 2021-03-31 Scene processing method, device and system and related equipment

Country Status (2)

Country Link
CN (1) CN113260430B (en)
WO (1) WO2022205102A1 (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130293582A1 (en) * 2012-05-07 2013-11-07 Victor Ng-Thow-Hing Method to generate virtual display surfaces from video imagery of road based scenery
CN104266654A (en) * 2014-09-26 2015-01-07 广东好帮手电子科技股份有限公司 Vehicle real scene navigation system and method
CN106383587A (en) * 2016-10-26 2017-02-08 腾讯科技(深圳)有限公司 Augmented reality scene generation method, device and equipment
CN207180987U (en) * 2017-08-07 2018-04-03 安徽江淮汽车集团股份有限公司 Engine bench test system for head-up display
CN107589851A (en) * 2017-10-17 2018-01-16 极鱼(北京)科技有限公司 The exchange method and system of automobile
CN110663032A (en) * 2017-12-21 2020-01-07 谷歌有限责任公司 Support for enhancing testing of Augmented Reality (AR) applications
CN110525342A (en) * 2019-08-30 2019-12-03 的卢技术有限公司 A kind of vehicle-mounted auxiliary driving method of AR-HUD based on deep learning and its system

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114141092A (en) * 2021-11-10 2022-03-04 武汉未来幻影科技有限公司 Method and system for constructing animation scene of driving test simulator
CN114299162A (en) * 2021-12-30 2022-04-08 合众新能源汽车有限公司 Rapid calibration method for AR-HUD
CN114299162B (en) * 2021-12-30 2024-05-10 合众新能源汽车股份有限公司 Rapid calibration method for AR-HUD
CN114661398A (en) * 2022-03-22 2022-06-24 上海商汤智能科技有限公司 Information display method and device, computer equipment and storage medium
CN114661398B (en) * 2022-03-22 2024-05-17 上海商汤智能科技有限公司 Information display method and device, computer equipment and storage medium
CN114859754A (en) * 2022-04-07 2022-08-05 江苏泽景汽车电子股份有限公司 Simulation test method and simulation test system of head-up display system
CN114859754B (en) * 2022-04-07 2023-10-03 江苏泽景汽车电子股份有限公司 Simulation test method and simulation test system of head-up display system
CN116311935A (en) * 2023-03-20 2023-06-23 冉林甫 Smart city traffic management method based on big data
CN116311935B (en) * 2023-03-20 2024-03-15 湖北省规划设计研究总院有限责任公司 Smart city traffic management method based on big data
CN116774679A (en) * 2023-08-25 2023-09-19 北京斯年智驾科技有限公司 Automatic driving vehicle testing method, system, device and storage medium
CN116774679B (en) * 2023-08-25 2023-11-28 北京斯年智驾科技有限公司 Automatic driving vehicle testing method, system, device and storage medium

Also Published As

Publication number Publication date
CN113260430B (en) 2022-07-22
WO2022205102A1 (en) 2022-10-06

Similar Documents

Publication Publication Date Title
CN113260430B (en) Scene processing method, device and system and related equipment
JP6988819B2 (en) Image processing device, image processing method, and program
US11436484B2 (en) Training, testing, and verifying autonomous machines using simulated environments
CN107554425B (en) A kind of vehicle-mounted head-up display AR-HUD of augmented reality
US10068377B2 (en) Three dimensional graphical overlays for a three dimensional heads-up display unit of a vehicle
US10712556B2 (en) Image information processing method and augmented reality AR device
US20160210775A1 (en) Virtual sensor testbed
US20220204009A1 (en) Simulations of sensor behavior in an autonomous vehicle
CN112987593A (en) Visual positioning hardware-in-the-loop simulation platform and simulation method
CN115330923A (en) Point cloud data rendering method and device, vehicle, readable storage medium and chip
CN114240769A (en) Image processing method and device
US20220315033A1 (en) Apparatus and method for providing extended function to vehicle
KR102625688B1 (en) Display devices and route guidance systems based on mixed reality
CN114820504B (en) Method and device for detecting image fusion deviation, electronic equipment and storage medium
EP2991358A2 (en) Communication of cloud-based content to a driver
Galazka et al. CiThruS2: Open-source photorealistic 3D framework for driving and traffic simulation in real time
WO2022142596A1 (en) Image processing method and apparatus, and storage medium
CN115675504A (en) Vehicle warning method and related equipment
CN115035239B (en) Method and device for building virtual environment, computer equipment and vehicle
CN111932687B (en) In-vehicle mixed reality display method and device
JP7390436B2 (en) Navigation methods, devices, electronic devices, computer readable storage media and computer programs
EP4369042A1 (en) Systems and techniques for processing lidar data
US20230005214A1 (en) Use of Real-World Lighting Parameters to Generate Virtual Environments
WO2024092559A1 (en) Navigation method and corresponding device
Zair Collision warning design in automotive head-up displays

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant