WO2022266829A1 - Display method and apparatus, device, and vehicle - Google Patents

Display method and apparatus, device, and vehicle Download PDF

Info

Publication number
WO2022266829A1
WO2022266829A1 (PCT/CN2021/101446)
Authority
WO
WIPO (PCT)
Prior art keywords
display
target object
vehicle
user
infrared image
Prior art date
Application number
PCT/CN2021/101446
Other languages
French (fr)
Chinese (zh)
Inventor
朱伟 (Zhu Wei)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to PCT/CN2021/101446 (WO2022266829A1)
Priority to CN202180001862.4A (CN113597617A)
Publication of WO2022266829A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/25 - Fusion techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods

Definitions

  • The present application relates to the technical field of driving assistance, and in particular to a display method and apparatus, a device, and a vehicle.
  • HUD: head-up display.
  • AR-HUD: augmented reality head-up display.
  • Some existing vehicle manufacturers provide a solution that marks and reminds the driver of pedestrians or obstacles that may exist in front of the windshield.
  • However, this solution is still limited by the lighting conditions of night driving: it cannot solve the problem of the road being hard to see when the light is poor or when vehicles meet from both directions, nor can it reliably reveal pedestrians or obstacles on the road, so its marking and reminder effects are also easily affected.
  • The present application provides a display method, apparatus, device, and vehicle, which can detect target objects outside the vehicle through infrared imaging during driving and display the label information of the target object, so as to improve driving safety.
  • the display method may be executed by a display device or some components in the display device, wherein the display device may be an AR-HUD, a HUD or other devices with a display function.
  • Part of the components in the display device may be processing chips, processing circuits, processors, and the like.
  • the first aspect of the present application provides a display method, including: performing infrared supplementary light on a target area outside the vehicle.
  • Information of an infrared image of the target area is acquired, wherein the target area includes a target object.
  • the location of the target object in the infrared image is determined.
  • the label information of the target object is displayed in the display area.
  • This method provides infrared supplementary light to the target area outside the vehicle and acquires infrared image information of that area so as to identify target objects appearing in front of the vehicle; the position of the target object in the acquired infrared image determines the display position of the label information, so that the generated label information can be displayed at that display position.
  • Determining the position of the target object in the infrared image includes: providing the information of the infrared image to an image recognition model, and the image recognition model determining the position of the target object in the infrared image.
  • the trained image recognition model can be used to identify the target object in the infrared image, and determine the position of the target object in the infrared image.
  • The image recognition model can be trained by a neural network or deep learning. According to the recognition requirements of different types of target objects, this method can use different image recognition models to identify the target objects in the infrared image, so as to improve the recognition success rate for the target objects.
  • the display position of the annotation information is related to the spatial position of the target object and the position of the user's eyes.
  • The display position of the annotation information can be determined according to the user's eye position and the spatial position of the target object, and when the annotation information is displayed at the determined display position, the annotation information that the user sees can be fused with the position of the target object, so that the user can still determine the target object and its position outside the vehicle according to the displayed annotation information when the lighting conditions outside the vehicle are poor.
  • the display size of the annotation information is related to the display position of the annotation information, the position of the user's eyes, and the size of the target object.
  • the annotation information can be in various forms, and the display size of the annotation information can be determined according to the position of the user's eyes, the display position of the annotation information, and the spatial position of the target object.
  • The annotation information can be specifically embodied in the form of a prompt box, and the display size of the prompt box can be calculated from the user's eye position, the spatial position of the target object, and the display position of the prompt box, so that the prompt box seen by the user matches the pedestrian. As the vehicle travels and the distance between the target object and the vehicle becomes shorter, the size of the prompt box increases accordingly, so that the user can intuitively perceive the position of the target object.
  • the display area is the display area of an augmented reality head-up display
  • The display position of the annotation information is related to the spatial position of the target object and the position of the user's eyes, including: the display position of the annotation information is determined through the user's first line of sight.
  • the first line of sight is the line of sight from the user's eye position to the spatial position of the target object.
  • The display area of the augmented reality head-up display is located on the front windshield of the vehicle, and determining the display position of the annotation information through the user's first line of sight includes: the display position of the annotation information is determined by the intersection of the user's first line of sight and the front windshield of the vehicle.
  • the display area in this method may specifically be the display area of the augmented reality head-up display, and the augmented reality head-up display may project and display the marked information according to the display position of the marked information.
  • Specifically, the augmented reality head-up display can use the front windshield of the vehicle as a display area, and the user's first line of sight can be determined according to the position of the user's eyes and the spatial position of the target object. The intersection point of the first line of sight and the front windshield can be determined as the display position of the annotation information, so that the annotation information seen by the user is on the same line of sight as the target object, which improves the display effect of the annotation information.
  • Determining the intersection of the first line of sight and the front windshield of the vehicle includes: based on the spatial position of the target object, determining a first included angle between the line connecting the target object and the image acquisition device and a horizontal direction, where the horizontal direction is parallel to the front windshield of the vehicle. Based on the distance between the user's eye position and the image acquisition device and on the first included angle, a second included angle between the user's eyes and the target object relative to the horizontal direction is determined. Based on the distance between the user's eyes and the front windshield of the vehicle and on the second included angle, the display position of the annotation information on the front windshield is determined.
  • That is, based on the spatial position of the target object, the first angle between the line connecting the target object and the image acquisition device and the horizontal direction can be calculated; based on the lateral distance between the user's eyes and the image acquisition device together with the first angle, the second angle between the user's eyes and the target object relative to the horizontal direction can be calculated; and based on the second angle and the distance between the user's eyes and the front windshield of the vehicle, the display position of the label information on the front windshield can be obtained.
  • Because the calculation is based on simple trigonometric functions, the amount of computation needed for the display position is small, and an accurate display position can be obtained quickly.
  • the spatial position of the target object is related to the position of the target object in the infrared image, internal parameters and external parameters of an image acquisition device that acquires the infrared image.
  • the spatial position of the target object may specifically be the spatial position of the target object in the vehicle coordinate system.
  • This method can calculate the distance between the target object and the image acquisition device based on the position of the target object in the infrared image, the installation height of the image acquisition device, the angle of the visible ground, and other parameters. Based on this distance and the spatial position of the image acquisition device in the vehicle coordinate system, the spatial position of the target object in the vehicle coordinate system can be determined.
  • the target object includes one or more objects among other vehicles, pedestrians, and animals.
  • the target object in this method can be moving objects such as other vehicles, pedestrians, animals, etc., and can also be static objects such as road signs and trees.
  • The type of target object can be selected according to the user's needs, so as to realize the user's customized requirements.
  • In an optional implementation of the first aspect, before determining the position of the target object in the infrared image, the method further includes: performing one or more of cropping, noise reduction, enhancement, smoothing, and sharpening on the infrared image.
  • The infrared image can be cropped, denoised, enhanced, smoothed, or sharpened, etc., so as to facilitate effective and rapid identification of target objects in infrared images.
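As a minimal sketch of the kind of preprocessing described above (the specific operations, their order, and their parameters are not fixed by the application; the OpenCV calls and values below are illustrative assumptions):

```python
import cv2
import numpy as np

def preprocess_infrared(frame: np.ndarray) -> np.ndarray:
    """Illustrative preprocessing of an 8-bit single-channel infrared frame:
    crop, denoise, enhance, smooth and sharpen. All parameters are placeholders."""
    # Crop to a region of interest in front of the vehicle (assumed margin).
    h, w = frame.shape[:2]
    roi = frame[h // 4 : h, :]

    # Noise reduction (non-local means behaves reasonably on infrared noise).
    denoised = cv2.fastNlMeansDenoising(roi, h=10)

    # Contrast enhancement with CLAHE (adaptive histogram equalization).
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(denoised)

    # Light smoothing to suppress residual speckle.
    smoothed = cv2.GaussianBlur(enhanced, (3, 3), 0)

    # Sharpening with a simple unsharp mask.
    blurred = cv2.GaussianBlur(smoothed, (0, 0), sigmaX=2.0)
    return cv2.addWeighted(smoothed, 1.5, blurred, -0.5, 0)
```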
  • a second aspect of the present application provides a display device, including: a supplementary light module, configured to perform infrared supplementary light on a target area outside the vehicle.
  • the obtaining module is used to obtain information of the infrared image of the target area, wherein the target area includes the target object.
  • a processing module configured to determine the position of the target object in the infrared image.
  • the sending module is configured to display the label information of the target object in the display area according to the position of the target object in the infrared image.
  • the display position of the annotation information is related to the spatial position of the target object, the position of the user's eyes, and the size of the target object.
  • the display size of the annotation information is related to the display position of the annotation information, the position of the user's eyes, and the size of the target object.
  • the display area is the display area of an augmented reality head-up display
  • The display position of the annotation information is related to the spatial position of the target object and the position of the user's eyes, including: the display position of the annotation information is determined through the user's first line of sight.
  • the first line of sight is the line of sight from the user's eye position to the spatial position of the target object.
  • The display area of the augmented reality head-up display is located on the front windshield of the vehicle, and determining the display position of the annotation information through the user's first line of sight includes: the display position of the annotation information is determined by the intersection point of the user's first line of sight and the front windshield of the vehicle.
  • the spatial position of the target object is related to the position of the target object in the infrared image, internal parameters and external parameters of an image acquisition device that acquires the infrared image.
  • the target object includes one or more objects among other vehicles, pedestrians, and animals.
  • Before the processing module determines the position of the target object in the infrared image, it is further configured to: perform one or more of cropping, noise reduction, enhancement, smoothing, and sharpening on the infrared image.
  • A third aspect of the present application provides a computing device, including: a processor, and a memory on which program instructions are stored; when the program instructions are executed by the processor, the processor performs the display methods in the various technical solutions provided by the first aspect and the optional implementations described above.
  • the computing device is one of AR-HUD and HUD.
  • the computing device is a vehicle.
  • The computing device is one of a car machine and an on-board computer.
  • A fourth aspect of the present application provides an electronic device, including: a processor and an interface circuit, wherein the processor accesses a memory through the interface circuit, the memory stores program instructions, and when the program instructions are executed by the processor, the processor is caused to execute the display methods in the various technical solutions provided by the first aspect and the optional implementations described above.
  • the electronic device is one of AR-HUD and HUD.
  • the electronic device is a vehicle.
  • the electronic device is one of a car machine and a car computer.
  • A fifth aspect of the present application provides a display system, including: a vehicle-machine device, and, coupled with the vehicle-machine device, either the computing device in the various technical solutions provided by the third aspect and the optional implementations described above, or the electronic device in the various technical solutions provided by the fourth aspect and the optional implementations described above.
  • the display system is a vehicle.
  • the sixth aspect of the present application provides a computer-readable storage medium, on which program instructions are stored.
  • When the program instructions are executed by a computer, the computer executes the display methods in the various technical solutions provided by the first aspect and the optional implementations described above.
  • the seventh aspect of the present application provides a computer program product, which includes program instructions.
  • When the program instructions are executed by a computer, the computer executes the display methods in the various technical solutions provided by the first aspect and the optional implementations described above.
  • The display method, apparatus, device, and vehicle provided by the present application detect and collect real-time infrared images of the target area outside the vehicle through infrared imaging, identify the target object in the infrared image, and determine the position of the target object in the infrared image.
  • This position is used to obtain the spatial position of the target object, and the display position of the annotation information is determined according to the spatial position of the target object and the position of the user's eyes, so that the generated annotation information is displayed at the display position.
  • the annotation information seen by human eyes can be fused with the position of the target object, so as to remind the user to pay attention to the target object outside the vehicle.
  • The application can also determine the size of the annotation information according to the user's eye position, the display position of the annotation information, and the size of the target object, so that the size of the annotation information gradually becomes larger as the target object approaches, thereby achieving a better display effect.
  • With the present application, target objects around the vehicle can be detected in a timely manner during driving, especially at night or when the illumination is poor, and annotation information can be displayed, thereby improving driving safety.
  • FIG. 1 is a structural diagram of an application scenario of a display method provided by an embodiment of the present application
  • FIG. 2 is a structural diagram of a vehicle provided in an embodiment of the present application.
  • FIG. 3A is a schematic side view of a vehicle cockpit provided by an embodiment of the present application.
  • Fig. 3B is a front schematic diagram of a vehicle cockpit provided by an embodiment of the present application.
  • FIG. 4 is a flow chart of a display method provided by an embodiment of the present application.
  • FIG. 5 is a flow chart for determining the display position of annotation information provided by the embodiment of the present application.
  • FIG. 6 is a position distribution diagram of the front view angles of vehicles and pedestrians according to the embodiment of the present application.
  • FIG. 7 is a position distribution diagram of side view angles of vehicles and pedestrians according to the embodiment of the present application.
  • Fig. 8 is a position distribution diagram of the overlooking angles of vehicles and pedestrians according to the embodiment of the present application.
  • FIG. 9 is a structural diagram of a display device provided by an embodiment of the present application.
  • FIG. 10 is an architecture diagram of a computing device provided by an embodiment of the present application.
  • FIG. 11 is a structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 12 is a structural diagram of a display system provided by an embodiment of the present application.
  • Although the low beam or high beam of the vehicle can be turned on to illuminate the area in front of the vehicle, the lighting range of the low beam is limited.
  • Although the high beam can somewhat extend the lighting range in front of the vehicle, it easily interferes with pedestrians or other vehicles on the road.
  • Moreover, the user of the own vehicle is also easily disturbed by the high beams of oncoming vehicles; blind spots easily occur when vehicles meet, and there is a risk of accidents.
  • In view of this, the embodiments of the present application provide a display method, apparatus, device, and vehicle, which can detect target objects around the vehicle through infrared imaging during driving and display label information at the position corresponding to the target object.
  • In this way, the annotation information seen by the user is fused with the position of the target object, which reminds the user and improves the safety of driving at night.
  • the user is usually a driver.
  • The user can also be a front-seat passenger or a rear-seat passenger, etc.
  • the application is described in detail below.
  • FIG. 1 is an architecture diagram of an application scenario of a display method provided by an embodiment of the present application.
  • The application scenario of this embodiment specifically involves a vehicle 100, which may be a family car or a truck, or a special vehicle such as an ambulance, fire engine, police car, or engineering emergency vehicle.
  • The vehicle 100 may be provided with a supplementary light device 110, a collection device 120, a processing device 130, and a sending device 140.
  • the above-mentioned supplementary light device 110, collection device 120, processing device 130, and sending device 140 may be installed on the vehicle, and may be installed outside or inside the vehicle.
  • the specific architecture of the vehicle 100 involved in this application scenario will be described in detail below with reference to FIGS. 3A-3B .
  • The supplementary light device 110 can be an infrared supplementary light, an infrared emitter, or other equipment (or a combination of devices) with an infrared emission function. It can be arranged at the front of the vehicle 100, for example near the headlights, for easy wiring; it can also be arranged on the top of the vehicle or on the side of the cockpit rearview mirror facing the outside of the vehicle. It is mainly used to provide infrared supplementary light to the target area around the vehicle when driving at night or in poor lighting, and the supplementary light range can cover the maximum field of view of the acquisition device 120.
  • the target area may be the front, side or rear of the vehicle, and the infrared supplementary light is applied to the target area, so that the collection device 120 can obtain a clearer infrared image when detecting and collecting.
  • the target area may contain target objects to be detected and collected, and the target objects may be other vehicles, pedestrians, animals, or other obstacles.
  • A high-power infrared supplementary light (for example, 30 watts) may be used.
  • Because infrared light is invisible, a high-power infrared supplementary light will not disturb pedestrians or other vehicles on the road.
  • The infrared supplementary light is only an example of this embodiment; other devices capable of emitting infrared light can also be selected. This embodiment does not specifically limit the type, position, or quantity of the supplementary light device 110.
  • the acquisition device 120 may include an out-of-vehicle acquisition device and an in-vehicle acquisition device.
  • The acquisition device outside the vehicle can specifically be an infrared camera, an on-board radar, or other equipment (or a combination of equipment) with infrared image acquisition or infrared scanning functions. It can be arranged on the top of the vehicle 100, at the front of the vehicle, or on the side of the cockpit rearview mirror facing the outside of the vehicle, and can be installed inside or outside the vehicle. It is mainly used to detect and collect infrared image information of the target area that receives infrared supplementary light outside the vehicle.
  • the target area may contain target objects to be detected and collected, and the target objects may be other vehicles, pedestrians, animals or other obstacles.
  • the infrared image information may be a single infrared image, or one or more frames of infrared images in the collected video stream.
  • the in-vehicle acquisition device can specifically use equipment such as a vehicle-mounted camera, a human eye detector, etc.
  • The in-vehicle acquisition device can be positioned according to requirements; for example, it can be installed on the A-pillar or B-pillar of the vehicle cockpit, or on the side of the cockpit rearview mirror facing the user; it can also be set on the steering wheel, in the area near the center console, or above the display screen behind a seat. It is mainly used to detect and collect the eye position information of the user in the vehicle cockpit. There may be one or more collection devices in the vehicle, and the application does not limit their location and quantity.
  • The processing device 130 can be an electronic device, specifically a processor of a vehicle-mounted processing device such as a car machine or a vehicle-mounted computer, or a conventional processor such as a central processing unit (Central Processing Unit, CPU) or a microcontroller (Micro Control Unit, MCU).
  • The processing device can also be a chip or processor in terminal hardware such as a mobile phone or tablet.
  • The processing device 130 can be preset with an image recognition model, or can acquire a preset image recognition model from other devices in the vehicle. It can identify the target object in the infrared image according to the received infrared image information, determine the position of the target object in the infrared image, and generate annotation information corresponding to the target object.
  • The annotation information can be a prompt box, a highlighted sign, an AR image, or the like, and can also be text or a guide line. The processing device can also determine the spatial position of the target object according to the position of the target object in the infrared image, determine the user's eye position according to the eye position information obtained by the in-vehicle acquisition device, determine the display position of the label information of the target object according to the spatial position of the target object and the user's eye position, and output the determined label information and its display position to the sending device 140.
  • the sending device 140 can be HUD, AR-HUD or other devices with display functions, and can be installed above or inside the center console of the vehicle cockpit, and is mainly used to display the marked information in the display area.
  • The display area of the sending device 140 can be the front windshield of the vehicle, or an independent transparent screen, which reflects the light carrying the marked information emitted by the sending device 140 into the user's eyes.
  • When the user looks out of the car through the front windshield or the transparent screen, the marked information appears fused with the position of the target object outside the vehicle, which reminds the user of the type or position of the target object outside the vehicle, improves the display effect of the marked information, and improves driving safety.
  • The supplementary light device 110, the acquisition device 120, the processing device 130, and the sending device 140 can exchange data or instructions through wired communication (such as an interface circuit) or wireless communication (such as Bluetooth or Wi-Fi). For example, the supplementary light device 110 may receive a control command from the processing device 130 through Bluetooth communication and turn on supplementary light for the target area outside the vehicle.
  • the collecting device 120 may transmit the infrared image information to the processing device 130 through Bluetooth communication.
  • After the acquisition device 120 collects the user's eye position information, it may transmit the user's eye position information to the processing device 130 through Bluetooth communication.
  • The processing device 130 determines the target object in the infrared image and its spatial position according to the infrared image information, generates labeling information, and calculates the display position of the labeling information according to the spatial position of the target object and the user's eye position information.
  • The labeling information and its display position are then output to the sending device 140, and the sending device 140 displays the labeling information at that display position in the display area.
  • In this way, the vehicle 100 involved in this embodiment can collect infrared image information of the target area outside the vehicle through infrared imaging to determine the target object in the target area, and can also collect the user's eye position information through the in-vehicle acquisition device.
  • Based on the infrared image information and the user's eye position information, the label information and its display position are determined, and the sending device displays the label information at the display position, so that the label information seen by the user is fused with the position of the target object outside the vehicle, thereby reminding the user and improving the safety of driving at night.
  • Fig. 4 shows a flow chart of a display method provided by an embodiment of the present application.
  • The display method can be executed by a display device or by some components in the display device, such as an AR-HUD, a HUD, a car machine, or a processor, where the processor may be a processor of the display device, or a processor of a vehicle-mounted processing device such as a car machine or a vehicle-mounted computer.
  • With this method, the infrared image information of the target area outside the vehicle can be collected by means of infrared imaging, and the label information can be displayed at the corresponding position, so that the user can still determine the target object and its position outside the vehicle from the displayed label information when the lighting conditions outside the vehicle are poor.
  • the display method includes:
  • S410 Perform infrared supplementary light on the target area outside the vehicle
  • A supplementary light device on the vehicle can be used, for example an infrared supplementary light arranged on the top of the vehicle, at the front of the vehicle, or on the side of the cockpit rearview mirror facing the outside of the vehicle; the processor can send an instruction to turn on the supplementary light to the infrared supplementary light through the interface circuit, so as to turn on the infrared supplementary light.
  • the infrared supplementary light is used to provide infrared supplementary light to the target area outside the vehicle, and to detect the target area outside the vehicle in real time through the acquisition device, and to obtain infrared image information in real time.
  • the acquisition device may be an infrared camera, and the infrared image information may include information such as resolution, size, dimension, and color.
  • S420 Acquire information about the infrared image of the target area
  • the processor can send an image acquisition instruction to the infrared camera through the interface circuit, so as to control the infrared camera to acquire the infrared image of the target area outside the vehicle.
  • the target object can be a target object that can trigger an early warning while the vehicle is driving, specifically pedestrians, vehicles, animals, or other obstacles.
  • the processor can quickly recognize the features of the target object in the infrared image based on the recognition model, thereby determining the target object in the infrared image and the position of the target object in the infrared image.
  • the recognition model can be realized through a neural network model or a deep learning model. Specifically, different recognition models can be used to recognize infrared images based on different forms of target objects.
  • For example, when the target object is a pedestrian, a portrait recognition model can be used to recognize the infrared image to determine the pedestrian and the position of the pedestrian in the infrared image.
  • Because pedestrian heights differ, in this embodiment, for the convenience of subsequent calculation, the position of the pedestrian's foot can be used as the position to be determined.
  • That is, the processor can determine the target object in the infrared image and the position of the target object in the infrared image according to the information of the acquired infrared image. For example, when the target object is a pedestrian, because pedestrian heights differ, in this embodiment, for the convenience of subsequent calculation, the position of the pedestrian's foot in the infrared image may be used as the position of the target object in the infrared image.
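The application does not prescribe a particular recognition model. As one hedged illustration only, a generic pretrained detector could stand in for a model trained on infrared data, with the foot point taken as the bottom-center of each detected pedestrian's bounding box:

```python
import numpy as np
import torch
import torchvision

# Illustrative stand-in only: the application's model would be trained on
# infrared data; here a generic COCO-pretrained detector plays that role.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def pedestrian_foot_points(ir_gray: np.ndarray, score_thr: float = 0.6):
    """Return (u, v) pixel positions of detected pedestrians' feet, taken as the
    bottom-center of each 'person' box, for an 8-bit single-channel IR image."""
    rgb = np.repeat(ir_gray[None, :, :], 3, axis=0).astype(np.float32) / 255.0
    with torch.no_grad():
        out = model([torch.from_numpy(rgb)])[0]
    feet = []
    for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
        if label.item() == 1 and score.item() >= score_thr:  # COCO class 1 = person
            x1, y1, x2, y2 = box.tolist()
            feet.append(((x1 + x2) / 2.0, y2))  # bottom-center = foot position
    return feet
```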
  • the processor can also generate label information of the target object according to the type of the target object in the infrared image.
  • the annotation information can be information with a reminder effect generated based on the target object in the infrared image, for example, it can be a prompt box, a highlighted sign or an arrow mark, etc., it can also be a prompt text or a guide line, etc., or it can be AR images with AR effects, etc.
  • The processor can determine the spatial position of the target object based on the position of the target object in the acquired infrared image and on the internal and external parameters of the infrared camera; then, based on the spatial position of the target object and the spatial position of the user's eyes, the display position of the label information in the display area can be determined.
  • the spatial position of the target object and the spatial position of the user's eyes may specifically be their respective spatial positions in the vehicle coordinate system.
  • the display area may be the display area of the AR-HUD, and the display area of the AR-HUD may be located on the front windshield of the vehicle. Then in this embodiment, determining the display position of the annotation information in the display area can be specifically implemented in the following manner:
  • S431 Determine the spatial position of the target object based on the position of the target object in the infrared image
  • As shown in Figure 6, which is a position distribution diagram of the front view angle of the vehicle and a pedestrian in this embodiment, Figure 6 can be regarded as an equivalent schematic diagram of the infrared image acquired within the field of view of the infrared camera. The target object in this figure is the pedestrian in front of the vehicle, the position of the pedestrian's foot is used as the position of the pedestrian in the infrared image, and the horizon line corresponds to the farthest road surface captured by the infrared camera.
  • In the infrared image, the pixel distance from the pedestrian to the horizon line is a, the pixel distance from the pedestrian to the front of the vehicle is b, and the pixel distances from the pedestrian to the two sides of the infrared image are c and d, respectively.
  • Because the parameters of the infrared camera are determined at the time of installation, the horizon line, the front of the vehicle, and the positions of the two sides in the infrared image collected by the infrared camera are all fixed. Therefore, based on the two-dimensional position of the pedestrian in the infrared image, the distance of the pedestrian relative to the infrared camera can be calculated.
  • As shown in FIG. 7, which is the position distribution diagram of the side view angle of the vehicle and a pedestrian in this embodiment, according to the installation position of the infrared camera, it can be determined that the installation height of the infrared camera above the ground is H, and the angle between the nearest road surface visible to the infrared camera and the farthest road surface can be determined as θ.
  • The unit of the angle θ can be radians. Because the installation position of the infrared camera is fixed at a certain place on the vehicle, the height H and the angle θ can be treated as known parameters. Then, based on trigonometric functions, the horizontal distance L between the pedestrian and the infrared camera can be calculated.
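As one hedged reconstruction (not the formula stated in the application), if the image rows between the nearest visible road surface and the horizon line are assumed to map roughly linearly onto the angle θ, the depression angle of the ray to the pedestrian's foot is about θ·a/(a+b), and the distance follows from the camera height H:

```python
import math

def ground_distance(H: float, theta: float, a: float, b: float) -> float:
    """Assumed reconstruction, not the formula stated in the application.
    H: camera installation height above the ground.
    theta: angle (radians) between the nearest and farthest visible road surfaces.
    a: pixel distance from the pedestrian's foot to the horizon line.
    b: pixel distance from the pedestrian's foot to the image bottom (vehicle front).
    A linear pixel-to-angle mapping is assumed."""
    depression = theta * a / (a + b)   # depression of the foot ray below the horizon
    return H / math.tan(depression)    # horizontal distance L to the camera
```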
  • Based on the horizontal distance L and the spatial position of the infrared camera in the vehicle coordinate system, the spatial position of the pedestrian in the vehicle coordinate system, that is, the spatial position of the target object, can be determined.
  • the user's eyes are detected by the above-mentioned in-vehicle acquisition device, which may be a camera or an eyeball detector. According to the obtained position information of the user's eyes, and the conversion relationship between the installation position of the in-vehicle acquisition device and the vehicle coordinate system, the spatial position of the user's eyes in the vehicle coordinate system can be obtained.
  • S433 Based on the spatial position of the user's eyes and the spatial position of the target object, determine the display position of the annotation information
  • When displaying the marked information, specifically, the marked information may be sent to the front windshield of the vehicle for display, so that the user can observe it while looking ahead.
  • In one case, the labeling information can be displayed at a fixed position on the front windshield, from which the user can determine whether there are target objects such as pedestrians, other vehicles, or animals outside the vehicle.
  • In another case, the label information can also be displayed at different positions on the front windshield.
  • the connection line between the user's eyes and the pedestrian can be determined based on the spatial position of the user's eyes and the spatial position of the pedestrian.
  • The intersection point between this connection line and the front windshield can be determined as the display position of the label information on the front windshield, where the spatial position of the intersection point can be obtained by coordinate calculation.
  • For example, the vehicle coordinate system can be used as the reference coordinate system, and then, based on the spatial coordinates of the user's eyes, the spatial coordinates of the pedestrian, and the spatial coordinates of the vehicle's front windshield, the spatial coordinates of the intersection point on the front windshield can be determined.
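A minimal sketch of that coordinate calculation, under the assumption that the windshield can be approximated locally as a plane given by a point and a normal in the vehicle coordinate system:

```python
import numpy as np

def sight_windshield_intersection(eye, pedestrian, plane_point, plane_normal):
    """Intersect the eye-to-pedestrian line of sight with the windshield plane.
    All inputs are 3D points/vectors in the vehicle coordinate system; treating
    the windshield as a flat plane is an assumption of this sketch."""
    eye = np.asarray(eye, dtype=float)
    direction = np.asarray(pedestrian, dtype=float) - eye   # line-of-sight direction
    n = np.asarray(plane_normal, dtype=float)
    denom = n.dot(direction)
    if abs(denom) < 1e-9:
        return None                                          # sight line parallel to windshield
    t = n.dot(np.asarray(plane_point, dtype=float) - eye) / denom
    return eye + t * direction                               # display position on the windshield
```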
  • As shown in FIG. 8, which is a position distribution diagram of the top view angle of the vehicle and a pedestrian in this embodiment, the horizontal distance between the user's eyes and the infrared camera can be obtained and is denoted as e.
  • The horizontal field of view of the infrared camera is γ, the angle between the line connecting the pedestrian and the infrared camera and the front windshield is α, and the angle between the line connecting the pedestrian and the user's eyes and the front windshield is β.
  • The units of the angles α and β can also be radians.
  • Based on these quantities, the spatial position of the intersection between the line connecting the user's eyes and the pedestrian and the front windshield of the vehicle can be calculated, that is, the display position of the marked information on the front windshield can be obtained.
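Under one plausible reading of the top-view geometry (the following conventions are assumptions of this sketch: camera at the origin, x lateral, z forward, the windshield treated as the line z = z_windshield, and the eye position expressed in the same horizontal plane), the two angles and the lateral display position follow from simple trigonometry:

```python
import math

def top_view_display_position(ped_xz, eye_xz, z_windshield):
    """Assumed conventions: camera at the origin, x lateral, z forward, the
    windshield approximated by the line z = z_windshield; ped_xz and eye_xz are
    (x, z) positions of the pedestrian and the user's eyes in this plane."""
    x_p, z_p = ped_xz
    e_x, e_z = eye_xz
    alpha = math.atan2(z_p, x_p)               # first angle: pedestrian-camera line vs. windshield
    beta = math.atan2(z_p - e_z, x_p - e_x)    # second angle: pedestrian-eye line vs. windshield
    # Lateral coordinate where the eye-to-pedestrian sight line crosses the windshield.
    x_display = e_x + (z_windshield - e_z) / math.tan(beta)
    return alpha, beta, x_display
```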
  • the calculated display position is specifically the spatial position in the vehicle coordinate system.
  • The spatial position coordinates in the vehicle coordinate system can be further converted into projected coordinates in the AR-HUD coordinate system, and then the label information and the projected coordinates are sent to the AR-HUD.
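One common way to express such a conversion (assumed here; the application does not specify the transform) is a rigid transform from the vehicle frame into the AR-HUD frame, obtained from the HUD's calibration, after which the HUD applies its own device-specific projection:

```python
import numpy as np

def vehicle_to_hud(point_vehicle, R_hud_from_vehicle, t_hud_from_vehicle):
    """Map a 3D point from the vehicle coordinate system into the AR-HUD
    coordinate system with an assumed rigid transform (rotation R, translation t
    taken from the HUD's calibration); the HUD then applies its own,
    device-specific projection to place the annotation."""
    p = np.asarray(point_vehicle, dtype=float)
    R = np.asarray(R_hud_from_vehicle, dtype=float)
    t = np.asarray(t_hud_from_vehicle, dtype=float)
    return R @ p + t
```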
  • the processor can send the generated annotation information and its display position to the AR-HUD through the interface circuit, and the AR-HUD projects the annotation information to the calculated display position on the front windshield for display.
  • the annotation information seen by the user is kept in the same line of sight as the pedestrian, so that the user can quickly perceive the target object and its position during driving.
  • The display size of the annotation information can be a fixed size; for example, it can be a fixed-size prompt box or AR image generated according to the target object, or fixed-size text and guide lines generated according to the target object, etc. By sending the fixed-size annotation information to the display position for display, the user can be reminded of the existence of the target object outside the vehicle.
  • the display size of the annotation information may also be related to the display position of the annotation information, the spatial position of the user's eyes, and the size of the target object. According to the spatial position of the user's eyes, the display position of the annotation information, and the spatial position of the target object, a model similar to a frustum can be formed.
  • Based on this frustum-like model, the display size of the annotation information at the display position can be determined, so that the annotation information seen by the user matches the target object, and the display size of the annotation information changes accordingly with the relative distance between the vehicle and the target object.
  • For example, the annotation information can be specifically embodied in the form of a prompt box, and the display size of the prompt box can be calculated from the spatial position of the user's eyes, the spatial position of the pedestrian, and the display position of the prompt box, so that the prompt box the user sees matches the pedestrian. Based on this, as the vehicle travels and the distance between the pedestrian and the vehicle becomes shorter, the size of the prompt box can also increase accordingly, so as to remind the user to pay attention to the existence of the pedestrian and to the change of the distance to the pedestrian.
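One hedged way to realize this frustum-like scaling is by similar triangles along the line of sight: the on-windshield size equals the target's physical size scaled by the ratio of the eye-to-display-point distance to the eye-to-target distance (the physical target size is assumed to be known or estimated):

```python
import numpy as np

def prompt_box_size(eye, target, display_point, target_width, target_height):
    """Similar-triangles sketch: scale the target's (assumed known or estimated)
    physical extent by the ratio of the eye-to-display-point distance to the
    eye-to-target distance, so the box matches the target and grows as it nears."""
    eye = np.asarray(eye, dtype=float)
    d_display = np.linalg.norm(np.asarray(display_point, dtype=float) - eye)
    d_target = np.linalg.norm(np.asarray(target, dtype=float) - eye)
    scale = d_display / d_target
    return target_width * scale, target_height * scale
```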
  • the display method provided by the embodiment of the present application uses infrared imaging to detect the environment outside the vehicle, and determines the display position of the marked information based on the spatial position of the target object and the spatial position of the user's eyes.
  • On the one hand, infrared supplementary light and infrared imaging are used to improve the imaging capability under poor lighting conditions; the effect of infrared imaging is not affected by visible light, and the infrared supplementary light does not affect oncoming vehicles or pedestrians, avoiding traffic hazards.
  • On the other hand, the calculation of the display position of the annotation information involves a small amount of computation, which reduces the occupation of vehicle processing resources. The label information is displayed at the calculated display position, so that when the lighting conditions outside the vehicle are poor, the user can still determine the target object and its position outside the vehicle from the displayed label information, improving driving safety.
  • FIG. 9 is a structural diagram of a display device provided by an embodiment of the present application, and the display device may be used to implement various optional embodiments of the above-mentioned display method. As shown in FIG. 9 , the display device has a supplementary light module 910 , an acquisition module 920 , a processing module 930 , and a sending module 940 .
  • the supplementary light module 910 is configured to execute step S410 in the above display method and examples therein.
  • the acquiring module 920 is configured to execute step S420 and examples thereof in the above display method.
  • the processing module 930 is configured to execute any one of steps S430, S431-S433 and any optional example thereof in the above display method.
  • the sending module 940 is configured to execute step S440 and examples thereof in the above display method.
  • the display device acquires the infrared image in front of the vehicle through infrared imaging, determines the display position of the marked information according to the position of the target object in the infrared image and the position of the user's eyes inside the vehicle, and uses the display device to display the The annotation information is displayed at the corresponding display position.
  • the marked information seen by the user can be matched with the target object, which facilitates the user to quickly perceive the target object and its position outside the vehicle, and improves driving safety.
  • The display device in the embodiment of the present application may be implemented by software, for example by computer programs or instructions having the above-mentioned functions; the corresponding computer programs or instructions may be stored in the internal memory of the terminal, and the processor reads and executes them from the memory to realize the above functions.
  • the display device in the embodiment of the present application may also be realized by hardware, for example, the supplementary light module 910 may be realized by a supplementary light device on a vehicle, such as an infrared supplementary light lamp or other equipment capable of realizing infrared supplementary light function.
  • the acquisition module 920 can be realized by an acquisition device outside the vehicle or an acquisition device inside the vehicle.
  • the acquisition device outside the vehicle can be an infrared camera, infrared radar, etc.
  • the acquisition device inside the vehicle can be a vehicle-mounted camera or an eye tracker, etc.
  • the acquisition module 920 may also be implemented by an interface circuit between the processor and the infrared camera or the vehicle camera on the vehicle.
  • The processing module 930 may be realized by a processing device on the vehicle, such as a processor of a vehicle-mounted processing device like a car machine or a vehicle-mounted computer, or by a processor of a HUD or AR-HUD, or the processing module 930 may also be realized by a terminal such as a mobile phone or tablet.
  • The sending module 940 can be realized by the display device on the vehicle, for example by a HUD or AR-HUD, or by some components of a HUD or AR-HUD, or by the interface circuit between the processor and the HUD or AR-HUD.
  • the display device in the embodiment of the present application may also be implemented by a combination of a processor and a software module.
  • the embodiment of the present application also provides a vehicle with the above-mentioned display device.
  • the vehicle may be a family car or a truck, or may be a special vehicle such as an ambulance, a fire engine, a police car or an engineering emergency vehicle.
  • the above-mentioned supplementary light device, acquisition device, processing device, sending device, etc. are also installed on the vehicle.
  • The above-mentioned modules and devices can be arranged in the vehicle system in pre-installed or after-market form, and the data interaction between the modules can rely on the bus or interface circuits of the vehicle; with the development of wireless technology, each module can also use wireless communication for data interaction, eliminating the inconvenience caused by wiring.
  • the display device of this embodiment can also be combined with the AR-HUD and installed on the vehicle in the form of a vehicle-mounted device, so as to achieve a better AR early warning effect.
  • FIG. 10 is a structural diagram of a computing device 1000 provided by an embodiment of the present application.
  • the computing device can be used as a display device to execute the optional embodiments of the above display method, and the computing device can be a terminal, or a chip or a chip system inside the terminal.
  • the computing device 1000 includes: a processor 1010 and a memory 1020 .
  • computing device 1000 shown in FIG. 10 may further include a communication interface 1030 for communicating with other devices, and may specifically include one or more transceiver circuits or interface circuits.
  • the processor 1010 may be connected to the memory 1020 .
  • The memory 1020 can be used to store program codes and data. The memory 1020 may be an internal storage module of the processor 1010, an external storage module independent of the processor 1010, or may include both an internal storage module of the processor 1010 and an external storage module independent of the processor 1010.
  • the computing device 1000 may further include a bus.
  • the memory 1020 and the communication interface 1030 may be connected to the processor 1010 through a bus.
  • The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like.
  • the bus can be divided into address bus, data bus, control bus and so on. For ease of representation, only one line is used in FIG. 10 , but it does not mean that there is only one bus or one type of bus.
  • the processor 1010 may use a central processing unit (central processing unit, CPU).
  • the processor can also be other general-purpose processors, digital signal processors (digital signal processors, DSPs), application specific integrated circuits (application specific integrated circuits, ASICs), off-the-shelf programmable gate arrays (field programmable gate arrays, FPGAs) or other Programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • the processor 1010 adopts one or more integrated circuits for executing related programs, so as to implement the technical solutions provided by the embodiments of the present application.
  • the memory 1020 may include read-only memory and random-access memory, and provides instructions and data to the processor 1010 .
  • a portion of processor 1010 may also include non-volatile random access memory.
  • processor 1010 may also store device type information.
  • The processor 1010 executes the computer-executable instructions in the memory 1020 to perform any operation steps of the above-mentioned display method and any optional embodiment thereof; for example, the processor 1010 can execute the computer-executable instructions in the memory 1020 to execute the display method in the embodiment corresponding to FIG. 4.
  • For example, by executing the supplementary light instruction in the memory 1020, the processor 1010 controls the supplementary light device of the vehicle to perform infrared supplementary light on the target area outside the vehicle.
  • the processor 1010 controls the acquisition device of the vehicle to acquire the infrared image information of the target area by executing the acquisition instruction in the memory 1020 .
  • the processor 1010 determines the position of the target object in the infrared image by executing the processing instructions in the memory 1020, and determines the display position of the label information according to the position of the target object in the infrared image.
  • the processor 1010 controls the sending device of the vehicle to display the marking information of the target object in the display area by executing the sending instruction in the memory 1020 .
  • The computing device 1000 may correspond to the corresponding body executing the methods according to the embodiments of the present application, and the above-mentioned and other operations and/or functions of the modules in the computing device 1000 are intended to realize the corresponding processes of the methods in the embodiments; for the sake of brevity, they are not repeated here.
  • Fig. 11 is a structure diagram of an electronic device 1100 provided by an embodiment of the present application.
  • the electronic device 1100 can be used as a display device to execute various optional embodiments of the above-mentioned display method.
  • The electronic device can be a terminal, or a chip or chip system inside a terminal.
  • The electronic device 1100 includes: a processor 1110 and an interface circuit 1120, wherein the processor 1110 accesses a memory through the interface circuit 1120, the memory stores program instructions, and when the program instructions are executed by the processor, the processor executes any operation steps of the above-mentioned display method and any optional embodiments thereof.
  • The processor 1110 can acquire computer-executable instructions in the memory through the interface circuit 1120 to execute the display method in the embodiment corresponding to FIG. 4. For example, the processor 1110 acquires the supplementary light instruction in the memory through the interface circuit 1120 and controls the supplementary light device of the vehicle to perform infrared supplementary light on the target area outside the vehicle.
  • the processor 1110 acquires the acquisition instruction in the memory through the interface circuit 1120, and controls the acquisition device of the vehicle to acquire the infrared image information of the target area.
  • the processor 1110 acquires processing instructions in the memory through the interface circuit 1120, determines the position of the target object in the infrared image, and determines the display position of the label information according to the position of the target object in the infrared image.
  • the processor 1110 acquires the sending instruction in the memory through the interface circuit 1120, and controls the sending device of the vehicle to display the labeling information of the target object in the display area.
  • the electronic device may further include a communication interface, a bus, etc.
  • Fig. 12 is a structure diagram of a display system 1200 provided by the embodiment of the present application.
  • the display system 1200 can be used as a display device to execute various optional embodiments of the above display methods.
  • The display system can be a terminal, or a chip or chip system inside a terminal.
  • the display system 1200 includes: an in-vehicle device 1210 , and an electronic device 1220 coupled with the in-vehicle device 1210 .
  • The electronic device 1220 may be the processing device 130 shown in FIG. 1, the processing module 930 shown in FIG. 9, the computing device 1000 shown in FIG. 10, or the electronic device 1100 shown in FIG. 11.
  • the in-vehicle device 1210 may be the supplementary light device 110 on the vehicle shown in FIG. 1 or the supplementary light module 910 shown in FIG. 9 , such as a vehicle headlight, an infrared supplementary light, and the like.
  • the vehicle-machine device 1210 may also be the collection device 120 on the vehicle shown in FIG. 1 or the acquisition module 920 shown in FIG. 9 , such as a vehicle-mounted camera, an infrared camera, or a radar.
  • the vehicle-machine device 1210 may also be the sending device 140 on the vehicle shown in FIG. 1 or the sending module 940 shown in FIG. 9 , such as AR-HUD, HUD, etc., or some devices in the AR-HUD or HUD.
  • The display system 1200 can execute the display method in the embodiment corresponding to FIG. 4; for example, infrared images are collected for the target area outside the vehicle.
  • the electronic device 1220 can determine the position of the target object in the infrared image according to the acquired infrared image containing the target object, and determine the display position of the annotation information according to the position of the target object in the infrared image.
  • the in-vehicle device 1210 may send the marked information to a display location for display.
  • the in-vehicle device 1210 and the electronic device 1220 can communicate data or instructions through a wired method (such as an interface circuit), and can also perform data or instruction communications through a wireless method (such as bluetooth, wifi).
  • the disclosed systems, devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • The division of the units is only a logical functional division; in actual implementation, there may be other division methods.
  • For example, multiple units or components can be combined or integrated into another system, or some features may be ignored or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing device, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • if the functions described above are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the embodiments of the present application.
  • the aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disc.
  • the embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program is used to perform a display method, and the method includes at least one of the solutions described in the above embodiments.
  • the computer storage medium in the embodiments of the present application may use any combination of one or more computer-readable media.
  • the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
  • a computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection with one or more leads, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer readable signal medium may include a data signal in baseband or propagated as part of a carrier wave with computer readable program code embodied therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for performing the operations of the present application may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).

Abstract

The present application relates to the technical field of assisted driving, and provides a display method and apparatus, a device, and a vehicle. The method comprises: performing infrared light supplementation on a target region outside a vehicle; obtaining information of an infrared image of the target region in an infrared imaging mode; determining the position of a target object in the infrared image according to the obtained infrared image; and displaying annotation information of the target object in a display region according to the position of the target object in the infrared image. According to the present application, the target object outside the vehicle can be detected in an infrared imaging mode in a driving process, and the annotation information of the target object is displayed, so that a user can still determine the target object outside the vehicle and the position of the target object according to the displayed annotation information when the illumination condition outside the vehicle is poor, and the safety of driving is improved.

Description

A display method and apparatus, device, and vehicle
Technical Field
The present application relates to the technical field of driving assistance, and in particular to a display method and apparatus, a device, and a vehicle.
Background Art
When driving at night in a poorly lit environment, a driver may not be able to see pedestrians, vehicles, or other obstacles on the road clearly, which increases the likelihood of traffic accidents. In addition, when meeting an oncoming vehicle at night, the dazzling lights of the oncoming vehicle often cause momentary blindness, so that the driver cannot clearly see target objects such as the oncoming vehicle or pedestrians on the road, which also increases the likelihood of traffic accidents.
With the rapid development of head-up displays (Head Up Display, HUD) and augmented reality head-up displays (Augmented Reality-Head Up Display, AR-HUD) in recent years, existing vehicle manufacturers have provided solutions that mark and prompt pedestrians or obstacles possibly present ahead on the front windshield of the vehicle. However, such solutions are still limited by the lighting conditions of night driving and cannot solve the problem that pedestrians or obstacles on the road cannot be seen clearly when the light is poor or when two vehicles meet, so their marking and prompting effects are also easily affected.
Summary of the Invention
In view of this, the present application provides a display method and apparatus, a device, and a vehicle, which can detect a target object outside a vehicle by means of infrared imaging during driving and display annotation information of the target object, thereby improving driving safety.
It should be understood that, in the solutions provided in the present application, the display method may be executed by a display apparatus or by some components in the display apparatus, where the display apparatus may be an AR-HUD, a HUD, or another apparatus with a display function. The components in the display apparatus may be a processing chip, a processing circuit, a processor, or the like.
A first aspect of the present application provides a display method, including: performing infrared supplementary lighting on a target area outside a vehicle; acquiring information of an infrared image of the target area, where the target area includes a target object; determining the position of the target object in the infrared image; and displaying annotation information of the target object in a display area according to the position of the target object in the infrared image.
In this method, infrared supplementary lighting is performed on the target area outside the vehicle, and infrared imaging is used to acquire information of an infrared image of that area, so as to identify a target object appearing in front of the vehicle; the display position of the annotation information is then determined from the position of the target object in the acquired infrared image, so that the generated annotation information can be displayed at that display position. In this way, a target object in front of the vehicle can be detected in time during driving, especially under poor lighting at night, and the generated annotation information is displayed at the corresponding position, so that the user can determine the target object outside the vehicle and its position from the displayed annotation information, which improves driving safety.
In a possible implementation of the first aspect, determining the position of the target object in the infrared image includes: providing the information of the infrared image to an image recognition model, and determining, by the image recognition model, the position of the target object in the infrared image.
Based on the acquired infrared image of the target area, a trained image recognition model can be used to identify the target object in the infrared image and to determine its position in the image. The image recognition model may be obtained through neural-network or deep-learning training. For the recognition requirements of different types of target objects, different image recognition models may be used to identify the target objects in the infrared image, so as to improve the recognition success rate.
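By way of illustration only (not part of the original disclosure), the following Python sketch shows one way different recognition models could be keyed by target-object type; the model file names and the load_model stub are assumptions.

```python
# Hypothetical sketch: one recognition model per target-object type.
# The model paths and load_model() are placeholders, not a real API.
from typing import Callable, Dict, List, Tuple

Box = Tuple[int, int, int, int]            # (x, y, w, h) bounding box in pixels
Detector = Callable[[object], List[Box]]   # takes an infrared frame, returns boxes

def load_model(path: str) -> Detector:
    """Stub loader; a real system would deserialize a trained network from `path`."""
    return lambda frame: []                 # no detections in this stub

MODELS: Dict[str, Detector] = {
    "pedestrian": load_model("pedestrian_ir.model"),
    "vehicle": load_model("vehicle_ir.model"),
    "animal": load_model("animal_ir.model"),
}

def recognize(frame, target_types: List[str]) -> Dict[str, List[Box]]:
    """Run only the models for the target types the user has asked to be warned about."""
    return {t: MODELS[t](frame) for t in target_types}
```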
In a possible implementation of the first aspect, the display position of the annotation information is related to the spatial position of the target object and the position of the user's eyes.
To give the user a good viewing experience, the display position of the annotation information can be determined according to the position of the user's eyes and the spatial position of the target object. When the annotation information is displayed at the determined display position, the annotation information seen by the user is fused with the position of the target object, so that the user can still determine the target object outside the vehicle and its position from the displayed annotation information even when the lighting conditions outside the vehicle are poor.
In a possible implementation of the first aspect, the display size of the annotation information is related to the display position of the annotation information, the position of the user's eyes, and the size of the target object.
The annotation information may take various forms, and its display size can be determined according to the position of the user's eyes, the display position of the annotation information, and the spatial position of the target object. For example, the annotation information may take the form of a prompt box; from the acquired position of the user's eyes, the spatial position of the target object, and the display position of the prompt box, the display size of the prompt box can be calculated so that the prompt box seen by the user matches the pedestrian. As the vehicle travels and the target object gets closer to the vehicle, the size of the prompt box increases accordingly, so that the user can intuitively perceive the position of the target object.
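As a minimal sketch (assuming a pinhole-style similar-triangles scaling, which the text does not spell out), the display size of a prompt box could be derived as follows; the function and parameter names are illustrative.

```python
def prompt_box_size(target_w_m: float, target_h_m: float,
                    eye_to_glass_m: float, eye_to_target_m: float) -> tuple:
    """Scale the target's real size onto the windshield plane by similar triangles.

    As the target approaches, eye_to_target_m shrinks and the box grows,
    matching the behaviour described above.
    """
    scale = eye_to_glass_m / eye_to_target_m
    return target_w_m * scale, target_h_m * scale

# Example: a 0.5 m wide, 1.7 m tall pedestrian, eyes 0.8 m from the glass, 20 m away
w, h = prompt_box_size(0.5, 1.7, 0.8, 20.0)   # roughly 0.02 m x 0.07 m on the glass
```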
In a possible implementation of the first aspect, the display area is the display area of an augmented reality head-up display, and that the display position of the annotation information is related to the spatial position of the target object and the position of the user's eyes includes: the display position of the annotation information is determined by a first line of sight of the user, where the first line of sight is the line of sight from the position of the user's eyes to the spatial position of the target object.
In a possible implementation of the first aspect, the display area of the augmented reality head-up display is located on the front windshield of the vehicle, and that the display position of the annotation information is determined by the first line of sight of the user includes: the display position of the annotation information is determined by the intersection of the user's first line of sight with the front windshield of the vehicle.
The display area in this method may specifically be the display area of an augmented reality head-up display, which can project the annotation information according to its display position. Specifically, the augmented reality head-up display may use the front windshield of the vehicle as the display area; the user's first line of sight is determined from the position of the user's eyes and the spatial position of the target object, and the intersection of this line of sight with the front windshield can be determined as the display position of the annotation information, so that the annotation information seen by the user lies on the same line of sight as the target object, which improves the display effect of the annotation information.
In a possible implementation of the first aspect, determining the intersection of the first line of sight with the front windshield of the vehicle includes: determining, based on the spatial position of the target object, a first included angle between the line connecting the target object and the image acquisition device and a horizontal direction, where the horizontal direction is parallel to the front windshield of the vehicle; determining, based on the distance between the position of the user's eyes and the image acquisition device and on the first included angle, a second included angle between the line connecting the user's eyes and the target object and the horizontal direction; and determining, based on the distance between the user's eyes and the front windshield of the vehicle and on the second included angle, the display position of the annotation information on the front windshield.
Based on the position of the target object in the infrared image and the horizontal field of view of the image acquisition device, the first included angle between the line connecting the target object and the image acquisition device and the horizontal direction can be calculated. Based on the lateral distance between the user's eyes and the image acquisition device and on the first included angle, the second included angle between the line connecting the user's eyes and the target object and the horizontal direction can be calculated. Based on the second included angle and the distance between the user's eyes and the front windshield of the vehicle, the display position of the annotation information on the front windshield can be calculated. The display position is obtained with simple trigonometric functions and a small amount of computation, so an accurate display position can be obtained quickly.
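A minimal top-view sketch of this two-angle computation is given below; it assumes the camera and the user's eyes sit at roughly the same forward position and that the windshield can be treated locally as a plane, and the variable names are illustrative.

```python
import math

def display_offset(alpha: float, forward_dist: float, e: float, f: float) -> float:
    """Lateral offset of the display position from the eye, on the windshield.

    alpha        : first included angle between the target-to-camera line and the
                   lateral direction parallel to the windshield (radians)
    forward_dist : forward distance from the camera to the target (metres)
    e            : lateral distance between the user's eyes and the camera (metres)
    f            : distance from the user's eyes to the front windshield (metres)
    """
    x_t = forward_dist / math.tan(alpha)       # lateral position of the target
    beta = math.atan2(forward_dist, x_t - e)   # second included angle, seen from the eye
    return f / math.tan(beta)                  # intersection with the windshield plane

# Example: target 20 m ahead at alpha of 85 degrees, eyes 0.4 m beside the camera
# and 0.8 m behind the windshield
offset = display_offset(math.radians(85.0), 20.0, 0.4, 0.8)
```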
In a possible implementation of the first aspect, the spatial position of the target object is related to the position of the target object in the infrared image and to the intrinsic and extrinsic parameters of the image acquisition device that acquires the infrared image.
The spatial position of the target object may specifically be its spatial position in the vehicle coordinate system. Based on the position of the target object in the infrared image and parameters such as the installation height of the image acquisition device and the angle subtended by the visible ground, the distance between the target object and the image acquisition device can be calculated; based on this distance and the spatial position of the image acquisition device in the vehicle coordinate system, the spatial position of the target object in the vehicle coordinate system can be determined.
In a possible implementation of the first aspect, the target object includes one or more of other vehicles, pedestrians, and animals.
The target object in this method may be a moving object such as another vehicle, a pedestrian, or an animal, or a stationary object such as a road sign or a tree. The type of target object can be selected according to the user's needs, so as to meet the user's customization requirements.
In a possible implementation of the first aspect, before determining the position of the target object in the infrared image, the method further includes: performing one or more of cropping, noise reduction, enhancement, smoothing, and sharpening on the infrared image.
Because the range of the infrared supplementary lighting is limited, it may not completely cover the image acquisition range of the image acquisition device. Therefore, before the acquired infrared image is recognized, the infrared image may be cropped, denoised, enhanced, smoothed, or sharpened, which facilitates effective and rapid identification of the target object in the infrared image.
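For illustration, a possible pre-processing chain is sketched below using OpenCV (a library choice assumed here, not specified by the text); the frame is expected to be an 8-bit grayscale infrared image.

```python
import cv2
import numpy as np

def preprocess_ir(frame: np.ndarray, crop: tuple) -> np.ndarray:
    """Crop, denoise, enhance, smooth, and sharpen an 8-bit grayscale IR frame."""
    x, y, w, h = crop
    roi = frame[y:y + h, x:x + w]                 # crop to the area covered by the fill light
    roi = cv2.fastNlMeansDenoising(roi, h=10)     # noise reduction
    roi = cv2.equalizeHist(roi)                   # contrast enhancement
    roi = cv2.GaussianBlur(roi, (3, 3), 0)        # smoothing
    kernel = np.array([[0, -1, 0],
                       [-1, 5, -1],
                       [0, -1, 0]], dtype=np.float32)
    return cv2.filter2D(roi, -1, kernel)          # sharpening
```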
A second aspect of the present application provides a display apparatus, including: a supplementary light module, configured to perform infrared supplementary lighting on a target area outside a vehicle; an acquisition module, configured to acquire information of an infrared image of the target area, where the target area includes a target object; a processing module, configured to determine the position of the target object in the infrared image; and a sending module, configured to display annotation information of the target object in a display area according to the position of the target object in the infrared image.
In a possible implementation of the second aspect, the display position of the annotation information is related to the spatial position of the target object, the position of the user's eyes, and the size of the target object.
In a possible implementation of the second aspect, the display size of the annotation information is related to the display position of the annotation information, the position of the user's eyes, and the size of the target object.
In a possible implementation of the second aspect, the display area is the display area of an augmented reality head-up display, and that the display position of the annotation information is related to the spatial position of the target object and the position of the user's eyes includes: the display position of the annotation information is determined by a first line of sight of the user, where the first line of sight is the line of sight from the position of the user's eyes to the spatial position of the target object.
In a possible implementation of the second aspect, the display area of the augmented reality head-up display is located on the front windshield of the vehicle, and that the display position of the annotation information is determined by the first line of sight of the user includes: the display position of the annotation information is determined by the intersection of the user's first line of sight with the front windshield of the vehicle.
In a possible implementation of the second aspect, the spatial position of the target object is related to the position of the target object in the infrared image and to the intrinsic and extrinsic parameters of the image acquisition device that acquires the infrared image.
In a possible implementation of the second aspect, the target object includes one or more of other vehicles, pedestrians, and animals.
In a possible implementation of the second aspect, before determining the position of the target object in the infrared image, the processing module is further configured to: perform one or more of cropping, noise reduction, enhancement, smoothing, and sharpening on the infrared image.
A third aspect of the present application provides a computing device, including a processor and a memory storing program instructions that, when executed by the processor, cause the processor to perform the display method in any of the technical solutions provided by the first aspect and the foregoing optional implementations.
In a possible implementation, the computing device is one of an AR-HUD and a HUD.
In a possible implementation, the computing device is a vehicle.
In a possible implementation, the computing device is one of a vehicle head unit and an on-board computer.
A fourth aspect of the present application provides an electronic device, including a processor and an interface circuit, where the processor accesses a memory through the interface circuit, the memory stores program instructions, and the program instructions, when executed by the processor, cause the processor to perform the display method in any of the technical solutions provided by the first aspect and the foregoing optional implementations.
In a possible implementation, the electronic device is one of an AR-HUD and a HUD.
In a possible implementation, the electronic device is a vehicle.
In a possible implementation, the electronic device is one of a vehicle head unit and an on-board computer.
A fifth aspect of the present application provides a display system, including an in-vehicle device and, coupled with the in-vehicle device, the computing device in any of the technical solutions provided by the third aspect and the foregoing optional implementations, or the electronic device in any of the technical solutions provided by the fourth aspect and the foregoing optional implementations.
In a possible implementation, the display system is a vehicle.
A sixth aspect of the present application provides a computer-readable storage medium on which program instructions are stored, and the program instructions, when executed by a computer, cause the computer to perform the display method in any of the technical solutions provided by the first aspect and the foregoing optional implementations.
A seventh aspect of the present application provides a computer program product, including program instructions that, when executed by a computer, cause the computer to perform the display method in any of the technical solutions provided by the first aspect and the foregoing optional implementations.
In summary, the display method and apparatus, device, and vehicle provided by the present application detect and collect real-time infrared images of the target area outside the vehicle by means of infrared imaging, identify the target object in the infrared image, and determine the position of the target object in the infrared image, so as to obtain the spatial position of the target object; the display position of the annotation information is then determined according to the spatial position of the target object and the position of the user's eyes, and the generated annotation information is displayed at that display position. In this way, the annotation information seen by the user's eyes can be fused with the position of the target object, reminding the user to pay attention to the target object outside the vehicle. In addition, the present application can also determine the size of the annotation information according to the position of the user's eyes, the display position of the annotation information, and the size of the target object, so that the annotation information gradually becomes larger as the target object approaches, achieving a better display effect. With the present application, target objects around the vehicle can be detected in time during driving, especially under poor lighting at night, and annotation information can be displayed, which improves driving safety.
Description of the Drawings
FIG. 1 is an architecture diagram of an application scenario of a display method provided by an embodiment of the present application;
FIG. 2 is an architecture diagram of a vehicle provided by an embodiment of the present application;
FIG. 3A is a schematic side view of a vehicle cockpit provided by an embodiment of the present application;
FIG. 3B is a schematic front view of a vehicle cockpit provided by an embodiment of the present application;
FIG. 4 is a flowchart of a display method provided by an embodiment of the present application;
FIG. 5 is a flowchart of determining the display position of annotation information provided by an embodiment of the present application;
FIG. 6 is a position distribution diagram of a vehicle and a pedestrian from a front view according to an embodiment of the present application;
FIG. 7 is a position distribution diagram of a vehicle and a pedestrian from a side view according to an embodiment of the present application;
FIG. 8 is a position distribution diagram of a vehicle and a pedestrian from a top view according to an embodiment of the present application;
FIG. 9 is an architecture diagram of a display apparatus provided by an embodiment of the present application;
FIG. 10 is an architecture diagram of a computing device provided by an embodiment of the present application;
FIG. 11 is an architecture diagram of an electronic device provided by an embodiment of the present application;
FIG. 12 is an architecture diagram of a display system provided by an embodiment of the present application.
It should be understood that, in the above structural schematic diagrams, the size and shape of each block are for reference only and should not be construed as an exclusive interpretation of the embodiments of the present invention. The relative positions and containment relationships between the blocks presented in the structural diagrams only schematically represent the structural associations between the blocks, and do not limit the physical connection manner of the embodiments of the present invention.
Detailed Description
The technical solutions provided by the present application are further described below with reference to the accompanying drawings and embodiments. It should be understood that the system structures and service scenarios provided in the embodiments of the present application are mainly intended to illustrate possible implementations of the technical solutions of the present application, and should not be construed as the only limitation on the technical solutions of the present application. A person of ordinary skill in the art will know that, as system structures evolve and new service scenarios emerge, the technical solutions provided in the present application remain applicable to similar technical problems.
It should be understood that the display solutions provided in the embodiments of the present application include a display method and apparatus, a device, and a vehicle. Because the principles by which these technical solutions solve problems are the same or similar, some repeated content may not be described again in the introduction of the following specific embodiments, but these specific embodiments should be regarded as referring to one another and may be combined with one another.
When driving at night in a poorly lit environment, the driver may not be able to clearly see target objects around the vehicle, such as pedestrians, other vehicles, small animals, or other obstacles, which increases the likelihood of traffic accidents. Although the low beam or high beam of the vehicle can be turned on to illuminate the area in front of the vehicle, the illumination range of the low beam is limited; the high beam can extend the illumination range in front of the vehicle, but it easily disturbs pedestrians or other vehicles on the road, and the user of the own vehicle is in turn easily dazzled by the high beam of an oncoming vehicle, so visual blind spots are likely to occur when two vehicles meet, which carries a risk of accidents.
To this end, the embodiments of the present application provide a display method and apparatus, a device, and a vehicle, which can detect target objects around the vehicle by means of infrared imaging during driving and display annotation information at the positions corresponding to the target objects, so that the annotation information seen by the user is fused with the positions of the target objects, thereby reminding the user and improving the safety of driving at night. The user is usually the driver, but may also be a front-seat passenger, a rear-seat passenger, or the like. The present application is described in detail below.
First, the application scenario involved in this embodiment is outlined. FIG. 1 is an architecture diagram of an application scenario of the display method provided by an embodiment of the present application. As shown in FIG. 1, the application scenario includes a supplementary light device 110, an acquisition device 120, a processing device 130, and a sending device 140. As shown in FIG. 2, the application scenario of this embodiment specifically involves a vehicle 100, which may be a family car or a truck, or a special vehicle such as an ambulance, a fire engine, a police car, or an engineering emergency vehicle. The supplementary light device 110, the acquisition device 120, the processing device 130, and the sending device 140 may be installed on the vehicle, either outside or inside the vehicle. The specific architecture of the vehicle 100 involved in this application scenario is described in detail below with reference to FIG. 3A and FIG. 3B.
As shown in FIG. 3A, the supplementary light device 110 may be an infrared fill light, an infrared emitter, or one or more other devices with an infrared emission function. It may be arranged at the front of the vehicle 100, for example at the headlights at the front of the vehicle to simplify wiring, or on the roof of the vehicle or on the side of the cockpit rear-view mirror that faces outward. It is mainly used to perform infrared supplementary lighting on the target area around the vehicle when the vehicle is driven at night with poor illumination, and the supplementary lighting range can cover the maximum field of view of the acquisition device 120. The target area may be in front of, to the side of, or behind the vehicle; performing infrared supplementary lighting on the target area enables the acquisition device 120 to obtain a relatively clear infrared image during detection and acquisition. The target area may contain a target object to be detected and captured, and the target object may be another vehicle, a pedestrian, an animal, or another obstacle. When an infrared fill light is used in this embodiment, a high-power infrared fill light (for example, 30 watts) may be selected; because infrared light is invisible, a high-power infrared fill light does not disturb pedestrians or other vehicles on the road. The infrared fill light is only an example of this embodiment; other apparatuses or devices capable of emitting infrared light may also be used, and this embodiment does not specifically limit the type, position, or quantity of the supplementary light device 110.
The acquisition device 120 may include an exterior acquisition device and an in-cabin acquisition device. As shown in FIG. 3A, the exterior acquisition device may specifically be an infrared camera, a vehicle-mounted radar, or one or more other devices with an infrared image acquisition or infrared scanning function, and may be arranged on the roof or at the front of the vehicle 100, or on the side of the cockpit rear-view mirror that faces outward; it may be installed inside or outside the vehicle. It is mainly used to detect and acquire infrared image information of the target area outside the vehicle that receives the infrared supplementary lighting. The target area may contain a target object to be detected and captured, and the target object may be another vehicle, a pedestrian, an animal, or another obstacle. The infrared image information may be a single infrared image, or one or more frames of infrared images in a captured video stream. As shown in FIG. 3B, the in-cabin acquisition device may specifically be a vehicle-mounted camera, an eye detector, or the like. In a specific implementation, the in-cabin acquisition device may be placed as required, for example on the A-pillar or B-pillar of the vehicle cockpit, on the side of the cockpit rear-view mirror that faces the user, in the area near the steering wheel or the center console, or above the display screen behind a seat. It is mainly used to detect and acquire the eye position information of the user in the vehicle cockpit. There may be one or more in-cabin acquisition devices, and the present application does not limit their positions or quantity.
The processing device 130 may be an electronic device, and may specifically be the processor of an on-board processing device such as a vehicle head unit or an on-board computer, a conventional chip processor such as a central processing unit (Central Processing Unit, CPU) or a microcontroller (Micro Control Unit, MCU), or terminal hardware such as a mobile phone or a tablet. An image recognition model may be preset in the processing device 130, or the processing device 130 may obtain an image recognition model preset in another component of the vehicle. According to the received infrared image information, it can identify the target object in the infrared image, determine the position of the target object in the infrared image, and generate annotation information corresponding to the target object; the annotation information may be a prompt box, a highlight mark, an AR image, or the like, and may also be text or a guide line. It can further determine the spatial position of the target object according to the position of the target object in the infrared image, determine the position of the user's eyes according to the eye position information acquired by the in-cabin acquisition device, determine the display position of the annotation information of the target object according to the spatial position of the target object and the position of the user's eyes, and output the determined annotation information and its display position to the sending device 140.
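Purely as an illustration of the data the processing device 130 might hand to the sending device 140, a hypothetical structure is sketched below; the field names are assumptions and not taken from the original text.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Annotation:
    kind: str                               # e.g. "box", "highlight", "arrow", "text"
    label: str                              # e.g. "pedestrian"
    display_xy: Tuple[float, float]         # display position on the windshield
    display_size: Tuple[float, float]       # width/height of the prompt box on the glass
    target_xyz: Tuple[float, float, float]  # target position in the vehicle coordinate system

annotation = Annotation(kind="box", label="pedestrian",
                        display_xy=(0.12, 0.35), display_size=(0.02, 0.07),
                        target_xyz=(1.5, 20.0, 0.0))
```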
The sending device 140 may be a HUD, an AR-HUD, or another device with a display function, and may be installed above or inside the center console of the vehicle cockpit. It is mainly used to display the annotation information in the display area. The display area of the sending device 140 may be the front windshield of the vehicle, or may be a separately arranged transparent screen, which reflects the light of the annotation information emitted by the sending device 140 into the user's eyes, so that when the user looks out of the vehicle through the front windshield or the transparent screen, the annotation information is seen fused with the position of the target object outside the vehicle. This reminds the user of the type or position of the target object appearing outside the vehicle, improves the display effect of the annotation information, and improves driving safety.
The supplementary light device 110, the acquisition device 120, the processing device 130, and the sending device 140 may communicate data or instructions with one another through wired communication (for example, an interface circuit) or wireless communication (for example, Bluetooth or Wi-Fi). For example, the supplementary light device 110 may receive a control instruction from the processing device 130 through Bluetooth communication and turn on the supplementary lighting for the target area outside the vehicle. After acquiring the infrared image information of the target area, the acquisition device 120 may transmit the infrared image information to the processing device 130 through Bluetooth communication; after acquiring the eye position information of the user, the acquisition device 120 may transmit the user's eye position information to the processing device 130 through Bluetooth communication. The processing device 130 determines the target object in the infrared image and its spatial position according to the infrared image information, generates annotation information, calculates the display position of the annotation information according to the spatial position of the target object and the user's eye position information, and outputs the obtained annotation information and its display position to the sending device 140, which displays the annotation information at the display position in the display area.
With the above structure, the vehicle 100 in this embodiment can acquire infrared image information of the target area outside the vehicle by means of infrared imaging to determine the target object in the target area, can acquire the user's eye position information through the in-cabin acquisition device, can determine the annotation information and its display position based on the infrared image information and the user's eye position information, and can use the sending device to display the annotation information at that display position, so that the annotation information seen by the user is fused with the target object outside the vehicle, thereby reminding the user and improving the safety of driving at night.
FIG. 4 shows a flowchart of a display method provided by an embodiment of the present application. The display method may be executed by a display apparatus or by some components in the display apparatus, for example an AR-HUD, a HUD, a vehicle, or a processor, where the processor may be the processor of the display apparatus or the processor of an on-board processing device such as a vehicle head unit or an on-board computer. Specifically, infrared image information of the target area outside the vehicle is acquired by means of infrared imaging, and annotation information is displayed at the corresponding position, so that even when the lighting conditions outside the vehicle are poor, the user can still determine the target object outside the vehicle and its position from the displayed annotation information, which improves the safety of driving at night. As shown in FIG. 4, the display method includes:
S410: Perform infrared supplementary lighting on a target area outside the vehicle.
In this embodiment, considering that infrared imaging may perform poorly due to the lack of illumination when driving at night or on roads with poor lighting conditions, a supplementary light device may be provided on the vehicle, for example an infrared fill light arranged on the roof of the vehicle, at the front of the vehicle, or on the side of the cockpit rear-view mirror that faces outward. The processor may send an instruction to turn on the supplementary lighting to the infrared fill light through an interface circuit, so as to turn on the infrared fill light and perform infrared supplementary lighting on the target area outside the vehicle; the acquisition device then detects the target area outside the vehicle in real time and acquires infrared image information in real time. For example, the acquisition device may be an infrared camera, and the infrared image information may include information such as resolution, size, dimensions, and color.
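As a minimal sketch (the transport and command format are assumptions; the text only says an instruction is sent through an interface circuit), turning the fill light on could look like this:

```python
from typing import Callable, List

def enable_ir_fill_light(send: Callable[[bytes], None], on: bool = True) -> None:
    """Send a hypothetical one-byte on/off command to the IR fill-light controller."""
    send(b"\x01" if on else b"\x00")

# Example with a stub transport that simply records what was sent
sent: List[bytes] = []
enable_ir_fill_light(sent.append)          # sent == [b"\x01"]
```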
S420: Acquire information of an infrared image of the target area.
The processor may send an image acquisition instruction to the infrared camera through the interface circuit, so as to control the infrared camera to acquire an infrared image of the target area outside the vehicle, and the target object in the acquired infrared image is then recognized. The target object may be an object that can trigger a warning while the vehicle is travelling, for example a pedestrian, a vehicle, an animal, or another obstacle. Based on a recognition model, the processor can quickly recognize the features of the target object in the infrared image, thereby determining the target object in the infrared image and its position in the infrared image. The recognition model may be implemented by a neural network model or a deep learning model, and different recognition models may be used for different kinds of target objects. For example, when the target object is a pedestrian, a person recognition model may be used to recognize the infrared image to determine the pedestrian and the pedestrian's position. Because pedestrians differ in height, in this embodiment the position of the pedestrian's feet is used as the position to be determined, to facilitate subsequent calculation.
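To illustrate the foot-position convention described above, the following sketch takes the bottom-centre of a detected bounding box as the pedestrian's foot pixel; the detector itself is stubbed because the text does not specify a concrete network.

```python
from typing import List, Tuple

Box = Tuple[int, int, int, int]   # (x, y, w, h) in pixel coordinates

def detect_pedestrians(ir_frame) -> List[Box]:
    """Placeholder for a trained infrared pedestrian detector."""
    return [(300, 180, 40, 90)]   # one fabricated detection, for illustration only

def foot_pixel(box: Box) -> Tuple[int, int]:
    """Bottom-centre of the bounding box, used as the pedestrian's position."""
    x, y, w, h = box
    return (x + w // 2, y + h)

feet = [foot_pixel(b) for b in detect_pedestrians(None)]   # [(320, 270)]
```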
S430: Determine the position of the target object in the infrared image.
In this embodiment, the processor may determine the target object in the infrared image and its position in the infrared image according to the information of the acquired infrared image. For example, when the target object is a pedestrian, because pedestrians differ in height, the position of the pedestrian's feet in the infrared image may be used as the position of the target object to be determined, to facilitate subsequent calculation. The processor may also generate annotation information of the target object according to the type of the target object in the infrared image. The annotation information may be information with a reminding effect generated based on the target object in the infrared image, for example a prompt box, a highlight mark, or an arrow mark; it may also be prompt text or a guide line, or an AR image with an AR effect.
Based on the position of the target object in the acquired infrared image and the intrinsic and extrinsic parameters of the infrared camera, the processor can determine the spatial position of the target object; then, based on the spatial position of the target object and the spatial position of the user's eyes, the display position of the annotation information in the display area can be determined. The spatial position of the target object and the spatial position of the user's eyes may specifically be their respective spatial positions in the vehicle coordinate system.
In the flowchart of determining the display position of the annotation information shown in FIG. 5, the display area may be the display area of an AR-HUD, and the display area of the AR-HUD may be located on the front windshield of the vehicle. In this embodiment, determining the display position of the annotation information in the display area may specifically be implemented as follows:
S431: Determine the spatial position of the target object based on the position of the target object in the infrared image.
FIG. 6 is a position distribution diagram of the vehicle and a pedestrian from a front view according to this embodiment; FIG. 6 can be regarded as an equivalent schematic diagram of the infrared image acquired in the field of view of the infrared camera. The target object in the figure is a pedestrian in front of the vehicle, and the position of the pedestrian's feet is taken as the pedestrian's position in the infrared image. The horizontal line corresponds to the farthest road surface captured by the infrared camera; the pixel distance from the pedestrian to the horizontal line in the infrared image is a, the pixel distance to the front of the vehicle is b, and the pixel distances to the two sides of the infrared image are c and d, respectively. Because the parameters of the infrared camera are determined at installation, the horizontal line, the front of the vehicle, and the positions of the two sides in the infrared image captured by the infrared camera are all fixed parameters; therefore, based on the two-dimensional position of the pedestrian in the infrared image, the distance of the pedestrian relative to the infrared camera can be calculated.
FIG. 7 is a position distribution diagram of the vehicle and the pedestrian from a side view according to this embodiment. In FIG. 7, the height of the installation position of the infrared camera above the ground can be determined to be H according to where the infrared camera is installed, and the included angle between the nearest road surface and the farthest road surface visible to the infrared camera can be determined to be θ, where θ may be expressed in radians. Because the installation position of the infrared camera is fixed at a certain place on the vehicle, the height H and the included angle θ can be regarded as known parameters. Based on trigonometric functions, the horizontal distance L between the pedestrian and the infrared camera can then be calculated as:
(formula image PCTCN2021101446-appb-000001)
After simplification, this gives:
(formula image PCTCN2021101446-appb-000002)
Based on the calculated horizontal distance L between the pedestrian and the infrared camera, and on the spatial position of the infrared camera in the vehicle coordinate system, the spatial position of the pedestrian in the vehicle coordinate system, that is, the spatial position of the target object, can be determined.
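The distance expression itself appears only as an embedded formula image in the patent, so the exact equation is not reproduced here. The sketch below shows one plausible way to obtain L under the stated side-view geometry, assuming that image rows map linearly to viewing angle between the rays toward the nearest and farthest visible road points; the function name, the linear pixel-to-angle mapping, the optional horizon depression angle, and the example values are illustrative assumptions rather than the patent's own formula.

```python
import math

def pedestrian_distance(a, b, H, theta, phi_far=0.0):
    """Estimate the horizontal distance L from the infrared camera to the pedestrian.

    a       -- pixel distance from the pedestrian's feet up to the horizon (farthest road) line
    b       -- pixel distance from the pedestrian's feet down to the image bottom (nearest road)
    H       -- mounting height of the infrared camera above the ground, in metres
    theta   -- angle, in radians, spanned between the nearest and farthest visible road points
    phi_far -- assumed depression angle of the farthest visible road ray (0 if it lies on the horizon)
    """
    # Assumption: image rows map linearly to viewing angle across the visible road span.
    depression = phi_far + theta * a / (a + b)
    return H / math.tan(depression)

# Illustrative values: camera mounted 1.4 m above the ground with a 20-degree visible road span.
L = pedestrian_distance(a=120, b=380, H=1.4, theta=math.radians(20))
```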
S432: Obtain the user's eye position information;
The user's eyes are detected by the above-mentioned in-vehicle acquisition device, which may be a camera or an eye tracker. From the acquired eye position information and the transformation relationship between the installation position of the in-vehicle acquisition device and the vehicle coordinate system, the spatial position of the user's eyes in the vehicle coordinate system can be obtained.
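As a minimal sketch of this conversion, the snippet below applies a rigid transform, assumed to come from calibration of the in-cabin acquisition device, to express the detected eye position in the vehicle coordinate system; the function and parameter names are illustrative.

```python
import numpy as np

def eye_position_in_vehicle(eye_in_cam, R_cam_to_vehicle, t_cam_to_vehicle):
    """Map the detected eye position from the in-cabin camera frame to the vehicle frame.

    eye_in_cam       -- (3,) eye position measured in the in-cabin camera coordinate system
    R_cam_to_vehicle -- (3, 3) rotation of the camera frame expressed in the vehicle frame
    t_cam_to_vehicle -- (3,) position of the camera origin in the vehicle frame
    """
    return np.asarray(R_cam_to_vehicle) @ np.asarray(eye_in_cam, dtype=float) + np.asarray(t_cam_to_vehicle, dtype=float)
```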
S433: Determine the display position of the annotation information based on the spatial position of the user's eyes and the spatial position of the target object;
In this embodiment, when the annotation information is displayed, it may be sent to the front windshield of the vehicle for display so that the user can observe it head-up. The annotation information may be displayed at a fixed position on the front windshield: while driving, the user can determine whether a pedestrian, another vehicle, an animal, or another target object is present outside the vehicle by glancing at the annotation information at that fixed position, without shifting the line of sight. The annotation information may also be displayed at varying positions on the front windshield. Specifically, the line connecting the user's eyes and the pedestrian can be determined from the spatial position of the user's eyes and the spatial position of the pedestrian, and the intersection of that line with the front windshield of the vehicle can be taken as the display position of the annotation information on the front windshield. The spatial position of the intersection can be obtained by coordinate calculation: with the vehicle coordinate system as the reference coordinate system, the spatial coordinates of the intersection on the front windshield are determined from the spatial coordinates of the user's eyes, the spatial coordinates of the pedestrian, and the spatial coordinates of the vehicle's front windshield, as sketched below.
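A minimal sketch of this intersection calculation is given below; it assumes the front windshield can be approximated by a plane described by a point and a normal vector in the vehicle coordinate system, and the helper names are illustrative rather than taken from the patent.

```python
import numpy as np

def display_position_on_windshield(eye, target, plane_point, plane_normal):
    """Intersect the eye-to-target line of sight with the windshield plane.

    All arguments are 3-D coordinates or vectors in the vehicle coordinate system.
    Returns the intersection point, i.e. the display position of the annotation information,
    or None if the line of sight is parallel to the windshield plane.
    """
    eye = np.asarray(eye, dtype=float)
    direction = np.asarray(target, dtype=float) - eye            # line of sight from eye to target
    denom = float(np.dot(plane_normal, direction))
    if abs(denom) < 1e-9:
        return None
    s = float(np.dot(plane_normal, np.asarray(plane_point, dtype=float) - eye)) / denom
    return eye + s * direction
```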
FIG. 8 is a top-view position distribution diagram of the vehicle and the pedestrian in this embodiment. Based on the spatial position of the user's eyes and the spatial position of the infrared camera, the lateral distance between the user's eyes and the infrared camera can be obtained as e. The horizontal field of view of the infrared camera is λ. Let the angle of the line between the pedestrian and the infrared camera relative to the front windshield be α, and the angle of the line between the pedestrian and the user's eyes relative to the front windshield be β, where α and β may also be expressed in radians. The angle α is then calculated as:
(formula image PCTCN2021101446-appb-000003)
After simplification, this gives:
(formula image PCTCN2021101446-appb-000004)
Based on the angle α, the angle β is calculated as:
(formula image PCTCN2021101446-appb-000005)
Based on the calculated angle β and the distance between the user's eyes and the front windshield, the spatial position of the intersection between the eye-to-pedestrian line and the front windshield of the vehicle can be calculated, that is, the display position of the annotation information on the front windshield.
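The α and β expressions are likewise given only as formula images. The sketch below shows one way to obtain comparable angles, assuming image columns map linearly to angle across the horizontal field of view λ and a purely planar top-view geometry; the sign of the lateral-offset term depends on which side the pedestrian and the driver's eyes lie relative to the camera, so this is an illustrative assumption rather than the patent's exact derivation.

```python
import math

def angles_alpha_beta(c, d, lam, L, e):
    """Top-view angle estimates corresponding to FIG. 8.

    c, d -- pixel distances from the pedestrian to the two sides of the infrared image
    lam  -- horizontal field of view of the infrared camera, in radians
    L    -- horizontal distance from the camera to the pedestrian, from the side-view step
    e    -- lateral distance between the user's eyes and the infrared camera
    """
    # Assumption: image columns map linearly to angle across the horizontal field of view.
    alpha = math.pi / 2 - lam / 2 + lam * c / (c + d)   # camera-to-pedestrian line vs. windshield
    lateral = L / math.tan(alpha)                        # pedestrian's lateral offset from the camera
    beta = math.atan2(L, lateral + e)                    # eye-to-pedestrian line vs. windshield
    return alpha, beta
```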
The calculated display position is specifically a spatial position in the vehicle coordinate system. Before projection, the coordinates of this spatial position in the vehicle coordinate system can be further converted into projection coordinates in the AR-HUD coordinate system, and the annotation information together with the projection coordinates is then sent to the AR-HUD.
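A minimal sketch of this conversion, assuming a 4x4 homogeneous transform between the vehicle frame and the AR-HUD frame obtained from calibration (the transform and the function name are assumptions for illustration):

```python
import numpy as np

def vehicle_to_hud(point_vehicle, T_vehicle_to_hud):
    """Convert a display position from the vehicle coordinate system to the AR-HUD coordinate system.

    point_vehicle    -- (3,) point in the vehicle frame, e.g. the intersection on the windshield
    T_vehicle_to_hud -- (4, 4) homogeneous transform from the vehicle frame to the AR-HUD frame
    """
    p = np.append(np.asarray(point_vehicle, dtype=float), 1.0)   # homogeneous coordinates
    q = np.asarray(T_vehicle_to_hud) @ p
    return q[:3]
```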
S440: Display the annotation information of the target object in the display area.
In this embodiment, the processor can send the generated annotation information and its display position to the AR-HUD through the interface circuit, and the AR-HUD projects the annotation information onto the calculated display position on the front windshield for display, so that the annotation information seen by the user stays on the same line of sight as the pedestrian, allowing the user to quickly perceive the target object and its position while driving.
In some embodiments, the display size of the annotation information may be fixed, for example a fixed-size prompt box or AR image generated for the target object, or fixed-size text or a guide line generated for the target object. By sending this fixed-size annotation information to the display position for display, the user can be reminded of the presence of the target object outside the vehicle. In other embodiments, the display size of the annotation information may also be related to the display position of the annotation information, the spatial position of the user's eyes, and the size of the target object. The spatial position of the user's eyes, the display position of the annotation information, and the spatial position of the target object form a model similar to a viewing frustum, so the display size of the annotation information at the display position can be determined from the size of the target object, such that the annotation information seen by the user matches the target object, and the display size changes accordingly with the relative distance between the vehicle and the target object. For example, the annotation information may take the form of a prompt box whose display size is calculated from the acquired spatial position of the user's eyes, the spatial position of the pedestrian, and the display position of the prompt box, so that the prompt box seen by the user matches the pedestrian. On this basis, as the vehicle travels and the pedestrian gets closer to the vehicle, the size of the prompt box can grow accordingly, reminding the user of the presence of the pedestrian and of the change in distance.
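As a sketch of this frustum-like scaling, the snippet below uses a simple similar-triangles relation between the eye position, the display position on the windshield, and the target object; this is an assumed simplification for illustration, not necessarily the patent's exact method.

```python
import numpy as np

def annotation_display_size(eye, target, display_pos, target_size):
    """Scale the annotation so that it visually matches the target object.

    eye, target, display_pos -- 3-D positions in the vehicle coordinate system
    target_size              -- physical size of the target object (e.g. pedestrian height), in metres
    The result grows as the eye-to-target distance shrinks, i.e. as the vehicle approaches the target.
    """
    eye = np.asarray(eye, dtype=float)
    d_target = np.linalg.norm(np.asarray(target, dtype=float) - eye)
    d_display = np.linalg.norm(np.asarray(display_pos, dtype=float) - eye)
    return target_size * d_display / d_target
```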
To sum up, the display method provided by the embodiments of the present application detects the environment outside the vehicle by means of infrared imaging and determines the display position of the annotation information based on the spatial position of the target object and the spatial position of the user's eyes. This embodiment uses infrared supplementary lighting and infrared imaging to improve imaging capability under poor lighting conditions; the infrared imaging result is not affected by visible light, and the infrared supplementary light does not affect oncoming vehicles or pedestrians, avoiding traffic hazards. In addition, this embodiment calculates the display position of the annotation information from the spatial position of the user's eyes and the spatial position of the target object, which involves a small amount of computation and reduces the occupation of the vehicle's processing resources. The annotation information is displayed at the calculated display position, so that even when lighting outside the vehicle is poor, the user can still identify the target object outside the vehicle and its position from the displayed annotation information, improving driving safety.
FIG. 9 is an architecture diagram of a display apparatus provided by an embodiment of the present application; the display apparatus can be used to implement the optional embodiments of the display method described above. As shown in FIG. 9, the display apparatus has a supplementary-light module 910, an acquisition module 920, a processing module 930, and a sending module 940.
The supplementary-light module 910 is configured to execute step S410 of the above display method and the examples therein. The acquisition module 920 is configured to execute step S420 of the above display method and the examples therein. The processing module 930 is configured to execute any one of steps S430 and S431 to S433 of the above display method and any optional example therein. The sending module 940 is configured to execute step S440 of the above display method and the examples therein. For details, refer to the detailed description in the method embodiments, which is not repeated here.
The display apparatus provided in this embodiment acquires an infrared image of the area in front of the vehicle by means of infrared imaging, determines the display position of the annotation information from the position of the target object in the infrared image and the position of the eyes of the user inside the vehicle, and displays the annotation information at the corresponding display position through the display apparatus. With this display apparatus, the annotation information seen by the user can be matched to the target object, which helps the user quickly perceive the target object outside the vehicle and its position and improves driving safety.
It should be understood that the display apparatus in the embodiments of the present application may be implemented by software, for example by a computer program or instructions having the above functions; the corresponding computer program or instructions may be stored in a memory inside the terminal, and a processor reads the corresponding computer program or instructions from the memory to implement the above functions. Alternatively, the display apparatus of the embodiments of the present application may also be implemented by hardware. For example, the supplementary-light module 910 may be implemented by a supplementary-light device on the vehicle, such as an infrared fill light or other equipment capable of providing infrared supplementary lighting. The acquisition module 920 may be implemented by an exterior or interior acquisition device on the vehicle, where the exterior acquisition device may be, for example, an infrared camera or an infrared radar, and the interior acquisition device may be an in-vehicle camera or an eye tracker; alternatively, the acquisition module 920 may be implemented by the interface circuit between the processor and the infrared camera or in-vehicle camera. The processing module 930 may be implemented by a processing device on the vehicle, for example by the processor of an on-board processing device such as the head unit or an on-board computer, or by the processor of a HUD or AR-HUD, or the processing module 930 may be implemented by a terminal such as a mobile phone or a tablet. The sending module 940 may be implemented by the display device on the vehicle, for example by a HUD or AR-HUD, or by some components of a HUD or AR-HUD, or by the interface circuit between the processor and the HUD or AR-HUD. Alternatively, the display apparatus in the embodiments of the present application may also be implemented by a combination of a processor and software modules.
It should be understood that, for the processing details of the apparatuses or modules in the embodiments of the present application, reference may be made to the relevant descriptions of the embodiments shown in FIG. 1 to FIG. 8 and the related extended embodiments, which are not repeated here.
In addition, an embodiment of the present application further provides a vehicle having the above display apparatus. The vehicle may be a family car or a truck, or a special vehicle such as an ambulance, a fire engine, a police car, or an engineering emergency vehicle. Besides, the above-mentioned supplementary-light device, acquisition device, processing device, sending device, and so on are also provided on the vehicle. The modules and devices described above may be arranged in the vehicle system as factory-installed or retrofitted equipment, and data interaction between the modules may rely on the vehicle bus or interface circuits; alternatively, with the development of wireless technology, the modules may also exchange data through wireless communication to eliminate the inconvenience caused by wiring. In addition, the display apparatus of this embodiment may also be combined with an AR-HUD and installed on the vehicle in the form of an on-board device, so as to achieve a better AR warning effect.
FIG. 10 is an architecture diagram of a computing device 1000 provided by an embodiment of the present application. The computing device can serve as the display apparatus and execute the optional embodiments of the display method described above; it may be a terminal, or a chip or chip system inside the terminal. As shown in FIG. 10, the computing device 1000 includes a processor 1010 and a memory 1020.
It should be understood that the computing device 1000 shown in FIG. 10 may further include a communication interface 1030, which can be used to communicate with other devices and may specifically include one or more transceiver circuits or interface circuits.
The processor 1010 may be connected to the memory 1020. The memory 1020 can be used to store the program code and data. The memory 1020 may therefore be a storage module inside the processor 1010, an external storage module independent of the processor 1010, or a component including both a storage module inside the processor 1010 and an external storage module independent of the processor 1010.
The computing device 1000 may further include a bus, through which the memory 1020 and the communication interface 1030 can be connected to the processor 1010. The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus can be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one line is used in FIG. 10, but this does not mean that there is only one bus or only one type of bus.
It should be understood that, in the embodiments of the present application, the processor 1010 may be a central processing unit (CPU). The processor may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. Alternatively, the processor 1010 may use one or more integrated circuits for executing related programs, so as to implement the technical solutions provided by the embodiments of the present application.
The memory 1020 may include a read-only memory and a random access memory, and provides instructions and data to the processor 1010. A part of the processor 1010 may also include a non-volatile random access memory. For example, the processor 1010 may also store device-type information.
When the computing device 1000 is running, the processor 1010 executes the computer-executable instructions in the memory 1020 to perform any operation step of the above display method and any optional embodiment thereof. For example, the processor 1010 can execute the computer-executable instructions in the memory 1020 to perform the display method of the embodiment corresponding to FIG. 4. When the vehicle is driving on a poorly lit road at night, the processor 1010 executes the supplementary-light instructions in the memory 1020 to control the supplementary-light device of the vehicle to perform infrared supplementary lighting on the target area outside the vehicle; it executes the acquisition instructions in the memory 1020 to control the acquisition device of the vehicle to acquire information of the infrared image of the target area; it executes the processing instructions in the memory 1020 to determine the position of the target object in the infrared image and to determine the display position of the annotation information from that position; and it executes the sending instructions in the memory 1020 to control the sending device of the vehicle to display the annotation information of the target object in the display area.
It should be understood that the computing device 1000 according to the embodiments of the present application may correspond to the corresponding body executing the methods according to the embodiments of the present application, and that the above and other operations and/or functions of the modules in the computing device 1000 are respectively intended to implement the corresponding flows of the methods of the embodiments; for brevity, they are not repeated here.
FIG. 11 is an architecture diagram of an electronic apparatus 1100 provided by an embodiment of the present application. The electronic apparatus 1100 can serve as the display apparatus and execute the optional embodiments of the display method described above; it may be a terminal, or a chip or chip system inside the terminal. As shown in FIG. 11, the electronic apparatus 1100 includes a processor 1110 and an interface circuit 1120, where the processor 1110 accesses a memory through the interface circuit 1120, the memory stores program instructions, and the program instructions, when executed by the processor, cause the processor to perform any operation step of the above display method and any optional embodiment thereof. For example, the processor 1110 can obtain the computer-executable instructions in the memory through the interface circuit 1120 to perform the display method of the embodiment corresponding to FIG. 4. When the vehicle is driving on a poorly lit road at night, the processor 1110 obtains the supplementary-light instructions in the memory through the interface circuit 1120 and controls the supplementary-light device of the vehicle to perform infrared supplementary lighting on the target area outside the vehicle; it obtains the acquisition instructions in the memory through the interface circuit 1120 and controls the acquisition device of the vehicle to acquire information of the infrared image of the target area; it obtains the processing instructions in the memory through the interface circuit 1120, determines the position of the target object in the infrared image, and determines the display position of the annotation information from that position; and it obtains the sending instructions in the memory through the interface circuit 1120 and controls the sending device of the vehicle to display the annotation information of the target object in the display area.
In addition, the electronic apparatus may further include a communication interface, a bus, and so on; for details, refer to the description in the embodiment shown in FIG. 10, which is not repeated here.
FIG. 12 is an architecture diagram of a display system 1200 provided by an embodiment of the present application. The display system 1200 can serve as the display apparatus and execute the optional embodiments of the display method described above; it may be a terminal, or a chip or chip system inside the terminal. As shown in FIG. 12, the display system 1200 includes an in-vehicle device 1210 and an electronic apparatus 1220 coupled with the in-vehicle device 1210. The electronic apparatus 1220 may be the processing device 130 shown in FIG. 1, the processing module 930 shown in FIG. 9, the computing device 1000 shown in FIG. 10, or the electronic apparatus 1100 shown in FIG. 11. The in-vehicle device 1210 may be the supplementary-light device 110 on the vehicle shown in FIG. 1 or the supplementary-light module 910 shown in FIG. 9, for example a vehicle headlight or an infrared fill light. The in-vehicle device 1210 may also be the acquisition device 120 on the vehicle shown in FIG. 1 or the acquisition module 920 shown in FIG. 9, for example an in-vehicle camera, an infrared camera, or a radar. Alternatively, the in-vehicle device 1210 may be the sending device 140 on the vehicle shown in FIG. 1 or the sending module 940 shown in FIG. 9, for example an AR-HUD or HUD, or some components of an AR-HUD or HUD. In this embodiment, the display system 1200 can execute the display method of the embodiment corresponding to FIG. 4: for example, the in-vehicle device 1210 can perform infrared supplementary lighting on the target area outside the vehicle and can also collect infrared images of the target area; the electronic apparatus 1220 can determine, from the acquired infrared image containing the target object, the position of the target object in the infrared image and determine the display position of the annotation information from that position; and the in-vehicle device 1210 can send the annotation information to the display position for display. The in-vehicle device 1210 and the electronic apparatus 1220 can communicate data or instructions in a wired manner (for example, through an interface circuit) or wirelessly (for example, via Bluetooth or Wi-Fi).
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in conjunction with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and the design constraints of the technical solution. Skilled persons may use different methods to implement the described functions for each particular application, but such implementation should not be considered to go beyond the scope of the present application.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division of the units is only a division by logical function, and there may be other divisions in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing device, or each unit may exist physically on its own, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program is used to perform a display method that includes at least one of the solutions described in the above embodiments.
The computer storage medium of the embodiments of the present application may use any combination of one or more computer-readable media. A computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this document, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in combination with an instruction execution system, apparatus, or device.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and it can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device.
Program code contained on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, or the like, or any suitable combination of the above.
Computer program code for performing the operations of the present application may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the C language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
It should be noted that the embodiments described in this application are only some of the embodiments of this application, not all of them. The components of the embodiments of the application, as generally described and shown in the drawings, may be arranged and designed in a variety of different configurations. Accordingly, the above detailed description of the embodiments of the application provided in the accompanying drawings is not intended to limit the scope of the claimed application, but merely represents selected embodiments of the application. Based on the embodiments of the present application, all other embodiments obtained by those skilled in the art without creative effort fall within the protection scope of the present application.
The words 'first', 'second', 'third', and the like, or similar terms such as module A, module B, and module C, in the description and claims are only used to distinguish similar objects and do not represent a specific ordering of the objects; it should be understood that, where permitted, a specific order or sequence may be interchanged so that the embodiments of the application described here can be implemented in an order other than that illustrated or described here.
In the above description, the reference numbers representing steps, such as S410 and S420, do not mean that the steps must be executed exactly in that order; intermediate steps may be included or the steps may be replaced by other steps, and, where permitted, the order of preceding and following steps may be interchanged or the steps may be executed simultaneously.
The term 'comprising' used in the description and claims should not be interpreted as being restricted to what is listed thereafter; it does not exclude other elements or steps. It should therefore be interpreted as specifying the presence of the mentioned features, integers, steps, or components, without excluding the presence or addition of one or more other features, integers, steps, or components and groups thereof. Thus, the expression 'a device comprising means A and B' should not be limited to a device consisting only of components A and B.
Reference in this specification to 'one embodiment' or 'an embodiment' means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, appearances of the phrases 'in one embodiment' or 'in an embodiment' in various places in this specification do not necessarily all refer to the same embodiment, although they may. In addition, in the embodiments of the present application, unless otherwise specified or in case of logical conflict, the terms and/or descriptions of different embodiments are consistent and may be referred to by one another, and the technical features of different embodiments may be combined according to their inherent logical relationships to form new embodiments.
Note that the above are only preferred embodiments of the present application and the technical principles applied. Those skilled in the art will understand that the present invention is not limited to the specific embodiments described here, and that various obvious changes, readjustments, and substitutions can be made by those skilled in the art without departing from the protection scope of the present invention. Therefore, although the present application has been described in some detail through the above embodiments, the present invention is not limited to the above embodiments and may include more other equivalent embodiments without departing from the concept of the present invention, all of which fall within the protection scope of the present invention.

Claims (21)

  1. A display method, characterized by comprising:
    performing infrared supplementary lighting on a target area outside a vehicle;
    acquiring information of an infrared image of the target area, wherein the target area includes a target object;
    determining the position of the target object in the infrared image;
    displaying annotation information of the target object in a display area according to the position of the target object in the infrared image.
  2. The method according to claim 1, characterized in that the display position of the annotation information is related to the spatial position of the target object and the position of the user's eyes.
  3. The method according to claim 2, characterized in that the display size of the annotation information is related to the display position of the annotation information, the position of the user's eyes, and the size of the target object.
  4. The method according to claim 2, characterized in that the display area is a display area of an augmented reality head-up display, and the display position of the annotation information being related to the spatial position of the target object and the position of the user's eyes comprises:
    the display position of the annotation information is determined by a first line of sight of the user, the first line of sight being the line of sight from the position of the user's eyes to the spatial position of the target object.
  5. The method according to claim 4, characterized in that the display area of the augmented reality head-up display is located on the front windshield of the vehicle, and the display position of the annotation information being determined by the first line of sight of the user comprises:
    the display position of the annotation information is determined by the intersection of the user's first line of sight and the front windshield of the vehicle.
  6. The method according to any one of claims 2 to 5, characterized in that the spatial position of the target object is related to the position of the target object in the infrared image and to the intrinsic and extrinsic parameters of the image acquisition device that acquires the infrared image.
  7. The method according to claim 1, characterized in that the target object includes one or more of other vehicles, pedestrians, and animals.
  8. The method according to claim 1, characterized in that, before the determining the position of the target object in the infrared image, the method further comprises:
    performing one or more of cropping, noise reduction, enhancement, smoothing, and sharpening on the infrared image.
  9. A display apparatus, characterized by comprising:
    a supplementary-light module, configured to perform infrared supplementary lighting on a target area outside a vehicle;
    an acquisition module, configured to acquire information of an infrared image of the target area, wherein the target area includes a target object;
    a processing module, configured to determine the position of the target object in the infrared image;
    a sending module, configured to display annotation information of the target object in a display area according to the position of the target object in the infrared image.
  10. The apparatus according to claim 9, characterized in that the display position of the annotation information is related to the spatial position of the target object, the position of the user's eyes, and the size of the target object.
  11. The apparatus according to claim 10, characterized in that the display size of the annotation information is related to the display position of the annotation information, the position of the user's eyes, and the size of the target object.
  12. The apparatus according to claim 10, characterized in that the display area is a display area of an augmented reality head-up display, and the display position of the annotation information being related to the spatial position of the target object and the position of the user's eyes comprises:
    the display position of the annotation information is determined by a first line of sight of the user, the first line of sight being the line of sight from the position of the user's eyes to the spatial position of the target object.
  13. The apparatus according to claim 12, characterized in that the display area of the augmented reality head-up display is located on the front windshield of the vehicle, and the display position of the annotation information being determined by the first line of sight of the user comprises:
    the display position of the annotation information is determined by the intersection of the user's first line of sight and the front windshield of the vehicle.
  14. The apparatus according to any one of claims 10 to 13, characterized in that the spatial position of the target object is related to the position of the target object in the infrared image and to the intrinsic and extrinsic parameters of the image acquisition device that acquires the infrared image.
  15. The apparatus according to claim 9, characterized in that the target object includes one or more of other vehicles, pedestrians, and animals.
  16. The apparatus according to claim 9, characterized in that, before being used to determine the position of the target object in the infrared image, the processing module is further configured to:
    perform one or more of cropping, noise reduction, enhancement, smoothing, and sharpening on the infrared image.
  17. A computing device, characterized by comprising:
    a processor, and
    a memory on which program instructions are stored, the program instructions, when executed by the processor, causing the processor to execute the display method according to any one of claims 1 to 8.
  18. An electronic apparatus, characterized by comprising:
    a processor, and an interface circuit, wherein the processor accesses a memory through the interface circuit, the memory stores program instructions, and the program instructions, when executed by the processor, cause the processor to execute the display method according to any one of claims 1 to 8.
  19. A display system, characterized by comprising:
    an in-vehicle device, and the computing device according to claim 17 coupled with the in-vehicle device, or the electronic apparatus according to claim 18 coupled with the in-vehicle device.
  20. A computer-readable storage medium, characterized in that program instructions are stored thereon, and when the program instructions are executed by a computer, the computer executes the display method according to any one of claims 1 to 8.
  21. A computer program product, characterized in that it includes program instructions, and when the program instructions are executed by a computer, the computer executes the display method according to any one of claims 1 to 8.
PCT/CN2021/101446 2021-06-22 2021-06-22 Display method and apparatus, device, and vehicle WO2022266829A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2021/101446 WO2022266829A1 (en) 2021-06-22 2021-06-22 Display method and apparatus, device, and vehicle
CN202180001862.4A CN113597617A (en) 2021-06-22 2021-06-22 Display method, display device, display equipment and vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/101446 WO2022266829A1 (en) 2021-06-22 2021-06-22 Display method and apparatus, device, and vehicle

Publications (1)

Publication Number Publication Date
WO2022266829A1 true WO2022266829A1 (en) 2022-12-29

Family

ID=78242910

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/101446 WO2022266829A1 (en) 2021-06-22 2021-06-22 Display method and apparatus, device, and vehicle

Country Status (2)

Country Link
CN (1) CN113597617A (en)
WO (1) WO2022266829A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116800165A (en) * 2023-08-21 2023-09-22 江苏和亿智能科技有限公司 Automatic frequency conversion energy-saving speed regulation control system

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114290989A (en) * 2021-12-21 2022-04-08 东软睿驰汽车技术(沈阳)有限公司 Prompting method, vehicle and computer readable storage medium
CN114296239A (en) * 2021-12-31 2022-04-08 合众新能源汽车有限公司 Image display method and device for vehicle window
CN114999225B (en) * 2022-05-13 2024-03-08 海信集团控股股份有限公司 Information display method of road object and vehicle
CN115065818A (en) * 2022-06-16 2022-09-16 南京地平线集成电路有限公司 Projection method and device of head-up display system
CN115767439A (en) * 2022-12-02 2023-03-07 东土科技(宜昌)有限公司 Object position display method and device, storage medium and electronic equipment

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202713506U (en) * 2012-03-22 2013-01-30 奇瑞汽车股份有限公司 Anti-dazzling vehicle-mounted night vision system
CN105799593A (en) * 2016-03-18 2016-07-27 京东方科技集团股份有限公司 Auxiliary driving device for vehicle
WO2017209313A1 (en) * 2016-05-30 2017-12-07 엘지전자 주식회사 Vehicle display device and vehicle
KR20200140110A (en) * 2019-06-05 2020-12-15 한국전자기술연구원 Method of determining augmented reality information in vehicle and apparatus thereof
CN110203140A (en) * 2019-06-28 2019-09-06 威马智慧出行科技(上海)有限公司 Automobile augmented reality display methods, electronic equipment, system and automobile
CN110304057A (en) * 2019-06-28 2019-10-08 威马智慧出行科技(上海)有限公司 Car crass early warning, air navigation aid, electronic equipment, system and automobile
CN210139817U (en) * 2019-06-28 2020-03-13 威马智慧出行科技(上海)有限公司 Automobile augmented reality display system and automobile
CN112714266A (en) * 2020-12-18 2021-04-27 北京百度网讯科技有限公司 Method and device for displaying label information, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN113597617A (en) 2021-11-02

Similar Documents

Publication Publication Date Title
WO2022266829A1 (en) Display method and apparatus, device, and vehicle
CN107499307B (en) Automatic parking assist apparatus and vehicle including the same
EP2860971B1 (en) Display control apparatus, method, recording medium, and vehicle
US10924679B2 (en) Display device for vehicle and control method thereof
US10699486B2 (en) Display system, information presentation system, control method of display system, storage medium, and mobile body
JPWO2018012299A1 (en) IMAGE GENERATION APPARATUS, IMAGE GENERATION METHOD, AND PROGRAM
US20200380257A1 (en) Autonomous vehicle object content presentation systems and methods
JP6451101B2 (en) Vehicle communication device
WO2022241638A1 (en) Projection method and apparatus, and vehicle and ar-hud
KR101976106B1 (en) Integrated head-up display device for vehicles for providing information
CN114373335B (en) Collision early warning method and device for vehicle, electronic equipment and storage medium
US20200385012A1 (en) Recognition device, recognition method, and storage medium
US20210256933A1 (en) Vehicle display system, vehicle system, and vehicle
CN115520100A (en) Automobile electronic rearview mirror system and vehicle
US20210268961A1 (en) Display method, display device, and display system
CN110891841A (en) Method and device for ascertaining the probability of an object being in the field of view of a vehicle driver
JP7127565B2 (en) Display control device and display control program
KR102446387B1 (en) Electronic apparatus and method for providing a text thereof
KR20170069096A (en) Driver Assistance Apparatus and Vehicle Having The Same
CN111241946B (en) Method and system for increasing FOV (field of view) based on single DLP (digital light processing) optical machine
Deng et al. Research on interface design of full windshield head-up display based on user experience
KR20220086043A (en) Smart Road Information System for Blind Spot Safety
JP2021037916A (en) Display control device and display control program
WO2024031709A1 (en) Display method and device
CN116605141B (en) Display method and device of electronic rearview mirror, electronic equipment and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21946331

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE