WO2023010236A1 - Display method, device and system - Google Patents

Display method, device and system

Info

Publication number
WO2023010236A1
WO2023010236A1 · PCT/CN2021/109949 · CN2021109949W
Authority
WO
WIPO (PCT)
Prior art keywords
target object
display
moment
electronic device
prompt information
Prior art date
Application number
PCT/CN2021/109949
Other languages
English (en)
French (fr)
Inventor
俞政杰
彭惠东
张宇腾
蔡立力
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to PCT/CN2021/109949 (WO2023010236A1)
Priority to CN202180005788.3A (CN115917254A)
Priority to EP21952146.5A (EP4369177A1)
Publication of WO2023010236A1

Links

Images

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 - Route searching; Route guidance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 - Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F3/147 - Digital output to display device using display panels
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/01 - Detecting movement of traffic to be counted or controlled
    • G08G1/04 - Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36 - Control arrangements or circuits characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/38 - Control arrangements or circuits characterised by the display of a graphic pattern, with means for controlling the display position

Definitions

  • The present application relates to the technical field of automobiles, and in particular to a display method, device and system.
  • When an AR head up display (AR-HUD) is used in a vehicle, information about pedestrians, vehicles and other objects detected by a camera is transformed into the real-world coordinate system through coordinate transformation and projected onto an electronic device in the driver's front view, for example the windshield of the driving vehicle, so that the projected virtual information is fused with the real road condition information to realize functions such as pedestrian warning and vehicle warning.
  • However, the current display method using the AR-HUD suffers from large jitter and a poor user experience.
  • the present application provides a display method, device and system, which are used to reduce image jitter during display and improve display effect.
  • the display method provided in this application can be executed by an electronic device.
  • An electronic device can be abstracted as a computer system.
  • the electronic device may be a whole machine, or a part of the whole machine, such as a system chip or an image chip.
  • The system chip may also be referred to as a system on chip (SoC) or an SoC chip.
  • The electronic device may be a terminal device or an on-board device, such as an on-board computer or a car machine in a vehicle, or a system chip, an image processing chip, or another type of chip that can be installed in a computer system of the vehicle.
  • the embodiment of the present application provides a display method, including:
  • The sensing information of the target object may be acquired through a collection device in the electronic device; the collection device may be connected to a processing device in the electronic device through an interface circuit, through which the sensing information of the target object is sent to the processing device.
  • The processing device may obtain the prompt information of the target object at the first moment by processing the acquired sensing information of the target object.
  • the processing device may also send the prompt information of the target object at the first moment to the display device for projection through an interface circuit connected to the display device in the electronic device.
  • The predicted position of the target object at the first moment is obtained from the collected sensing information of the target object, and the predicted position is combined with the measurement position obtained before the first moment, so that the electronic device can display the prompt information of the target object at the first moment, effectively reducing output jitter.
  • When the measurement position of the target object cannot be obtained before the first moment, the display device is enabled to display the prompt information and display position of the target object at the first moment, where the display position is related to the predicted position. Therefore, if no measured position of the target object is obtained before the first moment, in order to better ensure the real-time performance of the output display position at the first moment, the predicted position can be directly determined as the display position of the target object at the first moment, effectively reducing the delay.
  • The processing device in the electronic device may determine whether the measurement position of the target object has been acquired before the first moment, and when the measurement position of the target object has not been acquired before the first moment, determine the predicted position as the display position.
  • When the measurement position of the target object is acquired before the first moment, the display device is enabled to display the prompt information and display position of the target object at the first moment, where the display position is related to the predicted position and the measured position. Therefore, if the measurement position of the target object is obtained before the first moment, in order to better reduce the jitter of the output content, the display position of the target object at the first moment can be determined according to the predicted position and the measurement position.
  • the processing device in the electronic device may determine the average value of the predicted position and the measured position as the display position of the target object at the first moment.
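  • As an illustrative sketch of this averaging fusion (the function and variable names are my own assumptions; the application does not disclose an implementation):

```python
def fuse_positions(predicted, measured):
    """Fuse a predicted position with a measured position by averaging.

    `predicted` and `measured` are equal-length coordinate tuples,
    e.g. (x, y) in metres in the world coordinate system; when no
    measurement is available, the prediction is used directly.
    """
    if measured is None:
        return tuple(predicted)
    return tuple((p + m) / 2.0 for p, m in zip(predicted, measured))

# e.g. prediction (10.0, 2.0) and measurement (10.4, 1.8)
# average to roughly (10.2, 1.9)
display = fuse_positions((10.0, 2.0), (10.4, 1.8))
```

Averaging is the simplest fusion the text names; a weighted blend favouring the fresher of the two inputs would follow the same shape.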
  • When multiple measurement positions of the target object are acquired before the first moment, the display device is enabled to display the prompt information and display position of the target object at the first moment, where the display position is related to the average of the multiple measured positions and the predicted position. Therefore, after obtaining multiple measurement positions of the target object before the first moment, it is necessary to determine which measurement position is fused with the predicted position to obtain the display position; for example, the average value of the multiple measurement positions may be taken as the measurement position selected for fusion.
  • The processing device in the electronic device may determine the average value of the multiple measurement positions and fuse that average with the predicted position to obtain the display position of the target object at the first moment.
  • When multiple measurement positions of the target object are acquired before the first moment, the display device is enabled to display the prompt information and display position of the target object at the first moment, where the display position is related to the last acquired measurement position among the multiple measured positions and the predicted position. Therefore, after obtaining multiple measurement positions of the target object before the first moment, it is necessary to determine which measurement position is fused with the predicted position to obtain the display position; for example, the last of the multiple measurement positions may be taken as the measurement position selected for fusion.
  • The processing device in the electronic device may determine the last measurement position among the multiple measurement positions and fuse it with the predicted position to obtain the display position of the target object at the first moment.
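  • Both selection strategies described above (averaging all measurements, or keeping only the last one) can be sketched as follows; the names and the strategy switch are illustrative assumptions, not the application's wording:

```python
def select_measurement(measurements, strategy="mean"):
    """Choose the measurement to fuse with the predicted position.

    `measurements` is a list of coordinate tuples acquired before the
    first moment. strategy="mean" averages them element-wise;
    strategy="last" keeps the most recently acquired one.
    """
    if not measurements:
        return None  # no measurement available before the first moment
    if strategy == "last":
        return measurements[-1]
    n = len(measurements)
    return tuple(sum(coords) / n for coords in zip(*measurements))

ms = [(9.8, 2.1), (10.0, 2.0), (10.2, 1.9)]
mean_pos = select_measurement(ms)           # roughly (10.0, 2.0)
last_pos = select_measurement(ms, "last")   # (10.2, 1.9)
```

Averaging suppresses per-frame detector noise, while taking the last measurement favours freshness; the application presents them as alternative implementations of the same selection step.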
  • The display position is also related to a preset correction value, and the correction value is used to reduce the error caused by the bumping and shaking of the vehicle during driving. Therefore, after acquiring the display position of the target object at the first moment, in order to further reduce jitter, the display position can be fused and corrected again before projection, for example by obtaining the display position from the predicted position, the measured position and the correction value, so that the display position projected at the first moment is closer to the real position of the target object, thereby improving the user experience.
  • The processing device in the electronic device may update the display position according to the correction value.
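  • A minimal sketch of applying such a preset correction (the offset values and names are illustrative; the application does not specify how the correction value is derived or applied):

```python
def apply_correction(display_position, correction):
    """Subtract a preset per-axis correction offset intended to
    compensate for errors introduced by vehicle bump and shake."""
    return tuple(d - c for d, c in zip(display_position, correction))

# Hypothetical vertical correction of 0.05 m for road-induced bounce.
corrected = apply_correction((10.2, 1.9), (0.0, 0.05))
```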
  • The display position is also related to an average value of the display positions corresponding to multiple adjacent moments before the first moment. Therefore, after acquiring the display position of the target object at the first moment, in order to further reduce jitter, the display position can be fused and corrected again before projection, for example by obtaining the display position of the target object at the first moment from the display positions corresponding to multiple adjacent moments before the first moment, so that the display position projected at the first moment is closer to the real position of the target object, improving the user experience.
  • The processing device in the electronic device may determine the average value of the display positions corresponding to multiple adjacent moments before the first moment, and update the display position at the first moment according to that average value.
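  • The averaging over adjacent moments amounts to sliding-window smoothing, which can be sketched like this (window size and names are illustrative assumptions):

```python
from collections import deque

class SlidingWindowSmoother:
    """Smooth successive display positions over the last `size` moments,
    in the spirit of the sliding-window filtering described above."""

    def __init__(self, size=3):
        self.window = deque(maxlen=size)  # old positions drop off automatically

    def update(self, position):
        """Add the newest display position and return the window average."""
        self.window.append(position)
        n = len(self.window)
        return tuple(sum(coords) / n for coords in zip(*self.window))

smoother = SlidingWindowSmoother(size=3)
for pos in [(10.0, 2.0), (10.6, 1.8), (10.2, 2.2)]:
    smoothed = smoother.update(pos)
# `smoothed` is now roughly (10.27, 2.0): the frame-to-frame jitter
# across the three moments has been averaged out.
```

A larger window gives a steadier projection at the cost of more lag behind the target's true motion, which is the basic trade-off of this step.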
  • The display device is enabled to project the prompt information of the target object in the world coordinate system at the first moment into the vehicle body coordinate system. Therefore, in the display process, in order to better fit the vehicle body coordinate system, the position of the prompt information of the target object at the first moment in the vehicle body coordinate system can be determined according to the corresponding relationship between the world coordinate system and the vehicle body coordinate system.
  • the processing device in the electronic device may determine the position of the prompt information of the target object in the vehicle body coordinate system at the first moment according to the corresponding relationship.
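  • As a minimal planar sketch of such a correspondence (an illustrative rigid transform; the application does not disclose its calibration procedure), a world-frame point can be mapped into the vehicle body frame from the vehicle's pose:

```python
import math

def world_to_body(point, yaw, translation):
    """Map a 2D world-frame point into the vehicle body frame.

    `yaw` is the vehicle heading in radians and `translation` the
    vehicle's position in world coordinates; translate first, then
    apply the inverse (world-to-body) rotation.
    """
    dx = point[0] - translation[0]
    dy = point[1] - translation[1]
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * dx + s * dy, -s * dx + c * dy)

# Vehicle at the world origin heading along +x: the frames coincide,
# so a point 5 m ahead stays at (5, 0) in body coordinates.
p = world_to_body((5.0, 0.0), yaw=0.0, translation=(0.0, 0.0))
```

A real AR-HUD pipeline would use a full 3D extrinsic calibration (rotation matrix plus translation) between camera, body, and display frames; this 2D form only illustrates the correspondence the text refers to.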
  • the target object includes one or more of a vehicle, a person, an obstacle, and a traffic sign.
  • The sensing information includes one or more of an external feature of the target object, a position of the target object at the first moment, and a distance of the target object from the driving vehicle at the first moment. Therefore, according to the collected sensing information of the target object, the condition of the target object in front of the driving vehicle can be known.
  • the external characteristics of the target object include but are not limited to the model of the vehicle, the width of the vehicle, the length of the vehicle, the value of the vehicle, the color of the vehicle, and the like.
  • the external features of the target object include but are not limited to the height, gender, age group, clothing color, etc. of the pedestrian.
  • the present application provides an electronic device, which includes a processing module and a communication module.
  • the communication module is used to acquire the sensing information of the target object through the interface circuit.
  • The processing module can be used to obtain the predicted position of the target object at the first moment according to the sensing information of the target object, obtain the measured position of the target object within a preset time period before the first moment, and, according to the predicted position and the measured position, enable the display device to display the prompt information of the target object at the first moment.
  • the processing module can be used for:
  • When the measurement position of the target object is acquired before the first moment, enable the display device to display the prompt information and display position of the target object at the first moment, where the display position is related to the predicted position and the measurement position.
  • the processing module can be used for:
  • When the measurement position of the target object cannot be obtained before the first moment, enable the display device to display the prompt information and display position of the target object at the first moment, where the display position is related to the predicted position.
  • the processing module can be used for:
  • When multiple measurement positions of the target object are obtained before the first moment, enable the display device to display the prompt information and display position of the target object at the first moment, where the display position is related to the average of the multiple measured positions and the predicted position.
  • the processing module can be used for:
  • When multiple measurement positions of the target object are obtained before the first moment, enable the display device to display the prompt information and display position of the target object at the first moment, where the display position is related to the last acquired measurement position among the multiple measurement positions and the predicted position.
  • the processing module is also used for:
  • The display position is updated according to a correction value; the correction value is preset and is used to remove errors caused by bumps and shakes of the vehicle during driving.
  • the processing module is also used for:
  • the display position is updated according to an average value of display positions corresponding to multiple adjacent moments before the first moment.
  • the target object includes one or more of a vehicle, a person, an obstacle, and a traffic sign.
  • The present application provides a computing device, including a processor connected to a memory, where the memory stores computer programs or instructions, and the processor is used to execute the computer programs or instructions stored in the memory, so that the computing device performs the method in the above first aspect or any possible implementation of the first aspect.
  • The present application provides a computer-readable storage medium on which a computer program or instruction is stored; when the computer program or instruction is executed, a computer executes the method in the above first aspect or any possible implementation of the first aspect.
  • the present application provides a computer program product.
  • When a computer executes the computer program product, the computer executes the method in the above first aspect or any possible implementation of the first aspect.
  • The present application provides a chip connected to a memory, the chip being used to read and execute computer programs or instructions stored in the memory, so as to implement the method in the above first aspect or any possible implementation of the first aspect.
  • The present application provides a vehicle, which includes the vehicle-mounted control device and the execution device in the above second aspect or any possible implementation of the second aspect, so as to implement the method in the above first aspect or any possible implementation of the first aspect.
  • the present application provides a vehicle, which includes the chip and the execution device in the sixth aspect above, so as to implement the method in the first aspect or any possible implementation manner of the first aspect above.
  • Fusion correction is performed by combining the predicted position of the target object at the first moment with the measured position of the target object obtained before the first moment, so that the prompt information of the target object displayed by the display device at the first moment is closer to the real situation of the target object, which can effectively reduce the jitter of the content projected onto the windshield of the driving vehicle.
  • The display content can also be updated through the correction value to further optimize against jitter, effectively reducing the dizziness experienced by the driver when using the AR head up display to view road condition information, and ensuring the driver's safe driving.
  • FIG. 1 is a schematic diagram of a vehicle early warning display of an AR-HUD provided by the present application
  • FIG. 2 is a schematic diagram of a display shaking scene provided by the present application.
  • FIG. 3 is a schematic diagram of an electronic device scene provided by the present application.
  • FIG. 4 is a schematic structural diagram of an electronic device provided by the present application.
  • FIG. 5 is a schematic structural diagram of another electronic device provided by the present application.
  • FIG. 6 is a schematic structural diagram of another electronic device provided by the present application.
  • FIG. 7 is a schematic flow chart of the first display method provided by the present application.
  • FIG. 8 is a schematic flow chart of the second display method provided by the present application.
  • FIG. 9 is a schematic diagram of a collection scene provided by the present application.
  • FIG. 10 is a schematic diagram of acquiring a measurement position of a target object provided by the present application.
  • FIG. 11 is a schematic diagram of the first method of determining the display position of the target object provided by the present application.
  • FIG. 12 is a schematic diagram of fusing a predicted position and a measured position into a display position provided by the present application.
  • FIG. 13 is a schematic diagram of the second method of determining the display position of the target object provided by the present application.
  • FIG. 14 is a schematic diagram of a first way of determining the measurement position provided by the present application.
  • FIG. 15 is a schematic diagram of a second way of determining the measurement position provided by the present application.
  • FIG. 16 is a schematic diagram of a display location update scenario provided by the present application.
  • FIG. 17 is a schematic diagram of a scene for updating the display position through sliding window filtering provided by the present application.
  • FIG. 18 is a schematic flowchart of a third display method provided by the present application.
  • the present application provides a display method, device and system, which are used to reduce image jitter during display and improve display effect.
  • The method and the device are based on the same technical conception. Since the principles by which the method and the device solve the problem are similar, the implementations of the device and the method may refer to each other, and repeated content will not be described again.
  • The electronic device can obtain the predicted position of the target object at the first moment based on the acquired sensing information of the target object, together with the measurement position obtained before the first moment; through the predicted position and the measurement position, the display device is enabled to display the prompt information of the target object at the first moment, effectively reducing the jitter of the output image.
  • the electronic device in the embodiment of the present application can be used to support the vehicle to implement the method provided in the embodiment of the present application.
  • the electronic device can be integrally arranged with the vehicle, for example, the electronic device can be arranged inside the vehicle.
  • the electronic device and the vehicle may be arranged separately, for example, the electronic device may be implemented in the form of a terminal device or the like.
  • the terminal device here may be, for example, an AR-HUD or a vehicle-mounted device.
  • For example, the vehicle-mounted device may be a driving recorder with a projection function.
  • The vehicle-mounted device can provide the following functions: obtain the sensing information of the target object; based on the sensing information, determine the predicted position of the target object at the first moment and the measured position before the first moment; and, through the predicted position and the measured position, display the prompt information of the target object.
  • The vehicle in the embodiment of the present application may have an automatic driving function, and in particular a human machine interaction (HMI) function and the like.
  • The vehicle may also be replaced with other conveyances such as trains, aircraft, and mobile platforms according to actual needs, which is not limited in this application.
  • FIG. 4 shows a schematic structural diagram of a possible electronic device, which may include a processing module 410 and an acquisition module 420 .
  • the structure shown in FIG. 4 may be a vehicle-mounted device, or have functional components of the electronic device shown in this application.
  • The collection module 420 can include devices such as camera devices and sensing devices to support the collection function for the target object, and the processing module 410 can be a processor, for example a central processing unit (CPU).
  • the acquisition module 420 can communicate with the processing module 410 through a Bluetooth connection, a network connection or an interface circuit.
  • The processing module 410 may display road condition information on the display screen through one of projection, wired connection or wireless connection.
  • the collection module 420 may include a camera device, a sensor device, and other devices for supporting the collection function of the target object, and the processing module 410 may be a processor.
  • the acquisition module 420 may communicate with the processing module 410 through an interface circuit.
  • The processing module 410 may display the prompt information of the target object on the display screen through one of projection, wired connection or wireless connection.
  • The acquisition module 420 can be one or more of the camera devices and sensing devices controlled by the chip, and the processing module 410 can be the processor of the chip, which may include one or more central processing units.
  • the processing module 410 in the embodiment of the present application may be implemented by a processor or processor-related circuit components, and the acquisition module 420 may be implemented by a camera device, a sensing device, or a related acquisition device.
  • The processing module 410 can be used to perform all operations performed by the electronic device in any embodiment of the present application except the acquisition operation and the projection operation, for example: determining the predicted position of the target object at the first moment based on the sensing information of the target object, and displaying the prompt information of the target object at the first moment according to the predicted position and the measured position of the target object.
  • the acquisition module 420 may be configured to perform the acquisition operation on the target object in any embodiment of the present application, for example, acquire the sensing information of the target object through one or more of a camera device and a sensor device.
  • The sensing information of the target object acquired by the processing module 410 may be generated from one or more types of sensing data of the target object, such as point cloud information, sound, or pictures, collected by an external sensor or camera, or collected by the device's own sensor or camera.
  • the camera device in the embodiment of the present application may be a monocular camera, a binocular camera, or the like.
  • the shooting area of the camera device may be the external environment of the vehicle.
  • the sensing device is used to acquire the sensing data of the target object, so as to assist the processing device in the vehicle to analyze and determine the sensing information of the target object.
  • the sensing device described in the embodiments of the present application may include lidar, millimeter wave radar, ultrasonic radar, etc. for acquiring environmental information.
  • the processing module 410 can be a functional module, which can not only complete the analysis operation of collected information, but also complete the operation of displaying road condition information on the display screen.
  • the processing module 410 can be considered as an analysis module
  • the processing module 410 can be considered as a display module
  • The processing module 410 in the embodiment of the present application can be replaced by an AR-HUD; that is, the AR-HUD in the embodiment of the present application has the functions of the above processing module 410. Alternatively, the processing module 410 may include two functional modules, an analysis module and a display module, in which case the processing module 410 can be regarded as a general term for these two modules.
  • The analysis module is used to analyze road conditions according to the acquired sensing information of the target object, and to determine the display position of the target object at the first moment according to the predicted position of the target object at the first moment and the measured position before the first moment.
  • the display module is used to display the prompt information of the target object determined by the analysis module on the display screen.
  • the electronic device may further include a storage module, configured to store one or more programs and data information; wherein the one or more programs include instructions.
  • the electronic device may also include a display screen, which may be a windshield in a vehicle or a display screen of other vehicle equipment.
  • FIG. 5 shows a schematic structural diagram of another electronic device, which is used to perform the actions performed by the electronic device provided in the embodiment of the present application, and is provided for ease of understanding and illustration.
  • the electronic device may include a processor, a memory, and an interface circuit.
  • the electronic device may further include at least one component of a collection device, a processing device, a display device or a display screen.
  • the processor is mainly used to implement the processing operations provided by the embodiments of the present application, such as analyzing and processing the acquired sensory information of the target object, executing software programs, and processing data of the software programs.
  • Memory is primarily used to store software programs and data.
  • the collection device may be used to collect sensory information of the target object, and may include a camera, a millimeter-wave radar, or an ultrasonic sensor.
  • the interface circuit can be used to support the communication of the electronic device. For example, after the collection device collects the sensing information of the target object at the first moment, it can send the collected sensing information to the processor through the interface circuit.
  • Interface circuitry may include transceivers or input-output interfaces.
  • In an actual product of the electronic device shown in FIG. 5, there may be one or more processors and one or more memories.
  • a memory may also be called a storage medium or a storage device.
  • the memory may be set independently of the processor, or may be integrated with the processor, which is not limited in this embodiment of the present application.
  • FIG. 6 shows another electronic device provided by the embodiment of the present application. It can be seen that the electronic device may include a detection module, a tracking fusion module, a HUD anti-shake module, a HUD coordinate transformation module, a HUD engine rendering module, and the like.
  • The detection module is used to detect the sensing information of the target object using a detection algorithm, for example, detecting pedestrian and vehicle information and obtaining the position and bounding frame information of pedestrians and vehicles.
  • The specific form is related to the detection device. For example, photos of pedestrians and vehicles are collected through the camera, and information about them is obtained from the photo content; as another example, the distance between pedestrians or vehicles and the driving vehicle, and their positions, are obtained through sensors.
  • the detection module can also be used to detect the measurement position of the target object before the first moment, etc.
  • the tracking fusion module can be used to establish a 3D prediction model from the detected sensing information of the target object, and use the model to predict and track the position of the target object at the first moment.
  • the predicted output value of the model is fused with the detection value of the target object before the first moment, and the fused, updated result is used as the output.
  • a message queue (MQ) mechanism may be used to output the predicted value of the tracking algorithm.
  • the HUD anti-shake module is used to smooth the position output of the tracking fusion module by means of sliding window filtering, reducing its degree of shaking and improving the stability of the 3D detection frame.
  • the HUD coordinate transformation module is used to transform the position information in the camera coordinate system into the vehicle body coordinate system, using a large-scene real-vehicle camera attitude calibration algorithm in the real-vehicle HUD scenario.
  • the HUD engine rendering module is used to input the final output position information into the HUD rendering engine, render the corresponding pedestrian/vehicle warning information, project it onto the windshield through the optical engine, and present it in front of the driver's eyes, so that the driver can obtain real-time early warning information for pedestrians and vehicles, realizing real-time early warning for pedestrians and vehicles.
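The module chain described above can be sketched end to end. All function names and the simple per-stage logic below are illustrative assumptions, since the embodiment does not specify concrete implementations:

```python
# Minimal sketch of the FIG. 6 pipeline: detect -> track/fuse -> anti-shake
# -> coordinate transform -> render. Each stage is a toy stand-in.

def detect(frame):
    # detection module: extract sensing information (here just a position)
    return {"pos": frame["pos"]}

def track_fuse(det, predicted):
    # tracking fusion module: average the detection with the model prediction
    return tuple((d + p) / 2 for d, p in zip(det["pos"], predicted))

def anti_shake(history):
    # HUD anti-shake module: mean over a sliding window of recent positions
    n = len(history)
    dims = len(history[0])
    return tuple(sum(p[i] for p in history) / n for i in range(dims))

def to_body_coords(pos, offset):
    # HUD coordinate transformation module (translation only in this sketch)
    return tuple(p + o for p, o in zip(pos, offset))

def render(pos):
    # HUD engine rendering module: produce the warning to be projected
    return f"warning at {pos}"

det = detect({"pos": (10.0, 2.0)})
fused = track_fuse(det, predicted=(10.2, 2.2))     # ~(10.1, 2.1)
smooth = anti_shake([fused, (10.0, 2.0)])          # ~(10.05, 2.05)
body = to_body_coords(smooth, offset=(0.5, 0.0))   # ~(10.55, 2.05)
print(render(body))
```

Each stage only consumes the previous stage's output, which is what lets the MQ mechanism mentioned later decouple them.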
  • FIGS. 4 to 6 are only simplified schematic diagrams for easy understanding, and the system architecture may also include other devices or other unit modules.
  • the method provided by the embodiment of the present application will be described below with reference to FIG. 7 .
  • the method can be executed by an electronic device.
  • the electronic device may include a processing device and a display device.
  • the processing device may be an in-vehicle unit (car machine), a computer, or a processing device used in the HUD.
  • the electronic device may include any one or more structures shown in FIG. 4 to FIG. 6 .
  • the processing module 410 shown in FIG. 4, the processor shown in FIG. 5, or the tracking fusion module, HUD anti-shake module, HUD coordinate transformation module, and HUD engine rendering module shown in FIG. 6 can implement the processing actions in the methods provided by the embodiments of the present application.
  • the acquisition module 420 shown in FIG. 4, the collection device shown in FIG. 5, or the detection module shown in FIG. 6 can be used to collect the sensing information of the target object. These actions include but are not limited to acquiring the sensing information of the target object.
  • the electronic device acquires sensing information of a target object.
  • the electronic device obtains a predicted position of the target object at a first moment according to the sensing information of the target object.
  • the electronic device acquires the measurement position of the target object before the first moment.
  • the electronic device enables the display device to display the prompt information of the target object at the first moment according to the predicted position and the measured position.
  • the electronic device performs the display based on the measured position and predicted position of the target object (or based on the predicted position only); the prompt information may be displayed at this position, or at a nearby or related position.
  • for example, a frame may be drawn around the target object, or prompt messages may be displayed, such as a prompt next to a pedestrian.
  • the electronic device acquires sensing information of a target object.
  • the electronic device may obtain the sensing information of the target object through an interface circuit; or, the electronic device may obtain the sensing information of the target object through wireless communication, for example, through a Bluetooth connection, which is not limited here.
  • the sensing information of the target object acquired by the electronic device may be sensing information at a moment; or, it may be sensing information within a time period. After the electronic device acquires the sensing information within a period, it can filter that information to extract the useful sensing information.
  • the target objects in the embodiment of the present application include one or more of vehicles, pedestrians, obstacles and traffic signs.
  • the traffic signs may include one or more of traffic signboards, traffic lights, and road traffic markings.
  • target objects in the embodiments of the present application are not limited to the above content, and any object applicable to the present application may be used as the target object of the present application.
  • the sensing information of the target object may be one or more of the target object's external features, its current location, and its distance from the driving vehicle.
  • when the target object is a vehicle, the sensing information may include one or more of the external features of the vehicle, the current position of the vehicle, and its distance from the driving vehicle; the external features of the vehicle may include the model of the vehicle, the color of the vehicle, the length of the vehicle, the width of the vehicle, and the like.
  • when the target object is a pedestrian, the sensing information may include one or more of the pedestrian's external features, the pedestrian's current position, and the pedestrian's distance from the driving vehicle; the pedestrian's external features may include the identity of the pedestrian, for example, adult or elderly. Based on the pedestrian's identity, the electronic device can judge whether the pedestrian has a quick reaction capability, whether the warning needs to be strengthened, and the like.
  • the sensing information of the target object collected by a camera is taken as an example for introduction.
  • the camera can collect multiple consecutive images, determine the sensing information of the target object by identifying feature pixels in the images, and then send it through the interface circuit to the processor in the electronic device, so that the electronic device obtains the sensing information of the target object.
  • alternatively, the camera can collect multiple consecutive images and directly send them through the interface circuit to the processor in the electronic device; the processor identifies the feature pixels in the images and determines the sensing information of the target object, so that the electronic device obtains the sensing information of the target object.
  • the electronic device obtains a predicted position of the target object at a first moment according to the sensing information of the target object.
  • the electronic device may input the acquired sensing information of the target object into the prediction model used to determine the predicted position, so as to obtain the predicted position of the target object at the first moment.
  • a prediction model may be established based on a threshold amount of sensing information of the target object acquired in the past.
  • the function of the prediction model is to predict the future position of the target object based on the current sensing information of the target object, such as the acceleration, direction, distance, and position of the target object.
  • the electronic device may also perform feedback based on the error between the predicted position of the target object within a period of time and the corresponding actual position, and update and adjust the established prediction model.
  • the electronic device can also perform position prediction and the like according to the large database corresponding to the target object.
  • the embodiment of the present application does not limit the method of obtaining the predicted position of the target object at the first moment, and any method that can be applied to the present application is applicable to the present application.
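As a concrete illustration of the prediction step, the sketch below fits a constant-velocity model to past sensing information and predicts the position at the first moment. The model choice and all names are assumptions, since the embodiment explicitly leaves the prediction method open:

```python
# Hedged sketch: build a predictor from the two most recent observations of
# the target object and extrapolate its position to a future time.

def build_model(samples):
    """samples: list of (t, x, y) observations of the target object.
    Returns a predictor estimating the position at a future time t."""
    (t0, x0, y0), (t1, x1, y1) = samples[-2], samples[-1]
    vx = (x1 - x0) / (t1 - t0)   # velocity estimated from the last
    vy = (y1 - y0) / (t1 - t0)   # two observations
    def predict(t):
        return (x1 + vx * (t - t1), y1 + vy * (t - t1))
    return predict

history = [(0.0, 10.0, 2.0), (0.5, 10.5, 2.0)]   # 1 m/s along x
predict = build_model(history)
pred = predict(1.0)   # position predicted for the "first moment" t = 1.0
print(pred)
```

The feedback step mentioned above would correspond to refitting (rebuilding) the model as the error between predicted and actual positions accumulates.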
  • S802. The electronic device attempts to obtain the measurement position before the first moment; if it is obtained, S803 is executed, otherwise S804 is executed.
  • Implementation way 1: the measurement position is acquired by the electronic device before the first moment.
  • the electronic device only needs to ensure that the measurement position is obtained before the first moment.
  • the electronic device needs to ensure that the measurement position is acquired before the first moment and within a preset time period before the first moment.
  • the preset duration is 1 ms.
  • the electronic device acquires the measurement position of the target object within 1 ms before the first moment.
  • the camera can continuously collect multiple images within the preset time period, determine the sensing information of the target object within the preset time period by identifying feature pixels in the images, and then send it to the electronic device through the interface circuit, so that the electronic device determines the measurement position of the target object according to the sensing information within the preset time period.
  • alternatively, the camera can continuously collect multiple images within the preset time period and directly send them to the electronic device through the interface circuit; the electronic device identifies the feature pixels in the images, determines the sensing information of the target object, and then determines the measurement position of the target object.
  • the number of measurement positions of the target object acquired by the electronic device within a preset period of time before the first moment may be one, multiple, or zero.
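The preset-duration check described above can be sketched as a timestamp filter; the data layout (timestamped positions) is an assumption:

```python
# Keep only measurement positions whose timestamps fall within `window`
# seconds before the first moment. The result may hold zero, one, or
# several positions, matching the cases described in the text.

def measurements_before(first_moment, window, timed_positions):
    return [pos for t, pos in timed_positions
            if first_moment - window <= t < first_moment]

timed = [(0.9985, (10.1, 2.0)),   # too early for a 1 ms window
         (0.9993, (10.2, 2.0)),   # within the last 1 ms
         (0.5,    (9.0, 2.0))]    # far too early
recent = measurements_before(1.0, 0.001, timed)
print(recent)
```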
  • the electronic device determines the display position of the target object at the first moment according to the predicted position and the measured position.
  • when the electronic device determines that the measured position of the target object has been acquired within the preset time period, the electronic device fuses the measured position with the predicted position to obtain the display position of the target object at the first moment.
  • the electronic device may determine an average value of the predicted position and the measured position as the display position.
  • the display position of the target object at the first moment is obtained through fusion correction based on the predicted position and the measured position, so that the acquired display position is closer to the real trajectory of the target object, which effectively reduces jitter and improves the user experience.
  • the measurement position for fusion with the predicted position may be determined in various ways, which are not limited to the following:
  • Determination mode 1: the electronic device determines the average value of the plurality of measurement positions as the measurement position for fusion with the predicted position.
  • for example, when three measurement positions are acquired, the electronic device may determine the average value of the three measurement positions as the measurement position for fusion with the predicted position.
  • Determination mode 2: the electronic device determines the last measurement position among the multiple acquired measurement positions as the measurement position for fusion with the predicted position.
  • the electronic device can determine the measurement position 3 among the three measurement positions as the measurement position for fusion with the predicted position.
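The two determination modes can be sketched as follows. The equal-weight averaging used for the fusion step is an assumption, since the embodiment only gives averaging as one example of fusing the measured and predicted positions:

```python
# Determine the display position: fuse the predicted position with either the
# mean of all measurements (mode 1) or the last measurement (mode 2); with no
# measurements, fall back to the prediction alone.

def fuse_display_position(predicted, measurements, mode="average"):
    if not measurements:                  # no measurement before the moment
        return predicted
    if mode == "average":                 # determination mode 1
        m = tuple(sum(c) / len(measurements) for c in zip(*measurements))
    else:                                 # determination mode 2: last one
        m = measurements[-1]
    return tuple((p + mc) / 2 for p, mc in zip(predicted, m))

meas = [(10.0, 2.0), (10.2, 2.0), (10.4, 2.0)]
avg_mode = fuse_display_position((10.6, 2.0), meas, mode="average")  # mean is 10.2
last_mode = fuse_display_position((10.6, 2.0), meas, mode="last")    # uses 10.4
print(avg_mode, last_mode)
```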
  • the electronic device determines the display position of the target object at the first moment according to the predicted position.
  • the electronic device determines that the measured position of the target object is not acquired within the preset time period, the electronic device uses the predicted position as the display position of the target object at the first moment.
  • the electronic device enables the display device to display the prompt information of the target object at the first moment and display position.
  • when the electronic device has a projection function and/or a display function, the electronic device can project the prompt information of the target object onto the windshield of the driving vehicle at the first moment and at the display position.
  • the electronic device can send a control instruction through an interface circuit to a connected display screen, such as a vehicle-mounted display screen or an AR-HUD, so that the display screen receiving the control instruction displays the prompt information of the target object at the first moment; or, the electronic device can send the control instruction to the connected vehicle-mounted display screen or AR-HUD through a wireless connection, so that the display screen receiving the control instruction displays the prompt information of the target object at the first moment. The control instruction can be used to instruct the display screen to display the corresponding content.
  • the manner in which the electronic device enables the HUD to display the prompt information of the target object at the first moment is not limited to the following:
  • Projection mode 1: the electronic device performs projection coordinate system conversion, and sends the converted display position of the target object to the HUD for projection.
  • the electronic device maps the display position in the world coordinate system to the vehicle body coordinate system according to the corresponding relationship between the world coordinate system where the display position is located and the vehicle body coordinate system.
  • the electronic device may determine and establish the correspondence between the world coordinate system where the display position is located and the vehicle body coordinate system in the following manner, so as to perform coordinate system conversion on the display position according to the correspondence.
  • the internal parameters of the electronic device include, but are not limited to, three attitude angles of the electronic device: the roll angle (roll), the yaw angle (yaw), and the pitch angle (pitch).
  • a checkerboard is placed in front of the electronic device multiple times to calibrate the internal parameters of the electronic device. Then, based on the internal parameters, a checkerboard is placed vertically in front of the HUD body to calibrate the external-parameter rotation matrix R and offset vector T of the electronic device, and the offset of the checkerboard relative to the HUD body coordinate system is measured.
  • the offset of the checkerboard relative to the HUD body coordinate system can be determined according to the following formula 1:
      Δd = (Δx, Δy, Δz)    (formula 1)
  • where Δd represents the offset of the checkerboard relative to the HUD body coordinate system in the three directions x, y, and z; Δx represents the offset in the x direction; Δy represents the offset in the y direction; Δz represents the offset in the z direction.
  • the corresponding HUD body coordinate system can be obtained according to the following formula 2:
      (x_car, y_car, z_car)^T = R · (x_w, y_w, z_w)^T + T + Δd    (formula 2)
  • where x_car, y_car, and z_car represent the coordinates in the vehicle body coordinate system in the x, y, and z directions; x_w, y_w, and z_w represent the coordinates in the world coordinate system in the x, y, and z directions; R is the calibrated external-parameter rotation matrix, and T is the offset vector.
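The world-to-body mapping of formula 2 can be sketched in code as follows; the identity rotation matrix and the sample T and Δd values stand in for real calibration results:

```python
# Map a point from the world coordinate system to the HUD/vehicle body
# coordinate system: p_body = R * p_world + T + delta_d. The calibration
# values below are placeholders, not real calibration output.

def mat_vec(R, v):
    # 3x3 matrix times 3-vector
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

def world_to_body(p_world, R, T, delta_d):
    rv = mat_vec(R, p_world)
    return [rv[i] + T[i] + delta_d[i] for i in range(3)]

R = [[1.0, 0.0, 0.0],        # placeholder extrinsic rotation matrix
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
T = [0.2, 0.0, 1.1]          # placeholder offset vector
delta_d = [0.05, 0.0, 0.0]   # placeholder checkerboard offset (dx, dy, dz)

p_body = world_to_body([10.0, 2.0, 0.0], R, T, delta_d)
print(p_body)   # approximately [10.25, 2.0, 1.1]
```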
  • the electronic device sends the converted display position of the target object at the first moment to the HUD, and the HUD projects the prompt information of the target object on the windshield of the driving vehicle according to the display position.
  • Projection mode 2: the electronic device sends the display position of the target object at the first moment to the HUD, and the HUD performs the projection coordinate system conversion by itself.
  • when the HUD in the driving vehicle converts the obtained display position of the target object at the first moment into the HUD coordinate system, this can be realized in the following ways:
  • the HUD maps the display position in the world coordinate system to the vehicle body coordinate system.
  • the HUD determines and establishes the corresponding relationship between the world coordinate system where the display position is located and the vehicle body coordinate system, which can be referred to the introduction of the above-mentioned projection method 1. For the sake of concise description, details are not repeated here.
  • the HUD can also render the corresponding prompt information for the target object according to the display position of the target object at the first moment, project it onto the windshield through the optical engine, and present it in front of the driver's eyes, so that the driver can obtain real-time early warning information for pedestrians and vehicles, realizing real-time early warning for pedestrians and vehicles.
  • the display position in the embodiment of this application may further be determined in the following manners:
  • Mode 1: the display position is related to the preset correction value.
  • the correction value in the embodiment of the present application may be preset, and is used to remove the error caused by the bumping and shaking of the vehicle during driving.
  • the electronic device may use different correction values for correction according to different driving scenarios.
  • the electronic device can determine the driving scene according to different road conditions.
  • the driving scene on a flat asphalt road may be an urban area; the driving scene on a relatively narrow and steep road section may be a mountain road.
  • the degree of vehicle bumping and shaking in different driving scenarios can be obtained, and the correction values corresponding to the different scenarios can be determined accordingly.
  • for example, the correction value in this scenario may be an offset of 0.5 meters in the traveling direction of the target object.
  • if the display device projects directly, the display position of the target object is as shown in (a) of FIG. 15.
  • if the display device further adjusts the display position according to the correction value before projection, the display position of the target object at the first moment after projection is as shown in (b) of FIG. 15, shifted by 0.5 meters relative to the display position in FIG. 15(a).
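Mode 1 can be sketched as a table lookup plus an offset along the traveling direction. The scenario names and the 0.5-meter value follow the example above, while the table itself is illustrative:

```python
# Preset, scenario-dependent correction values (meters) applied along the
# target object's traveling direction. The table entries are assumptions.
CORRECTION = {"urban": 0.0, "mountain": 0.5}

def corrected_position(pos, direction, scene):
    dx, dy = direction                 # unit vector of the travel direction
    c = CORRECTION.get(scene, 0.0)     # unknown scenes get no correction
    return (pos[0] + c * dx, pos[1] + c * dy)

# Target at (10 m, 2 m) traveling along +x, on a bumpy mountain road:
print(corrected_position((10.0, 2.0), (1.0, 0.0), "mountain"))  # (10.5, 2.0)
```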
  • Mode 2: the display position is related to the average value of the display positions corresponding to multiple adjacent moments before the first moment.
  • the electronic device in the embodiment of the present application may update the display position of the target object at the first moment through sliding window filtering.
  • the electronic device determines a mean value filter according to the display positions of at least two adjacent frames; then, the electronic device performs sliding window filtering according to the mean value filter and a preset step size, and updates the display position of the target object at the first moment.
  • the mean value filter can be determined by the following formula 3:
      Box̄_k = (Box_{k−n+1} + … + Box_k) / n    (formula 3)
  • where Box_k represents the position information of the target object in the k-th frame, n represents the number of frames selected for mean filtering, and Box̄_k represents the result of the mean filtering; each Box represents the display position corresponding to each frame.
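Formula 3 corresponds to the following sliding-window mean filter; the window size and sample positions are illustrative:

```python
# Sliding-window mean filter over the last n frames of display positions
# (each "Box" here is a 2-D position).

def mean_filter(boxes, n):
    """Return the smoothed position for the latest frame: the average of the
    last n display positions."""
    window = boxes[-n:]
    return tuple(sum(c) / len(window) for c in zip(*window))

boxes = [(10.0, 2.0), (10.4, 2.0), (10.2, 2.0)]   # last three frames
smoothed = mean_filter(boxes, n=3)
print(smoothed)   # averages the three frames
```

Sliding the window forward by the preset step size and re-applying the filter yields the anti-shake behavior described for the HUD anti-shake module.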
  • the electronic device may determine whether the driving vehicle is in a driving state according to one or more of the following conditions:
  • Condition 1: whether the engine of the driving vehicle is running.
  • Condition 2: whether the moving distance of the driving vehicle within a certain period of time is greater than a threshold distance.
  • Condition 3: whether the gear of the driving vehicle is the P gear (parking gear) or the D gear (forward gear).
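The three conditions can be combined into a single check; the parameter names and the distance threshold below are assumptions:

```python
# Combine the three driving-state conditions: engine running (condition 1),
# moved further than a threshold distance (condition 2), and forward gear
# rather than parking gear (condition 3).

def is_driving(engine_on, distance_moved_m, gear, min_distance_m=1.0):
    return (engine_on
            and distance_moved_m > min_distance_m
            and gear == "D")

print(is_driving(True, 5.0, "D"))   # engine on, moved 5 m, forward gear
print(is_driving(True, 5.0, "P"))   # parked: not in a driving state
```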
  • an optional method in the embodiment of the present application may also use the MQ parallel mechanism in the process of presenting the road condition.
  • the modules of the entire system may otherwise be interdependent serial processes; the message queue (MQ) mechanism can be used to decouple the modules of the entire system, reducing the coupling between modules so that the modules of the entire system run in parallel.
  • Other methods can also be used, which are not limited in this application.
  • when the target object on the road is detected, if the updated position of the target object is not obtained within the threshold time (that is, after the detection is determined to be blocked), the detection system directly uses, as the output, the predicted value of the target object's position at the next moment obtained through the prediction model, which greatly improves the rate of the tracking output.
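The MQ decoupling and the fallback to the predicted value can be sketched with a standard-library queue; the two-stage split and the timeout value are illustrative:

```python
# Detection and tracking exchange data only through a message queue, so the
# stages run in parallel; when no fresh detection arrives within the
# threshold time, the tracker falls back to the model's predicted value.
import queue
import threading

q = queue.Queue()

def detector():
    # producer: pushes detected positions onto the queue
    for pos in [(10.0, 2.0), (10.5, 2.0)]:
        q.put(pos)

def tracker(predicted, timeout=0.1):
    # consumer: uses a detection if one arrives in time, else the prediction
    outputs = []
    for _ in range(3):   # the third read times out and uses the prediction
        try:
            outputs.append(q.get(timeout=timeout))
        except queue.Empty:
            outputs.append(predicted)
    return outputs

t = threading.Thread(target=detector)
t.start()
t.join()
result = tracker(predicted=(11.0, 2.0))
print(result)
```

Only the two detected positions are in the queue, so the third output is the predicted fallback, mirroring the blocked-detection behavior described above.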
  • the method provided by the embodiment of the present application may include the following steps:
  • the detection module detects the sensing information of the target object.
  • the tracking and fusion module acquires the sensing information of the target object.
  • the tracking and fusion module can obtain, through the interface circuit, the sensing information of the target object detected by the detection module; or the tracking and fusion module can obtain, through a wireless connection such as a Bluetooth connection, the sensing information of the target object detected by the connected detection module.
  • the tracking and fusion module obtains the predicted position of the target object at the first moment according to the sensing information of the target object.
  • the tracking and fusion module may input the acquired sensing information of the target object into the prediction model used to determine the predicted position, so as to obtain the predicted position of the target object at the first moment.
  • the tracking and fusion module in this embodiment of the present application may establish a prediction model based on a threshold amount of sensing information of the target object acquired in the past.
  • the tracking fusion module in the embodiment of the present application may update and adjust the established prediction model according to the acquired sensing information of the target object.
  • the tracking and fusion module determines whether the measurement position is obtained before the first moment; if yes, S1804 is executed, and if not, S1805 is executed.
  • the measured position of the target object before the first moment may be notified to the tracking and fusion module after being detected by the detection module.
  • the measured position of the target object before the first moment may be determined by the tracking and fusion module through the sensing information of the target object notified by the detection module before the first moment.
  • the specific content of S1803 is similar to that of S802 above and is only briefly described here; for details, refer to the content of S802 above.
  • the tracking and fusion module determines the display position of the target object at the first moment according to the predicted position and the measured position.
  • the tracking fusion module may determine the average value of the predicted position and the measured position as the display position of the target object at the first moment.
  • the measurement position for fusion with the predicted position may be determined in various ways, which are specifically not limited to the following:
  • Determination mode 1: the tracking and fusion module determines the average value of the plurality of measurement positions as the measurement position for fusion with the predicted position.
  • Determination mode 2: the tracking and fusion module determines the last measurement position among the multiple acquired measurement positions as the measurement position used for fusion with the predicted position.
  • the specific content of S1804 is similar to that of S803 above and is only briefly described here; for details, refer to the content of S803 above.
  • the tracking and fusion module determines the display position of the target object at the first moment according to the predicted position.
  • the tracking and fusion module enables the HUD engine rendering module to display the prompt information of the target object at the first moment and display position.
  • the display position of the target object at the first moment can be adjusted through the HUD coordinate transformation module .
  • the HUD coordinate transformation module obtains the display position of the target object at the first moment through the interface circuit, and then maps the display position in the world coordinate system to In the vehicle body coordinate system, the adjusted display position of the target object at the first moment is obtained.
  • the HUD coordinate transformation module transmits the adjusted display position of the target object at the first moment to the HUD engine rendering module through the interface circuit.
  • the display position may also be adjusted by the HUD anti-shake module, in manners that include but are not limited to the following:
  • the HUD anti-shake module obtains the display position of the target object at the first moment according to the predicted position, the measured position and the correction value.
  • the HUD anti-shake module obtains the display position of the target object at the first moment according to the predicted position, the measured position, and the average value of the display positions corresponding to multiple adjacent moments before the first moment.
  • the electronic device shown in FIG. 6 may implement the display method provided by the embodiment of the present application. It should be understood that the steps shown in FIG. 18 implemented by the electronic device shown in FIG. 6 are exemplary. According to the display method provided by the embodiment of the present application, some steps shown in FIG. 18 may be omitted, other steps may replace some steps in FIG. 18, or the electronic device may also execute some steps not shown in FIG. 18.
  • the present application also provides an electronic device for implementing the functions of the electronic device in the display method introduced in the above method embodiments, thus possessing the beneficial effects of the above method embodiments.
  • the electronic device may include any structure in FIG. 4 to FIG. 6 , or be realized by a combination of any multiple structures in FIG. 4 to FIG. 6 .
  • the electronic device shown in FIG. 4 may be a terminal or a vehicle, or a chip inside the terminal or the vehicle.
  • the electronic device can implement the display method shown in FIG. 8 or FIG. 18 and the above optional embodiments.
  • the electronic device may include a processing module 410 and a collection module 420 .
  • the processing module 410 can be used to execute any of steps S800-S805 in the method shown in FIG. 8 and S1801-S1806 in the method shown in FIG. 18, or any steps in the above optional embodiments, such as display position determination, coordinate system transformation, and judging whether the measurement position of the target object is obtained before the first moment.
  • the collecting module 420 is used for collecting the sensing information of the target object. For example, it can be used to execute S1800 in the method shown in FIG. 18, or to execute any step related to the acquisition of target object information in the above optional embodiments. For details, refer to the detailed description in the method example; details are not repeated here.
  • the processing module 410 may be used to: acquire sensing information of a target object, where the target object includes one or more of vehicles, people, obstacles, and traffic signs; obtain, according to the sensing information of the target object, the predicted position of the target object at the first moment; acquire the measured position of the target object before the first moment; and, according to the predicted position and the measured position, enable the electronic device to display the prompt information of the target object at the first moment.
  • the electronic device in the embodiment of the present application can be implemented by software, for example, by a computer program or instructions having the above functions; the corresponding computer program or instructions can be stored in the internal memory of the terminal, and the processor reads the corresponding computer programs or instructions in the memory to implement the above functions of the processing module 410 and/or the acquisition module 420.
  • the electronic device in the embodiment of the present application may also be implemented by hardware.
  • the processing module 410 may be a processor (such as a processor in a CPU or a system chip), and the acquisition module 420 may include one or more of a camera device and a sensing device.
  • processing module 410 may be used to:
  • when the measurement position of the target object is acquired before the first moment, enable the electronic device to display the prompt information of the target object at the first moment and at the display position, where the display position is related to the predicted position and the measurement position.
  • processing module 410 may be used to:
  • when the measurement position of the target object cannot be obtained before the first moment, enable the electronic device to display the prompt information of the target object at the first moment and at the display position, where the display position is related to the predicted position.
  • processing module 410 may be used to:
  • when multiple measurement positions of the target object are acquired before the first moment, enable the electronic device to display the prompt information of the target object at the first moment and at the display position, where the display position is related to an average value of the multiple measurement positions and the predicted position.
  • processing module 410 may be used to:
  • when multiple measurement positions of the target object are acquired before the first moment, enable the electronic device to display the prompt information of the target object at the first moment and at the display position, where the display position is related to the last acquired measurement position among the multiple measurement positions and the predicted position.
  • the display position is also related to a preset correction value, and the correction value is used to reduce errors caused by bumping and shaking of the vehicle during driving.
  • the display position is also related to an average value of display positions corresponding to multiple adjacent moments before the first moment.
  • the target object includes one or more of a vehicle, a person, an obstacle, and a traffic sign.
  • the electronic device shown in FIG. 5 may be a terminal or a vehicle, or a chip inside the terminal or the vehicle.
  • the electronic device can implement the display method shown in FIG. 8 or FIG. 18 and the above optional embodiments.
  • the electronic device may include at least one of a processor, a memory, an interface circuit, or a human-computer interaction device. It should be understood that although only one processor, one memory, one interface circuit, and one collection device are shown in FIG. 5, the electronic device may include other numbers of processors and interface circuits.
  • the interface circuit is used for the electronic device to communicate with the terminal or other components of the vehicle, such as a memory or other processors, or a projection device and the like.
  • the processor can be used for signal interaction with other components through interface circuits.
  • the interface circuit may be an input/output interface of the processor.
  • the processor can read computer programs or instructions in the memory coupled to it through the interface circuit, and decode and execute these computer programs or instructions.
  • these computer programs or instructions may include the above-mentioned function programs, and may also include the above-mentioned function programs of the electronic device.
  • the electronic device can realize the solution in the display method provided by the embodiment of the present application.
  • these functional programs are stored in a memory outside the electronic device, and the electronic device may not include a memory at this time.
  • when the above functional program is decoded and executed by the processor, part or all of its content is temporarily stored in the memory.
  • these functional programs are stored in a memory inside the electronic device.
  • the electronic device may be provided in the electronic device in the embodiment of the present application.
  • part of these function programs are stored in a memory outside the electronic device, and the other parts of these function programs are stored in a memory inside the electronic device.
  • the above processor may be a chip.
  • the processor may be a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), a system on chip (SoC), a central processing unit (CPU), a network processor (NP), a digital signal processor (DSP), a micro controller unit (MCU), a programmable logic device (PLD), or another integrated chip.
  • the processor in the embodiment of the present application may be an integrated circuit chip, which has a signal processing capability.
  • each step of the above-mentioned method embodiments may be completed by an integrated logic circuit of hardware in a processor or instructions in the form of software.
  • the above-mentioned processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • the steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a mature storage medium in the field such as random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, register.
  • the storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps of the above method in combination with its hardware.
  • the memory in the embodiments of the present application may be a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memories.
  • the non-volatile memory can be read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (electrically EPROM, EEPROM), or flash memory.
  • volatile memory can be random access memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM).
  • the computer program or instruction can be stored in the memory, and the computer program or instruction stored in the memory can be executed by the processor.
  • when the electronic device is implemented by the structure shown in FIG. 5, the memory may store computer programs or instructions, and the processor may execute them to perform the actions performed by the processing module 410 when the electronic device is implemented by the structure shown in FIG. 4.
  • the collection device 420 may also perform the action of collecting the sensory information of the target object by the electronic device through the structure shown in FIG. 4 .
  • the processing module 410 shown in FIG. 4 may be implemented by the processor and memory shown in FIG. 5; in other words, the processing module 410 shown in FIG. 4 includes the processor and memory, or the processor executes the computer programs or instructions stored in the memory to implement the actions performed by the processing module 410 shown in FIG. 4. And/or, the acquisition module 420 shown in FIG. 4 may be implemented by the acquisition device shown in FIG. 5; in other words, the acquisition module 420 shown in FIG. 4 includes the acquisition device, or the acquisition device performs the actions performed by the acquisition module 420 shown in FIG. 4.
  • one or more of the detection module, the tracking fusion module, the HUD anti-shake module, the HUD coordinate transformation module, and the HUD engine rendering module can be executed through the structure shown in Figure 4 Actions performed by the processing module 410 when implementing the electronic device.
  • the actions performed by the acquisition module 420 when the electronic device is implemented by the structure shown in FIG. 4 may also be performed by the detection module.
  • for the actions performed by the detection module, the tracking fusion module, the HUD anti-shake module, the HUD coordinate transformation module, and the HUD engine rendering module, refer to the description of the process shown in FIG. 17; details are not repeated here.
  • the present application provides a computing device, including a processor, the processor is connected to a memory, the memory is used to store computer programs or instructions, and the processor is used to execute the computer program stored in the memory, so that the computing device performs The method in the above-mentioned method embodiment.
  • the present application provides a computer-readable storage medium on which a computer program or instruction is stored, and when the computer program or instruction is executed, the computing device executes the method in the above method embodiment.
  • the present application provides a computer program product, which enables the computing device to execute the methods in the above method embodiments when the computer executes the computer program product.
  • the present application provides a chip, which is connected to a memory, and is used to read and execute computer programs or instructions stored in the memory, so that the computing device executes the methods in the above method embodiments.
  • an embodiment of the present application provides a device, the device includes a processor and an interface circuit, the interface circuit is used to receive computer programs or instructions and transmit them to the processor; the processor Execute the computer program or instructions to execute the methods in the above method embodiments.
  • each functional module in each embodiment of the present application may be integrated into one processor, or physically exist separately, or two or more modules may be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware or in the form of software function modules.
  • the embodiments of the present application may be provided as methods, systems, or computer program products. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
  • these computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to operate in a specific manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising an instruction device that implements the functions specified in one or more procedures of the flowchart and/or one or more blocks of the block diagram.

Abstract

This application discloses a display method, apparatus and system. Based on an obtained predicted position of a target object at a first moment and a measurement position obtained before the first moment, an electronic apparatus is enabled to display prompt information of the target object at the first moment, which effectively reduces output jitter and improves user experience.

Description

一种显示方法、装置和系统 技术领域
本申请涉及汽车技术领域,特别涉及一种显示方法、装置和系统。
背景技术
目前为了增强驾驶员对路面信息的获取,实现增强现实(augmented reality,AR)导航、AR预警等功能,如图1所示,在车辆内采用AR抬头显示(AR head up display,AR-HUD),即将相机检测到的行人、车辆等信息,通过坐标变换到真实世界坐标系下,并投影到驾驶员前方视野中的电子装置上,例如,驾驶车辆中的风挡玻璃,从而将投影虚拟信息和真实路况信息融合起来,实现行人警示、车辆警示等功能作用。
然而,由于驾驶车辆所提供的算力资源限制、车速太快、驾驶车辆中相机本身存在检测误差和驾驶过程中车身颠簸等原因,如图2所示,在采用AR-HUD进行显示过程中,驾驶车辆对检测到的行人、车辆的检测结果会存在较为明显的抖动现象,让人产生眩晕感,影响驾驶员的长期安全驾驶。
综上,目前采用AR-HUD进行显示的方法,抖动较大,用户体验较差。
发明内容
本申请提供一种显示方法、装置和系统,用以降低显示过程中的图像抖动,提升显示效果。
本申请提供的显示方法可以由电子装置执行。电子装置可以被抽象为计算机系统。电子装置可以是整机,也可以是整机中的部分器件,例如:系统芯片或图像芯片。其中,系统芯片也可以包括片上系统(system on chip,SOC),或SoC芯片。具体地,电子装置可以是诸如车辆中的车载电脑、车机等这样的终端装置或车载设备,也可以是能够被设置在车辆或车载设备中的计算机系统中的系统芯片、图像处理芯片或其他类型的芯片。
第一方面,本申请实施例提供一种显示方法,包括:
获取目标对象的传感信息；根据所述目标对象的传感信息，获得所述目标对象在第一时刻的预测位置；获取所述目标对象在所述第一时刻前的测量位置；根据所述预测位置和所述测量位置，使能显示装置在所述第一时刻显示所述目标对象的提示信息。
示例性的,当上述方法是由该电子装置执行时,可以通过该电子装置中的采集装置获取目标对象的传感信息,该采集装置可以通过与该电子装置中的处理装置连接的接口电路,将该目标对象的传感信息发送给该处理装置。所述处理装置可以通过对获取到的该目标对象的传感信息进行处理,得到该目标对象在第一时刻的提示信息。该处理装置还可以通过与该电子装置中的显示装置连接的接口电路,将该目标对象在第一时刻的提示信息发送给该显示装置进行投影。
采用该方法,在进行显示过程中,通过采集到的目标对象的传感信息,得到目标对象在第一时刻的预测位置,以及将该预测位置与第一时刻前获取到的测量位置结合,使能电子装置在所述第一时刻显示该目标对象的提示信息,有效减少输出抖动性。
一种可能的设计中,在所述第一时刻前获取不到所述目标对象的测量位置时,使能显示装置在所述第一时刻和显示位置,显示所述目标对象的提示信息,所述显示位置与所述预测位置有关。因此,若在该第一时刻前没有获取到目标对象的测量位置,为了更好的保证输出的目标对象在第一时刻的显示位置的实时性,可以直接将该预测位置确定为该目标对象在第一时刻的显示位置,有效降低时延。
示例性的,当上述方法是由该电子装置执行时,可以通过该电子装置中的处理装置确定是否在该第一时长前获取到目标对象的测量位置,以及在该第一时长前未获取到目标对象的测量位置时,将该预测位置确定为该显示位置。
一种可能的设计中,在所述第一时刻前获取到所述目标对象的测量位置时,使能显示装置在所述第一时刻和显示位置,显示所述目标对象的提示信息,所述显示位置与所述预测位置和所述测量位置有关。因此,若在该第一时刻前获取到目标对象的测量位置,为了更好的降低输出内容的抖动性,可以根据将该预测位置以及该测量位置,确定该目标对象在第一时刻的显示位置。
一种可能的设计中,该电子装置中的处理装置,可以将该预测位置以及该测量位置的平均值,确定为该目标对象在第一时刻的显示位置。
一种可能的设计中,在所述第一时刻前获取到所述目标对象的多个测量位置时,使能显示装置在所述第一时刻和显示位置,显示所述目标对象的提示信息,所述显示位置与所述多个测量位置的平均值和所述预测位置有关。因此,在第一时刻前获取到目标对象的多个测量位置后,需要确定用于与该预测位置进行融合的测量位置,从而根据选取的测量位置与预测位置进行融合,得到显示位置,例如,可以将多个测量位置的平均值确定为选取的用于进行融合的测量位置。
示例性的,当上述方法是由该电子装置执行时,可以通过该电子装置中的处理装置确定多个测量位置的平均值,以及将该多个测量位置的平均值与该预测位置进行融合,得到该目标对象在第一时刻的显示位置。
一种可能的设计中,在所述第一时刻前获取到所述目标对象的多个测量位置时,使能显示装置在所述第一时刻和显示位置,显示所述目标对象的提示信息,所述显示位置与所述多个测量位置中最后获取到的测量位置和所述预测位置有关。因此,在第一时刻前内获取到目标对象的多个测量位置后,需要确定用于与该预测位置进行融合的测量位置,从而根据选取的测量位置与预测位置进行融合,得到显示位置,例如,可以将多个测量位置中最后一个测量位置确定为选取的用于进行融合的测量位置。
示例性的,当上述方法是由该电子装置执行时,可以通过该电子装置中的处理装置确定多个测量位置中最后一个测量位置,以及将该最后一个测量位置与该预测位置进行融合,得到该目标对象在第一时刻的显示位置。
一种可能的设计中,所述显示位置还与预先设置的修正值有关,所述修正值用于降低车辆在驾驶过程中颠簸晃动产生的误差。因此,在获取到目标对象在第一时刻的显示位置后,为了进一步的降低抖动性,可以在进行投影前,再次对目标对象在第一时刻的显示位置进行融合校对,例如,可以根据该预测位置,该测量位置以及修正值来获取该显示位置,使得进行投影显示的目标对象在第一时刻的显示位置更贴近目标对象的真实位置,提高用户体验。
示例性的,当上述方法是由该电子装置执行时,可以通过该电子装置中的处理装置根 据修正值对该显示位置进行更新。
一种可能的设计中,所述显示位置还与所述第一时刻前多个相邻时刻对应的显示位置的均值有关。因此,在获取到目标对象在第一时刻的显示位置后,为了进一步的降低抖动性,可以在进行投影前,再次对目标对象在第一时刻的显示位置进行融合校对,例如,可以根据第一时刻前多个相邻时刻对应的显示位置获取该目标对象在第一时刻的显示位置,使得进行投影显示的目标对象在第一时刻的显示位置更贴近目标对象的真实位置,提高用户体验。
示例性的,当上述方法是由该电子装置执行时,可以通过该电子装置中的处理装置确定第一时刻前多个相邻时刻对应的显示位置的均值,以及根据该均值对该第一时刻的显示位置进行更新。
一种可能的设计中,根据显示位置所在世界坐标系与车身坐标系的对应关系,使能显示装置将所述世界坐标系下的该目标对象在第一时刻的提示信息投影到所述车身坐标系中。因此,在进行显示过程中,为了更好的贴合车身坐标系,可以根据世界坐标系与车身坐标系的对应关系,确定该目标对象在第一时刻的提示信息在该车身坐标系中的位置。
示例性的,当上述方法是由该电子装置执行时,可以通过该电子装置中的处理装置根据该对应关系,确定该目标对象在第一时刻的提示信息在车身坐标系中的位置。
一种可能的设计中,所述目标对象包括车辆、人、障碍物以及交通标识中的一个或多个。所述传感信息包括所述目标对象的外部特征、所述目标对象在第一时刻的位置,以及所述目标对象在第一时刻与驾驶车辆的距离中的一个或多个。因此,可根据采集到的目标对象的传感信息,知晓驾驶车辆前方的目标对象的状况等。
示例性的,本申请实施例中当目标对象包括车辆时,目标对象的外部特征包括并不限于车辆的型号、车辆宽度、车辆长度、车辆价值、车辆的颜色等。当目标对象包括行人时,目标对象的外部特征包括并不限于行人的身高、性别、年龄段、服饰颜色等。
第二方面,本申请提供一种电子装置,该装置包括处理模块和通信模块。通信模块用于通过接口电路获取目标对象的传感信息。处理模块可用于根据所述目标对象的传感信息,获得所述目标对象在第一时刻的预测位置;获取所述目标对象在所述第一时刻前预设时长内的测量位置;根据所述预测位置和所述测量位置,使能显示装置在所述第一时刻显示所述目标对象的提示信息。
一种可能的设计中,所述处理模块可用于:
在所述第一时刻前获取到所述目标对象的测量位置时,使能显示装置在所述第一时刻和显示位置,显示所述目标对象的提示信息,所述显示位置与所述预测位置和所述测量位置有关。
一种可能的设计中,所述处理模块可用于:
在所述第一时刻前获取不到所述目标对象的测量位置时,使能显示装置在所述第一时刻和显示位置,显示所述目标对象的提示信息,所述显示位置与所述预测位置有关。
一种可能的设计中,所述处理模块可用于:
在所述第一时刻前获取到所述目标对象的多个测量位置时,使能显示装置在所述第一时刻和显示位置,显示所述目标对象的提示信息,所述显示位置与所述多个测量位置的平均值和所述预测位置有关。
一种可能的设计中,所述处理模块可用于:
在所述第一时刻前获取到所述目标对象的多个测量位置时,使能显示装置在所述第一时刻和显示位置,显示所述目标对象的提示信息,所述显示位置与所述多个测量位置中最后获取到的测量位置和所述预测位置有关。
一种可能的设计中,所述处理模块还用于:
根据修正值更新所述显示位置;所述修正值是预先设置的,用于去除车辆在驾驶过程中颠簸晃动产生的误差。
一种可能的设计中,所述处理模块还用于:
根据所述第一时刻前多个相邻时刻对应的显示位置的均值,更新所述显示位置。
一种可能的设计中,所述目标对象包括车辆、人、障碍物以及交通标识中的一个或多个。
第三方面,本申请提供一种计算设备,包括处理器,处理器与存储器相连,存储器存储计算机程序或指令,处理器用于执行存储器中存储的计算机程序或指令,以使得计算设备执行上述第一方面或第一方面的任一种可能的实现方式中的方法。
第四方面,本申请提供一种计算机可读存储介质,其上存储有计算机程序或指令,当该计算机程序或指令被执行时,使得计算机执行上述第一方面或第一方面的任一种可能的实现方式中的方法。
第五方面,本申请提供一种计算机程序产品,当计算机执行计算机程序产品时,使得计算机执行上述第一方面或第一方面的任一种可能的实现方式中的方法。
第六方面,本申请提供一种芯片,芯片与存储器相连,用于读取并执行存储器中存储的计算机程序或指令,以实现上述第一方面或第一方面的任一种可能的实现方式中的方法。
第七方面,本申请提供一种车辆,该车辆包括上述第二方面或第二方面的任一种可能的实现方式中的车载控制装置和执行装置,以实现上述第一方面或第一方面的任一种可能的实现方式中的方法。
第八方面,本申请提供一种车辆,该车辆包括上述第六方面中的芯片和执行装置,以实现上述第一方面或第一方面的任一种可能的实现方式中的方法。
应理解,基于本申请所提供的技术方案,在驾驶车辆进行显示过程中,通过结合目标对象第一时刻的预测位置以及所述目标对象获取到的第一时刻前的测量位置,进行融合校正,使能显示装置在该第一时刻显示所述目标对象的提示信息更贴近目标对象在的真实情况,能够有效降低驾驶车辆风挡玻璃中投放内容的抖动性。另外,在进行显示过程中,还可以通过修正值等对显示内容进行更新,从而进一步实现抖动优化,有效降低车辆驾驶员采用AR抬头显示路况信息时产生的眩晕感,保障了驾驶员的安全驾驶。
附图说明
图1为本申请提供的一种AR-HUD的车辆预警显示示意图;
图2为本申请提供的一种显示抖动场景示意图;
图3为本申请提供的一种电子装置场景示意图;
图4为本申请提供的一种电子装置的结构示意图;
图5为本申请提供的另一种电子装置的结构示意图;
图6为本申请提供的另一种电子装置的结构示意图;
图7为本申请提供的第一种显示方法流程示意图;
图8为本申请提供的第二种显示方法流程示意图;
图9为本申请提供的一种采集场景示意图;
图10为本申请提供的一种获取目标对象测量位置示意图;
图11为本申请提供的第一种确定目标对象显示位置情况示意图;
图12为本申请提供的一种预测位置与测量位置融合成显示位置的示意图;
图13为本申请提供的第二种确定目标对象显示位置情况示意图;
图14为本申请提供的第一种确定测量位置示意图;
图15为本申请提供的第二种确定测量位置示意图;
图16为本申请提供的一种显示位置更新场景示意图;
图17为本申请提供的一种通过滑窗式滤波更新显示位置的场景示意图;
图18为本申请提供的第三种显示方法流程示意图。
具体实施方式
本申请提供一种显示方法、装置和系统,用以降低显示过程中的图像抖动,提升显示效果。其中,方法和装置是基于同一技术构思的,由于方法及装置解决问题的原理相似,因此装置与方法的实施可以相互参见,重复之处不再赘述。
在本申请实施例提供的方法中,在进行显示过程中,电子装置可以基于通过获取到的目标对象的传感信息,得到目标对象在第一时刻的预测位置,以及目标对象在第一时刻前的测量位置,通过该预测位置与该测量位置,使能显示装置在该第一时刻显示所述目标对象的提示信息,有效降低输出图像的抖动性。
其中,本申请实施例中的电子装置可用于支持车辆实现本申请实施例提供的方法。
可选的,电子装置可与车辆采用一体设置,比如,电子装置可设置于车辆内部。或者,电子装置与车辆可采用分离式设置,例如,电子装置可通过终端设备等形式实现。这里的终端设备例如可以是AR-HUD或者车载设备等。
示例性的,如图3所示,以电子装置是车载设备为例,例如,该车载设备为具有投影功能的行车记录仪,该车载设备可提供以下功能:通过该行车记录仪获取目标对象的传感信息,基于该目标对象的传感信息,确定该目标对象在第一时刻的预测位置以及在第一时刻前的测量位置;通过该预测位置以及该测量位置,在该第一时刻显示所述目标对象的提示信息。
其中,本申请实施例中的车辆可以具有自动驾驶功能,尤其是具有人机交互(human machine interaction,HMI)功能等。
此外,应理解,根据实际使用的需要,也可将车辆替换为火车、飞行器、移动平台等其他载具或交通工具。本申请对此不做限定。
示例性的,图4示出了一种可能的电子装置的结构示意图,该结构可包括处理模块410、采集模块420。示例性地,图4所示结构可以是车载设备,或具有本申请所示电子装置的功能部件。
当该结构是车载设备或其他电子设备时，采集模块420可包括摄像装置、传感装置等用于支持目标对象采集功能的装置，处理模块410可以是处理器，例如，中央处理单元(central processing unit，CPU)。采集模块420可以通过蓝牙连接，网络连接或者接口电路，与处理模块410进行通信。处理模块410可以通过投影方式、有线连接或无线连接方式中的一种，在显示屏幕中显示路况信息。当该结构是具有本申请所示电子装置的功能部件时，采集模块420可包括摄像装置、传感装置等用于支持目标对象采集功能的装置，处理模块410可以是处理器。采集模块420可以通过接口电路，与处理模块410进行通信。处理模块410可以通过投影方式、有线连接或无线连接方式中的一种，在显示屏幕中显示目标对象的提示信息。当该结构是芯片或芯片系统时，采集模块420可以是被芯片控制的摄像装置和传感装置中的一种或多种，处理模块410可以是芯片的处理器，可以包括一个或多个中央处理单元。应理解，本申请实施例中的处理模块410可以由处理器或处理器相关电路组件实现，采集模块420可以由摄像装置、传感装置或相关的采集装置实现。
例如,处理模块410可以用于执行本申请任一实施例中由电子装置所执行的除了采集操作以及投影操作之外的全部操作,例如基于该目标对象的传感信息,确定该目标对象在第一时刻的预测位置;根据该目标对象的预测位置以及测量位置,在第一时刻显示目标对象的提示信息等。采集模块420可以用于执行本申请任一实施例中对目标对象的采集操作,例如通过摄像装置和传感装置中的一种或多种,获取目标对象的传感信息。
该处理装置410获取到的目标对象的传感信息,可以是来自于外接的传感器或摄像器采集到的目标对象的点云信息,声音,以及图片中的一种或多种传感数据生成的;或者,该处理装置410获取到的目标对象的传感信息,可以是来自于自身传感器或摄像器采集到的目标对象的点云信息,声音,以及图片中的一种或多种传感数据生成的。
其中,本申请实施例中的摄像装置可以是单目摄像头、双目摄像头等。所述摄像装置的拍摄区域可以为所述车辆的外部环境。所述传感装置,用于获取目标对象的传感数据,从而辅助车辆中的处理装置分析确定该目标对象的传感信息。例如,本申请实施例中所述的传感装置可以包括用于获取环境信息的激光雷达、毫米波雷达、超声波雷达等。
另外,处理模块410可以是一个功能模块,该功能模块既能完成采集信息的分析操作,也能完成在显示屏幕中显示路况信息的操作,在执行处理操作时,可以认为处理模块410是分析模块,而在执行显示操作时,可以认为处理模块410是显示模块,例如,本申请实施例中的处理模块410可以使用AR-HUD代替。即本申请实施例中的AR-HUD具有上述处理模块410的功能;或者,处理模块410也可以包括两个功能模块,处理模块410可以视为这两个功能模块的统称,这两个功能模块分别为分析模块和显示模块,分析模块用于根据获取到的目标对象的传感信息,分析路况,根据目标对象第一时刻的预测位置以及第一时刻前的测量位置,确定目标对象第一时刻的提示信息,显示模块用于将分析模块确定的目标对象的提示信息显示到显示屏幕中。
此外,本申请实施例中该电子装置还可以包含存储模块,用于存储一个或多个程序以及数据信息;其中所述一个或多个程序包括指令。该电子装置还可以包括显示屏幕,该显示屏幕可以是车辆中的风挡玻璃或者其他车载设备显示屏幕。
图5示出了另一种电子装置的结构示意图,用于执行本申请实施例提供的由电子装置执行的动作。便于理解和图示方便。如图5所示,该电子装置可包括处理器、存储器、接口电路。此外,该电子装置还可以包括采集装置、处理装置、显示装置或显示屏幕中的至少一个组件。处理器主要用于实现本申请实施例提供的处理操作,例如对获取到的目标对象的传感信息进行分析处理,执行软件程序,处理软件程序的数据等。存储器主要用于存储软件程序和数据。采集装置可用于采集目标对象的传感信息,可包括摄像头、毫米波雷达或者超声波传感器等。接口电路可用于支持电子装置的通信,例如,当采集装置采集到目标对象在第一时刻的传感信息后,可以通过接口电路将采集到的传感信息发送给处理器。接口电路可包括收发器或输入输出接口。
应理解,为便于说明,图5中仅示出了一个存储器和处理器。在实际的电子装置的产品中,可以存在一个或多个处理器和一个或多个存储器。存储器也可以称为存储介质或者存储设备等。存储器可以是独立于处理器设置,也可以是与处理器集成在一起,本申请实施例对此不做限制。
图6所示为本申请实施例提供的另一种电子装置。可见,该电子装置可包括检测模块,跟踪融合模块,HUD防抖模块,HUD坐标变换模块,HUD引擎渲染模块等。
其中,检测模块用于使用检测算法进行目标对象的传感信息检测,例如,对行人和车辆信息进行检测,得到行人和车辆的位置、边框信息等。具体形式与检测装置有关。比如通过摄像头采集行人和车辆的照片,通过照片内容获取行人和车辆的信息;再比如,通过传感器获取行人和车辆与自身驾驶车辆的距离,行人和车辆的位置。此外,检测模块还可以用于检测第一时刻前该目标对象的测量位置等。
跟踪融合模块可用于对检测到的目标对象在第一时刻的传感信息建立3D预测模型,利用模型对该目标对象第一时刻的位置进行预测跟踪。当检测完成时,利用模型的预测输出值和目标对象在第一时刻前的检测值,融合更新作为输出,当检测未完成时,利用MQ机制输出跟踪算法的预测值。
HUD防抖模块,用于以滑窗式滤波的方式平滑跟踪融合模块的位置输出,降低其抖动程度,提升3D检测框的稳定性。
HUD坐标变换模块,用于在实车HUD场景下,利用大场景实车的相机姿态标定算法,将相机坐标系下的位置信息转移到车身坐标系下。
HUD引擎渲染模块,用于将最终的输出位置信息输入HUD渲染引擎,渲染出对应的行人/车辆预警的警示信息,并通过光机投射到风挡上,呈现在驾驶员的眼前,让驾驶员能实时得到对行人和车辆的预警信息,实现对行人、车辆的实时预警。
其中，本申请实施例描述的系统架构以及业务场景是为了更加清楚的说明本申请实施例的技术方案，并不构成对于本申请实施例提供的技术方案的限定。进一步的，本领域普通技术人员可知，随着车辆架构的演变和新业务场景的出现，本申请实施例提供的技术方案对于类似的技术问题，同样适用。应理解，图4至图6仅为便于理解而示例的简化示意图，该系统架构中还可以包括其他设备或者还可以包括其他单元模块。
下面结合图7对本申请实施例提供的方法进行说明。该方法可由电子装置执行。其中,该电子装置可以包括处理装置,显示装置。处理装置可以是车机,电脑,也可以是用于HUD内的处理装置等。电子装置可包括图4至图6所示的任意一个或多个结构。在实现该显示方法时,可由图4所示处理模块410或图5所示处理器,或图6所示的跟踪融合模块,HUD防抖模块,HUD坐标变换模块,HUD引擎渲染模块实现本申请实施例提供的方法中的处理动作。还可由图4所示的采集模块420或图5所示的采集装置,或图6所示的检测模块,实现目标对象的传感信息的采集,这些交互包括但不限于:获取目标对象的传感信息。
S700,电子装置获取目标对象的传感信息。
S701,电子装置根据所述目标对象的传感信息,获得所述目标对象在第一时刻的预测位置。
S702,电子装置获取所述目标对象在所述第一时刻前的测量位置。
S703,电子装置根据所述预测位置和所述测量位置,使能显示装置在所述第一时刻显示所述目标对象的提示信息。
需要说明的是,本申请实施例中电子装置根据目标对象的测量位置和预测位置等(仅根据预测位置)显示,可以是在该位置显示,也可以是在附近或者相关位置显示。
例如,将目标对象框起来;或者,显示一些提示消息,比如提示旁边有行人等。
其中,为了更好的对本申请提供的显示方法进行介绍,如图8所示,基于图7所示的内容,进一步详细介绍:
S800、电子装置获取目标对象的传感信息。
本申请一种可选的方式,该电子装置可以通过接口电路获取目标对象的传感信息;或者,该电子装置可以通过无线通信的方式,例如通过蓝牙连接方式获取目标对象的传感信息等,在此不进行限定。
此外,该电子装置获取到的目标对象的传感信息可以是一个时刻的传感信息;或者,该电子装置获取到的目标对象的传感信息可以是一个时长内的传感信息,其中,当该电子装置获取到一个时长内的传感信息后,可以对获取到的该时长内的传感信息进行过滤,筛选出有用的传感信息。
其中,本申请实施例中的目标对象包括车辆、行人、障碍物以及交通标识中的一个或多个。所述交通标识还可以包括交通指示牌、交通信号灯以及道路交通标线中的一个或多个。
应理解,本申请实施例中的目标对象并不限于上述内容,任何可以适用本申请的对象,都可以作为本申请的目标对象。
进一步的,该目标对象的传感信息可以是该目标对象的外部特征、当前位置以及与该驾驶车辆的距离中的一个或多个。示例性的,如果目标对象包括车辆时,该目标对象的传感信息可以包括车辆的外部特征、车辆当前的位置,与驾驶车辆的距离中的一个或多个;所述车辆的外部特征可以包括车辆的型号、车辆的颜色、车辆的长度以及车辆的宽度等。
如果目标对象包括行人时，该目标对象的传感信息可以包括行人的外部特征、行人当前的位置、与驾驶车辆的距离中的一个或多个；所述行人的外部特征可以包括行人的身份等，例如，行人的身份可以为成年人，老年人等。从而使该电子装置基于行人的身份，来判断该行人是否有很快的反应能力，是否需要加强预警等。
为了更好的理解该S800步骤,这里以通过摄像头采集目标对象的传感信息为例进行介绍。例如,如图9所示,摄像头可采集连续多张图像,通过对图像中特征像素点的识别,确定目标对象的传感信息,然后通过接口电路发送给电子装置中的处理器,实现电子装置获取目标对象的传感信息。再例如,摄像头可采集连续多张图像,然后直接将采集到的多张图像通过接口电路发送给电子装置中的处理器,由该处理器对图像中的特征像素点进行识别,确定目标对象的传感信息,实现电子装置获取目标对象的传感信息。
S801、电子装置根据该目标对象的传感信息,获得该目标对象在第一时刻的预测位置。
本申请实施例一种可选的方式,该电子装置可以通过将获取到的目标对象的传感信息,输入到用于确定预测位置的预测模型中,从而获取该目标对象在第一时刻的预测位置。
其中,本申请实施例可以根据以往获取到的阈值数量的目标对象的传感信息,建立预测模型。这里的预测模型功能是基于当前目标对象的传感信息,例如目标对象的加速度,方向,距离,位置等信息,来预测未来一段时间该目标对象的位置等。
本申请实施例中该电子装置还可以基于目标对象在一段时间内的预测位置与对应的实际位置的误差,进行反馈,对建立的预测模型进行更新调整等。
需要说明的是,本申请对预测模型的建立方式,预测方法等不进行限定。
此外,本申请中该电子装置还可以根据目标对象对应的大数据库来进行位置预测等。
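The text deliberately leaves the prediction model open. As a minimal Python sketch, assuming a constant-velocity extrapolation over the target's last two observed positions (the function name, the 2-D `(t, x, y)` track format and the timestamps are illustrative assumptions, not part of the patent):

```python
def predict_position(track, dt):
    """Constant-velocity sketch of a prediction model: extrapolate the
    target's last observed position by its most recent velocity to obtain
    the predicted position at the first moment, dt seconds ahead.

    track: list of (t, x, y) observations, at least two entries.
    """
    (t0, x0, y0), (t1, x1, y1) = track[-2], track[-1]
    vx = (x1 - x0) / (t1 - t0)  # most recent velocity estimate
    vy = (y1 - y0) / (t1 - t0)
    return (x1 + vx * dt, y1 + vy * dt)
```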
应理解，本申请实施例并不限定获取目标对象第一时刻预测位置的方式，任何能够应用到本申请的方式，都适用于本申请。
S802、该电子装置尝试在该第一时刻前获取测量位置，若获取到，执行S803，若没有获取到，执行S804。
其中,该电子装置在获取该测量位置时,可以有多种实现方式,具体并不限于下述几种:
实现方式1:该测量位置是该电子装置在第一时刻前获取到的。
也就是说,该电子装置只要保证在第一时刻前获取到测量位置即可。
实现方式2:该测量位置是该电子装置在第一时刻前预设时长内获取到的。
也就是说,该电子装置需要保证在第一时刻前,并且还需要在该第一时刻前预设时长内获取到测量位置。
示例性的,如图10所示,假设预设时长为1ms。电子装置在第一时刻前1ms内,获取该目标对象的测量位置。
本申请实施例一种可选的方式,本申请实施例中可以通过摄像头在该预设时长内,连续采集多张图像,通过对图像中特征像素点的识别,确定目标对象在该预设时长内的传感信息,然后通过接口电路发送给该电子装置,从而使该电子装置根据该预设时长内的传感信息,确定该目标对象的测量位置。
本申请实施例另一种可选的方式,本申请实施例中可以通过摄像头在该预设时长内,连续采集多张图像,然后直接将采集到的多张图像通过接口电路发送给电子装置,由该电子装置对图像中的特征像素点进行识别,确定目标对象的传感信息,进而确定该目标对象的测量位置。
需要说明的是，本申请实施例中该电子装置在第一时刻前预设时长内获取到的目标对象的测量位置的数量可能为一个、多个，还可能为0个。
S803,该电子装置根据所述预测位置和所述测量位置,确定该目标对象在第一时刻的显示位置。
一种情况下,如图11所示,如果电子装置确定在该预设时长内获取到了该目标对象的测量位置,电子装置将该测量位置与该预测位置进行融合,得到该目标对象在第一时刻的显示位置。
示例性的,本申请实施例中该电子装置可以将该预测位置与该测量位置的平均值,确定为该显示位置。
应理解,如图12所示,本申请实施例根据目标对象第一时刻的预测位置以及该测量位置融合校正得到目标对象第一时刻的显示位置的方式,使得获取到的显示位置,更贴近目标对象真实轨迹,有效降低了抖动性,提升用户体验。
进一步的,在S803中,若电子装置在该预设时长内获取到该目标对象的多个测量位置,则可以通过多种方式确定用于与该预测位置进行融合的测量位置,具体并不限于下述几种:
确定方式1:电子装置将该多个测量位置的平均值确定为用于与该预测位置进行融合的测量位置。
示例性的,如图13所示,假设在第一时刻前预设时长内,获取到3个测量位置,例如测量位置1~3。电子装置可以确定该3个测量位置的平均值,然后将该3个测量位置的平均值确定为用于与该预测位置进行融合的测量位置。
确定方式2:电子装置将获取到的所述多个测量位置中最后一个测量位置确定为用于与该预测位置进行融合的测量位置。
示例性的,如图14所示,假设在第一时刻前预设时长内,获取到3个测量位置,例如测量位置1~3。应理解,越靠近第一时刻的测量位置,时效性越强。因此,电子装置可以将该3个测量位置中的测量位置3确定为用于与该预测位置进行融合的测量位置。
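The two selection strategies above (确定方式1 and 确定方式2) can be sketched as follows; the 2-D tuple representation and the function name are illustrative assumptions, not part of the patent:

```python
def select_measurement(measurements, mode="last"):
    """Pick the measurement to fuse with the predicted position.

    measurements: list of (x, y) positions observed before the first moment.
    mode: "mean" averages all of them (确定方式1); "last" keeps the most
    recently acquired one, which is the most timely (确定方式2).
    Returns None when no measurement arrived before the first moment.
    """
    if not measurements:
        return None
    if mode == "mean":
        n = len(measurements)
        return (sum(p[0] for p in measurements) / n,
                sum(p[1] for p in measurements) / n)
    return measurements[-1]
```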
S804,该电子装置根据所述预测位置,确定该目标对象在第一时刻的显示位置。
其中,如图15所示,如果该电子装置确定在该预设时长内没有获取到该目标对象的测量位置,该电子装置将该预测位置作为该目标对象在第一时刻的显示位置。
应理解,当电子装置对目标对象的检测发生阻塞,未在预设时间内完成时,直接使用该目标对象第一时刻的预测位置作为输出,能够大幅提升系统显示输出速度,减少延迟。
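Steps S803 and S804 can be sketched together: fuse the predicted and measured positions when a measurement arrived before the first moment, otherwise fall back to the prediction alone to keep latency low. The equal-weight average is one possible fusion consistent with the text; the names and types are assumptions:

```python
def display_position(predicted, measured=None):
    """Determine the display position at the first moment.

    If no measurement arrived before the first moment (e.g. detection was
    blocked), output the prediction directly to reduce delay (S804);
    otherwise average prediction and measurement to damp jitter (S803).
    """
    if measured is None:
        return predicted
    return tuple((p + m) / 2 for p, m in zip(predicted, measured))
```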
S805,该电子装置使能显示装置在该第一时刻和显示位置,显示所述目标对象的提示信息。
一种情况下,当电子装置具有投影功能和/或显示功能时。该电子装置可以在第一时刻和显示位置,将该目标对象的提示信息投放到驾驶车辆的风挡玻璃上。
另种情况下,当电子装置不具备投影和/或显示功能时,该电子装置可以通过接口电路向连接的显示屏幕,例如车载显示屏、AR-HUD发送控制指令,使接收到该控制指令的显示屏幕在所述第一时刻显示所述目标对象的提示信息,其中,该控制指令可以用于指示显示屏幕显示对应内容;或者,该电子装置可以通过无线连接方式,例如蓝牙连接方式,向连接的显示屏幕,例如车载显示屏、AR-HUD发送控制指令,使接收到该控制指令的显示屏幕在所述第一时刻显示所述目标对象的提示信息,其中,该控制指令可以用于指示显示屏幕显示对应内容。
此外,当本申请实施例S805在第二种情况下,该电子装置使能HUD在该第一时刻显示所述目标对象的提示信息的方式具体并不限于下述几种:
显示方式1:电子装置进行投影坐标系转换,并将转换后的该目标对象的显示位置发送给HUD进行投影。
其中，电子装置将得到的目标对象第一时刻的显示位置进行HUD坐标系转换时，可以通过下列方式实现：
电子装置根据该显示位置所在世界坐标系与车身坐标系的对应关系,将所述世界坐标系下的显示位置映射到所述车身坐标系中。
本申请实施例一种可选的方式,电子装置可以通过下列方式确定并建立显示位置所在世界坐标系与车身坐标系的对应关系,从而根据该对应关系,对该显示位置进行坐标系转换。
示例性的,首先确定该电子装置的内部参数。
其中,电子装置的内部参数包括且并不限于该电子装置的翻滚角(roll)、偏航角(yaw)、俯仰角(pitch)三个姿态。
例如,多次放置棋盘格在电子装置前,标定该电子装置的内参。然后,根据该电子装置的内参,放置一块棋盘格垂直在HUD车身前面,标定该电子装置的外参旋转矩阵R和偏移向量T,测量棋盘格对HUD车身坐标系的偏移。
其中,HUD车身坐标系的偏移可以根据下述公式1确定。
Δd=(Δx,Δy,Δz)   公式1
其中,Δd表示棋盘格对HUD车身坐标系,在x,y,z三个方向上的偏移;Δx表示x方向偏移量;Δy表示y方向偏移量;Δz表示z方向偏移量。
进一步的,在确定电子装置的内参以及电子装置与HUD车身坐标系的偏移后,可以根据下述公式2得到对应的HUD车身坐标系。
(x_car, y_car, z_car)^T = R·(x_w, y_w, z_w)^T + T + Δd   公式2
其中,x car表示x方向车身坐标系下坐标,y car表示y方向车身坐标系下坐标,z car表示z方向车身坐标系下坐标;x w表示x方向世界坐标系下坐标,y w表示y方向世界坐标系下坐标,z w表示z方向世界坐标系下坐标。
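A sketch of applying 公式2 in Python, under the assumption that it composes the extrinsic rotation R (3x3), translation T and checkerboard offset Δd as p_car = R·p_w + T + Δd (the patent's formula appears only as an image in this text, so the exact composition is an assumption):

```python
def world_to_body(p_w, R, T, delta):
    """Map a world-coordinate point into the HUD body coordinate system.

    p_w: (x_w, y_w, z_w) point in world coordinates.
    R:   3x3 extrinsic rotation matrix (nested lists).
    T:   extrinsic translation vector.
    delta: checkerboard offset Δd = (Δx, Δy, Δz) from 公式1.
    """
    rotated = [sum(R[i][j] * p_w[j] for j in range(3)) for i in range(3)]
    return tuple(rotated[i] + T[i] + delta[i] for i in range(3))
```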
进一步的,电子装置将转换后的目标对象第一时刻的显示位置发送给HUD,该HUD根据该显示位置,在驾驶车辆的风挡玻璃上投影该目标对象的提示信息。
显示方式2：电子装置向HUD发送该目标对象第一时刻的显示位置，由HUD自行进行投影坐标系转换。
其中,驾驶车辆中的HUD将得到的该目标对象第一时刻的显示位置进行HUD坐标系转换时,可以通过下列方式实现:
HUD根据该对应关系,将该世界坐标系下的显示位置映射到该车身坐标系中。
其中，本申请实施例中HUD确定并建立显示位置所在世界坐标系与车身坐标系的对应关系，可以参见上述显示方式1的内容介绍，为简洁描述，在此不进行赘述。
进一步的,该HUD还可以根据目标对象第一时刻的显示位置,渲染出对应的针对目 标对象的提示信息,并通过光机投射到风挡上,呈现在驾驶员的眼前,让驾驶员能实时得到对行人和车辆的预警信息,实现对行人、车辆的实时预警。
进一步的,本申请实施例中显示位置的确定,进一步还有可能有以下方式:
方式1:该显示位置与预先设置的修正值有关。
其中,本申请实施例中的修正值可以是预先设置的,用于去除车辆在驾驶过程中颠簸晃动产生的误差。
本申请实施例一种可选的方式,电子装置可以根据不同的驾驶场景,采用不同的修正值进行修正。
其中,该电子装置可以根据不同路面情况,来确定驾驶场景。例如,较为平坦的柏油马路的驾驶场景可以为市区;较为狭窄陡峭的路段的驾驶场景可以为山路等。
进一步的,本申请实施例可以根据大数据分析,获取不同驾驶场景下车辆颠簸晃动情况,并根据车辆颠簸晃动情况,确定不同场景对应的修正值。
可以理解的，在平坦的柏油马路上行驶，车辆较为平稳，修正值较小；在山地行驶，车辆颠簸程度较大，修正值较大。
示例性的,假设当前驾驶场景为市区,根据驾驶场景与修正值的对应关系,可以得到该场景下修正值为向目标对象行进方向偏移0.5米。
例如，没有进一步根据修正值获取该显示位置时，该目标对象的显示位置如图16中的(a)所示。当显示装置在投影之前，进一步根据修正值获取该显示位置时，则投影后的该目标对象在第一时刻的显示位置如图16中的(b)所示。相比于图16(a)中的显示位置偏移了0.5米。
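The scene-dependent correction can be sketched as a lookup plus an offset along the target's direction of travel; the 0.5 m urban value follows the example in the text, while the mountain value, the unit heading vector and all names are assumptions:

```python
# Assumed mapping from driving scene to a correction value (meters);
# only the 0.5 m urban figure comes from the text's example.
SCENE_CORRECTION = {"urban": 0.5, "mountain": 1.2}

def apply_correction(display_pos, heading, scene):
    """Shift the display position along the target's heading (a unit
    vector) by the scene-specific correction value, to absorb errors
    from the vehicle bumping and shaking while driving."""
    c = SCENE_CORRECTION.get(scene, 0.0)  # unknown scene: no correction
    return tuple(p + c * h for p, h in zip(display_pos, heading))
```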
方式2:该显示位置与所述第一时刻前多个相邻时刻对应的显示位置的均值有关。
示例性的,本申请实施例中的电子装置可以通过滑窗式滤波,更新该目标对象第一位置的显示位置。
其中,电子装置根据相邻至少两帧的显示位置确定均值滤波。然后,电子装置根据该均值滤波以及预设步长,进行滑窗式滤波,更新该目标对象第一位置的显示位置。
本申请实施例可以通过下列公式3确定均值滤波：
P̄_k = (1/n)·Σ_{i=k-n+1}^{k} P_i   公式3
其中，P_k表示第k帧目标对象的位置信息，n表示选取做均值滤波的帧数，P̄_k表示均值滤波结果。
假设,如图17所示,选取邻近3帧做均值滤波,以及预设步长为1进行滑窗式滤波,则电子装置可以获取到较为平滑、稳定的显示位置输出。其中,每个Box代表着每一帧对应的显示位置。
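The sliding-window filtering described above (公式3 with window n=3, step 1) can be sketched as a moving mean over the per-frame display positions; the 1-D positions and the function name are illustrative assumptions:

```python
def sliding_mean(positions, n=3):
    """Sliding-window mean with step 1: each output frame k is the average
    of up to the last n display positions, smoothing the HUD output.
    Frames earlier than n-1 average whatever history is available."""
    out = []
    for k in range(len(positions)):
        window = positions[max(0, k - n + 1): k + 1]
        out.append(sum(window) / len(window))
    return out
```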
进一步的,本申请实施例电子装置在进行显示前,为了有效节省系统功耗,电子装置可以确定该驾驶车辆处于行驶状态。
本申请实施例可以通过下列一个或多个条件确定驾驶车辆是否处于行驶状态:
条件1:驾驶车辆发动机是否运转。
条件2:驾驶车辆在一定时间内的移动距离是否大于阈值距离。
条件3:驾驶车辆所采用的档位是P挡(驻车档)还是D档(前进挡)。
此外，为了能够有效提升电子装置对目标对象显示位置的输出速度，本申请实施例一种可选的方式，还可以在路况显示过程中，采用MQ并行机制。
具体的，整个系统各模块之间可以是相互依赖串行的过程，可以利用消息队列(message queue,MQ)机制对整个系统各模块进行解耦，降低各模块间的耦合，使整个系统各模块并行化。也可以利用其它方式，本申请不做限定。
例如,当对道路中的目标对象进行检测过程中,在阈值时长内未得到该目标对象的更新位置后(即确定检测阻塞后),该检测系统直接将通过预测模型得到的目标对象下一时刻位置的预测值作为输出,大幅提升跟踪输出的速率。
如图18所示,当电子装置包括图6所示的检测模块,跟踪融合模块,HUD防抖模块,HUD坐标变换模块,HUD引擎渲染模块等时,本申请实施例提供的方法可包括以下步骤:
S1800、检测模块检测目标对象的传感信息。
S1801、跟踪融合模块获取目标对象的传感信息。
本申请实施例一种可选的方式,跟踪融合模块可以通过接口电路,获取检测模块检测到的目标对象的传感信息;或跟踪融合模块可以通过无线连接方式,例如,蓝牙连接方式,获取相连检测模块检测到的目标对象的传感信息。
S1802、跟踪融合模块根据该目标对象的传感信息,获得该目标对象在第一时刻的预测位置。
本申请实施例一种可选的方式,该跟踪融合模块可以将获取到的目标对象的传感信息输入到用于确定预测位置的预测模型中,从而获取该目标对象在第一时刻的预测位置。
其中,本申请实施例跟踪融合模块可以根据以往获取到的阈值数量的目标对象的传感信息,建立预测模型。
应理解,本申请实施例中跟踪融合模块可以根据获取到的目标对象的传感信息,对建立的预测模型进行更新调整等。
S1803、跟踪融合模块确定在该第一时刻前是否获取到测量位置,若是,执行S1804,若否,执行S1805。
一种可选的方式,该目标对象第一时刻前的测量位置可以是检测模块检测到后通知给跟踪融合模块的。另一种可选的方式,该目标对象第一时刻前的测量位置可以是该跟踪融合模块通过检测模块通知的目标对象在第一时刻前的传感信息确定的。
其中,该S1803的具体内容,与上述S802的内容相似,为简洁描述,具体可参见上述S802的内容。
S1804、跟踪融合模块根据该预测位置和该测量位置,确定该目标对象在第一时刻的显示位置。
其中,该跟踪融合模块可以将该预测位置和该测量位置的平均值,确定为该目标对象在第一时刻的显示位置。
进一步的,在S1804中,若跟踪融合模块在该第一时刻前获取到该目标对象的多个测量位置,则可以通过多种方式确定用于与该预测位置进行融合的测量位置,具体并不限于下述几种:
确定方式1:跟踪融合模块将该多个测量位置的平均值确定为用于与该预测位置进行融合的测量位置。
确定方式2:跟踪融合模块将获取到的所述多个测量位置中最后一个测量位置确定为用于与该预测位置进行融合的测量位置。
其中,该S1804的具体内容,与上述S803的内容相似,为简洁描述,具体可参见上述S803的内容。
S1805、跟踪融合模块根据所述预测位置,确定该目标对象在第一时刻的显示位置。
一种可选的方式,当跟踪融合模块在该第一时刻前没有获取到该目标对象的测量位置,直接使用预测模型对该目标对象第一时刻的预测位置作为输出,能够大幅提升系统显示输出速度,减少延迟。
S1806、跟踪融合模块使能HUD引擎渲染模块在该第一时刻和显示位置,显示所述目标对象的提示信息。
其中,若驾驶车辆中检测模块进行信息采集的世界坐标系与车身坐标系不同,则本申请实施例中在进行投影前,可以通过HUD坐标变换模块对该目标对象第一时刻的显示位置进行调整。
示例性的,HUD坐标变换模块通过接口电路获取该目标对象第一时刻的显示位置,然后根据显示位置所在世界坐标系与车身坐标系的对应关系,将所述世界坐标系下的显示位置映射到所述车身坐标系中,得到调整后的目标对象第一时刻的显示位置。HUD坐标变换模块通过接口电路,将调整后的目标对象第一时刻的显示位置传输给HUD引擎渲染模块。
进一步的,本申请实施例中通过HUD引擎渲染模块投影所述目标对象的显示位置之前,为了更好的降低抖动性,进一步还可以通过HUD防抖模块进行确定,具体并不限于下述几种:
方式1:HUD防抖模块根据该预测位置,该测量位置以及修正值,获取该目标对象在第一时刻的显示位置。
方式2:HUD防抖模块根据该预测位置,该测量位置,以及第一时刻前多个相邻时刻对应的显示位置的均值,获取该目标对象在第一时刻的显示位置。
根据图18所示流程，可由图6所示电子装置实现本申请实施例提供的显示方法。应理解，图18中所示的由图6所示电子装置实现的步骤是示例性的，根据本申请实施例提供的显示方法，图18所示的一些步骤可以省略，也可以由其他步骤替换图18中的一些步骤，或者该电子装置还可执行图18未示出的一些步骤。
基于上述内容和相同构思,本申请还提供一种电子装置,用于实现以上方法实施例部分介绍的显示方法中电子装置的功能,因此具备上述方法实施例所具备的有益效果。该电子装置可包括图4至图6中任一结构,或由图4至图6中任意多个结构的组合实现。
如图4所示的电子装置可以是终端或车辆,也可以是终端或车辆内部的芯片。该电子装置可以实现如图8或图18所示的显示方法以及上述各可选实施例。其中,该电子装置可包括处理模块410和采集模块420。
其中，处理模块410可用于执行图8所示方法中的S800~S805、图18所示方法中S1801~S1806中的任一步骤，或可用于执行上述可选的实施例中涉及目标对象的测量位置确定、坐标系转换、判断是否在第一时刻前获取到目标对象测量位置等任一步骤。采集模块420用于采集目标对象的传感信息。例如，可用于执行图18所示方法中的S1800等，或可用于执行上述可选的实施例中涉及目标对象信息获取的任一步骤。具体参见方法示例中的详细描述，此处不做赘述。
处理模块410可用于获取目标对象的传感信息,所述目标对象包括车辆、人、障碍物以及交通标识中的一个或多个;根据所述目标对象的传感信息,获得所述目标对象在第一时刻的预测位置;获取所述目标对象在所述第一时刻前的测量位置;根据所述预测位置和所述测量位置,使能电子装置在所述第一时刻显示所述目标对象的提示信息。
应理解的是,本申请实施例中的电子装置可以由软件实现,例如,具有上述功能的计算机程序或指令来实现,相应计算机程序或指令可以存储在终端内部的存储器中,通过处理器读取该存储器内部的相应计算机程序或指令来实现处理模块410和/或采集模块420的上述功能。或者,本申请实施例中的电子装置还可以由硬件来实现。其中,处理模块410可以是处理器(如CPU或系统芯片中的处理器),采集模块420可包括摄像装置以及传感装置中的一种或多种。
一种可选的方式中,处理模块410可用于:
在所述第一时刻前获取到所述目标对象的测量位置时,使能电子装置在所述第一时刻和显示位置,显示所述目标对象的提示信息,所述显示位置与所述预测位置和所述测量位置有关。
一种可选的方式中,处理模块410可用于:
在所述第一时刻前获取不到所述目标对象的测量位置时,使能电子装置在所述第一时刻和显示位置,显示所述目标对象的提示信息,所述显示位置与所述预测位置有关。
一种可选的方式中,处理模块410可用于:
在所述第一时刻前获取到所述目标对象的多个测量位置时,使能电子装置在所述第一时刻和显示位置,显示所述目标对象的提示信息,所述显示位置与所述多个测量位置的平均值和所述预测位置有关。
一种可选的方式中,处理模块410可用于:
在所述第一时刻前获取到所述目标对象的多个测量位置时,使能电子装置在所述第一时刻和显示位置,显示所述目标对象的提示信息,所述显示位置与所述多个测量位置中最后获取到的测量位置和所述预测位置有关。
一种可选的方式中,所述显示位置还与预先设置的修正值有关,所述修正值用于降低车辆在驾驶过程中颠簸晃动产生的误差。
一种可选的方式中，所述显示位置还与所述第一时刻前多个相邻时刻对应的显示位置的均值有关。
一种可选的方式中,所述目标对象包括车辆、人、障碍物以及交通标识中的一个或多个。
应理解,本申请实施例中的电子装置的处理细节可以参考图8、图18及本申请方法实施例中的相关表述,这里不再重复赘述。
如图5所示的电子装置可以是终端或车辆，也可以是终端或车辆内部的芯片。该电子装置可以实现如图8或图18所示的显示方法以及上述各可选实施例。其中，该电子装置可包括处理器、存储器、接口电路或人机交互装置中的至少一个。应理解，虽然图5中仅示出了一个处理器、一个存储器、一个接口电路和一个(或一种)采集装置，电子装置可以包括其他数目的处理器和接口电路。
其中,接口电路用于电子装置与终端或车辆的其他组件连通,例如存储器或其他处理器,或投影装置等。处理器可用于通过接口电路与其他组件进行信号交互。接口电路可以是处理器的输入/输出接口。
例如,处理器可通过接口电路读取与之耦合的存储器中的计算机程序或指令,并译码和执行这些计算机程序或指令。应理解,这些计算机程序或指令可包括上述功能程序,也可以包括上述电子装置的功能程序。当相应功能程序被处理器译码并执行时,可以使得电子装置实现本申请实施例所提供的显示方法中的方案。
可选的,这些功能程序存储在电子装置外部的存储器中,此时电子装置可以不包括存储器。当上述功能程序被处理器译码并执行时,存储器中临时存放上述功能程序的部分或全部内容。
可选的,这些功能程序存储在电子装置内部的存储器中。当电子装置内部的存储器中存储有上述功能程序时,电子装置可被设置在本申请实施例的电子装置中。
可选的,这些功能程序存储在路况电视装置外部的存储器中,这些功能程序的其他部分存储在电子装置内部的存储器中。
应理解,上述处理器可以是一个芯片。例如,该处理器可以是现场可编程门阵列(field programmable gate array,FPGA),可以是专用集成芯片(application specific integrated circuit,ASIC),还可以是系统芯片(system on chip,SoC),还可以是中央处理器(central processor unit,CPU),还可以是网络处理器(network processor,NP),还可以是数字信号处理电路(digital signal processor,DSP),还可以是微控制器(micro controller unit,MCU),还可以是可编程控制器(programmable logic device,PLD)或其他集成芯片。
应注意,本申请实施例中的处理器可以是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法实施例的各步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器可以是通用处理器、数字信号处理器(DSP)、专用集成电路(ASIC)、现场可编程门阵列(FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法的步骤。
可以理解,本申请实施例中的存储器可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(read-only memory,ROM)、可编程只读存储器(programmable ROM,PROM)、可擦除可编程只读存储器(erasable PROM,EPROM)、电可擦除可编程只读存储器(electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(random access memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(static RAM,SRAM)、动态随机存取存储器(dynamic RAM,DRAM)、同步动态随机存取存储器(synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存 取存储器(double data rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(direct rambus RAM,DR RAM)。应注意,本文描述的系统和方法的存储器旨在包括但不限于这些和任意其它适合类型的存储器。
应理解,在通过图5所示结构实现该电子装置时,可由存储器存储计算机程序或指令,由处理器执行存储器中存储的计算机程序或指令,执行在通过图4所示结构实现该电子装置时由处理模块410执行的动作。还可由采集装置420执行在通过图4所示结构实现该电子装置采集目标对象传感信息动作。可选的,可由图5所示的处理器和存储器实现图4所示的处理模块410,或者说,图4所示的处理模块410包括处理器和存储器,或者说,由处理器执行存储器中存储的计算机程序或指令,实现由以上图4所示处理模块410执行的动作。和/或,可由图5所示的采集装置实现图4所示的采集模块420,或者说,图4所示的处理模块410包括图5所示的采集装置,或者说,由采集装置执行以上图4所示采集装置420执行的动作。
在通过图6所示结构实现该电子装置时,可由检测模块,跟踪融合模块,HUD防抖模块,HUD坐标变换模块,HUD引擎渲染模块中的一个或多个,执行在通过图4所示结构实现该电子装置时由处理模块410执行的动作。还可由检测模块执行在通过图4所示结构实现该电子装置由采集模块420执行的动作。在通过图6所示结构实现该电子装置时,由检测模块,跟踪融合模块,HUD防抖模块,HUD坐标变换模块,HUD引擎渲染模块分别执行的动作可参照图17所示流程中的说明,这里不再赘述。
应理解,图4至图6任一所示的电子装置的结构可以互相结合,图4至图6任一所示的电子装置以及各可选实施例相关设计细节可互相参考,也可以参考图4至图6任一所示的显示方法以及各可选实施例相关设计细节。此处不再重复赘述。
基于上述内容和相同构思,本申请提供一种计算设备,包括处理器,处理器与存储器相连,存储器用于存储计算机程序或指令,处理器用于执行存储器中存储的计算机程序,以使得计算设备执行上述方法实施例中的方法。
基于上述内容和相同构思,本申请提供一种计算机可读存储介质,其上存储有计算机程序或指令,当该计算机程序或指令被执行时,以使得计算设备执行上述方法实施例中的方法。
基于上述内容和相同构思,本申请提供一种计算机程序产品,当计算机执行计算机程序产品时,以使得计算设备执行上述方法实施例中的方法。
基于上述内容和相同构思,本申请提供一种芯片,芯片与存储器相连,用于读取并执行存储器中存储的计算机程序或指令,以使得计算设备执行上述方法实施例中的方法。
基于上述内容和相同构思,本申请实施例提供一种装置,所述装置包括处理器和接口电路,所述接口电路,用于接收计算机程序或指令并传输至所述处理器;所述处理器运行所述计算机程序或指令以执行上述方法实施例中的方法。
应理解,本申请实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。另外,在本申请各个实施例中的各功能模块可以集成在一个处理器中,也可以是单独物理存在,也可以两个或两个以上模块集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
本领域内的技术人员应明白，本申请的实施例可提供为方法、系统、或计算机程序产品。因此，本申请可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且，本申请可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质（包括但不限于磁盘存储器、CD-ROM、光学存储器等）上实施的计算机程序产品的形式。
本申请是参照根据本申请的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
显然,本领域的技术人员可以对本申请进行各种改动和变型而不脱离本申请的保护范围。这样,倘若本申请的这些修改和变型属于本申请权利要求及其等同技术的范围之内,则本申请也意图包含这些改动和变型在内。

Claims (19)

  1. A display method, comprising:
    obtaining sensing information of a target object;
    obtaining, based on the sensing information of the target object, a predicted position of the target object at a first moment;
    obtaining a measured position of the target object before the first moment; and
    enabling, based on the predicted position and the measured position, a display apparatus to display prompt information of the target object at the first moment.
  2. The method according to claim 1, wherein the enabling, based on the predicted position and the measured position, a display apparatus to display prompt information of the target object at the first moment comprises:
    when a measured position of the target object is obtained before the first moment, enabling the display apparatus to display the prompt information of the target object at the first moment and at a display position, wherein the display position is related to the predicted position and the measured position.
  3. The method according to claim 1, wherein the enabling, based on the predicted position and the measured position, a display apparatus to display prompt information of the target object at the first moment comprises:
    when no measured position of the target object is obtained before the first moment, enabling the display apparatus to display the prompt information of the target object at the first moment and at a display position, wherein the display position is related to the predicted position.
  4. The method according to claim 1 or 2, wherein the enabling, based on the predicted position and the measured position, a display apparatus to display prompt information of the target object at the first moment comprises:
    when a plurality of measured positions of the target object are obtained before the first moment, enabling the display apparatus to display the prompt information of the target object at the first moment and at a display position, wherein the display position is related to an average of the plurality of measured positions and to the predicted position.
  5. The method according to claim 1 or 2, wherein the enabling, based on the predicted position and the measured position, a display apparatus to display prompt information of the target object at the first moment comprises:
    when a plurality of measured positions of the target object are obtained before the first moment, enabling the display apparatus to display the prompt information of the target object at the first moment and at a display position, wherein the display position is related to the last-obtained one of the plurality of measured positions and to the predicted position.
  6. The method according to any one of claims 2 to 5, wherein the display position is further related to a preset correction value, and the correction value is used to reduce errors caused by bumping and shaking of a vehicle during driving.
  7. The method according to any one of claims 2 to 5, wherein the display position is further related to a mean of display positions corresponding to a plurality of adjacent moments before the first moment.
  8. The method according to any one of claims 1 to 7, wherein the target object comprises one or more of a vehicle, a person, an obstacle, and a traffic sign.
  9. An electronic apparatus, comprising a processing module and a communication module, wherein:
    the communication module is configured to obtain sensing information of a target object; and
    the processing module is configured to: obtain, based on the sensing information of the target object, a predicted position of the target object at a first moment; obtain a measured position of the target object within a preset time length before the first moment; and enable, based on the predicted position and the measured position, a display apparatus to display prompt information of the target object at the first moment.
  10. The electronic apparatus according to claim 9, wherein the processing module is specifically configured to:
    when a measured position of the target object is obtained before the first moment, enable the display apparatus to display the prompt information of the target object at the first moment and at a display position, wherein the display position is related to the predicted position and the measured position.
  11. The electronic apparatus according to claim 9, wherein the processing module is specifically configured to:
    when no measured position of the target object is obtained before the first moment, enable the display apparatus to display the prompt information of the target object at the first moment and at a display position, wherein the display position is related to the predicted position.
  12. The electronic apparatus according to claim 9 or 10, wherein the processing module is specifically configured to:
    when a plurality of measured positions of the target object are obtained before the first moment, enable the display apparatus to display the prompt information of the target object at the first moment and at a display position, wherein the display position is related to an average of the plurality of measured positions and to the predicted position.
  13. The electronic apparatus according to claim 9 or 10, wherein the processing module is specifically configured to:
    when a plurality of measured positions of the target object are obtained before the first moment, enable the display apparatus to display the prompt information of the target object at the first moment and at a display position, wherein the display position is related to the last-obtained one of the plurality of measured positions and to the predicted position.
  14. The electronic apparatus according to any one of claims 9 to 13, wherein the display position is further related to a preset correction value, and the correction value is used to reduce errors caused by bumping and shaking of a vehicle during driving.
  15. The electronic apparatus according to any one of claims 9 to 13, wherein the display position is further related to a mean of display positions corresponding to a plurality of adjacent moments before the first moment.
  16. The electronic apparatus according to any one of claims 9 to 15, wherein the target object comprises one or more of a vehicle, a person, an obstacle, and a traffic sign.
  17. A computing device, comprising a processor, wherein the processor is connected to a memory, the memory stores computer programs or instructions, and the processor is configured to execute the computer programs or instructions stored in the memory, so that the computing device performs the method according to any one of claims 1 to 8.
  18. A computer-readable storage medium, wherein the computer-readable storage medium stores computer programs or instructions that, when executed by a computing device, cause the computing device to perform the method according to any one of claims 1 to 8.
  19. A chip, comprising at least one processor and an interface, wherein:
    the interface is configured to provide computer programs, instructions, or data for the at least one processor; and
    the at least one processor is configured to execute the computer programs or instructions, so that the method according to any one of claims 1 to 8 is performed.
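The display-position logic of claims 1 to 7 can be sketched as follows. This is an illustrative sketch only, not the claimed implementation: the function name, the equal-weight blend of prediction and measurement, the 2-D tuple representation of positions, and all parameter names are assumptions made for the example.

```python
def fuse_display_position(predicted, measurements, correction=(0.0, 0.0),
                          history=None, use_average=True):
    """Choose where to display the prompt information for a target object.

    predicted     -- (x, y) position predicted from the sensing information
    measurements  -- (x, y) measured positions obtained before the first
                     moment (may be empty)
    correction    -- preset correction offset against vehicle bump/shake
    history       -- display positions at adjacent earlier moments, used
                     for smoothing (optional)
    use_average   -- blend with the average of all measurements (claim 4)
                     or with the last-obtained one (claim 5)
    """
    if not measurements:
        # No measurement arrived in time (claim 3): use the prediction alone.
        pos = predicted
    else:
        if use_average:
            ref = (sum(m[0] for m in measurements) / len(measurements),
                   sum(m[1] for m in measurements) / len(measurements))
        else:
            ref = measurements[-1]  # last-obtained measured position
        # Blend prediction and measurement (equal weights, as an example).
        pos = ((predicted[0] + ref[0]) / 2, (predicted[1] + ref[1]) / 2)

    # Apply the preset correction value (claim 6).
    pos = (pos[0] + correction[0], pos[1] + correction[1])

    # Smooth with display positions at adjacent earlier moments (claim 7).
    if history:
        xs = [p[0] for p in history] + [pos[0]]
        ys = [p[1] for p in history] + [pos[1]]
        pos = (sum(xs) / len(xs), sum(ys) / len(ys))
    return pos
```

A caller would invoke this once per display frame, passing whatever measurements arrived since the previous frame; an empty list naturally falls back to prediction-only display.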
PCT/CN2021/109949 2021-07-31 2021-07-31 Display method, device and system WO2023010236A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2021/109949 WO2023010236A1 (zh) 2021-07-31 2021-07-31 Display method, device and system
CN202180005788.3A CN115917254A (zh) 2021-07-31 2021-07-31 Display method, device and system
EP21952146.5A EP4369177A1 (en) 2021-07-31 2021-07-31 Display method, device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/109949 WO2023010236A1 (zh) 2021-07-31 2021-07-31 Display method, device and system

Publications (1)

Publication Number Publication Date
WO2023010236A1 true WO2023010236A1 (zh) 2023-02-09

Family

ID=85154027

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/109949 WO2023010236A1 (zh) 2021-07-31 2021-07-31 Display method, device and system

Country Status (3)

Country Link
EP (1) EP4369177A1 (zh)
CN (1) CN115917254A (zh)
WO (1) WO2023010236A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108027652A (zh) * 2015-09-16 2018-05-11 索尼公司 Information processing device, information processing method, and program
CN109427199A (zh) * 2017-08-24 2019-03-05 北京三星通信技术研究有限公司 Augmented reality method and apparatus for driving assistance
WO2020158601A1 (ja) * 2019-01-29 2020-08-06 日本精機株式会社 Display control device, method, and computer program
CN111915900A (zh) * 2019-05-07 2020-11-10 株式会社电装 Information processing apparatus and method, and computer-readable storage medium
CN112904996A (zh) * 2019-12-04 2021-06-04 上海交通大学 Picture compensation method and apparatus for vehicle-mounted head-up display device, storage medium, and terminal
CN113112413A (zh) * 2020-01-13 2021-07-13 北京地平线机器人技术研发有限公司 Image generation method, image generation apparatus, and vehicle-mounted head-up display system


Also Published As

Publication number Publication date
CN115917254A (zh) 2023-04-04
EP4369177A1 (en) 2024-05-15


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21952146; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: 2021952146; Country of ref document: EP)
ENP Entry into the national phase (Ref document number: 2021952146; Country of ref document: EP; Effective date: 20240208)
NENP Non-entry into the national phase (Ref country code: DE)